Sample records for sample average approximation

  1. [Method for concentration determination of mineral-oil fog in the air of workplace].

    PubMed

    Xu, Min; Zhang, Yu-Zeng; Liu, Shi-Feng

    2008-05-01

    To study a method for determining the concentration of mineral-oil fog in workplace air. Four filter films (synthetic fabric filter film, beta glass fiber filter film, chronic filter paper and microporous film) were used in this study. Two kinds of dust samplers were used to collect the samples, one sampling at a fast flow rate for a short time and the other sampling at a slow flow rate for a long duration. The filter membranes were then weighed with an electronic analytical balance, and the adsorption ability of the four filter membranes was compared on the basis of sampling efficiency and weight increase. When the flow rate was 10-20 L/min and the sampling time was 10-15 min, the average sampling efficiency of the synthetic fabric filter film was 95.61% and the increased weight ranged from 0.87 to 2.60 mg. Under the same conditions, the average sampling efficiency of the beta glass fiber filter film was 97.57% and the increased weight was 0.75-2.47 mg. When the flow rate was 5-10 L/min and the sampling time was 10-20 min, the average sampling efficiencies of the chronic filter paper and the microporous film were 48.94% and 63.15%, respectively, and the increased weights were 0.75-2.15 mg and 0.23-0.85 mg, respectively. When the flow rate was 3.5 L/min and the sampling time was 100-166 min, the average sampling efficiencies of the two filter films were 94.44% and 93.45%, respectively, and the average increased weight was 1.28 mg for the beta glass fiber filter film and 0.78 mg for the synthetic fabric filter film; the average sampling efficiencies of the chronic filter paper and the microporous film were 37.65% and 88.21%, respectively, with average increased weights of 4.30 mg and 1.23 mg. Sampling with the synthetic fabric filter film and the beta glass fiber filter film is credible, accurate, simple and feasible for determining the concentration of mineral-oil fog in workplaces.

  2. Optimal Budget Allocation for Sample Average Approximation

    DTIC Science & Technology

    2011-06-01

    an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to ... regime for the optimization algorithm. Sample average approximation (SAA) is a frequently used approach to solving stochastic programs ... appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample ...
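
    To make the SAA idea in this record concrete, here is a minimal sketch (not from the report): the expectation in a one-dimensional stochastic program min_x E[F(x, xi)] is replaced by an average over N sampled scenarios, and the resulting deterministic problem is handed to a standard optimizer. The newsvendor-style cost, the demand distribution, and the use of scipy.optimize are illustrative assumptions.

    ```python
    # Minimal sample average approximation (SAA) sketch -- illustrative only.
    # True problem: minimize E[F(x, xi)] over x; SAA replaces the expectation
    # with an average over N sampled scenarios and optimizes that instead.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)

    def F(x, xi, price=5.0, cost=3.0):
        """Newsvendor-style cost for stock level x and demand xi (assumed form)."""
        return cost * x - price * np.minimum(x, xi)

    def saa_solve(n_scenarios):
        xi = rng.lognormal(mean=3.0, sigma=0.5, size=n_scenarios)  # sampled demand
        # Deterministic sample-average problem handed to a standard optimizer.
        objective = lambda x: F(x, xi).mean()
        res = minimize_scalar(objective, bounds=(0.0, 200.0), method="bounded")
        return res.x, res.fun

    for n in (10, 100, 10_000):
        x_hat, v_hat = saa_solve(n)
        print(f"N={n:6d}  x*~{x_hat:7.2f}  SAA objective ~{v_hat:8.2f}")
    ```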

  3. Preliminary Public Health, Environmental Risk, and Data Requirements Assessment for the Herbicide Orange Storage Site at Johnston Island

    DTIC Science & Technology

    1991-10-01

    an average concentration of 0.8 ppb. 2,4-D in surface soil ranges from 2.5 ppb to 281,330 ppb with an average of 49,986 ppb. 2,4,5-T in surface soil...ranges from 53 ppb to 237,155 ppb, with an average of 48,914 ppb. Approximately 25% of the site was sampled for subsurface TCDD in the 3-7 inch layer of...subsurface soil. Values ranged from 0.02 ppb to 207 ppb, with an average reading of 15 ppb. Approximately 2% of the site was sampled for subsurface

  4. X-ray microanalytical surveys of minor element concentrations in unsectioned biological samples

    NASA Astrophysics Data System (ADS)

    Schofield, R. M. S.; Lefevre, H. W.; Overley, J. C.; Macdonald, J. D.

    1988-03-01

    Approximate concentration maps of small unsectioned biological samples are made using the pixel by pixel ratio of PIXE images to areal density images. Areal density images are derived from scanning transmission ion microscopy (STIM) proton energy-loss images. Corrections for X-ray production cross section variations, X-ray attenuation, and depth averaging are approximated or ignored. Estimates of the magnitude of the resulting error are made. Approximate calcium concentrations within the head of a fruit fly are reported. Concentrations in the retinula cell region of the eye average about 1 mg/g dry weight. Concentrations of zinc in the mandible of several ant species average about 40 mg/g. Zinc concentrations in the stomachs of these ants are at least 1 mg/g.

  5. Rhenium Complexes and Clusters Supported on c-Al2O3: Effects of Rhenium Oxidation State and Rhenium Cluster Size on Catalytic Activity for n-butane Hydrogenolysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobo Lapidus, R.; Gates, B

    2009-01-01

    Supported metals prepared from H{sub 3}Re{sub 3}(CO){sub 12} on {gamma}-Al{sub 2}O{sub 3} were treated under conditions that led to various rhenium structures on the support and were tested as catalysts for n-butane conversion in the presence of H{sub 2} in a flow reactor at 533 K and 1 atm. After use, two samples were characterized by X-ray absorption edge positions of approximately 5.6 eV (relative to rhenium metal), indicating that the rhenium was cationic and essentially in the same average oxidation state in each. But the Re-Re coordination numbers found by extended X-ray absorption fine structure spectroscopy (2.2 and 5.1) show that the clusters in the two samples were significantly different in average nuclearity despite their indistinguishable rhenium oxidation states. Spectra of a third sample after catalysis indicate approximately Re{sub 3} clusters, on average, and an edge position of 4.5 eV. Thus, two samples contained clusters approximated as Re{sub 3} (on the basis of the Re-Re coordination number), on average, with different average rhenium oxidation states. The data allow resolution of the effects of rhenium oxidation state and cluster size, both of which affect the catalytic activity; larger clusters and a greater degree of reduction lead to increased activity.

  6. Exponential approximation for daily average solar heating or photolysis. [of stratospheric ozone layer

    NASA Technical Reports Server (NTRS)

    Cogley, A. C.; Borucki, W. J.

    1976-01-01

    When incorporating formulations of instantaneous solar heating or photolytic rates as functions of altitude and sun angle into long range forecasting models, it may be desirable to replace the time integrals by daily average rates that are simple functions of latitude and season. This replacement is accomplished by approximating the integral over the solar day by a pure exponential. This gives a daily average rate as a multiplication factor times the instantaneous rate evaluated at an appropriate sun angle. The accuracy of the exponential approximation is investigated by a sample calculation using an instantaneous ozone heating formulation available in the literature.
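
    A schematic of the construction described above, with symbols of our own choosing rather than the authors': the daily average of an instantaneous rate q that depends on the solar zenith angle theta(t) is written as a factor times the rate at a representative sun angle, with the diurnal integral approximated by an exponential in sec(theta).

    ```latex
    % Hedged sketch; q_0, a, f and theta^* are illustrative, not the paper's notation.
    \bar{q} \;=\; \frac{1}{24\,\mathrm{h}}\int_{t_{\mathrm{rise}}}^{t_{\mathrm{set}}} q\bigl(\theta(t)\bigr)\,dt
    \;\approx\; f\, q\bigl(\theta^{*}\bigr),
    \qquad q(\theta)\approx q_{0}\,e^{-a\sec\theta}.
    ```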

  7. Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation

    NASA Astrophysics Data System (ADS)

    Räisänen, Petri; Barker, Howard W.

    2004-07-01

    The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeated sampling and averaging of those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of approximately 3.
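
    The first variance-reduction idea above (spend the random samples only where the scene is actually inhomogeneous, and treat the clear-sky part exactly) can be illustrated with a toy Monte Carlo sketch; the fake flux function, cloud fraction, and sample counts below are assumptions, not the GCM radiation code.

    ```python
    # Toy illustration of restricting Monte Carlo samples to the cloudy part of the
    # domain while handling clear sky exactly -- schematic, not McICA itself.
    import numpy as np

    rng = np.random.default_rng(1)
    n_g = 16                 # number of spectral (CKD) terms, illustrative
    cloud_fraction = 0.3

    def flux(g, cloudy):
        """Stand-in for a monochromatic flux computation (assumed form)."""
        base = 100.0 / (1 + g)
        return base * (0.5 if cloudy else 1.0)

    def naive_estimate(n_samples):
        # One random (g-term, clear-or-cloudy subcolumn) pair per sample.
        total = 0.0
        for _ in range(n_samples):
            g = rng.integers(n_g)
            cloudy = rng.random() < cloud_fraction
            total += flux(g, cloudy)
        return total / n_samples * n_g      # unbiased for the sum over g-terms

    def cloudy_only_estimate(n_samples):
        # Clear-sky part computed exactly; samples spent only on cloudy columns.
        clear_total = sum(flux(g, False) for g in range(n_g))
        cloudy_samples = [flux(rng.integers(n_g), True) for _ in range(n_samples)]
        cloudy_total = np.mean(cloudy_samples) * n_g
        return (1 - cloud_fraction) * clear_total + cloud_fraction * cloudy_total

    print("naive std      :", np.std([naive_estimate(16) for _ in range(2000)]))
    print("cloudy-only std:", np.std([cloudy_only_estimate(16) for _ in range(2000)]))
    ```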

  8. Direct measurement of fast transients by using boot-strapped waveform averaging

    NASA Astrophysics Data System (ADS)

    Olsson, Mattias; Edman, Fredrik; Karki, Khadga Jung

    2018-03-01

    An approximation to coherent sampling, also known as boot-strapped waveform averaging, is presented. The method uses digital cavities to determine the condition for coherent sampling. It can be used to increase the effective sampling rate of a repetitive signal and the signal to noise ratio simultaneously. The method is demonstrated by using it to directly measure the fluorescence lifetime from Rhodamine 6G by digitizing the signal from a fast avalanche photodiode. The obtained lifetime of 4.0 ns is in agreement with the known values.
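
    The gist of coherent ("boot-strapped") sampling can be sketched as follows: samples of a repetitive signal taken at a rate incommensurate with the repetition period are folded back onto one period by their phase and bin-averaged, which raises the effective sampling rate and the signal-to-noise ratio together. The signal model, repetition period, and bin count below are illustrative; this is not the authors' digital-cavity implementation.

    ```python
    # Equivalent-time ("boot-strapped") waveform averaging sketch -- illustrative.
    import numpy as np

    rng = np.random.default_rng(2)
    T_rep = 1.0e-6                 # repetition period of the signal (assumed)
    f_s = 3.17e6                   # ADC rate, deliberately incommensurate with 1/T_rep
    n_samples = 200_000

    def signal(t):
        """Toy repetitive waveform: exponential decay retriggered every T_rep."""
        tau = 150e-9
        return np.exp(-(t % T_rep) / tau)

    t = np.arange(n_samples) / f_s
    x = signal(t) + 0.2 * rng.standard_normal(n_samples)   # noisy record

    # Fold every sample back into one period by its phase, then bin-average.
    phase = t % T_rep
    n_bins = 500                   # effective points per period >> f_s * T_rep (~3)
    bins = np.floor(phase / T_rep * n_bins).astype(int)
    sums = np.bincount(bins, weights=x, minlength=n_bins)
    counts = np.bincount(bins, minlength=n_bins)
    avg = sums / np.maximum(counts, 1)

    # 'avg' is the reconstructed single-period waveform at ~500 points per period,
    # even though only ~3 raw samples fall within any one period.
    print(avg[:5])
    ```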

  9. Synthesis of SiO2-coated ZnMnFe2O4 nanospheres with improved magnetic properties.

    PubMed

    Wang, Jun; Zhang, Kai; Zhu, Yuejin

    2005-05-01

    A core-shell structured composite, SiO2 coated ZnMnFe2O4 spinel ferrite nanoparticles (average diameter of approximately 80 nm), was prepared by hydrolysis of tetraethyl orthosilicate (TEOS) in the presence of ZnMnFe2O4 nanoparticles (average diameter of approximately 10 nm) synthesized by a hydrothermal method. The obtained samples were characterized by X-ray diffraction (XRD), transmission electron microscopy (TEM), and field emission scanning electron microscopy (FESEM). The magnetic measurements were carried out on a vibrating sample magnetometer (VSM), and the measurement results indicate that the core-shell samples possess better magnetic properties at room temperature, compared with paramagnetic colloids with a magnetic core by a coprecipitation method. These core-shell nanospherical particles with self-assembly under additional magnetic fields could have potential application in biomedical systems.

  10. Agarose and Polyacrylamide Gel Electrophoresis Methods for Molecular Mass Analysis of 5–500 kDa Hyaluronan

    PubMed Central

    Bhilocha, Shardul; Amin, Ripal; Pandya, Monika; Yuan, Han; Tank, Mihir; LoBello, Jaclyn; Shytuhina, Anastasia; Wang, Wenlan; Wisniewski, Hans-Georg; de la Motte, Carol; Cowman, Mary K.

    2011-01-01

    Agarose and polyacrylamide gel electrophoresis systems for the molecular mass-dependent separation of hyaluronan (HA) in the size range of approximately 5–500 kDa have been investigated. For agarose-based systems, the suitability of different agarose types, agarose concentrations, and buffer systems was determined. Using chemoenzymatically synthesized HA standards of low polydispersity, the molecular mass range was determined for each gel composition over which the relationship between HA mobility and the logarithm of the molecular mass was linear. Excellent linear calibration was obtained for HA molecular masses as low as approximately 9 kDa in agarose gels. For higher resolution separation, and for extension to molecular masses as low as approximately 5 kDa, gradient polyacrylamide gels were superior. Densitometric scanning of stained gels allowed analysis of the range of molecular masses present in a sample, and calculation of weight-average and number-average values. The methods were validated for polydisperse HA samples with viscosity-average molecular masses of 112, 59, 37, and 22 kDa, at sample loads of 0.5 µg (for polyacrylamide) to 2.5 µg (for agarose). Use of the methods for electrophoretic mobility shift assays was demonstrated for binding of the HA-binding region of aggrecan (recombinant human aggrecan G1-IGD-G2 domains) to a 150 kDa HA standard. PMID:21684248
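
    The weight-average and number-average calculation mentioned above amounts to treating each densitometric slice as a weight fraction w_i at the molecular mass M_i read off the calibration curve; a sketch of that arithmetic, with made-up scan values rather than data from the paper:

    ```python
    # Weight-average (Mw) and number-average (Mn) molecular mass from a
    # densitometric scan -- illustrative values, not data from the paper.
    import numpy as np

    # Stain intensity (proportional to mass of HA) in each gel slice, and the
    # molecular mass assigned to that slice from the mobility calibration (kDa).
    intensity = np.array([2.0, 5.0, 9.0, 7.0, 3.0, 1.0])
    mass_kda  = np.array([150., 110., 80., 55., 35., 20.])

    w = intensity / intensity.sum()          # weight fraction per slice
    Mw = np.sum(w * mass_kda)                # weight-average molecular mass
    Mn = 1.0 / np.sum(w / mass_kda)          # number-average molecular mass
    print(f"Mw ~ {Mw:.1f} kDa, Mn ~ {Mn:.1f} kDa, polydispersity ~ {Mw/Mn:.2f}")
    ```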

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Fuke, E-mail: wufuke@mail.hust.edu.cn; Tian, Tianhai, E-mail: tianhai.tian@sci.monash.edu.au; Rawlings, James B., E-mail: james.rawlings@wisc.edu

    The frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two time scales, which yields the modified stochastic simulation algorithm (SSA). For chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions. Consequently, the SSA is still computationally expensive. Because the chemical Langevin equations (CLEs) can effectively work for a large number of molecular species and reactions, this paper develops a reduction method based on the CLE by the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because, in stochastic chemical kinetics, the CLE is seen as an approximation of the SSA, the limit averaging system can be treated as an approximation of the slow reactions. As an application, we examine the reduction of computational complexity for gene regulatory networks with two time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. This demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of weak convergence.
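
    For readers unfamiliar with the SSA referenced throughout this record, a minimal Gillespie direct-method sketch for a toy gene-expression network is given below; the reactions and rate constants are illustrative and are not part of the reduction method itself.

    ```python
    # Minimal Gillespie stochastic simulation algorithm (direct method) sketch.
    # Toy network: transcription -> mRNA -> protein with first-order degradation.
    import numpy as np

    rng = np.random.default_rng(3)

    def ssa(t_end, k_tx=10.0, k_tl=5.0, d_m=1.0, d_p=0.1):
        m, p, t = 0, 0, 0.0
        while t < t_end:
            rates = np.array([k_tx, k_tl * m, d_m * m, d_p * p])   # propensities
            total = rates.sum()
            if total == 0:
                break
            t += rng.exponential(1.0 / total)        # waiting time to next reaction
            r = rng.choice(4, p=rates / total)       # which reaction fires
            if r == 0:   m += 1                      # transcription
            elif r == 1: p += 1                      # translation
            elif r == 2: m -= 1                      # mRNA decay
            else:        p -= 1                      # protein decay
        return m, p

    # Sample average (expectation) of protein copy number over repeated SSA runs.
    protein = [ssa(t_end=50.0)[1] for _ in range(100)]
    print("mean protein copy number ~", np.mean(protein))
    ```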

  12. One-sided truncated sequential t-test: application to natural resource sampling

    Treesearch

    Gary W. Fowler; William G. O'Regan

    1974-01-01

    A new procedure for constructing one-sided truncated sequential t-tests and its application to natural resource sampling are described. Monte Carlo procedures were used to develop a series of one-sided truncated sequential t-tests and the associated approximations to the operating characteristic and average sample number functions. Different truncation points and...

  13. Photographic mark-recapture analysis of local dynamics within an open population of dolphins.

    PubMed

    Fearnbach, H; Durban, J; Parsons, K; Claridge, D

    2012-07-01

    Identifying demographic changes is important for understanding population dynamics. However, this requires long-term studies of definable populations of distinct individuals, which can be particularly challenging when studying mobile cetaceans in the marine environment. We collected photo-identification data from 19 years (1992-2010) to assess the dynamics of a population of bottlenose dolphins (Tursiops truncatus) restricted to the shallow (<7 m) waters of Little Bahama Bank, northern Bahamas. This population was known to range beyond our study area, so we adopted a Bayesian mixture modeling approach to mark-recapture to identify clusters of individuals that used the area to different extents, and we specifically estimated trends in survival, recruitment, and abundance of a "resident" population with high probabilities of identification. There was a high probability (p= 0.97) of a long-term decrease in the size of this resident population from a maximum of 47 dolphins (95% highest posterior density intervals, HPDI = 29-61) in 1996 to a minimum of just 24 dolphins (95% HPDI = 14-37) in 2009, a decline of 49% (95% HPDI = approximately 5% to approximately 75%). This was driven by low per capita recruitment (average approximately 0.02) that could not compensate for relatively low apparent survival rates (average approximately 0.94). Notably, there was a significant increase in apparent mortality (approximately 5 apparent mortalities vs. approximately 2 on average) in 1999 when two intense hurricanes passed over the study area, with a high probability (p = 0.83) of a drop below the average survival probability (approximately 0.91 in 1999; approximately 0.94, on average). As such, our mark-recapture approach enabled us to make useful inference about local dynamics within an open population of bottlenose dolphins; this should be applicable to other studies challenged by sampling highly mobile individuals with heterogeneous space use.

  14. Average probability that a "cold hit" in a DNA database search results in an erroneous attribution.

    PubMed

    Song, Yun S; Patil, Anand; Murphy, Erin E; Slatkin, Montgomery

    2009-01-01

    We consider a hypothetical series of cases in which the DNA profile of a crime-scene sample is found to match a known profile in a DNA database (i.e., a "cold hit"), resulting in the identification of a suspect based only on genetic evidence. We show that the average probability that there is another person in the population whose profile matches the crime-scene sample but who is not in the database is approximately 2(N - d)p(A), where N is the number of individuals in the population, d is the number of profiles in the database, and p(A) is the average match probability (AMP) for the population. The AMP is estimated by computing the average of the probabilities that two individuals in the population have the same profile. We show further that if a priori each individual in the population is equally likely to have left the crime-scene sample, then the average probability that the database search attributes the crime-scene sample to a wrong person is (N - d)p(A).
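
    Plugging illustrative numbers into the expressions quoted above makes their scale concrete (the values below are hypothetical, not from the paper):

    ```python
    # Illustrative evaluation of the formulas quoted in the abstract.
    N  = 10_000_000      # population size (hypothetical)
    d  = 1_000_000       # profiles in the database (hypothetical)
    pA = 1e-9            # average match probability (hypothetical)

    p_other_match       = 2 * (N - d) * pA   # someone outside the database also matches
    p_wrong_attribution = (N - d) * pA       # database search names the wrong person
    print(p_other_match, p_wrong_attribution)   # 0.018 and 0.009 for these inputs
    ```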

  15. The Consequences of Indexing the Minimum Wage to Average Wages in the U.S. Economy.

    ERIC Educational Resources Information Center

    Macpherson, David A.; Even, William E.

    The consequences of indexing the minimum wage to average wages in the U.S. economy were analyzed. The study data were drawn from the 1974-1978 May Current Population Survey (CPS) and the 180 monthly CPS Outgoing Rotation Group files for 1979-1993 (approximate annual sample sizes of 40,000 and 180,000, respectively). The effects of indexing on the…

  16. Spectrophotometry of 2 complete samples of flat radio spectrum quasars

    NASA Technical Reports Server (NTRS)

    Wampler, E. J.; Gaskell, C. M.; Burke, W. L.; Baldwin, J. A.

    1983-01-01

    Spectrophotometry of two complete samples of flat-spectrum radio quasars shows that for these objects there is a strong correlation between the equivalent width of the C IV λ1550 emission line and the luminosity of the underlying continuum. Assuming Friedmann cosmologies, the scatter in this correlation is a minimum for q_0 of approximately 1. Alternatively, luminosity evolution can be invoked to give compact distributions for q_0 of approximately 0 models. A sample of Seyfert galaxies observed with IUE shows that, despite some dispersion, the average equivalent width of C IV λ1550 in Seyfert galaxies is independent of the underlying continuum luminosity. New redshifts for 4 quasars are given.

  17. X-Ray Properties of Lyman Break Galaxies in the Hubble Deep Field North Region

    NASA Technical Reports Server (NTRS)

    Nandra, K.; Mushotzky, R. F.; Arnaud, K.; Steidel, C. C.; Adelberger, K. L.; Gardner, J. P.; Teplitz, H. I.; Windhorst, R. A.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We describe the X-ray properties of a large sample of z approximately 3 Lyman Break Galaxies (LBGs) in the region of the Hubble Deep Field North, derived from the 1 Ms public Chandra observation. Of our sample of 148 LBGs, four are detected individually. This immediately gives a measure of the bright AGN (active galactic nuclei) fraction in these galaxies of approximately 3 per cent, which is in agreement with that derived from the UV (ultraviolet) spectra. The X-ray color of the detected sources indicates that they are probably moderately obscured. Stacking of the remainder shows a significant detection (6 sigma) with an average luminosity of 3.5 x 10^41 erg/s per galaxy in the rest frame 2-10 keV band. We have also studied a comparison sample of 95 z approximately 1 "Balmer Break" galaxies. Eight of these are detected directly, with at least two clear AGN based on their high X-ray luminosity and very hard X-ray spectra, respectively. The remainder are of relatively low luminosity (< 10^42 erg/s), and the X-rays could arise from either AGN or rapid star formation. The X-ray colors and evidence from other wavebands favor the latter interpretation. Excluding the clear AGN, we deduce a mean X-ray luminosity of 6.6 x 10^40 erg/s, a factor of approximately 5 lower than the LBGs. The average ratio of the UV and X-ray luminosities of these star-forming galaxies, L_UV/L_X, however, is approximately the same at z = 1 as it is at z = 3. This scaling implies that the X-ray emission follows the current star formation rate, as measured by the UV luminosity. We use our results to constrain the star formation rate at z approximately 3 from an X-ray perspective. Assuming the locally established correlation between X-ray and far-IR (infrared) luminosity, the average inferred star formation rate in each Lyman break galaxy is found to be approximately 60 solar masses/yr, in excellent agreement with the extinction-corrected UV estimates. This provides an external check on the UV estimates of the star formation rates, and on the use of X-ray luminosities to infer these rates in rapidly star-forming galaxies at high redshift.

  18. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
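
    A bare-bones version of the equivalent-ensemble procedure described above, with an assumed segment length and a crude drift statistic in place of the paper's variance tests: segment one long record into blocks, form ensemble averages across blocks at each within-block time index, and check that they do not drift; comparing them with a single-record time average then gives a rough ergodicity check.

    ```python
    # Equivalent-ensemble check of weak stationarity -- schematic only.
    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.standard_normal(100_000)             # stand-in for a measured signal

    seg_len = 1_000
    segments = x[: len(x) // seg_len * seg_len].reshape(-1, seg_len)  # equivalent ensemble

    ens_mean = segments.mean(axis=0)             # ensemble average at each time index
    ens_var  = segments.var(axis=0)

    # Weak stationarity: ensemble mean/variance should not depend on the time index.
    drift = np.ptp(ens_mean) / np.sqrt(ens_var.mean() / len(segments))
    print("normalized spread of ensemble means:", drift)

    # Heuristic ergodicity check: ensemble average vs. time average of one record.
    print("ensemble mean:", ens_mean.mean(), " single-record time mean:", segments[0].mean())
    ```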

  19. Hybrid selection for sequencing pathogen genomes from clinical samples

    PubMed Central

    2011-01-01

    We have adapted a solution hybrid selection protocol to enrich pathogen DNA in clinical samples dominated by human genetic material. Using mock mixtures of human and Plasmodium falciparum malaria parasite DNA as well as clinical samples from infected patients, we demonstrate an average of approximately 40-fold enrichment of parasite DNA after hybrid selection. This approach will enable efficient genome sequencing of pathogens from clinical samples, as well as sequencing of endosymbiotic organisms such as Wolbachia that live inside diverse metazoan phyla. PMID:21835008

  20. Broiler carcass contamination with Campylobacter from feces during defeathering.

    PubMed

    Berrang, M E; Buhr, R J; Cason, J A; Dickens, J A

    2001-12-01

    Three sets of experiments were conducted to explore the increase in recovery of Campylobacter from broiler carcasses after defeathering. In the first set of experiments, live broilers obtained from a commercial processor were transported to a pilot plant, and breast skin was sampled by a sponge wipe method before and after defeathering. One of 120 broiler breast skin samples was positive for Campylobacter before defeathering, and 95 of 120 were positive after defeathering. In the second set of experiments, Campylobacter-free flocks were identified, subjected to feed withdrawal, and transported to the pilot plant. Carcasses were intracloacally inoculated with Campylobacter (10(7) CFU) just prior to entering the scald tank. Breast skin sponge samples were negative for Campylobacter before carcasses entered the picker (0 of 120 samples). After defeathering, 69 of 120 samples were positive for Campylobacter, with an average of log10 2.7 CFU per sample (approximately 30 cm2). The third set of experiments was conducted using Campylobacter-positive broilers obtained at a commercial processing plant and transported live to the pilot plant. Just prior to scalding, the cloacae were plugged with tampons and sutured shut on half of the carcasses. Plugged carcasses were scalded, and breast skin samples taken before and after defeathering were compared with those collected from control broilers from the same flock. Prior to defeathering, 1 of 120 breast skin sponge samples were positive for the control carcasses, and 0 of 120 were positive for the plugged carcasses. After passing through the picker, 120 of 120 control carcasses had positive breast skin sponge samples, with an average of log10 4.2 CFU per sample (approximately 30 cm2). Only 13 of 120 plugged carcasses had detectable numbers of Campylobacter on the breast skin sponge, with an average of log10 2.5 CFU per sample. These data indicate that an increase in the recovery of Campylobacter after defeathering can be related to the escape of contaminated feces from the cloaca during defeathering.

  1. Asbestos exposure from the overhaul of a Pratt & Whitney R2800 engine.

    PubMed

    Mlynarek, S P; Van Orden, D R

    2012-11-01

    This study assessed the asbestos exposures of airplane piston engine mechanics while performing overhaul work on a Pratt & Whitney R2800 radial engine, with tools and practices in use since the time these engines were manufactured. Approximately 40% of the bulk samples collected during this test were found to contain chrysotile. Air samples were collected during the overhaul and were analyzed by phase contrast microscopy (PCM) and transmission electron microscopy (TEM). The average worker exposure during disassembly was 0.0272 f/ml (PCM) and ranged from 0.0013 to 0.1240 f/ml (PCM) during an average sample collection time of 188 min. The average worker exposure during reassembly was 0.0198 f/ml (PCM) and ranged from 0.0055 to 0.0913 f/ml (PCM) during an average sample collection time of 222 min. Only one worker sample (during reassembly) was found to contain asbestos at a concentration of 0.0012 f/ml (PCME). Similar results should be found in other aircraft piston engines that use metal-clad and non-friable asbestos gaskets, which are the current standard in aircraft piston engines. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Aerosol and Inorganic Gaseous Iodine at Appledore Island, Maine During Summers 2004, 2005 and 2006

    NASA Astrophysics Data System (ADS)

    Pszenny, A.; Cotter, K.; Deegan, B.; Fischer, E.; Griffin, R.; Johnson, D.; Keene, W.; Maben, J.; Seidel, T.; Smith, A.; Ziemba, L.

    2006-12-01

    Iodine chemistry may affect the ozone budget in the marine atmosphere and has been hypothesized to play an important role in aerosol nucleation and/or growth in surface air, particularly in coastal regions where marine macrophytes are a prolific source of organoiodine gases. Total iodine was determined by neutron activation analysis in: 1) daytime and nighttime samples of bulk and size segregated aerosols (Iaer) and of inorganic gaseous iodine (Iig) collected on LiOH-impregnated filters during summer 2004, 2) daytime and nighttime samples of PM2.5 aerosol samples collected during summers 2005 and 2006, and 3) 1- to 3- hour duration PM2.5 samples collected over four diel cycles during summer 2006 at Appledore Island (AI), ME, approximately 10 km offshore from Portsmouth, NH. A parallel set of PM2.5 samples was collected in 2005 at Durham, NH, approximately 20 km inland from Portsmouth. The 2004 data indicated that the inorganic I pool at AI is mainly gaseous (average 88%) and that Iaer is mainly (average 88%) associated with sub-μm diameter particles. Concentrations in both phases were similar to those observed by others in the 1970s over the tropical and subtropical North Atlantic. Daytime Iaer and Iig concentrations both tended to be greater than respective nighttime concentrations. Iaer concentrations in 2005 and 2006 were significantly higher than in 2004 and displayed pronounced day/night differences. The diel cycle studies in 2006 confirmed that Iaer was low at night (average 3.3 ng m-3) and high (average 8.3 ng m-3) during the day. The timing of the daily maximum varied over the four days sampled. These data imply active multiphase photochemical processing of iodine in the vicinity of the AI site. Iaer concentrations at the Durham site inland were significantly lower than at AI and showed no significant day/night difference.

  3. Levonorgestrel release rates over 5 years with the Liletta® 52-mg intrauterine system.

    PubMed

    Creinin, Mitchell D; Jansen, Rolf; Starr, Robert M; Gobburu, Joga; Gopalakrishnan, Mathangi; Olariu, Andrea

    2016-10-01

    To understand the potential duration of action for Liletta®, we conducted this study to estimate levonorgestrel (LNG) release rates over approximately 5½ years of product use. Clinical sites in the U.S. Phase 3 study of Liletta collected the LNG intrauterine systems (IUSs) from women who discontinued the study. We randomly selected samples within 90-day intervals after discontinuation of IUS use through 900 days (approximately 2.5 years) and 180-day intervals for the remaining duration through 5.4 years (1980 days) to evaluate residual LNG content. We also performed an initial LNG content analysis using 10 randomly selected samples from a single lot. We calculated the average ex vivo release rate using the residual LNG content over the duration of the analysis. We analyzed 64 samples within 90-day intervals (range 6-10 samples per interval) through 900 days and 36 samples within 180-day intervals (6 samples per interval) for the remaining duration. The initial content analysis averaged 52.0 ± 1.8 mg. We calculated an average initial release rate of 19.5 mcg/day that decreased to 17.0, 14.8, 12.9, 11.3 and 9.8 mcg/day after 1, 2, 3, 4 and 5 years, respectively. The 5-year average release rate is 14.7 mcg/day. The estimated initial LNG release rate and gradual decay of the estimated release rate are consistent with the target design and function of the product. The calculated LNG content and release rate curves support the continued evaluation of Liletta as a contraceptive for 5 or more years of use. Liletta LNG content and release rates are comparable to published data for another LNG 52-mg IUS. The release rate at 5 years is more than double the published release rate at 3 years with an LNG 13.5-mg IUS, suggesting continued efficacy of Liletta beyond 5 years. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Unveiling the Secrets of Metallicity and Massive Star Formation Using DLAs Along Gamma-Ray Bursts

    NASA Technical Reports Server (NTRS)

    Cucchiara, A.; Fumagalli, M.; Rafelski, M.; Kocevski, D.; Prochaska, J. X.; Cooke, R. J.; Becker, G. D.

    2015-01-01

    We present the largest publicly available sample of damped Lyman-alpha systems (DLAs) along Swift-discovered gamma-ray burst (GRB) lines of sight in order to investigate the environmental properties of long-GRB hosts in the z = 1.8 - 6 redshift range. Compared with the most recent quasar DLA sample (QSO-DLA), our analysis shows that GRB-DLAs probe a more metal-enriched environment at z approximately greater than 3, up to [X/H] approximately -0.5. In the z = 2 - 3 redshift range, despite the large number of lower limits, there are hints that the two populations may be more similar (only at the 90% significance level) than at higher redshifts. Also, at high z, the GRB-DLA average metallicity seems to decline at a shallower rate than that of the QSO-DLAs: GRB-DLA hosts may be polluted with metals at least as far as approximately 2 kpc from the GRB explosion site, probably due to previous star-formation episodes and/or supernova explosions. This shallow metallicity trend, extended now up to z approximately 5, confirms previous results that GRB hosts are star-forming and have, on average, higher metallicity than the general QSO-DLA population. Finally, our host metallicity measurements are broadly consistent with the predictions derived from the hypothesis of two channels of GRB progenitors, one of which is mildly affected by a metallicity bias, although more data are needed to constrain the models at z approximately greater than 4.

  5. Compton scattering study of electron momentum distribution in lithium fluoride using 662 keV gamma radiations

    NASA Astrophysics Data System (ADS)

    Vijayakumar, R.; Shivaramu; Ramamurthy, N.; Ford, M. J.

    2008-12-01

    Here we report the first-ever 137Cs Compton spectroscopy study of lithium fluoride. The spherical average Compton profiles of lithium fluoride are deduced from Compton scattering measurements on a polycrystalline sample at a gamma-ray energy of 662 keV. For comparison with the experimental data, we have computed the spherical average Compton profiles using self-consistent Hartree-Fock wave functions employed in the linear combination of atomic orbitals (HF-LCAO) approximation. The directional Compton profiles and their anisotropic effects are also calculated using the same HF-LCAO approximation. The experimental spherical average profiles are found to be in good agreement with the corresponding HF-LCAO calculations and in qualitative agreement with Hartree-Fock free-atom values. The present experimental isotropic and calculated directional profiles are also compared with the available experimental isotropic and directional Compton profiles obtained using 59.54 and 159 keV γ-rays.

  6. Air sampling results in relation to extent of fungal colonization of building materials in some water-damaged buildings.

    PubMed

    Miller, J D; Haisley, P D; Reinhardt, J H

    2000-09-01

    We studied the extent and nature of fungal colonization of building materials in 58 naturally ventilated apartments that had suffered various kinds of water damage in relation to air sampling done before the physical inspections. The results of air samples from each apartment were compared by rank order of species with pooled data from outdoor air. Approximately 90% of the apartments that had significant amounts of fungi in wall cavities were identified by air sampling. There was no difference in the average fungal colony forming unit values per m3 between the 15 apartments with the most fungal contamination and the 15 with the least. In contrast, the prevalence of samples with fungal species significantly different than the pooled outdoor air between the more contaminated versus the less contaminated apartments was approximately 10-fold. We provide information on methods to document fungal contamination in buildings.

  7. Surface-hopping dynamics and decoherence with quantum equilibrium structure.

    PubMed

    Grunwald, Robbie; Kim, Hyojoon; Kapral, Raymond

    2008-04-28

    In open quantum systems, decoherence occurs through interaction of a quantum subsystem with its environment. The computation of expectation values requires a knowledge of the quantum dynamics of operators and sampling from initial states of the density matrix describing the subsystem and bath. We consider situations where the quantum evolution can be approximated by quantum-classical Liouville dynamics and examine the circumstances under which the evolution can be reduced to surface-hopping dynamics, where the evolution consists of trajectory segments exclusively evolving on single adiabatic surfaces, with probabilistic hops between these surfaces. The justification for the reduction depends on the validity of a Markovian approximation on a bath averaged memory kernel that accounts for quantum coherence in the system. We show that such a reduction is often possible when initial sampling is from either the quantum or classical bath initial distributions. If the average is taken only over the quantum dispersion that broadens the classical distribution, then such a reduction is not always possible.

  8. Plate tectonics and continental basaltic geochemistry throughout Earth history

    NASA Astrophysics Data System (ADS)

    Keller, Brenhin; Schoene, Blair

    2018-01-01

    Basaltic magmas constitute the primary mass flux from Earth's mantle to its crust, carrying information about the conditions of mantle melting through which they were generated. As such, changes in the average basaltic geochemistry through time reflect changes in underlying parameters such as mantle potential temperature and the geodynamic setting of mantle melting. However, sampling bias, preservation bias, and geological heterogeneity complicate the calculation of representative average compositions. Here we use weighted bootstrap resampling to minimize sampling bias over the heterogeneous rock record and obtain maximally representative average basaltic compositions through time. Over the approximately 4 Ga of the continental rock record, the average composition of preserved continental basalts has evolved along a generally continuous trajectory, with decreasing compatible element concentrations and increasing incompatible element concentrations, punctuated by a comparatively rapid transition in some variables such as La/Yb ratios and Zr, Nb, and Ti abundances approximately 2.5 Ga ago. Geochemical modeling of mantle melting systematics and trace element partitioning suggests that these observations can be explained by discontinuous changes in the mineralogy of mantle partial melting driven by a gradual decrease in mantle potential temperature, without appealing to any change in tectonic process. This interpretation is supported by the geochemical record of slab fluid input to continental basalts, which indicates no long-term change in the global proportion of arc versus non-arc basaltic magmatism at any time in the preserved rock record.
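
    A hedged sketch of the weighted bootstrap resampling idea described above: samples are drawn with replacement with probabilities that downweight densely sampled parts of the record, and the statistic of interest is averaged over resamples. The synthetic data and the simple inverse-density weights below are placeholders for the actual geochemical database and the paper's spatiotemporal weighting.

    ```python
    # Weighted bootstrap resampling sketch -- the data and weighting scheme are
    # placeholders, not the paper's database or weights.
    import numpy as np

    rng = np.random.default_rng(5)

    age = rng.uniform(0, 4000, size=5_000)            # sample ages, Ma (synthetic)
    mgo = 8 + 0.002 * age + rng.normal(0, 2, 5_000)   # a geochemical variable (synthetic)

    # Downweight samples from densely sampled ages so each time interval
    # contributes more evenly to the average.
    density, edges = np.histogram(age, bins=40)
    w = 1.0 / density[np.clip(np.digitize(age, edges) - 1, 0, 39)]
    w /= w.sum()

    def bootstrap_mean(n_boot=1000, n_draw=2000):
        means = []
        for _ in range(n_boot):
            idx = rng.choice(len(age), size=n_draw, replace=True, p=w)
            means.append(mgo[idx].mean())
        return np.mean(means), np.std(means)

    print("weighted-bootstrap MgO mean and uncertainty:", bootstrap_mean())
    ```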

  9. Improvement in dielectric and mechanical performance of CaCu3.1Ti4O12.1 by addition of Al2O3 nanoparticles

    PubMed Central

    2012-01-01

    The properties of CaCu3.1Ti4O12.1 [CC3.1TO] ceramics with the addition of Al2O3 nanoparticles, prepared via a solid-state reaction technique, were investigated. The nanoparticle additive was found to inhibit grain growth, with the average grain size decreasing from approximately 7.5 μm for unmodified CC3.1TO to approximately 2.0 μm for the Al2O3-modified samples, while the Knoop hardness was found to improve, with a maximum value of 9.8 GPa for the 1 vol.% Al2O3 sample. A very high dielectric constant > 60,000 with a low loss tangent (approximately 0.09) was observed for the 0.5 vol.% Al2O3 sample at 1 kHz and at room temperature. These data suggest that the nanocomposites have great potential for dielectric applications. PMID:22221316

  10. Approximation algorithms for the min-power symmetric connectivity problem

    NASA Astrophysics Data System (ADS)

    Plotnikov, Roman; Erzin, Adil; Mladenovic, Nenad

    2016-10-01

    We consider the NP-hard problem of synthesizing an optimal spanning communication subgraph in a given arbitrary simple edge-weighted graph. This problem arises in wireless networks when minimizing the total transmission power consumption. We propose several new heuristics based on the variable neighborhood search metaheuristic for the approximate solution of the problem. We performed a numerical experiment in which all proposed algorithms were executed on randomly generated test instances. For these instances, on average, our algorithms outperform the previously known heuristics.

  11. Optimizing Integrated Terminal Airspace Operations Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Bosson, Christabelle; Xue, Min; Zelinski, Shannon

    2014-01-01

    In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.
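
    The statistical bounds mentioned above are commonly built roughly as follows (a generic SAA sketch, not the paper's scheduler): for a minimization problem, the average of the optimal values of several independent SAA replications estimates a lower bound on the true optimum, while evaluating one candidate solution on a large fresh sample estimates an upper bound; their difference estimates the optimality gap. The toy objective and sample sizes are assumptions.

    ```python
    # Generic SAA optimality-gap estimation sketch (minimization), illustrative only.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(6)

    def F(x, xi):
        """Toy stochastic cost; the true problem is min_x E[F(x, xi)]."""
        return (x - xi) ** 2 + 0.1 * x

    def solve_saa(n):
        xi = rng.normal(1.0, 1.0, size=n)
        res = minimize_scalar(lambda x: F(x, xi).mean(), bounds=(-10, 10), method="bounded")
        return res.x, res.fun

    # Lower-bound estimate: average optimal value of M independent SAA replications.
    M, n_small = 20, 100
    replications = [solve_saa(n_small) for _ in range(M)]
    lower = np.mean([v for _, v in replications])

    # Upper-bound estimate: a candidate solution evaluated on a large fresh sample.
    x_cand = replications[0][0]
    xi_big = rng.normal(1.0, 1.0, size=100_000)
    upper = F(x_cand, xi_big).mean()

    print(f"estimated optimality gap ~ {upper - lower:.4f}")
    ```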

  12. Landscapes to riverscapes: bridging the gap between research and conservation of stream fishes

    USGS Publications Warehouse

    Fausch, Kurt D.; Torgersen, Christian E.; Baxter, Colden V.; Li, Hiram W.

    2002-01-01

    Woodcock (Philohela minor), earthworms, and soil samples were collected from January-March 1965 from fields in southeastern Louisiana approximately 3 years after discontinuance of aerial treatments with heptachlor in this region. Heptachlor epoxide residues in woodcock averaged 0.42 ppm (dry weight), conspicuously lower than in 1961 and 1962. Residues of DDE in woodcock averaged 3.62 ppm, higher than in birds taken in the same area in 1961-62. Earthworms and soils contained traces of several organochlorine pesticides.

  13. New approximate orientation averaging of the water molecule interacting with the thermal neutron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markovic, M.I.; Minic, D.M.; Rakic, A.D.

    1992-02-01

    This paper reports that, to describe the thermal neutron collision with water molecules exactly, orientation averaging is performed by an exact method (EOA{sub k}) and four approximate methods (two well known and two less known). Expressions for the microscopic scattering kernel are developed. The two well-known approximate orientation averaging methods are Krieger-Nelkin (K-N) and Koppel-Young (K-Y). The results obtained by one of the two proposed approximate orientation averaging methods agree best with the corresponding results obtained by EOA{sub k}. The largest discrepancies between the EOA{sub k} results and the results of the approximate methods are obtained using the well-known K-N approximate orientation averaging method.

  14. Social Status, Perceived Social Reputations, and Perceived Dyadic Relationships in Early Adolescence

    ERIC Educational Resources Information Center

    Badaly, Daryaneh; Schwartz, David; Gorman, Andrea Hopmeyer

    2012-01-01

    This investigation examined social acceptance and popularity as correlates of perceived social reputations and perceived dyadic relationships in a cross-sectional sample of 418 6th and 7th grade students (approximate average age of 12 years). We assessed early adolescents' social status using peer nominations and measured their perceptions of…

  15. Analysis of iodide and iodate in Lake Mead, Nevada using a headspace derivatization gas chromatography-mass spectrometry.

    PubMed

    Dorman, James W; Steinberg, Spencer M

    2010-02-01

    We report here a derivatization headspace method for the analysis of inorganic iodine in water. Samples from Lake Mead, the Las Vegas Wash, and from Las Vegas tap water were examined. Lake Mead and the Las Vegas Wash contained a mixture of both iodide and iodate. The average concentration of total inorganic iodine (TII) for Lake Mead was approximately 90 nM with an iodide-to-iodate ratio of approximately 1. The TII concentration (approximately 160 nM) and the ratio of iodide to iodate were higher for the Las Vegas Wash (approximately 2). The TII concentration for tap water was close to that of Lake Mead (approximately 90 nM); however, tap water contained no detectable iodide as a result of ozonation and chlorine treatment which converts all of the iodide to iodate.

  16. HST images of very compact blue galaxies at z approximately 0.2

    NASA Technical Reports Server (NTRS)

    Koo, David C.; Bershady, Matthew A.; Wirth, Gregory D.; Stanford, S. Adam; Majewski, Steven R.

    1994-01-01

    We present the results of Hubble Space Telescope (HST) Wide-Field Camera (WFC) imaging of seven very compact, very blue galaxies with B less than or equal to 21 and redshifts z approximately 0.1 to 0.35. Based on deconvolved images, we estimate typical half-light diameters of approximately 0.65 arcsec, corresponding to approximately 1.4 h^-1 kpc at redshifts z approximately 0.2. The average rest-frame surface brightness within this diameter is mu_V approximately 20.5 mag arcsec^-2, approximately 1 mag brighter than that of typical late-type blue galaxies. Ground-based spectra show strong, narrow emission lines indicating high ionization; their very blue colors suggest recent bursts of star formation; their typical luminosities are approximately 4 times fainter than that of field galaxies. These characteristics suggest H II galaxies as likely local counterparts of our sample, though our most luminous targets appear to be unusually compact for their luminosities.

  17. Conservative Tests under Satisficing Models of Publication Bias.

    PubMed

    McCrary, Justin; Christensen, Garret; Fanelli, Daniele

    2016-01-01

    Publication bias leads consumers of research to observe a selected sample of statistical estimates calculated by producers of research. We calculate critical values for statistical significance that could help to adjust after the fact for the distortions created by this selection effect, assuming that the only source of publication bias is file drawer bias. These adjusted critical values are easy to calculate and differ from unadjusted critical values by approximately 50%: rather than rejecting a null hypothesis when the t-ratio exceeds 2, the analysis suggests rejecting a null hypothesis when the t-ratio exceeds 3. Samples of published social science research indicate that on average, across research fields, approximately 30% of published t-statistics fall between the standard and adjusted cutoffs.
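
    One way to reproduce the flavor of the 2-versus-3 adjustment described above, assuming a normal approximation and a pure file-drawer model in which only results with |t| > 1.96 are published, is to require that the tail probability conditional on publication equal 5%:

    ```python
    # Back-of-the-envelope adjusted critical value, assuming a normal approximation
    # and pure file-drawer selection at |t| > 1.96 -- not the paper's full derivation.
    from scipy.stats import norm

    alpha, publish_cut = 0.05, 1.96
    # Require P(|t| > c | |t| > publish_cut) = alpha, i.e.
    # P(|t| > c) = alpha * P(|t| > publish_cut) under the null.
    p_published = 2 * norm.sf(publish_cut)       # ~0.05 under the null
    c = norm.isf(alpha * p_published / 2)        # two-sided adjusted cutoff
    print(round(c, 2))                           # ~3.02: reject only when t exceeds about 3
    ```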

  18. Conservative Tests under Satisficing Models of Publication Bias

    PubMed Central

    McCrary, Justin; Christensen, Garret; Fanelli, Daniele

    2016-01-01

    Publication bias leads consumers of research to observe a selected sample of statistical estimates calculated by producers of research. We calculate critical values for statistical significance that could help to adjust after the fact for the distortions created by this selection effect, assuming that the only source of publication bias is file drawer bias. These adjusted critical values are easy to calculate and differ from unadjusted critical values by approximately 50%—rather than rejecting a null hypothesis when the t-ratio exceeds 2, the analysis suggests rejecting a null hypothesis when the t-ratio exceeds 3. Samples of published social science research indicate that on average, across research fields, approximately 30% of published t-statistics fall between the standard and adjusted cutoffs. PMID:26901834

  19. Investigating the effect of sputtering conditions on the physical properties of aluminum thin film and the resulting alumina template

    NASA Astrophysics Data System (ADS)

    Taheriniya, Shabnam; Parhizgar, Sara Sadat; Sari, Amir Hossein

    2018-06-01

    To study the alumina template pore size distribution as a function of the Al thin film grain size distribution, porous alumina templates were prepared by anodizing sputtered aluminum thin films. To control the grain size, the aluminum samples were sputtered at rates of 0.5, 1 and 2 Å/s with the substrate temperature at 25, 75 or 125 °C. All samples were anodized for 120 s in 1 M sulfuric acid solution kept at 1 °C while a 15 V potential was being applied. The standard deviation of the size distribution for samples deposited at room temperature but with different rates is roughly 2 nm in both thin film and porous template form, but it rises to approximately 4 nm with increasing substrate temperature. Samples with average grain sizes of 13, 14, 18.5 and 21 nm produce alumina templates with average pore sizes of 8.5, 10, 15 and 16 nm, respectively, which shows that the average grain size limits the average pore diameter in the resulting template. Lateral correlation length and grain boundary effects are other factors that affect the pore formation process and pore size distribution by limiting the initial current density.

  20. Fabrication and properties of (TbxY1-x)3Al5O12 transparent ceramics by hot isostatic pressing

    NASA Astrophysics Data System (ADS)

    Duan, Pingping; Liu, Peng; Xu, Xiaodong; Wang, Wei; Wan, Zhong; Zhang, Shouyi; Wang, Yinzhen; Zhang, Jian

    2017-10-01

    (TbxY1-x)3Al5O12 (x = 0.2, 0.5, 0.8) transparent ceramics were synthesized by a solid-state reaction and HIP. All the samples were pre-sintered at 1650 °C for 4 h in a muffle furnace and later HIPed at 1650 °C for 3 h. The (Tb0.2Y0.8)3Al5O12 transparent ceramics exhibit the best microstructure, with an average grain size of approximately 5.22 μm and an optical transmittance of over 65% in the region of 500-1600 nm. Additionally, the average grain sizes of all the samples are less than 10 μm. XRD patterns indicate that only the (Tb0.8Y0.2)3Al5O12 samples contain minor secondary phases.

  1. Characterizing reduced sulfur compounds emissions from a swine concentrated animal feeding operation

    NASA Astrophysics Data System (ADS)

    Rumsey, Ian C.; Aneja, Viney P.; Lonneman, William A.

    2014-09-01

    Reduced sulfur compound (RSC) emissions from concentrated animal feeding operations (CAFOs) have become a potential environmental and human health concern, as a result of changes in livestock production methods. RSC emissions were determined from a swine CAFO in North Carolina. RSC measurements were made over a period of ≈1 week from both the barn and lagoon during each of the four seasonal periods from June 2007 to April 2008. During sampling, meteorological and other environmental parameters were measured continuously. Seasonal hydrogen sulfide (H2S) barn concentrations ranged from 72 to 631 ppb. Seasonal dimethyl sulfide (DMS; CH3SCH3) and dimethyl disulfide (DMDS; CH3S2CH3) concentrations were 2-3 orders of magnitude lower, ranging from 0.18 to 0.89 ppb and 0.47 to 1.02 ppb, respectively. The overall average barn emission rate was 3.3 g day-1 AU-1 (AU (animal unit) = 500 kg of live animal weight) for H2S, which was approximately two orders of magnitude higher than the DMS and DMDS overall average emission rates, determined as 0.017 g day-1 AU-1 and 0.036 g day-1 AU-1, respectively. The overall average lagoon flux was 1.33 μg m-2 min-1 for H2S, which was approximately an order of magnitude higher than the overall average DMS (0.12 μg m-2 min-1) and DMDS (0.09 μg m-2 min-1) lagoon fluxes. The overall average lagoon emission for H2S (0.038 g day-1 AU-1) was also approximately an order of magnitude higher than the overall average DMS (0.0034 g day-1 AU-1) and DMDS (0.0028 g day-1 AU-1) emissions. H2S, DMS and DMDS have offensive odors and low odor thresholds. Over all four sampling seasons, 77% of 15 min averaged H2S barn concentrations were an order of magnitude above the average odor threshold. During these sampling periods, however, DMS and DMDS concentrations did not exceed their odor thresholds. The overall average barn and lagoon emissions from this study were used to help estimate barn, lagoon and total (barn + lagoon) RSC emissions from swine CAFOs in North Carolina. Total (barn + lagoon) H2S emissions from swine CAFOs in North Carolina were estimated to be 1.22 x 10^6 kg yr-1. The barns had significantly higher H2S emissions than the lagoons, contributing ≈98% of total North Carolina H2S swine CAFO emissions. Total (barn + lagoon) emissions for DMS and DMDS were 1-2 orders of magnitude lower, with barns contributing ≈86% and ≈93% of total emissions, respectively. H2S swine CAFO emissions were estimated to contribute ≈18% of North Carolina H2S emissions.

  2. Radioactivity in returned lunar materials

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The H-3, Ar-37, and Ar-39 radioactivities were measured at several depths in the large documented lunar rocks 14321 and 15555. The comparison of the Ar-37 activities from similar locations in rocks 12002, 14321, and 15555 gives direct measures of the amount of Ar-37 produced by the 2 November 1969 and 24 January 1971 solar flares. The tritium contents in the documented rocks decreased with increasing depths. The solar flare intensity averaged over 30 years obtained from the tritium depth dependence was approximately the same as the flare intensity averaged over 1000 years obtained from the Ar-37 measurements. Radioactivities in two Apollo 15 soil samples, H-3 in several Surveyor 3 samples, and tritium and radon weepage were also measured.

  3. Temporal patterns of Deepwater Horizon impacts on the benthic infauna of the northern Gulf of Mexico continental slope

    PubMed Central

    Baguley, Jeffrey G.; Conrad-Forrest, Nathan; Cooksey, Cynthia; Hyland, Jeffrey L.; Lewis, Christopher; Montagna, Paul A.; Ricker, Robert W.; Rohal, Melissa; Washburn, Travis

    2017-01-01

    The Deepwater Horizon oil spill occurred in spring and summer 2010 in the northern Gulf of Mexico. Research cruises in 2010 (approximately 2–3 months after the well had been capped), 2011, and 2014 were conducted to determine the initial and subsequent effects of the oil spill on deep-sea soft-bottom infauna. A total of 34 stations were sampled from two zones: 20 stations in the “impact” zone versus 14 stations in the “non-impact” zone. Chemical contaminants were significantly different between the two zones. Polycyclic aromatic hydrocarbons averaged 218 ppb in the impact zone compared to 14 ppb in the non-impact zone. Total petroleum hydrocarbons averaged 1166 ppm in the impact zone compared to 102 ppm in the non-impact zone. While there was no difference between zones for meiofauna and macrofauna abundance, community diversity was significantly lower in the impact zone. Meiofauna taxa richness over the three sampling periods averaged 8 taxa/sample in the impact zone, compared to 10 taxa/sample in the non-impact zone; and macrofauna richness averaged 25 taxa/sample in the impact zone compared to 30 taxa/sample in the non-impact zone. Oil originating from the Deepwater Horizon oil spill reached the seafloor and had a persistent negative impact on diversity of soft-bottom, deep-sea benthic communities. While there are signs of recovery for some benthic community variables, full recovery has not yet occurred four years after the spill. PMID:28640913

  4. The average X-ray/gamma-ray spectra of Seyfert galaxies from Ginga and OSSE and the origin of the cosmic X-ray background

    NASA Technical Reports Server (NTRS)

    Zdziarski, Andrzej A.; Johnson, W. Neil; Done, Chris; Smith, David; Mcnaron-Brown, Kellie

    1995-01-01

    We have obtained the first average 2-500 keV spectra of Seyfert galaxies, using the data from Ginga and the Compton Gamma-Ray Observatory's (CGRO) Oriented Scintillation Spectrometer Experiment (OSSE). Our sample contains three classes of objects with markedly different spectra: radio-quiet Seyfert 1's and 2's, and radio-loud Seyfert 1's. The average radio-quiet Seyfert 1 spectrum is well fitted by a power law continuum with energy spectral index alpha approximately equal to 0.9, a Compton reflection component corresponding to an approximately 2 pi covering solid angle, and ionized absorption. There is a high-energy cutoff in the incident power law continuum: the e-folding energy is E_c approximately 0.6 (+0.8/-0.3) MeV. The simplest model that describes this spectrum is Comptonization in a relativistic, optically thin thermal corona above the surface of an accretion disk. Radio-quiet Seyfert 2's show strong neutral absorption, and there is an indication that their X-ray power laws are intrinsically harder. Finally, the radio-loud Seyfert spectrum has alpha approximately equal to 0.7, moderate neutral absorption, E_c = 0.4 (+0.7/-0.2) MeV, and little or no Compton reflection. This is incompatible with the radio-quiet Seyfert 1 spectrum, probably indicating that the X-rays are beamed away from the accretion disk in these objects. The average spectra of Seyferts integrated over redshift with a power-law evolution can explain the hard X-ray spectrum of the cosmic background.

  5. A Relationship Between Visible and Near-IR Global Spectral Reflectance based on DSCOVR/EPIC

    NASA Astrophysics Data System (ADS)

    Wen, G.; Marshak, A.; Song, W.; Knyazikhin, Y.

    2017-12-01

    The launch of the Deep Space Climate Observatory (DSCOVR) to the Earth's first Lagrange point (L1) provides a new perspective on the Earth. The Earth Polychromatic Imaging Camera (EPIC) on DSCOVR measures the backscattered radiation of the entire sunlit side of the Earth at 10 narrow-band wavelengths ranging from the ultraviolet to the visible and near-infrared. We analyzed EPIC globally averaged reflectance data and found that the global average visible reflectance has a unique non-linear relationship with the near-infrared (NIR) reflectance. This non-linear relationship has not been observed by other satellites because of the limited spatial and temporal coverage of both low Earth orbit (LEO) and geostationary platforms. The non-linear relationship is associated with changes in the coverage of ocean, cloud, land, and vegetation as the Earth rotates. We used Terra and Aqua MODIS daily global radiance data to simulate EPIC observations. Since MODIS samples the Earth in a limited swath (2330 km cross-track) at a specific local time (10:30 am for Terra, 1:30 pm for Aqua) with approximately 15 orbits per day, the global average reflectance at a given time may be approximated by averaging the reflectance in the MODIS nearest-time swaths in the sunlit hemisphere. We found that the MODIS-simulated global visible and NIR spectral reflectances capture the major features of the EPIC-observed non-linear relationship, with some errors. The difference between the two is mainly due to the sampling limitation of polar-orbiting satellites. This suggests that EPIC observations can be used to reconstruct the MODIS global average reflectance time series for studying Earth system change over the past decade.

  6. Stochastic and deterministic models for agricultural production networks.

    PubMed

    Bai, P; Banks, H T; Dediu, S; Govan, A Y; Last, M; Lloyd, A L; Nguyen, H K; Olufsen, M S; Rempala, G; Slenning, B D

    2007-07-01

    An approach to modeling the impact of disturbances in an agricultural production network is presented. A stochastic model and its approximate deterministic model for averages over sample paths of the stochastic system are developed. Simulations, sensitivity and generalized sensitivity analyses are given. Finally, it is shown how diseases may be introduced into the network and corresponding simulations are discussed.
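
    As a toy illustration of pairing a stochastic network model with its deterministic average (the rates, network structure, and time horizon below are invented, not the authors' model), Gillespie sample paths of a two-node production chain can be compared against the corresponding mean-field ODE:

      import numpy as np

      # Hypothetical two-node production chain: arrivals -> A -> B -> shipped.
      lam, k1, k2 = 5.0, 0.5, 0.3   # arrival and transfer rates (assumed values)
      T = 50.0                      # time horizon

      def gillespie(seed):
          """One sample path of the stochastic (continuous-time Markov chain) model."""
          rng = np.random.default_rng(seed)
          t, A, B = 0.0, 0, 0
          while True:
              rates = np.array([lam, k1 * A, k2 * B])
              total = rates.sum()
              dt = rng.exponential(1.0 / total)
              if t + dt > T:
                  return A, B                      # state at time T
              t += dt
              event = rng.choice(3, p=rates / total)
              if event == 0:
                  A += 1                           # new item arrives at node A
              elif event == 1:
                  A, B = A - 1, B + 1              # item transferred from A to B
              else:
                  B -= 1                           # item shipped from node B

      # Average over many sample paths of the stochastic model.
      paths = np.array([gillespie(s) for s in range(500)])
      print("stochastic mean (A, B):", paths.mean(axis=0))

      # Deterministic approximation: dA/dt = lam - k1*A, dB/dt = k1*A - k2*B (explicit Euler).
      A, B, dt = 0.0, 0.0, 0.01
      for _ in range(int(T / dt)):
          A, B = A + dt * (lam - k1 * A), B + dt * (k1 * A - k2 * B)
      print("deterministic (A, B):  ", (A, B))

    For long runs the two estimates approach the same steady state, while the individual sample paths retain the fluctuations that a disturbance analysis would act on.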

  7. On the relation between correlation dimension, approximate entropy and sample entropy parameters, and a fast algorithm for their calculation

    NASA Astrophysics Data System (ADS)

    Zurek, Sebastian; Guzik, Przemyslaw; Pawlak, Sebastian; Kosmider, Marcin; Piskorski, Jaroslaw

    2012-12-01

    We explore the relation between correlation dimension, approximate entropy and sample entropy parameters, which are commonly used in nonlinear systems analysis. Using theoretical considerations we identify the points which are shared by all these complexity algorithms and show explicitly that the above parameters are intimately connected and mutually interdependent. A new geometrical interpretation of sample entropy and correlation dimension is provided and the consequences for the interpretation of sample entropy, its relative consistency and some of the algorithms for parameter selection for this quantity are discussed. To get an exact algorithmic relation between the three parameters we construct a very fast algorithm for their simultaneous calculation, which uses the full time series as the source of templates, rather than the usual 10%. This algorithm can be used in medical applications of complexity theory, as it can calculate all three parameters for a realistic recording of 10^4 points within minutes on an average notebook computer.
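
    For orientation, sample entropy can be computed directly from correlation sums (template-match fractions), which is the connection explored in the paper; the sketch below uses an illustrative embedding dimension m = 2 and tolerance r = 0.2 standard deviations, and its normalization follows the plain correlation-sum convention rather than the paper's fast algorithm.

      import numpy as np

      def correlation_sum(x, m, r):
          """Fraction of distinct template pairs of length m within Chebyshev distance r."""
          n = len(x) - m + 1
          templates = np.array([x[i:i + m] for i in range(n)])
          d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
          matches = (d <= r).sum() - n              # drop the n self-matches on the diagonal
          return matches / (n * (n - 1))

      def sample_entropy(x, m=2, r_factor=0.2):
          """SampEn(m, r) ~= -ln( C_{m+1}(r) / C_m(r) )."""
          r = r_factor * np.std(x)
          return -np.log(correlation_sum(x, m + 1, r) / correlation_sum(x, m, r))

      rng = np.random.default_rng(0)
      noise = rng.standard_normal(1000)               # irregular signal
      sine = np.sin(0.1 * np.arange(1000))            # regular signal
      print("SampEn(noise):", sample_entropy(noise))  # higher: less predictable
      print("SampEn(sine): ", sample_entropy(sine))   # lower: more predictable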

  8. Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).

    PubMed

    Thatcher, R W; North, D; Biver, C

    2005-01-01

    This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal as abnormal) at the P < .025 level of probability (two-tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and the Key Institute's LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics was compared using a "leave-one-out" cross-validation method in which individual normal subjects were withdrawn and then statistically classified as being either normal or abnormal based on the remaining subjects. Log10 transforms approximated a Gaussian distribution with 95% to 99% accuracy. Parametric Z score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, and the range over the 2,394 gray matter pixels was 27.66% to 0.11%. At P < .01, the parametric Z score cross-validation false-positive rate was 0.26% and ranged from 6.65% to 0%. The non-parametric Key Institute's t-max statistic at P < .05 had an average misclassification error rate of 7.64% and ranged from 43.37% to 0.04% false positives. The non-parametric t-max at P < .01 had an average misclassification rate of 6.67% and ranged from 41.34% to 0% false positives over the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, adequate approximation to a Gaussian distribution and high cross-validation accuracy can be achieved with the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false-positive rates than the non-parametric tests.
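
    A schematic version of the leave-one-out parametric classification described above, using synthetic Gaussian data in place of LORETA current densities and a plain |Z| threshold rather than the Key Institute software, is:

      import numpy as np

      rng = np.random.default_rng(1)
      # Synthetic stand-in for log10-transformed pixel values: 43 subjects x 100 "pixels".
      data = rng.normal(loc=0.0, scale=1.0, size=(43, 100))

      z_crit = 1.96                                    # two-tailed P < .05
      false_positive_rates = []
      for i in range(data.shape[0]):
          rest = np.delete(data, i, axis=0)            # leave one subject out
          mu, sd = rest.mean(axis=0), rest.std(axis=0, ddof=1)
          z = (data[i] - mu) / sd                      # Z scores of the held-out subject
          false_positive_rates.append(np.mean(np.abs(z) > z_crit))

      # With truly Gaussian data the per-pixel false-positive rate should be near 5%.
      print("mean false-positive rate:", np.mean(false_positive_rates))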

  9. Relationships between Perron-Frobenius eigenvalue and measurements of loops in networks

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kou, Yingxin; Li, Zhanwu; Xu, An; Chang, Yizhe

    2018-07-01

    The Perron-Frobenius eigenvalue (PFE) is widely used as a measurement of the number of loops in networks, but the exact relationship between the PFE and the number of loops has not yet been established: is it strictly monotonically increasing? And how is the PFE related to other loop measurements, such as the average loop degree of nodes and the distribution of loop ranks? We investigate these questions using samples of ER random networks, NW small-world networks and BA scale-free networks. The results confirm that both the number of loops in a network and the average loop degree of its nodes increase with increasing PFE as a general trend, but neither is strictly monotonically increasing, so the PFE can be used as a rough estimate of the number of loops in a network and of the average loop degree of nodes. Furthermore, we find that the loop ranks of a majority of the samples obey a Weibull distribution, whose scale parameter A and shape parameter B have approximate power-law relationships with the PFE of the samples.
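
    The PFE itself is the largest-magnitude eigenvalue of the non-negative adjacency matrix, so a rough version of the measurement can be computed as below; the directed ER random graph is only a stand-in for the network samples used in the paper.

      import numpy as np

      def perron_frobenius_eigenvalue(adjacency):
          """Largest-magnitude eigenvalue of a non-negative adjacency matrix."""
          return float(np.max(np.abs(np.linalg.eigvals(adjacency))))

      # Stand-in network sample: a directed ER random graph G(n, p).
      rng = np.random.default_rng(42)
      n, p = 200, 0.03
      A = (rng.random((n, n)) < p).astype(float)
      np.fill_diagonal(A, 0.0)

      print("PFE:", perron_frobenius_eigenvalue(A))
      # trace(A^k)/k counts closed walks of length k, a crude proxy for short loops;
      # the PFE governs how these counts grow with k.
      print("closed 3-walks:", np.trace(np.linalg.matrix_power(A, 3)) / 3.0)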

  10. Assessment of Density Variations of Marine Sediments with Ocean and Sediment Depths

    PubMed Central

    Tenzer, R.; Gladkikh, V.

    2014-01-01

    We analyze the density distribution of marine sediments using density samples taken from 716 drill sites of the Deep Sea Drilling Project (DSDP). The samples taken within the upper stratigraphic layer exhibit a prevailing trend of the decreasing density with the increasing ocean depth (at a rate of −0.05 g/cm3 per 1 km). Our results confirm findings of published studies that the density nonlinearly increases with the increasing sediment depth due to compaction. We further establish a 3D density model of marine sediments and propose theoretical models of the ocean-sediment and sediment-bedrock density contrasts. The sediment density-depth equation approximates density samples with an average uncertainty of about 10% and better represents the density distribution especially at deeper sections of basin sediments than a uniform density model. The analysis of DSDP density data also reveals that the average density of marine sediments is 1.70 g/cm3 and the average density of the ocean bedrock is 2.9 g/cm3. PMID:24744686

  11. Estimation of the vortex length scale and intensity from two-dimensional samples

    NASA Technical Reports Server (NTRS)

    Reuss, D. L.; Cheng, W. P.

    1992-01-01

    A method is proposed for estimating flow features that influence flame wrinkling in reciprocating internal combustion engines, where traditional statistical measures of turbulence are suspect. Candidate methods were tested in a computed channel flow, where traditional turbulence measures are valid and performance can be rationally evaluated. Two concepts are tested. First, spatial filtering is applied to the two-dimensional velocity distribution and found to reveal structures corresponding to the vorticity field. Decreasing the spatial-frequency cutoff of the filter locally changes the character and size of the flow structures that are revealed by the filter. Second, vortex length scale and intensity are estimated by computing the ensemble-average velocity distribution conditionally sampled on the vorticity peaks. The resulting conditionally sampled 'average vortex' has a peak velocity less than half the rms velocity and a size approximately equal to the two-point-correlation integral length scale.

  12. Optimized nested Markov chain Monte Carlo sampling: theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D

    2009-01-01

    Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
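
    A stripped-down one-dimensional sketch of the nested scheme (harmonic reference potential and quartic "full" potential, both invented for illustration; the actual method operates in the isothermal-isobaric ensemble with molecular potentials) is:

      import math, random

      beta = 1.0
      def u_full(x): return x ** 4          # "full" (expensive) potential, toy stand-in
      def u_ref(x):  return 2.0 * x ** 2    # cheap reference potential

      def ref_subchain(x, n_sub=20, step=0.5):
          """Ordinary Metropolis sub-chain that samples the reference potential."""
          for _ in range(n_sub):
              y = x + random.uniform(-step, step)
              if random.random() < math.exp(-beta * (u_ref(y) - u_ref(x))):
                  x = y
          return x

      random.seed(0)
      x, samples, accepted = 0.0, [], 0
      for sweep in range(5000):
          y = ref_subchain(x)
          # Composite (nested) move: accepted on the full-minus-reference energy
          # difference at the endpoints only, so u_full is evaluated rarely.
          delta = (u_full(y) - u_ref(y)) - (u_full(x) - u_ref(x))
          if delta <= 0 or random.random() < math.exp(-beta * delta):
              x, accepted = y, accepted + 1
          samples.append(x)

      print("composite acceptance:", accepted / len(samples))
      print("<x^2> under the full potential:", sum(s * s for s in samples) / len(samples))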

  13. Floating plastic debris in the Central and Western Mediterranean Sea.

    PubMed

    Ruiz-Orejón, Luis F; Sardá, Rafael; Ramis-Pujol, Juan

    2016-09-01

    In two sea voyages throughout the Mediterranean (2011 and 2013) that repeated the historical travels of Archduke Ludwig Salvator of Austria (1847-1915), 71 samples of floating plastic debris were obtained with a Manta trawl. Floating plastic was observed in all the sampled sites, with an average weight concentration of 579.3 g dw km^-2 (maximum value of 9298.2 g dw km^-2) and an average particle concentration of 147,500 items km^-2 (the maximum concentration was 1,164,403 items km^-2). The plastic size distribution showed microplastics (<5 mm) in all the samples. The most abundant particles had a surface area of approximately 1 mm^2 (the mesh size was 333 μm). The general estimate obtained was a total value of 1455 tons dw of floating plastic in the entire Mediterranean region, with various potential spatial accumulation areas.

  14. Anopheles gambiae complex (Diptera:Culicidae) near Bissau City, Guinea Bissau, West Africa.

    PubMed

    Fonseca, L F; Di Deco, M A; Carrara, G C; Dabo, I; Do Rosario, V; Petrarca, V

    1996-11-01

    Cytogenetic studies on mosquitoes collected inside bednets near Bissau City confirmed the presence of Anopheles melas Theobald and An. gambiae Giles sensu stricto, the latter species prevailing in rainy season samples (approximately 80% on average) and the former in dry season samples (> 90%). Seasonal and ecogeographical variations in the frequency of species and chromosomal inversions were analyzed. The analysis of An. gambiae sensu stricto confirmed the existence of the Bissau chromosomal form. The deficiency of heterokaryotypes in most samples indicated the possible coexistence of another chromosomal form not completely panmictic (i.e., randomly mating) with the Bissau form.

  15. Chance-constrained economic dispatch with renewable energy and storage

    DOE PAGES

    Cheng, Jianqiang; Chen, Richard Li-Yang; Najm, Habib N.; ...

    2018-04-19

    Increased penetration of renewables, along with the uncertainties associated with them, has transformed how power systems are operated. High levels of uncertainty mean that it is no longer possible to guarantee operational feasibility with certainty; instead, constraints are required to be satisfied with high probability. We present a chance-constrained economic dispatch model that efficiently integrates energy storage and high renewable penetration to satisfy renewable portfolio requirements. Specifically, it is required that wind energy contributes at least a prespecified ratio of the total demand and that the scheduled wind energy is dispatchable with high probability. We develop an approximated partial sample average approximation (PSAA) framework to enable efficient solution of large-scale chance-constrained economic dispatch problems. Computational experiments on the IEEE-24 bus system show that the proposed PSAA approach is more accurate, closer to the prescribed tolerance, and about 100 times faster than sample average approximation. The improved efficiency of our PSAA approach enables solution of the WECC-240 system in minutes.
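
    The sample average approximation step that PSAA builds on can be illustrated with a toy chance constraint: schedule the largest wind output w such that P(available wind >= w) >= 1 - epsilon, with the true distribution replaced by N scenarios. The wind distribution, epsilon, and N below are arbitrary and do not correspond to the IEEE-24 or WECC-240 setups.

      import numpy as np

      rng = np.random.default_rng(7)
      eps = 0.05                                          # allowed violation probability
      wind = rng.weibull(2.0, size=1000) * 50.0           # assumed wind scenarios (MW)

      # SAA replaces P(wind >= w) >= 1 - eps with the empirical version
      # (1/N) * sum_i 1[wind_i >= w] >= 1 - eps, so the largest feasible
      # schedule is simply the empirical eps-quantile of the scenarios.
      w = np.quantile(wind, eps)
      print(f"scheduled wind: {w:.1f} MW, "
            f"dispatchable in {np.mean(wind >= w):.1%} of scenarios")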

  16. Chance-constrained economic dispatch with renewable energy and storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Jianqiang; Chen, Richard Li-Yang; Najm, Habib N.

    Increased penetration of renewables, along with the uncertainties associated with them, has transformed how power systems are operated. High levels of uncertainty mean that it is no longer possible to guarantee operational feasibility with certainty; instead, constraints are required to be satisfied with high probability. We present a chance-constrained economic dispatch model that efficiently integrates energy storage and high renewable penetration to satisfy renewable portfolio requirements. Specifically, it is required that wind energy contributes at least a prespecified ratio of the total demand and that the scheduled wind energy is dispatchable with high probability. We develop an approximated partial sample average approximation (PSAA) framework to enable efficient solution of large-scale chance-constrained economic dispatch problems. Computational experiments on the IEEE-24 bus system show that the proposed PSAA approach is more accurate, closer to the prescribed tolerance, and about 100 times faster than sample average approximation. The improved efficiency of our PSAA approach enables solution of the WECC-240 system in minutes.

  17. Biomass fuel use and the exposure of children to particulate air pollution in southern Nepal

    PubMed Central

    Devakumar, D.; Semple, S.; Osrin, D.; Yadav, S.K.; Kurmi, O.P.; Saville, N.M.; Shrestha, B.; Manandhar, D.S.; Costello, A.; Ayres, J.G.

    2014-01-01

    The exposure of children to air pollution in low-resource settings is believed to be high because of the common use of biomass fuels for cooking. We used microenvironment sampling to estimate the respirable fraction of air pollution (particles with median diameter less than 4 μm) to which 7–9-year-old children in southern Nepal were exposed. Sampling was conducted for a total of 2649 h in 55 households, 8 schools and 8 outdoor locations of rural Dhanusha. We conducted gravimetric and photometric sampling in a subsample of the children in our study in the locations in which they usually resided (bedroom/living room, kitchen, veranda, in school and outdoors), repeated three times over one year. Using time-activity information, a 24-hour time-weighted average was modeled for all the children in the study. Approximately two-thirds of homes used biomass fuels, with the remainder mostly using gas. The exposure of children to air pollution was very high: the 24-hour time-weighted average over the whole year was 168 μg/m3. The non-kitchen-related samples tended to show approximately double the concentration in winter compared with spring/autumn, and four times that of the monsoon season. There was no difference between the exposure of boys and girls. Air pollution in rural households was much higher than the World Health Organization recommendations and the National Ambient Air Quality Standards for Nepal for particulate exposure. PMID:24533994
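
    The 24-hour time-weighted average used above is a time-share-weighted mean of microenvironment concentrations; a sketch with invented concentrations and an invented time-activity diary (not the study's measurements) is:

      # Hypothetical respirable-particle concentrations by microenvironment (ug/m3).
      concentration = {"kitchen": 650, "living_room": 180, "veranda": 90,
                       "school": 70, "outdoors": 60}

      # Hypothetical time-activity diary for one child (hours per day, summing to 24).
      hours = {"kitchen": 2, "living_room": 10, "veranda": 2,
               "school": 6, "outdoors": 4}

      assert sum(hours.values()) == 24
      twa = sum(concentration[m] * hours[m] for m in hours) / 24.0
      print(f"24-h time-weighted average: {twa:.0f} ug/m3")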

  18. Enhanced Sampling in the Well-Tempered Ensemble

    NASA Astrophysics Data System (ADS)

    Bonomi, M.; Parrinello, M.

    2010-05-01

    We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi , J. Comput. Chem. 30, 1615 (2009)JCCHDD0192-865110.1002/jcc.21305]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.

  19. Enhanced sampling in the well-tempered ensemble.

    PubMed

    Bonomi, M; Parrinello, M

    2010-05-14

    We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi, J. Comput. Chem. 30, 1615 (2009)]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.

  20. Screening of European coffee final products for occurrence of ochratoxin A (OTA).

    PubMed

    vd Stegen, G; Jörissen, U; Pittet, A; Saccon, M; Steiner, W; Vincenzi, M; Winkler, M; Zapp, J; Schlatter, C

    1997-04-01

    Samples (633) of final coffee products were drawn from the markets of different European countries relative to the market share of each product type and brand. These samples were analysed in a cooperative action by nine different laboratories. With low limits of detection (mean detection limit approximately 0.5 ng/g), no OTA was found in over half of the samples (334 negatives). In the remaining samples, occurrence of OTA at a rather low level was seen. Only four samples (all instants) exceeded a level of 10 ng/g, whereas for both instants, and roast and ground (R & G), over three-quarters of the samples were in the range from non-detectable to 1 ng/g. The overall mean for all R & G was 0.8 ng/g and for all instants 1.3 ng/g (for samples in which no OTA was detected, half of the detection limit was included in this calculation). In the brewing methods frequently used in Europe, the OTA is essentially fully extracted. Consumption of four cups of coffee per day (approximately 24 g R & G or approximately 8 g instant coffee) contributes on average 19 or 10 ng OTA/day, respectively. Four cups/day is above the per caput consumption level in most European countries. Compared with the Provisional Tolerable Weekly Intake (PTWI) recently set by the Joint FAO/WHO Expert Committee on Food Additives at 100 ng/kg bodyweight/week, consumption of 28 cups/week contributes up to 2% to the PTWI.
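
    The averaging convention quoted above (non-detects counted as half the detection limit) is straightforward to reproduce; the values below are invented and only the convention matches the text:

      detection_limit = 0.5                          # ng/g, the mean LOD quoted above
      results = [None, 1.2, None, 0.8, 3.4, None]    # None = OTA not detected (invented data)
      values = [v if v is not None else detection_limit / 2 for v in results]
      print("mean OTA:", round(sum(values) / len(values), 2), "ng/g")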

  1. Contents and leachability of heavy metals (Pb, Cu, Sb, Zn, As) in soil at the Pantex firing range, Amarillo, Texas.

    PubMed

    Basunia, S; Landsberger, S

    2001-10-01

    Pantex firing range soil samples were analyzed for Pb, Cu, Sb, Zn, and As. One hundred ninety-seven samples were collected from the firing range and vicinity area. There was a lack of knowledge about the distribution of Pb in the firing range, so a random sampling with proportional allocation was chosen. Concentration levels of Pb and Cu in the firing range were found to be in the range of 11-4675 and 13-359 mg/kg, respectively. Concentration levels of Sb were found to be in the range of 1-517 mg/kg. However, the Zn and As concentration levels were close to average soil background levels. The Sn concentration level was expected to be higher in the Pantex firing range soil samples. However, it was found to be below the neutron activation analysis (NAA) detection limit of 75 mg/kg. Enrichment factor analysis showed that Pb and Sb were highly enriched in the firing range with average magnitudes of 55 and 90, respectively. Cu was enriched approximately 6 times more than the usual soil concentration levels. Toxicity characteristic leaching procedure (TCLP) was carried out on size-fractionated homogeneous soil samples. The concentration levels of Pb in leachates were found to be approximately 12 times higher than the U.S. Environmental Protection Agency (EPA) regulatory concentration level of 5 mg/L. Sequential extraction (SE) was also performed to characterize Pb and other trace elements into five different fractions. The highest Pb fraction was found with organic matter in the soil.

  2. Amalgam Electrode-Based Electrochemical Detector for On-Site Direct Determination of Cadmium(II) and Lead(II) from Soils

    PubMed Central

    Nejdl, Lukas; Kynicky, Jindrich; Brtnicky, Martin; Vaculovicova, Marketa; Adam, Vojtech

    2017-01-01

    Toxic metal contamination of the environment is a global issue. In this paper, we present a low-cost and rapid method for producing amalgam electrodes used for the determination of Cd(II) and Pb(II) in environmental samples (soils and wastewaters) by on-site analysis using differential pulse voltammetry. Changes in the electrochemical signals were recorded with a miniaturized potentiostat (width: 80 mm, depth: 54 mm, height: 23 mm) and a portable computer. The limit of detection (LOD) was calculated for a working-electrode geometric surface area of 15 mm2, which can be varied as required for the analysis. The LODs were 80 ng·mL−1 for Cd(II) and 50 ng·mL−1 for Pb(II), with a relative standard deviation (RSD) ≤ 8% (n = 3). The area of interest (Dolni Rozinka, Czech Republic) was selected because of the presence of a uranium ore deposit and extreme anthropogenic activity. Environmental samples were taken directly on-site and immediately analysed. The duration of a single analysis was approximately two minutes. The average concentrations of Cd(II) and Pb(II) in this area were below the global average. The obtained values were verified (correlated) by standard electrochemical methods based on hanging drop electrodes and were in good agreement. The advantages of this method are its cost- and time-effectiveness (approximately two minutes per sample) and the possibility of direct analysis of turbid samples (soil leachates) in a 2 M HNO3 environment. This type of sample cannot be analyzed using classical analytical methods without pretreatment. PMID:28792458

  3. Drug product selection: the Florida experience.

    PubMed Central

    Vuturo, G J; Krischer, J P; McCormick, W C

    1980-01-01

    Drug product selection, the act of selecting and dispensing a lower cost generically equivalent product to that prescribed, is made possible in 46 states through recently enacted legislation. Florida's legislation is unique in that it requires pharmacists to product select under certain circumstances. This study reports on the results of a review of the Florida experience approximately one year after enactment of its drug product selection legislation. Nearly 132,000 prescriptions were sampled from 60 pharmacies during a four-month study period. This represents one per cent of all new prescriptions in the state and a three per cent sample of community pharmacies. Study results indicate that drug product selection on the average saves the consumer $1.92 per prescription. Further, under the provisions of this law the majority of cost savings (average reductions in acquisition costs between prescribed and dispensed products) are being passed along as savings to the consumer. During the four-month study period this amounted to a total prescription cost savings of nearly $425,000. Drug product selection occurs in approximately two per cent of all new prescriptions which compares favorably with results reported from other states but also suggests that additional savings can be realized under such legislation. PMID:7377418

  4. Average volume of alcohol consumption and all-cause mortality in African Americans: the NHEFS cohort.

    PubMed

    Sempos, Christopher T; Rehm, Jürgen; Wu, Tiejian; Crespo, Carlos J; Trevisan, Maurizio

    2003-01-01

    To analyze the relationship between average volume of alcohol consumption and all-cause mortality in African Americans. Prospective cohort study--the NHANES Epidemiologic Follow-Up Study (NHEFS)--with baseline data collected 1971 through 1975 as part of the first National Health and Nutrition Examination Survey (NHANES I) and follow-up through 1992. The analytic data set consisted of 2054 African American men (n = 768) and women (n = 1,286), 25 to 75 years of age, who were followed for approximately 19 years. Alcohol was measured with a quantity-frequency measure at baseline. All-cause mortality. No J-shaped curve was found in the relationship between average volume of alcohol consumption and mortality for male or female African Americans. Instead, no beneficial effect appeared and mortality increased with increasing average consumption for more than one drink a day. The reason for not finding the J-shape in African Americans may be the result of the more detrimental drinking patterns in this ethnicity and consequently the lack of protective effects of alcohol on coronary heart disease. Taking into account sampling design did not substantially change the results from the models, which assumed a simple random sample. If this result can be confirmed in other samples, alcohol policy, especially prevention, should better incorporate patterns of drinking into programs.

  5. Measurement of gluconeogenesis using glucose fragments and mass spectrometry after ingestion of deuterium oxide.

    PubMed

    Chacko, Shaji K; Sunehag, Agneta L; Sharma, Susan; Sauer, Pieter J J; Haymond, Morey W

    2008-04-01

    We report a new method to measure the fraction of glucose derived from gluconeogenesis using gas chromatography-mass spectrometry and positive chemical ionization. After ingestion of deuterium oxide by subjects, glucose derived from gluconeogenesis is labeled with deuterium. Our calculations of gluconeogenesis are based on measurements of the average enrichment of deuterium on carbon 1, 3, 4, 5, and 6 of glucose and the deuterium enrichment in body water. In a sample from an adult volunteer after ingestion of deuterium oxide, fractional gluconeogenesis using the "average deuterium enrichment method" was 48.3 +/- 0.5% (mean +/- SD) and that with the C-5 hexamethylenetetramine (HMT) method by Landau et al. (Landau BR, Wahren J, Chandramouli V, Schumann WC, Ekberg K, Kalhan SC; J Clin Invest 98: 378-385, 1996) was 46.9 +/- 5.4%. The coefficient of variation of 10 replicate analyses using the new method was 1.0% compared with 11.5% for the C-5 HMT method. In samples derived from an infant receiving total parenteral nutrition, fractional gluconeogenesis was 13.3 +/- 0.3% using the new method and 13.7 +/- 0.8% using the C-5 HMT method. Fractional gluconeogenesis measured in six adult volunteers after 66 h of continuous fasting was 83.7 +/- 2.3% using the new method and 84.2 +/- 5.0% using the C-5 HMT method. In conclusion, the average deuterium enrichment method is simple, highly reproducible, and cost effective. Furthermore, it requires only small blood sample volumes. With the use of an additional tracer, glucose rate of appearance can also be measured during the same analysis. Thus the new method makes measurements of gluconeogenesis available and affordable to large numbers of investigators under conditions of low and high fractional gluconeogenesis (approximately 10% to approximately 90%) in all subject populations.

  6. The use and performance of BioSand filters in the Artibonite Valley of Haiti: a field study of 107 households.

    PubMed

    Duke, W F; Nordin, R N; Baker, D; Mazumder, A

    2006-01-01

    Approximately one billion people worldwide lack access to adequate amounts of safe water. Most are in developing countries, especially in rapidly expanding urban fringes, poor rural areas, and indigenous communities. In February and March 2005, a field study of 107 households was conducted to evaluate the use and performance of the Manz BioSand filter in the Artibonite Valley of Haiti. Approximately 2000 filters had been installed in this area over the preceding 5 years by the staff in Community Development at Hospital Albert Schweitzer, Deschappelle, Haiti. Interviews, observations, and water sampling were carried out by two teams of Haitian enumerators, each consisting of a nurse and a filter technician. Water analyses were performed by Haitian lab technicians using the membrane filtration method to determine Escherichia coli counts. The enumerators and the lab technicians completed a 2-week training program before beginning the study; they worked under the direct supervision of the primary investigator. Laboratory quality was monitored by running 10% blank and 10% duplicate samples. The households contained an average of 5.4 persons. Filters had been in use for an average of 2.5 years, and participants were generally satisfied with their filter's performance. Shallow, hand-dug wells provided the only source of water for 61% of the households, with 26% using water piped from springs or deep wells, and 13% having access to both. Only 3% had plumbing in their homes. Source water from shallow wells contained an average of 234 E. coli cfu/100 mL. Piped sources averaged 195 E. coli cfu/100 mL. Of the source water samples, 26% contained 0-10 E. coli cfu/100 mL. Of the filtered water samples, 97% contained 0-10 E. coli cfu/100 mL (80% with 0 cfu/100 mL, and 17% with 1-10 cfu/100 mL). Overall bacterial removal efficiency for the filters was calculated to be 98.5%. Turbidity decreased from an average of 6.2 NTU in source water samples to 0.9 NTU in the filtered water. None of the households treated the water after filtering; 91% used the filtered water only for drinking. No problems related to filter construction were observed; 13% were found to have significantly decreased flow rates (all restored by cleaning the filter). Recontamination was found to occur, with only 3% of the samples from the filters' spouts containing >10 E. coli cfu/100 mL and 22% of the stored filtered water samples at point-of-use containing >10 cfu/100 mL. The Manz BioSand filters are an attractive option for supplying water treatment to family units in rural areas of poorly developed countries.

  7. Contamination and human health risk of lead in soils around lead/zinc smelting areas in China.

    PubMed

    Lei, Kai; Giubilato, Elisa; Critto, Andrea; Pan, Huiyun; Lin, Chunye

    2016-07-01

    Pb/Zn smelting, an important economic activity in China, has led to heavy environmental pollution. This research reviewed studies on soil Pb contamination at Pb/Zn smelting sites in China published during the period of 2000 to 2015 to clarify the total levels, spatial changes, and health risks of Pb contamination in soils at local and national scales. The results show that Pb contents in surface soils at 58 Pb/Zn smelting sites in China ranged from 7 to 312,452 mg kg^-1, with an arithmetic average, geometric average, and median of 1982, 404, and 428 mg kg^-1, respectively (n = 1011). Surface soil Pb content at these smelting sites decreased from an average of 2466 to 659 mg kg^-1 and then to 463 mg kg^-1 as the distance from the smelters increased from <1000 m to 1000-2000 m and then to >2000 m. With respect to variation with depth, the average soil Pb content at these sites gradually decreased from 986 mg kg^-1 at 0- to 20-cm depth to 144 mg kg^-1 at 80- to 100-cm depth. Approximately 78% of the soil samples (n = 1011) at the 58 Pb/Zn smelting sites were classified as having high Pb pollution levels. Approximately 34.2 and 7.7% of the soil samples (n = 1011) at the 58 Pb/Zn smelting sites might pose adverse health effects and high chronic risks to children, respectively. The Pb/Zn smelting sites in the southwest and southeast provinces of China, as well as Liaoning province, were the most contaminated and thus should receive priority for remediation.

  8. Characterization and enzymatic hydrolysis of hydrothermally treated β-1,3-1,6-glucan from Aureobasidium pullulans.

    PubMed

    Hirabayashi, Katsuki; Kondo, Nobuhiro; Hayashi, Sachio

    2016-12-01

    The chemical structure of hydrothermally treated β-1,3-1,6-glucan from Aureobasidium pullulans was characterized using techniques such as gas chromatography/mass spectrometry (GC/MS) and nuclear magnetic resonance (NMR). The chemical shifts of anomeric carbons observed in the 13C-NMR spectra suggested the presence of single flexible chains of polysaccharide in the sample. β-1,3-1,6-Glucan from A. pullulans became water-soluble, with an average molecular weight of 128,000 Da after hydrothermal treatment, and the solubility in water was approximately 10% (w/w). Sample (3% w/v) was completely hydrolyzed to glucose by enzymatic reaction with Lysing enzymes from Trichoderma harzianum. Gentiobiose (Glcβ1 → 6Glc) and glucose were released as products during the reaction, and the maximum yield of gentiobiose was approximately 70% (w/w). The molar ratio of gentiobiose to glucose after 1 h reaction suggested that the sample is likely highly branched. Sample (3% w/v) was also hydrolyzed to glucose by Uskizyme from Trichoderma sp., indicating that it is very sensitive to enzymatic hydrolysis.

  9. Comparing Gravimetric and Real-Time Sampling of PM2.5 Concentrations Inside Truck Cabins

    PubMed Central

    Zhu, Ying; Smith, Thomas J.; Davis, Mary E.; Levy, Jonathan I.; Herrick, Robert; Jiang, Hongyu

    2012-01-01

    As part of a study on truck drivers’ exposure and health risk, pickup and delivery (P&D) truck drivers’ on-road exposure patterns to PM2.5 were assessed in five weeklong sampling trips in metropolitan areas of five U.S. cities from April to August of 2006. Drivers were sampled with real-time (DustTrak) and gravimetric samplers to measure average in-cabin PM2.5 concentrations and to compare their correspondence in moving trucks. In addition, GPS measurements of truck locations, meteorological data, and driver behavioral data were collected throughout the day to determine which factors influence the relationship between real-time and gravimetric samplers. Results indicate that the association between average real-time and gravimetric PM2.5 measurements on moving trucks was fairly consistent (Spearman rank correlation of 0.63), with DustTrak measurements exceeding gravimetric measurements by approximately a factor of 2. This ratio differed significantly only between the industrial Midwest cities and the other three sampled cities scattered in the South and West. There was also limited evidence of an effect of truck age. Filter samples collected concurrently with DustTrak measurements can be used to calibrate average mass concentration responses for the DustTrak, allowing for real-time measurements to be integrated into longer-term studies of inter-city and intra-urban exposure patterns for truck drivers. PMID:21991940

  10. Comparing gravimetric and real-time sampling of PM2.5 concentrations inside truck cabins.

    PubMed

    Zhu, Ying; Smith, Thomas J; Davis, Mary E; Levy, Jonathan I; Herrick, Robert; Jiang, Hongyu

    2011-11-01

    As part of a study on truck drivers' exposure and health risk, pickup and delivery (P&D) truck drivers' on-road exposure patterns to PM2.5 were assessed in five weeklong sampling trips in metropolitan areas of five U.S. cities from April to August of 2006. Drivers were sampled with real-time (DustTrak) and gravimetric samplers to measure average in-cabin PM2.5 concentrations and to compare their correspondence in moving trucks. In addition, GPS measurements of truck locations, meteorological data, and driver behavioral data were collected throughout the day to determine which factors influence the relationship between real-time and gravimetric samplers. Results indicate that the association between average real-time and gravimetric PM2.5 measurements on moving trucks was fairly consistent (Spearman rank correlation of 0.63), with DustTrak measurements exceeding gravimetric measurements by approximately a factor of 2. This ratio differed significantly only between the industrial Midwest cities and the other three sampled cities scattered in the South and West. There was also limited evidence of an effect of truck age. Filter samples collected concurrently with DustTrak measurements can be used to calibrate average mass concentration responses for the DustTrak, allowing for real-time measurements to be integrated into longer-term studies of inter-city and intra-urban exposure patterns for truck drivers.

  11. A limit to the X-ray luminosity of nearby normal galaxies

    NASA Technical Reports Server (NTRS)

    Worrall, D. M.; Marshall, F. E.; Boldt, E. A.

    1979-01-01

    Emission is studied at luminosities lower than those for which individual discrete sources can be studied. It is shown that normal galaxies do not appear to provide the numerous low luminosity X-ray sources which could make up the 2-60 keV diffuse background. Indeed, upper limits suggest luminosities comparable with, or a little less than, that of the galaxy. This is consistent with the fact that the average optical luminosity of the sample galaxies within approximately 20 Mpc is slightly lower than that of the galaxy. An upper limit of approximately 1% of the diffuse background from such sources is derived.

  12. Convergence and approximate calculation of average degree under different network sizes for decreasing random birth-and-death networks

    NASA Astrophysics Data System (ADS)

    Long, Yin; Zhang, Xiao-Jun; Wang, Kui

    2018-05-01

    In this paper, the convergence and approximate calculation of the average degree under different network sizes for decreasing random birth-and-death networks (RBDNs) are studied. First, we find and demonstrate that the average degree converges in the form of a power law. Meanwhile, we discover that, for large network sizes, the ratios of later terms to earlier terms of the convergent remainder are independent of the number of network links, and we theoretically prove that the limit of this ratio is a constant. Moreover, since it is difficult to calculate the analytical solution of the average degree for large network sizes, we adopt a numerical method to obtain an approximate expression that approximates the analytical solution of the average degree. Finally, simulations are presented to verify our theoretical results.

  13. Total sperm per ejaculate of men: obtaining a meaningful value or a mean value with appropriate precision.

    PubMed

    Amann, Rupert P; Chapman, Phillip L

    2009-01-01

    We retrospectively mined and modeled data to answer 3 questions. 1) Relative to an estimate based on approximately 20 semen samples, how imprecise is an estimate of an individual's total sperm per ejaculate (TSperm) based on 1 sample? 2) What is the impact of abstinence interval on TSperm and TSperm/h? 3) How many samples are needed to provide a meaningful estimate of an individual's mean TSperm or TSperm/h? Data were for 18-20 consecutive masturbation samples from each of 48 semen donors. Modeling exploited the gamma distribution of values for TSperm and a unique approach to project to future samples. Answers: 1) Within-individual coefficients of variation were similar for TSperm or TSperm/h abstinence and ranged from 17% to 51%; average approximately 34%. TSperm or TSperm/h in any individual sample from a given donor was between -20% and +20% of the mean value in 48% of 18-20 samples per individual. 2) For a majority of individuals, TSperm increased in a nearly linear manner through approximately 72 hours of abstinence. TSperm and TSperm/h after 18-36 hours' abstinence are high. To obtain meaningful values for diagnostic purposes and maximize distinction of individuals with relatively low or high sperm production, the requested abstinence should be 42-54 hours with an upper limit of 64 hours. For individuals producing few sperm, 7 days or more of abstinence might be appropriate to obtain sperm for insemination. 3) At least 3 samples from a hypothetical future subject are recommended for most applications. Assuming 60 hours' abstinence, 80% confidence limits for TSperm/h for 1, 3, or 6 samples would be 70%-163%, 80%-130%, or 85%-120% of the mean for observed values. In only approximately 50% of cases would TSperm/h for a single sample be within -16% and +30% of the true mean value for that subject. Pooling values for TSperm in samples obtained after 18-36 or 72-168 hours' abstinence with values for TSperm obtained after 42-64 hours is inappropriate. Reliance on TSperm for a single sample per subject is unwise.

  14. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task, when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from prior parameter space (as in arithmetic mean evaluation) or posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that is recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of their accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improves predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
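
    A minimal power-posterior sketch of thermodynamic integration, for a case where each tempered posterior is available in closed form (a conjugate Gaussian mean model with invented data; real applications would run MCMC at each power coefficient, as described above), is:

      import numpy as np

      rng = np.random.default_rng(3)
      # Invented data and model: y_i ~ N(mu, sigma^2) with known sigma, prior mu ~ N(0, tau^2).
      sigma, tau = 1.0, 2.0
      y = rng.normal(1.5, sigma, size=20)
      n, s = len(y), y.sum()

      def log_likelihood(mu):
          return -0.5 * n * np.log(2 * np.pi * sigma**2) - ((y - mu) ** 2).sum() / (2 * sigma**2)

      # Power posteriors p_b(mu) ~ L(y|mu)^b * p(mu) are Gaussian here, so each stage of the
      # path sampling can be sampled exactly instead of by Markov chain Monte Carlo.
      betas = np.linspace(0.0, 1.0, 21)
      expected_loglik = []
      for b in betas:
          prec = b * n / sigma**2 + 1.0 / tau**2
          mean = (b * s / sigma**2) / prec
          draws = rng.normal(mean, np.sqrt(1.0 / prec), size=5000)
          expected_loglik.append(np.mean([log_likelihood(m) for m in draws]))

      # Thermodynamic integration: log marginal likelihood = integral over b in [0, 1]
      # of E_b[log L], evaluated here with the trapezoid rule.
      e = np.array(expected_loglik)
      log_ml_ti = float(np.sum(np.diff(betas) * (e[:-1] + e[1:]) / 2.0))

      # Analytic benchmark for this conjugate model: y ~ N(0, sigma^2 I + tau^2 11^T).
      cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
      log_ml_exact = -0.5 * (n * np.log(2 * np.pi) + np.linalg.slogdet(cov)[1]
                             + y @ np.linalg.solve(cov, y))
      print("TI estimate:", log_ml_ti, " exact:", log_ml_exact)

    The two numbers should agree to within Monte Carlo and trapezoid discretization error.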

  15. Colonization potential to reconstitute a microbe community in patients detected early after fecal microbe transplant for recurrent C. difficile.

    PubMed

    Kumar, Ranjit; Maynard, Craig L; Eipers, Peter; Goldsmith, Kelly T; Ptacek, Travis; Grubbs, J Aaron; Dixon, Paula; Howard, Donna; Crossman, David K; Crowley, Michael R; Benjamin, William H; Lefkowitz, Elliot J; Weaver, Casey T; Rodriguez, J Martin; Morrow, Casey D

    2016-01-13

    Fecal microbiota transplants (FMT) are an effective treatment for patients with gut microbe dysbiosis suffering from recurrent C. difficile infections. To further understand how FMT reconstitutes the patient's gut commensal microbiota, we have analyzed the colonization potential of the donor, recipient and recipient post-transplant fecal samples using transplantation into gnotobiotic mice. A total of nine samples from three human donors and the corresponding recipients pre- and post-FMT were transplanted into gnotobiotic mice. Microbiome analysis of the three donor fecal samples revealed a high relative abundance of commensal microbes from the families Bacteroidaceae and Lachnospiraceae that were almost absent in the three recipient pre-FMT fecal samples (<0.01%). The microbe composition in gnotobiotic mice transplanted with the donor fecal samples was similar to that of the human samples. The recipient samples contained Enterobacteriaceae, Lactobacillaceae, and Enterococcaceae in relative abundances of 43, 11, and 8%, respectively. However, gnotobiotic mice transplanted with the recipient fecal samples had an average relative abundance of unclassified Clostridiales of 55%, approximately 7000 times the abundance in the recipient fecal samples prior to transplant. Microbiome analysis of fecal samples from the three patients early (2-4 weeks) after FMT revealed a microbe composition with relative abundances of both Bacteroidaceae and Lachnospiraceae that were approximately 7% of those of the donor. In contrast, gnotobiotic mice transplanted with the fecal samples obtained from the three patients at early times post-FMT showed increases in the relative abundances of Bacteroidaceae and Lachnospiraceae to levels similar to those of the donor fecal samples. Furthermore, the unclassified Clostridiales in the recipient samples post-FMT were reduced to an average of 10%. We have used transplantation into gnotobiotic mice to evaluate the colonization potential of microbiota in FMT patients early after transplant. The commensal microbes present at early times post-FMT outcompeted non-commensal microbes (such as unclassified Clostridiales) for niche space. The selective advantage of these commensal microbes in occupying niches in the gastrointestinal tract helps to explain the success of FMT in reconstituting the gut microbe community of patients with recurrent C. difficile infections.

  16. Changes in Cannabis Potency over the Last Two Decades (1995-2014) - Analysis of Current Data in the United States

    PubMed Central

    ElSohly, Mahmoud A.; Mehmedic, Zlatko; Foster, Susan; Gon, Chandrani; Chandra, Suman; Church, James C.

    2016-01-01

    BACKGROUND Marijuana is the most widely used illicit drug in the United States and all over the world. Reports indicate that the potency of cannabis preparations has been increasing. This report examines the concentration of cannabinoids in illicit cannabis products seized by the DEA (Drug Enforcement Administration) over the last two decades, with particular emphasis on Δ9-THC and cannabidiol (CBD). METHODS Samples in this report were received over time from DEA-confiscated materials and processed for analysis using a validated gas chromatography with flame ionization detection (GC/FID) method. RESULTS A total of 38,681 samples of cannabis preparations were received and analyzed between January 1, 1995 and December 31, 2014. The data showed that, while the number of marijuana samples seized over the last four years has declined, the number of sinsemilla samples has increased. Overall, the potency of illicit cannabis plant material has consistently risen over time since 1995, from approximately 4% in 1995 to approximately 12% in 2014. On the other hand, the CBD content has fallen on average from approximately 0.28% in 2001 to <0.15% in 2014, resulting in a change in the ratio of THC to CBD from 14 times in 1995 to approximately 80 times in 2014. CONCLUSION It is concluded that there is a shift in the production of illicit cannabis plant material from regular marijuana to sinsemilla. This increase in potency poses higher risk of cannabis use, particularly among adolescents. PMID:26903403

  17. A large-scale study of the random variability of a coding sequence: a study on the CFTR gene.

    PubMed

    Modiano, Guido; Bombieri, Cristina; Ciminelli, Bianca Maria; Belpinati, Francesca; Giorgi, Silvia; Georges, Marie des; Scotet, Virginie; Pompei, Fiorenza; Ciccacci, Cinzia; Guittard, Caroline; Audrézet, Marie Pierre; Begnini, Angela; Toepfer, Michael; Macek, Milan; Ferec, Claude; Claustres, Mireille; Pignatti, Pier Franco

    2005-02-01

    Coding single nucleotide substitutions (cSNSs) have been studied on hundreds of genes using small samples (n_g approximately 100-150 genes). In the present investigation, a large random European population sample (average n_g approximately 1500) was studied for a single gene, the CFTR (Cystic Fibrosis Transmembrane conductance Regulator). The nonsynonymous (NS) substitutions exhibited, in accordance with previous reports, a mean probability of being polymorphic (q > 0.005), much lower than that of the synonymous (S) substitutions, but they showed a similar rate of subpolymorphic (q < 0.005) variability. This indicates that, in autosomal genes that may have harmful recessive alleles (nonduplicated genes with important functions), genetic drift overwhelms selection in the subpolymorphic range of variability, making disadvantageous alleles behave as neutral. These results imply that the majority of the subpolymorphic nonsynonymous alleles of these genes are selectively negative or even pathogenic.

  18. Consistency of ARESE II Cloud Absorption Estimates and Sampling Issues

    NASA Technical Reports Server (NTRS)

    Oreopoulos, L.; Marshak, A.; Cahalan, R. F.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Data from three cloudy days (March 3, 21, 29, 2000) of the ARM Enhanced Shortwave Experiment II (ARESE II) were analyzed. Grand averages of broadband absorptance among three sets of instruments were compared. Fractional solar absorptances were approx. 0.21-0.22 with the exception of March 3 when two sets of instruments gave values smaller by approx. 0.03-0.04. The robustness of these values was investigated by looking into possible sampling problems with the aid of 500 nm spectral fluxes. Grand averages of 500 nm apparent absorptance cover a wide range of values for these three days, namely from a large positive (approx. 0.011) average for March 3, to a small negative (approximately -0.03) for March 21, to near zero (approx. 0.01) for March 29. We present evidence suggesting that a large part of the discrepancies among the three days is due to the different nature of clouds and their non-uniform sampling. Hence, corrections to the grand average broadband absorptance values may be necessary. However, application of the known correction techniques may be precarious due to the sparsity of collocated flux measurements above and below the clouds. Our analysis leads to the conclusion that only March 29 fulfills all requirements for reliable estimates of cloud absorption, that is, the presence of thick, overcast, homogeneous clouds.

  19. Decentralized Network Interdiction Games

    DTIC Science & Technology

    2015-12-31

    approach is termed the sample average approximation (SAA) method, and theories on the asymptotic convergence to the original problem's optimal...used in the SAA method's convergence. While we provided detailed proof of such convergence in [P3], a side benefit of the proof is that it weakens the...conditions required when applying the general SAA approach to the block-structured stochastic programming problem 17. As the conditions known in the

  20. Comparison between Measured and Calculated Sediment Transport Rates in North Fork Caspar Creek, California

    NASA Astrophysics Data System (ADS)

    Kim, T. W.; Yarnell, S. M.; Yager, E.; Leidman, S. Z.

    2015-12-01

    Caspar Creek is a gravel-bedded stream located in the Jackson Demonstration State Forest in the coast range of California. The Caspar Creek Experimental Watershed has been actively monitored and studied by the Pacific Southwest Research Station and the California Department of Forestry and Fire Protection for over five decades. Although total annual sediment yield has been monitored through time, sediment transport during individual storm events is less certain. At a study site on North Fork Caspar Creek, cross-section-averaged sediment flux was collected throughout two storm events in December 2014 and February 2015 to determine whether two commonly used sediment transport equations—Meyer-Peter-Müller and Wilcock—approximated observed bedload transport. Cross-section-averaged bedload samples were collected approximately every hour during each storm event using a Helley-Smith bedload sampler. Five-minute composite samples were collected at five equally spaced locations along a cross-section and then sieved to half-phi sizes to determine the grain size distribution. The measured sediment flux values varied widely throughout the storm hydrographs and were consistently lower than the calculated values, by up to two orders of magnitude. Armored bed conditions, changing hydraulic conditions during each storm, and variable sediment supply may have contributed to the observed differences.
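
    For reference, one standard form of the Meyer-Peter and Mueller relation named above can be evaluated as follows; the hydraulic inputs are invented rather than Caspar Creek measurements, and the Wilcock surface-based relation is not reproduced here.

      import math

      def mpm_bedload(depth, slope, d50, rho_s=2650.0, rho=1000.0, g=9.81):
          """Meyer-Peter and Mueller (1948): q* = 8 (tau* - 0.047)^1.5 for tau* > 0.047.

          Returns volumetric bedload transport per unit width (m^2/s) given flow depth (m),
          energy slope (dimensionless) and median grain size d50 (m).
          """
          tau = rho * g * depth * slope                  # boundary shear stress (Pa)
          tau_star = tau / ((rho_s - rho) * g * d50)     # Shields stress
          excess = max(tau_star - 0.047, 0.0)            # below critical -> no transport
          q_star = 8.0 * excess ** 1.5                   # dimensionless transport rate
          R = (rho_s - rho) / rho                        # submerged specific gravity
          return q_star * math.sqrt(R * g * d50 ** 3)

      # Invented example: 0.4 m deep flow, slope 0.01, 20 mm median gravel.
      print(f"predicted bedload: {mpm_bedload(0.4, 0.01, 0.02):.2e} m^2/s per metre width")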

  1. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
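
    A compressed version of this kind of simulation, classifying GAM prevalence from a 67x3 cluster design with beta-binomial intracluster correlation against an illustrative decision rule (the threshold, correlation and prevalence values are not those validated in the paper), is:

      import numpy as np

      rng = np.random.default_rng(11)

      def simulate_survey(prevalence, icc, n_clusters=67, cluster_size=3):
          """Total GAM cases in one 67x3 cluster survey with beta-binomial correlation."""
          a = prevalence * (1 - icc) / icc
          b = (1 - prevalence) * (1 - icc) / icc
          cluster_p = rng.beta(a, b, size=n_clusters)    # cluster-level prevalences
          return rng.binomial(cluster_size, cluster_p).sum()

      def classification_rate(prevalence, icc, threshold, n_sim=5000):
          """Fraction of simulated surveys classified as 'high GAM prevalence'."""
          cases = np.array([simulate_survey(prevalence, icc) for _ in range(n_sim)])
          return np.mean(cases >= threshold)

      threshold = 25      # illustrative decision rule on 201 children, not the paper's rule
      for p in (0.05, 0.10, 0.15, 0.20):
          rate = classification_rate(p, icc=0.1, threshold=threshold)
          print(f"true prevalence {p:.0%}: classified high in {rate:.1%} of surveys")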

  2. Prediction of hypertensive crisis based on average, variability and approximate entropy of 24-h ambulatory blood pressure monitoring.

    PubMed

    Schoenenberger, A W; Erne, P; Ammann, S; Perrig, M; Bürgi, U; Stuck, A E

    2008-01-01

    Approximate entropy (ApEn) of blood pressure (BP) can be easily measured based on software analysing 24-h ambulatory BP monitoring (ABPM), but the clinical value of this measure is unknown. In a prospective study we investigated whether ApEn of BP predicts, in addition to average and variability of BP, the risk of hypertensive crisis. In 57 patients with known hypertension we measured ApEn, average and variability of systolic and diastolic BP based on 24-h ABPM. Eight of these fifty-seven patients developed hypertensive crisis during follow-up (mean follow-up duration 726 days). In bivariate regression analysis, ApEn of systolic BP (P<0.01), average of systolic BP (P=0.02) and average of diastolic BP (P=0.03) were significant predictors of hypertensive crisis. The incidence rate ratio of hypertensive crisis was 14.0 (95% confidence interval (CI) 1.8, 631.5; P<0.01) for high ApEn of systolic BP as compared to low values. In multivariable regression analysis, ApEn of systolic (P=0.01) and average of diastolic BP (P<0.01) were independent predictors of hypertensive crisis. A combination of these two measures had a positive predictive value of 75%, and a negative predictive value of 91%, respectively. ApEn, combined with other measures of 24-h ABPM, is a potentially powerful predictor of hypertensive crisis. If confirmed in independent samples, these findings have major clinical implications since measures predicting the risk of hypertensive crisis define patients requiring intensive follow-up and intensified therapy.

  3. Gamma-Ray Burst Host Galaxies Have "Normal" Luminosities.

    PubMed

    Schaefer

    2000-04-10

    The galactic environment of gamma-ray bursts can provide good evidence about the nature of the progenitor system, with two old arguments implying that the burst host galaxies are significantly subluminous. New data and new analysis have now reversed this picture: (1) Even though the first two known host galaxies are indeed greatly subluminous, the next eight hosts have absolute magnitudes typical for a population of field galaxies. A detailed analysis of the 16 known hosts (10 with redshifts) shows them to be consistent with a Schechter luminosity function with R*=-21.8+/-1.0, as expected for normal galaxies. (2) Bright bursts from the Interplanetary Network are typically 18 times brighter than the faint bursts with redshifts; however, the bright bursts do not have galaxies inside their error boxes to limits deeper than expected based on the luminosities for the two samples being identical. A new solution to this dilemma is that a broad burst luminosity function along with a burst number density varying as the star formation rate will require the average luminosity of the bright sample (>6x1058 photons s-1 or>1.7x1052 ergs s-1) to be much greater than the average luminosity of the faint sample ( approximately 1058 photons s-1 or approximately 3x1051 ergs s-1). This places the bright bursts at distances for which host galaxies with a normal luminosity will not violate the observed limits. In conclusion, all current evidence points to gamma-ray burst host galaxies being normal in luminosity.

  4. Measuring herbicide volatilization from bare soil.

    PubMed

    Yates, S R

    2006-05-15

    A field experiment was conducted to measure surface dissipation and volatilization of the herbicide triallate after application to bare soil using micrometeorological, chamber, and soil-loss methods. The volatilization rate was measured continuously for 6.5 days and the range in the daily peak values for the integrated horizontal flux method was from 32.4 (day 5) to 235.2 g ha(-1) d(-1) (day 1), for the theoretical profile shape method was from 31.5 to 213.0 g ha(-1) d(-1), and for the flux chamber was from 15.7 to 47.8 g ha(-1) d(-1). Soil samples were taken within 30 min after application and the measured mass of triallate was 8.75 kg ha(-1). The measured triallate mass in the soil at the end of the experiment was approximately 6 kg ha(-1). The triallate dissipation rate, obtained by soil sampling, was approximately 334 g ha(-1) d(-1) (98 g d(-1)) and the average rate of volatilization was 361 g ha(-1) d(-1). Soil sampling at the end of the experiment showed that approximately 31% (0.803 kg/2.56 kg) of the triallate mass was lost from the soil. Significant volatilization of triallate is possible when applied directly to the soil surface without incorporation.

  5. Short-term variability and long-term change in the composition of the littoral zone fish community in Spirit Lake, Iowa

    USGS Publications Warehouse

    Pierce, C.L.; Sexton, M.D.; Pelham, M.E.; Larscheid, J.G.

    2001-01-01

    We assessed short-term variability and long-term change in the composition of the littoral fish community in Spirit Lake, Iowa. Fish were sampled in several locations at night with large beach seines during spring, summer and fall of 1995-1998. Long-term changes were inferred from comparison with a similar study conducted over 70 y earlier in Spirit Lake. We found 26 species in the littoral zone. The number of species per sample ranged from 4 to 18, averaging 11.8. The average number of species per sample was higher at stations with greater vegetation density. A distinct seasonal pattern was evident in the number of species collected per sample in most years, increasing steadily from spring to fall. Patterns of variability within our 1995-1998 study period suggest that: (1) numerous samples are necessary to adequately characterize a littoral fish community, (2) sampling should be done when vegetation and young-of-year densities are highest and (3) sampling during a single year is inadequate to reveal the full community. The number of native species has declined by approximately 25% over the last 70 y. A coincident decline in littoral vegetation and associated habitat changes during the same period are likely causes of the long-term community change.

  6. Year-to-year variations in annual average indoor 222Rn concentrations.

    PubMed

    Martz, D E; Rood, A S; George, J L; Pearson, M D; Langner, G H

    1991-09-01

    Annual average indoor 222Rn concentrations in 40 residences in and around Grand Junction, CO, have been measured repeatedly since 1984 using commercial alpha-track monitors (ATM) deployed for successive 12-mo time periods. Data obtained provide a quantitative measure of the year-to-year variations in the annual average Rn concentrations in these structures over this 6-y period. A mean coefficient of variation of 25% was observed for the year-to-year variability of the measurements at 25 sampling stations for which complete data were available. Individual coefficients of variation at the various stations ranged from a low of 7.7% to a high of 51%. The observed mean coefficient of variation includes contributions due to the variability in detector response as well as the true year-to-year variation in the annual average Rn concentrations. Factoring out the contributions from the measured variability in the response of the detectors used, the actual year-to-year variability of the annual average Rn concentrations was approximately 22%.
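
    The "factoring out" step described above is consistent with assuming that detector noise and true year-to-year variation add in quadrature; under that assumption the implied detector contribution can be back-calculated as below (the detector CV is inferred here, not reported in the abstract).

```python
import math

cv_observed = 0.25   # mean year-to-year coefficient of variation reported above
cv_true = 0.22       # year-to-year CV after removing detector variability

# If detector noise and true year-to-year variation are independent, their
# variances add, so the implied detector contribution is:
cv_detector = math.sqrt(cv_observed**2 - cv_true**2)
print(f"implied detector CV ~ {cv_detector:.3f}")   # ~0.12, i.e. about 12%
```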

  7. Improving the analysis of composite endpoints in rare disease trials.

    PubMed

    McMenamin, Martina; Berglind, Anna; Wason, James M S

    2018-05-22

    Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they are in the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect the power of the augmented binary method is 20-55% compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibit similar power but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints. We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.
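
    The stated equivalence between a 17-18% narrower confidence interval and a roughly 32% smaller sample size follows from the usual assumption that interval width scales as one over the square root of the sample size; a quick check of that arithmetic:

```python
# CI width scales roughly as 1/sqrt(n), so a width ratio w implies a
# sample-size ratio w**2.  With the 17-18% reduction quoted above:
for reduction in (0.17, 0.18):
    width_ratio = 1.0 - reduction
    n_ratio = width_ratio ** 2
    print(f"{reduction:.0%} narrower CI -> {1 - n_ratio:.0%} smaller sample size")
# prints roughly 31% and 33%, consistent with the ~32% figure in the abstract
```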

  8. Data collection for a time-of-travel and dispersion study on the Coosa River near Childersburg, Alabama

    USGS Publications Warehouse

    Gardner, R.A.

    1985-01-01

    Approximately 2,300 dye-tracer samples were collected and analyzed during a 5-day time-of-travel study on a 23-mile reach of the Coosa River between Logan Martin and Lay dams near Childersburg, Alabama, October 27 to 31, 1984. Rhodamine WT was used as the tracer-dye. Unsteady flow conditions prevailed in the study reach. The rate of movement of the dye cloud between sampling cross sections ranged from 0.15 to 1.36 feet per second. The average rate of movement of the dye cloud between the injection cross section and the downstream sampling cross section was 0.42 foot per second. (USGS)

  9. Ammonia and Carbon Dioxide Concentrations in Disposable and Reusable Ventilated Mouse Cages

    PubMed Central

    Silverman, Jerald; Bays, David W; Cooper, Sheldon F; Baker, Stephen P

    2008-01-01

    This study compares reusable and disposable individually ventilated mouse cages in terms of the formation of intracage CO2 and NH3. Crl:CD-1(ICR) female mice were placed in either disposable or reusable ventilated cages in a positive pressure animal rack. Intracage CO2 and NH3 were measured once daily for 9 d; temperature and relative humidity were monitored for the first 7 d. Results indicated higher CO2 levels in the rear of the disposable cages and in the front of the reusable cages. This pattern corresponded to where the mice tended to congregate. However, CO2 concentrations did not differ significantly between the 2 cage types. Average CO2 levels in both cage types never exceeded approximately 3000 ppm. Intracage NH3 began to rise in the reusable cages on day 4, reached approximately 50 ppm by day 5, and by day 9 was greater than 150 ppm at the cages' rear sampling port while remaining at approximately 70 ppm at the front sampling port. Intracage NH3 levels in the disposable cages remained less than or equal to 3.2 ppm. Intracage temperature and relative humidity were approximately the same in both cage types. We concluded that the disposable ventilated cage performed satisfactorily under the conditions of the study. PMID:18351723

  10. Retail trade incentives: how tobacco industry practices compare with those of other industries.

    PubMed Central

    Feighery, E C; Ribisl, K M; Achabal, D D; Tyebjee, T

    1999-01-01

    OBJECTIVES: This study compared the incentive payments for premium shelf space and discounts on volume purchases paid to retailers by 5 types of companies. METHODS: Merchants were interviewed at 108 randomly selected small retail outlets that sell tobacco in Santa Clara County, California. RESULTS: Significantly more retailers reported receiving slotting/display allowances for tobacco (62.4%) than for any other product type. An average store participating in a retailer incentive program received approximately $3157 annually from all sampled product types, of which approximately $2462 (78%) came from tobacco companies. CONCLUSIONS: Future research should assess the impact of tobacco industry incentive programs on the in-store marketing and sales practices of retailers. PMID:10511841

  11. Retail trade incentives: how tobacco industry practices compare with those of other industries.

    PubMed

    Feighery, E C; Ribisl, K M; Achabal, D D; Tyebjee, T

    1999-10-01

    This study compared the incentive payments for premium shelf space and discounts on volume purchases paid to retailers by 5 types of companies. Merchants were interviewed at 108 randomly selected small retail outlets that sell tobacco in Santa Clara County, California. Significantly more retailers reported receiving slotting/display allowances for tobacco (62.4%) than for any other product type. An average store participating in a retailer incentive program received approximately $3157 annually from all sampled product types, of which approximately $2462 (78%) came from tobacco companies. Future research should assess the impact of tobacco industry incentive programs on the in-store marketing and sales practices of retailers.

  12. Effect of Drug Sample Removal on Prescribing in a Family Practice Clinic

    PubMed Central

    Hartung, Daniel M.; Evans, David; Haxby, Dean G.; Kraemer, Dale F.; Andeen, Gabriel; Fagnan, Lyle J.

    2010-01-01

    PURPOSE Little is known about the impact of recent restrictions on pharmaceutical industry detailing and sampling on prescribing behavior, particularly within smaller, independent practices. The objective of this study was to evaluate the effect of a policy prohibiting prescription drug samples and pharmaceutical industry interaction on prescribing patterns in a rural family practice clinic in central Oregon. METHODS Segmented linear regression models were used to evaluate trends in prescribing using locally obtained pharmacy claims. Oregon Medicaid pharmacy claims were used to control for secular prescribing changes. Total and class-specific monthly trends in branded, promoted, and average prescription drug costs were analyzed 18 months before and after policy implementation. RESULTS Aggregate trends of brand name drug use did not change significantly after policy implementation. In aggregate, use of promoted agents decreased by 1.43% while nonpromoted branded agents increased by 3.04%. Branded drugs prescribed for respiratory disease declined significantly by 11.34% compared with a control group of prescribers. Relative to the control group, prescriptions of promoted cholesterol-lowering drugs and antidepressants were reduced by approximately 9.98% and 11.34%, respectively. The trend in average cost per prescription for lipid-lowering drugs was significantly reduced by $0.70 per prescription per month. Overall, average prescription drug costs increased by $5.18 immediately after policy implementation. CONCLUSIONS Restriction of pharmaceutical industry representatives and samples from a rural family practice clinic produced modest reductions in branded drug use that varied by class. Although aggregate average costs increased, prescriptions for branded and promoted lipid-lowering agents and antidepressants were reduced. PMID:20843881

  13. Effect of drug sample removal on prescribing in a family practice clinic.

    PubMed

    Hartung, Daniel M; Evans, David; Haxby, Dean G; Kraemer, Dale F; Andeen, Gabriel; Fagnan, Lyle J

    2010-01-01

    Little is known about the impact of recent restrictions on pharmaceutical industry detailing and sampling on prescribing behavior, particularly within smaller, independent practices. The objective of this study was to evaluate the effect of a policy prohibiting prescription drug samples and pharmaceutical industry interaction on prescribing patterns in a rural family practice clinic in central Oregon. Segmented linear regression models were used to evaluate trends in prescribing using locally obtained pharmacy claims. Oregon Medicaid pharmacy claims were used to control for secular prescribing changes. Total and class-specific monthly trends in branded, promoted, and average prescription drug costs were analyzed 18 months before and after policy implementation. Aggregate trends of brand name drug use did not change significantly after policy implementation. In aggregate, use of promoted agents decreased by 1.43% while nonpromoted branded agents increased by 3.04%. Branded drugs prescribed for respiratory disease declined significantly by 11.34% compared with a control group of prescribers. Relative to the control group, prescriptions of promoted cholesterol-lowering drugs and antidepressants were reduced by approximately 9.98% and 11.34%, respectively. The trend in average cost per prescription for lipid-lowering drugs was significantly reduced by $0.70 per prescription per month. Overall, average prescription drug costs increased by $5.18 immediately after policy implementation. Restriction of pharmaceutical industry representatives and samples from a rural family practice clinic produced modest reductions in branded drug use that varied by class. Although aggregate average costs increased, prescriptions for branded and promoted lipid-lowering agents and antidepressants were reduced.
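
    As context for the segmented (interrupted time-series) regression described in both versions of this record, a minimal level-and-trend-change model can be fitted by ordinary least squares as sketched below. The single-intervention structure, variable names, and simulated numbers are illustrative only, not the study's actual specification.

```python
import numpy as np

def segmented_fit(y, t, policy_month):
    """Fit the interrupted time-series model
         y = b0 + b1*t + b2*post + b3*(t - policy_month)*post + error
    by OLS.  Returns (b0, b1, b2, b3): baseline level, baseline trend,
    immediate level change at the policy, and change in monthly trend."""
    t = np.asarray(t, dtype=float)
    post = (t >= policy_month).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - policy_month) * post])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return coef

# Illustrative: 36 months of average cost per prescription, policy at month 18
rng = np.random.default_rng(2)
t = np.arange(36)
y = 60 + 0.3 * t + 5.0 * (t >= 18) - 0.7 * (t - 18) * (t >= 18) + rng.normal(0, 1, 36)
print(segmented_fit(y, t, policy_month=18))   # roughly (60, 0.3, 5, -0.7)
```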

  14. Improved photostability of NREL-developed EVA pottant formulations for PV module encapsulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pern, F.J.; Glick, S.H.

    1997-12-31

    Several new formulations of ethylene vinyl acetate (EVA)-based encapsulant have been developed at NREL and have greatly improved photostability against UV-induced discoloration. The new EVA formulations use stabilizers and a curing agent entirely different from any of those used in existing formulations known to the authors. No discoloration was observed for the laminated and cured samples that were exposed to an approximately 5-sun UV light (300-400 nm) from a solar simulator at a black panel temperature (BPT) of 44 ± 2 °C for approximately 3250 h, followed by exposure at 85 °C for approximately 850 h, an equivalent of approximately 9.4 years for an average 6-h daily, 1-sun solar exposure in Golden, Colorado. Under the same conditions, substantial discoloration and premature delamination were observed for two commercial EVA formulations. Encapsulation with the new EVA formulations should extend the long-term stability for PV modules in the field, especially when coupled with UV-filtering, Ce-containing glass superstrates.

  15. Preliminary assessment of facial soft tissue thickness utilizing three-dimensional computed tomography models of living individuals.

    PubMed

    Parks, Connie L; Richard, Adam H; Monson, Keith L

    2014-04-01

    Facial approximation is the technique of developing a representation of the face from the skull of an unknown individual. Facial approximation relies heavily on average craniofacial soft tissue depths. For more than a century, researchers have employed a broad array of tissue depth collection methodologies, a practice which has resulted in a lack of standardization in craniofacial soft tissue depth research. To combat such methodological inconsistencies, Stephan and Simpson 2008 [15] examined and synthesized a large number of previously published soft tissue depth studies. Their comprehensive meta-analysis produced a pooled dataset of averaged tissue depths and a simplified methodology, which the researchers suggest be utilized as a minimum standard protocol for future craniofacial soft tissue depth research. The authors of the present paper collected craniofacial soft tissue depths using three-dimensional models generated from computed tomography scans of living males and females of four self-identified ancestry groups from the United States ranging in age from 18 to 62 years. This paper assesses the differences between: (i) the pooled mean tissue depth values from the sample utilized in this paper and those published by Stephan 2012 [21] and (ii) the mean tissue depth values of two demographically similar subsets of the sample utilized in this paper and those published by Rhine and Moore 1984 [16]. Statistical test results indicate that the tissue depths collected from the sample evaluated in this paper are significantly and consistently larger than those published by Stephan 2012 [21]. Although a lack of published variance data by Rhine and Moore 1984 [16] precluded a direct statistical assessment, a substantive difference was also concluded. Further, the dataset presented in this study is representative of modern American adults and is, therefore, appropriate for use in constructing contemporary facial approximations. Published by Elsevier Ireland Ltd.

  16. Workers on the margin: who drops health coverage when prices rise?

    PubMed

    Okeke, Edward N; Hirth, Richard A; Grazier, Kyle

    2010-01-01

    We revisit the question of price elasticity of employer-sponsored insurance (ESI) take-up by directly examining changes in the take-up of ESI at a large firm in response to exogenous changes in employee premium contributions. We find that, on average, a 10% increase in the employee's out-of-pocket premium increases the probability of dropping coverage by approximately 1%. More importantly, we find heterogeneous impacts: married workers are much more price-sensitive than single employees, and lower-paid workers are disproportionately more likely to drop coverage than higher-paid workers. Elasticity estimates for employees below the 25th percentile of salary distribution in our sample are nearly twice the average.

  17. Adaptive Noise Suppression Using Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Kozel, David; Nelson, Richard

    1996-01-01

    A signal-to-noise-ratio-dependent adaptive spectral subtraction algorithm is developed to eliminate noise from noise-corrupted speech signals. The algorithm determines the signal-to-noise ratio and adjusts the spectral subtraction proportion appropriately. After spectral subtraction, low-amplitude signals are squelched. A single microphone is used to obtain both the noise-corrupted speech and the average noise estimate. This is done by determining whether the frame of data being sampled is voiced or unvoiced. During unvoiced frames an estimate of the noise is obtained. A running average of the noise is used to approximate the expected value of the noise. Applications include the emergency egress vehicle and the crawler transporter.
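
    A minimal sketch of this kind of SNR-dependent spectral subtraction is given below: frames are transformed, a running noise spectrum is updated during unvoiced frames, the subtraction factor grows as the estimated SNR falls, and a spectral floor squelches low-amplitude residuals. The oversubtraction schedule, smoothing constant, and floor are illustrative choices, not the NASA implementation.

```python
import numpy as np

def spectral_subtract(frames, voiced, floor=0.02):
    """SNR-dependent spectral subtraction over an array of time-domain frames.

    frames : (n_frames, frame_len) float array of windowed frames
    voiced : boolean array, True where a frame is classified as speech
    The noise spectrum is a running average over unvoiced frames; the
    oversubtraction schedule and spectral floor are illustrative choices."""
    noise_mag = None
    out = np.zeros_like(frames, dtype=float)
    for i, frame in enumerate(frames):
        spec = np.fft.rfft(frame)
        mag, phase = np.abs(spec), np.angle(spec)
        if not voiced[i]:
            # update the running average (expected value) of the noise magnitude
            noise_mag = mag if noise_mag is None else 0.9 * noise_mag + 0.1 * mag
        if noise_mag is None:
            out[i] = frame            # no noise estimate yet; pass the frame through
            continue
        snr_db = 10 * np.log10(np.sum(mag ** 2) / (np.sum(noise_mag ** 2) + 1e-12))
        alpha = np.clip(4.0 - 0.15 * snr_db, 1.0, 6.0)   # subtract more at low SNR
        clean = np.maximum(mag - alpha * noise_mag, floor * mag)   # squelch floor
        out[i] = np.fft.irfft(clean * np.exp(1j * phase), n=frame.shape[-1])
    return out
```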

  18. Accumulation of polycyclic aromatic hydrocarbons by Neocalanus copepods in Port Valdez, Alaska.

    PubMed

    Carls, Mark G; Short, Jeffrey W; Payne, James

    2006-11-01

    Sampling zooplankton is a useful strategy for observing trace hydrocarbon concentrations in water because samples represent an integrated average over a considerable effective sampling volume and are more representative of the sampled environment than discretely collected water samples. We demonstrate this method in Port Valdez, Alaska, an approximately 100 km(2) basin that receives about 0.5-2.4 kg of polynuclear aromatic hydrocarbons (PAH) per day. Total PAH (TPAH) concentrations (0.61-1.31 microg/g dry weight), composition, and spatial distributions in a lipid-rich copepod, Neocalanus were consistent with the discharge as the source of contamination. Although Neocalanus acquire PAH from water or suspended particulate matter, total PAH concentrations in these compartments were at or below method detection limits, demonstrating plankton can amplify trace concentrations to detectable levels useful for study.

  19. Comparison of daily and weekly precipitation sampling efficiencies using automatic collectors

    USGS Publications Warehouse

    Schroder, L.J.; Linthurst, R.A.; Ellson, J.E.; Vozzo, S.F.

    1985-01-01

    Precipitation samples were collected for approximately 90 daily and 50 weekly sampling periods at Finley Farm, near Raleigh, North Carolina from August 1981 through October 1982. Ten wet-deposition samplers (AEROCHEM METRICS MODEL 301) were used; 4 samplers were operated for daily sampling, and 6 samplers were operated for weekly-sampling periods. This design was used to determine if: (1) collection efficiencies of precipitation are affected by small distances between the Universal (Belfort) precipitation gage and collector; (2) measurable evaporation loss occurs; and (3) pH and specific conductance of precipitation vary significantly within small distances. Average collection efficiencies were 97% for weekly sampling periods compared with the rain gage. Collection efficiencies were examined by seasons and precipitation volume. Neither factor significantly affected collection efficiency. No evaporation loss was found by comparing daily sampling to weekly sampling at the collection site, which was classified as a subtropical climate. Correlation coefficients for pH and specific conductance of daily samples and weekly samples ranged from 0.83 to 0.99.

  20. Rapid determination of tafenoquine in small volume human plasma samples by high-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Doyle, E; Fowles, S E; Summerfield, S; White, T J

    2002-03-25

    A method was developed for the determination of tafenoquine (I) in human plasma using high-performance liquid chromatography-tandem mass spectrometry. Prior to analysis, the protein in plasma samples was precipitated with methanol containing [2H3(15N)]tafenoquine (II) to act as an internal standard. The supernatant was injected onto a Genesis-C18 column without any further clean-up. The mass spectrometer was operated in the positive ion mode, employing a heat assisted nebulisation, electrospray interface. Ions were detected in multiple reaction monitoring mode. The assay required 50 microl of plasma and was precise and accurate within the range 2 to 500 ng/ml. The average within-run and between-run relative standard deviations were < 7% at 2 ng/ml and greater concentrations. The average accuracy of validation standards was generally within +/- 4% of the nominal concentration. There was no evidence of instability of I in human plasma following three complete freeze-thaw cycles and samples can safely be stored for at least 8 months at approximately -70 degrees C. The method was very robust and has been successfully applied to the analysis of clinical samples from patients and healthy volunteers dosed with I.

  1. Time Averaged Transmitter Power and Exposure to Electromagnetic Fields from Mobile Phone Base Stations

    PubMed Central

    Bürgi, Alfred; Scanferla, Damiano; Lehmann, Hugo

    2014-01-01

    Models for exposure assessment of high frequency electromagnetic fields from mobile phone base stations need the technical data of the base stations as input. One of these parameters, the Equivalent Radiated Power (ERP), is a time-varying quantity, depending on communication traffic. In order to determine temporal averages of the exposure, corresponding averages of the ERP have to be available. These can be determined as duty factors, the ratios of the time-averaged power to the maximum output power according to the transmitter setting. We determine duty factors for UMTS from the data of 37 base stations in the Swisscom network. The UMTS base stations sample contains sites from different regions of Switzerland and also different site types (rural/suburban/urban/hotspot). Averaged over all regions and site types, a UMTS duty factor F ≈ 0.32 ± 0.08 for the 24 h-average is obtained, i.e., the average output power corresponds to about a third of the maximum power. We also give duty factors for GSM based on simple approximations and a lower limit for LTE estimated from the base load on the signalling channels. PMID:25105551
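
    The duty factor defined above is simply the time-averaged transmitted power divided by the configured maximum output power; a minimal sketch of that calculation follows (the synthetic power trace is illustrative, not Swisscom data).

```python
import numpy as np

def duty_factor(power_samples_watts, max_power_watts):
    """Duty factor F = time-averaged transmitted power / maximum output power."""
    return np.mean(power_samples_watts) / max_power_watts

# Illustrative 24 h of per-minute output power readings for a 20 W carrier
rng = np.random.default_rng(3)
p = np.clip(rng.normal(6.4, 2.0, 24 * 60), 0.5, 20.0)
print(round(duty_factor(p, 20.0), 2))   # about 0.32, the order reported above
```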

  2. Resuspension of soil as a source of airborne lead near industrial facilities and highways.

    PubMed

    Young, Thomas M; Heeraman, Deo A; Sirin, Gorkem; Ashbaugh, Lowell L

    2002-06-01

    Geologic materials are an important source of airborne particulate matter less than 10 microm aerodynamic diameter (PM10), but the contribution of contaminated soil to concentrations of Pb and other trace elements in air has not been documented. To examine the potential significance of this mechanism, surface soil samples with a range of bulk soil Pb concentrations were obtained near five industrial facilities and along roadsides and were resuspended in a specially designed laboratory chamber. The concentration of Pb and other trace elements was measured in the bulk soil, in soil size fractions, and in PM10 generated during resuspension of soils and fractions. Average yields of PM10 from dry soils ranged from 0.169 to 0.869 mg of PM10/g of soil. Yields declined approximately linearly with increasing geometric mean particle size of the bulk soil. The resulting PM10 had average Pb concentrations as high as 2283 mg/kg for samples from a secondary Pb smelter. Pb was enriched in PM10 by 5.36-88.7 times as compared with uncontaminated California soils. Total production of PM10 bound Pb from the soil samples varied between 0.012 and 1.2 mg of Pb/kg of bulk soil. During a relatively large erosion event, a contaminated site might contribute approximately 300 ng/m3 of PM10-bound Pb to air. Contribution of soil from contaminated sites to airborne element balances thus deserves consideration when constructing receptor models for source apportionment or attempting to control airborne Pb emissions.

  3. Spatial and Temporal Variability in Sediment P Distribution and Speciation in Coastal Louisiana

    NASA Astrophysics Data System (ADS)

    Bowes, K.; White, J. R.; Maiti, K.

    2017-12-01

    Excess loading of phosphorus (P) and nitrogen (N) into aquatic systems leads to degradation of water quality and diminished important ecosystem services. In the Northern Gulf of Mexico (NGOM), excess P and N loading has led to a seasonally present hypoxic area with less than 2 mg/L O2 in bottom waters, approximating 26,000 km2 in 2017. A sequential extraction (SEDEX) method was performed on surficial sediments from five different coastal and shelf sites as a function of distance from the Mississippi River mouth in the NGOM. To better quantify temporal variability in P distribution and speciation, samples were collected during both low (August) and high (May) river flow regimes. Sequential extraction techniques have been successful in separating pools of P into exchangeable or loosely sorbed P, Fe-P, Authigenic-P, Detrital-P, and Organic-P. Preliminary results suggest Authigenic-P is approximately 3-6 times more concentrated in NGOM sediments than all other P pools. Fractionation results did not show a consistent trend with sediment depth. Sediment samples had an average moisture content of 58.72% ± 12.06% and an average bulk density of 0.582 ± 0.275 g/cm3. Continued analysis of P speciation and cycling in NGOM sediments is critical in understanding the driving force behind coastal eutrophication and informing effective nutrient management strategies.

  4. Estimation of absorbed radiation dose rates in wild rodents inhabiting a site severely contaminated by the Fukushima Dai-ichi nuclear power plant accident.

    PubMed

    Kubota, Yoshihisa; Takahashi, Hiroyuki; Watanabe, Yoshito; Fuma, Shoichi; Kawaguchi, Isao; Aoki, Masanari; Kubota, Masahide; Furuhata, Yoshiaki; Shigemura, Yusaku; Yamada, Fumio; Ishikawa, Takahiro; Obara, Satoshi; Yoshida, Satoshi

    2015-04-01

    The dose rates of radiation absorbed by wild rodents inhabiting a site severely contaminated by the Fukushima Dai-ichi Nuclear Power Plant accident were estimated. The large Japanese field mouse (Apodemus speciosus), also called the wood mouse, was the major rodent species captured in the sampling area, although other species of rodents, such as small field mice (Apodemus argenteus) and Japanese grass voles (Microtus montebelli), were also collected. The external exposure of rodents calculated from the activity concentrations of radiocesium ((134)Cs and (137)Cs) in litter and soil samples using the ERICA (Environmental Risk from Ionizing Contaminants: Assessment and Management) tool, under the assumption that the radionuclides existed as an infinite plane isotropic source, was almost the same as that measured directly with glass dosimeters embedded in rodent abdomens. Our findings suggest that the ERICA tool is useful for estimating external dose rates to small animals inhabiting forest floors; however, the estimated dose rates showed large standard deviations. This could be an indication of the inhomogeneous distribution of radionuclides in the sampled litter and soil. There was a 50-fold difference between minimum and maximum whole-body activity concentrations measured in rodents at the time of capture. The radionuclides retained in rodents after capture decreased exponentially over time. Regression equations indicated that the biological half-life of radiocesium after capture was 3.31 d. At the time of capture, the lowest activity concentration was measured in the lung and was approximately half of the highest concentration measured in the mixture of muscle and bone. The average internal absorbed dose rate was markedly smaller than the average external dose rate (<10% of the total absorbed dose rate). The average total absorbed dose rate to wild rodents inhabiting the sampling area was estimated to be approximately 52 μGy h(-1) (1.2 mGy d(-1)), even 3 years after the accident. This dose rate exceeds the 0.1-1 mGy d(-1) derived consideration reference level for the Reference Rat proposed by the International Commission on Radiological Protection (ICRP). Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Chemical analysis of aerosol in the Venusian cloud layer by reaction gas chromatography on board the Vega landers

    NASA Technical Reports Server (NTRS)

    Gelman, B. G.; Drozdov, Y. V.; Melnikov, V. V.; Rotin, V. A.; Khokhlov, V. N.; Bondarev, V. B.; Dolnikov, G. G.; Dyachkov, A. V.; Nenarokov, D. F.; Mukhin, L. M.

    1986-01-01

    The experiment on sulfuric acid aerosol determination in the Venusian cloud layer on board the Vega landers is described. An average content of sulfuric acid of approximately 1 mg/cu m was found for the samples taken from the atmosphere at heights from 63 to 48 km and analyzed with the SIGMA-3 chromatograph. Sulfur dioxide (SO2) was revealed in the gaseous sample at the height of 48 km. From the experimental results and blank run measurements, it is suggested that the Venusian cloud layer aerosol consists of more complex particles than a simple sulfuric acid-water solution.

  6. VizieR Online Data Catalog: Jet kinematics of blazars at 43GHz with the VLBA (Jorstad+, 2017)

    NASA Astrophysics Data System (ADS)

    Jorstad, S. G.; Marscher, A. P.; Morozova, D. A.; Troitsky, I. S.; Agudo, I.; Casadio, C.; Foord, A.; Gomez, J. L.; MacDonald, N. R.; Molina, S. N.; Lahteenmaki, A.; Tammi, J.; Tornikoski, M.

    2018-04-01

    The VLBA-BU-BLAZAR monitoring program consists of approximately monthly observations with the VLBA at 43GHz of a sample of AGNs detected as γ-ray sources. In this paper, we present the results of observations from 2007 June to 2013 January. The sample consists of 21 FSRQs, 12 BLLacs, and 3 radio galaxies (RGs). It includes the blazars and radio galaxies detected at γ-ray energies by EGRET with average flux density at 43GHz exceeding 0.5Jy, declination north of -30°, and optical magnitude in the R band brighter than 18.5. (5 data files).

  7. Kernel Wiener filter and its application to pattern recognition.

    PubMed

    Yoshino, Hirokazu; Dong, Chen; Washizawa, Yoshikazu; Yamashita, Yukihiko

    2010-11-01

    The Wiener filter (WF) is widely used for inverse problems. From an observed signal, it provides the estimated signal that minimizes the squared error, averaged over the original and observed signals, among all linear operators. The kernel WF (KWF), extended directly from the WF, has the drawback that additive noise has to be represented by samples. Since the computational complexity of kernel methods depends on the number of samples, this case incurs a huge computational cost. By using a first-order approximation of the kernel functions, we realize a KWF that can handle such noise not by samples but as a random variable. We also propose an error estimation method for kernel filters using these approximations. To show the advantages of the proposed methods, we conducted experiments to denoise images and estimate errors. We also apply the KWF to classification, since the KWF can provide an approximation of the maximum a posteriori classifier, which provides the best recognition accuracy. The noise term in the criterion can be used for classification in the presence of noise or as a new regularization that suppresses changes in the input space, whereas the ordinary regularization for kernel methods suppresses changes in the feature space. To demonstrate these advantages, we conducted experiments on binary and multiclass classification and on classification in the presence of noise.
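
    For context, the plain (linear) Wiener filter that the KWF extends is the minimum-mean-squared-error linear estimator W = C_sx C_xx^(-1). The sketch below illustrates only that linear case on synthetic data, not the kernel extension or the first-order approximation proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy setup: observe x = s + noise and estimate s with the best linear map.
n, dim = 2000, 8
S = rng.standard_normal((n, dim)) @ rng.standard_normal((dim, dim))  # correlated signal
X = S + 0.5 * rng.standard_normal((n, dim))                          # noisy observations

C_xx = X.T @ X / n               # observation covariance
C_sx = S.T @ X / n               # signal-observation cross-covariance
W = C_sx @ np.linalg.inv(C_xx)   # Wiener filter: minimizes E||s - W x||^2 over linear W

S_hat = X @ W.T
print(f"Wiener MSE {np.mean((S - S_hat) ** 2):.3f} "
      f"vs raw observation MSE {np.mean((S - X) ** 2):.3f}")
```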

  8. On the siting of gases shock-emplaced from internal cavities in basalt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiens, R.C.

    1988-12-01

    Noble gases were extracted by stepped combustion and crushing from basalts which contained gas-filled cavities of controlled sizes prior to shock at 40 GPa. Analysis of fractions enriched and depleted in shock glass from a single sample gave a factor of 2 higher gas abundances in the glass-rich separate. Release patterns were nearly identical, suggesting similar siting (in glass) in both fractions. Crushing of a sample released approximately 45% of implanted noble gases, but only approximately 17% of N2, indicating that most or all of the noble gas was trapped in vesicles. Analysis by SEM/EDS confirmed the presence of vesicles in glassy areas, with an average diameter of approximately 10 μm. Samples with relatively large pre-shock cavities were found to consist of up to 70-80% glass locally and generally exhibit greater local shock effects than solid and densely-packed particulate targets at the same shock pressure, though the latter give higher glass emplacement efficiencies. The petrographic results indicate that in situ production of glassy pockets grossly similar to those in the shergottite EETA 79001 is possible from shock reverberations in the vicinity of a vug. However, the siting of the gases points to a more complex scenario, in which SPB gas and melt material were probably injected into EETA 79001.

  9. Elemental composition of solar energetic particles

    NASA Technical Reports Server (NTRS)

    Cook, W. R.; Stone, E. C.; Vogt, R. E.

    1984-01-01

    The Low Energy Telescopes on the Voyager spacecraft have been used to measure the elemental composition (Z = 2-28) and energy spectra (5-15 MeV per nucleon) of solar energetic particles (SEPs) in seven large flare events. Four flare events were selected which have SEP abundance ratios approximately independent of energy per nucleon. For these selected flare events, SEP composition results may be described by an average composition plus a systematic flare-to-flare deviation about the average. The four-flare average SEP composition is systematically different from the solar composition determined by photospheric spectroscopy. These systematic composition differences are apparently not due to SEP propagation or acceleration effects. In contrast, the four-flare average SEP composition is in agreement with measured solar wind abundances and with a number of recent spectroscopic coronal abundance measurements. These findings suggest that SEPs originate in the corona, and that both SEPs and the solar wind sample a coronal composition which is significantly and persistently different from that measured for the photosphere.

  10. Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.

    PubMed

    Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian

    2018-05-23

    Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
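
    One way to obtain constant time and memory per sample with a per-segment maximum-error guarantee is a slope-bound ("swing filter" style) scheme that keeps only the segment anchor and a shrinking interval of admissible slopes. The sketch below illustrates that general idea; it is not the authors' specific algorithm.

```python
def online_pla(stream, eps):
    """Online piecewise linear approximation with a max-error guarantee of eps
    per segment, using O(1) state per sample: the segment anchor, the interval
    of admissible slopes [lo, hi], and the last accepted sample.  Slope-bound
    sketch for illustration only.  Returns (t_start, y_start, t_end, slope)."""
    segments = []
    anchor = prev = None
    lo, hi = float("-inf"), float("inf")

    def close():
        # any slope in [lo, hi] keeps every accepted sample within eps
        slope = 0.0 if prev == anchor else (lo + hi) / 2.0
        segments.append((anchor[0], anchor[1], prev[0], slope))

    for t, y in stream:
        if anchor is None:
            anchor = prev = (t, y)
            lo, hi = float("-inf"), float("inf")
            continue
        t0, y0 = anchor
        # slopes that keep the new sample within +/- eps of the line
        new_lo = (y - eps - y0) / (t - t0)
        new_hi = (y + eps - y0) / (t - t0)
        if max(lo, new_lo) <= min(hi, new_hi):
            lo, hi = max(lo, new_lo), min(hi, new_hi)
            prev = (t, y)
        else:
            close()                      # current segment cannot absorb this sample
            anchor = prev = (t, y)
            lo, hi = float("-inf"), float("inf")
    if anchor is not None:
        close()
    return segments

# Example: a piecewise ramp with small alternating noise compresses to a few segments
data = [(t, (0.1 * t if t < 100 else 10 + 0.3 * (t - 100)) + 0.04 * (-1) ** t)
        for t in range(200)]
print(len(online_pla(data, eps=0.1)))
```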

  11. Gross Motor Skills and Cardiometabolic Risk in Children: A Mediation Analysis.

    PubMed

    Burns, Ryan D; Brusseau, Timothy A; Fu, You; Hannon, James C

    2017-04-01

    The purpose of this study was to examine the linear relationship between gross motor skills and cardiometabolic risk, with aerobic fitness as a mediator variable, in low-income children from the United States. Participants were a convenience sample of 224 children (mean ± SD age = 9.1 ± 1.1 yr; 129 girls and 95 boys) recruited from five low-income elementary schools from the Mountain West Region of the United States. Gross motor skills were assessed using the Test for Gross Motor Development, 3rd Edition. Gross motor skills were analyzed using a locomotor skill, a ball skill, and a total gross motor skill score. Aerobic fitness was assessed using the Progressive Aerobic Cardiovascular Endurance Run that was administered during physical education class. A continuous and age- and sex-adjusted metabolic syndrome score (MetS) was calculated from health and blood marker measurements collected in a fasted state before school hours. Total effects, average direct effects, and indirect effects (average causal mediation effect) were calculated using a bootstrap mediation analysis method via a linear regression algorithm. The average causal mediation effect of gross locomotor skills on MetS scores, using aerobic fitness as the mediator variable, was statistically significant (β = -0.055, 95% confidence interval = -0.097 to -0.021, P = 0.003). The model explained approximately 17.5% of the total variance in MetS with approximately 43.7% of the relationship between locomotor skills and MetS mediated through aerobic fitness. Ball skills did not significantly relate with cardiometabolic risk. There is a significant relationship between gross locomotor skills and cardiometabolic risk that is partially mediated through aerobic fitness in a sample of low-income children from the United States.
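
    The bootstrap product-of-coefficients approach to an indirect (mediated) effect can be sketched as follows: regress the mediator on the exposure, the outcome on the mediator and exposure, and bootstrap the product of the two key coefficients. The linear model, variable names, and simulated data below are illustrative, not the study's adjusted specification.

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=2000, seed=0):
    """Bootstrap the indirect effect a*b, where
       m = a0 + a*x + e1        (exposure -> mediator)
       y = b0 + b*m + c*x + e2  (mediator -> outcome, adjusting for exposure)."""
    rng = np.random.default_rng(seed)
    x, m, y = map(np.asarray, (x, m, y))
    n = len(x)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.linalg.lstsq(np.column_stack([np.ones(n), xb]), mb, rcond=None)[0][1]
        b = np.linalg.lstsq(np.column_stack([np.ones(n), mb, xb]), yb, rcond=None)[0][1]
        est[i] = a * b
    lo, hi = np.percentile(est, [2.5, 97.5])
    return est.mean(), (lo, hi)

# Illustrative data: motor skill -> fitness -> (lower) MetS score
rng = np.random.default_rng(5)
skill = rng.standard_normal(224)
fitness = 0.5 * skill + rng.standard_normal(224)
mets = -0.4 * fitness - 0.1 * skill + rng.standard_normal(224)
print(bootstrap_indirect_effect(skill, fitness, mets))   # indirect effect near -0.2
```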

  12. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation

    PubMed Central

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-01-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis. PMID:20011037

  13. Measurements of an Anomalous Global Methane Increase During 1998

    NASA Technical Reports Server (NTRS)

    Dlugokencky, E. J.; Walter, B. P.; Masarie, K. A.; Lang, P. M.; Kasischke, E. S.; Hansen, James E. (Technical Monitor)

    2001-01-01

    Measurements of atmospheric methane from a globally distributed network of air sampling sites indicate that the globally averaged CH4 growth rate increased from an average of 3.9 ppb/yr during 1995-1997 to 12.7 +/- 0.6 ppb in 1998. The global growth rate then decreased to 2.6 +/- 0.6 ppb during 1999, indicating that the large increase in 1998 was an anomaly and not a return to the larger growth rates observed during the late 1970s and early 1980s. The increased growth rate represents an anomalous increase in the imbalance between CH4 sources and sinks equal to approximately 24 Tg CH4 during 1998. Wetlands and boreal biomass burning are sources that may have contributed to the anomaly. During 1998, the globally averaged temperature anomaly was +0.67 C, the largest temperature anomaly in the modern record. A regression model based on temperature and precipitation anomalies was used to calculate emission anomalies of 11.6 Tg CH4 from wetlands north of 30 N and 13 Tg CH4 for tropical wetlands during 1998 compared to average emissions calculated for 1982-1993. In 1999, calculated wetland emission anomalies were negative for high northern latitudes and the tropics, contributing to the low growth rate observed in 1999. Also, 1998 was a severe fire year in boreal regions, where approximately 1.3x10(exp 5) sq km of forest and peatland burned, releasing an estimated 5.7 Tg CH4.

  14. Jello Shot Consumption among Underage Youths in the United States

    PubMed Central

    SIEGEL, MICHAEL; GALLOWAY, ASHLEY; ROSS, CRAIG S.; BINAKONSKY, JANE; JERNIGAN, DAVID H.

    2015-01-01

    We sought, for the first time, to identify the extent of jello shot consumption among underage youth. We conducted a study among a national sample of 1,031 youth, aged 13 to 20, using a pre-recruited internet panel maintained by GfK Knowledge Networks to assess past 30-day consumption of jello shots. Nearly one-fifth of underage youth have consumed jello shots in the past 30 days and jello shots make up an average of nearly 20% of their overall alcohol intake. Jello shot users in our sample were approximately 1.5 times more likely to binge drink, consumed approximately 1.6 times as many drinks per month, and were 1.7 times more likely to have been in a physical fight related to their alcohol use as drinkers in general. Ascertainment of jello shot use should become a standard part of youth alcohol surveillance and states should consider banning the sale of these products. PMID:27087771

  15. Scenario-based modeling for multiple allocation hub location problem under disruption risk: multiple cuts Benders decomposition approach

    NASA Astrophysics Data System (ADS)

    Yahyaei, Mohsen; Bashiri, Mahdi

    2017-12-01

    The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first approach is to apply sample average approximation to approximate the two-stage stochastic problem via sampling. Then, by applying the multiple cuts Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding the computational time and number of iterations.
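
    The sample average approximation step described above replaces an expectation over random scenarios with an average over a finite sample, turning the stochastic program into a deterministic one. The sketch below applies that idea to a simple newsvendor-style recourse problem as a stand-in; it is not the two-stage hub-location model or the Benders scheme from the paper.

```python
import numpy as np

def saa_newsvendor(cost, price, demand_sampler, n_scenarios=1000, seed=0):
    """Sample average approximation of  min_q  E[ cost*q - price*min(q, D) ].

    The expectation over demand D is replaced by an average over n_scenarios
    sampled demands, and the resulting deterministic problem is solved by a
    simple grid search over candidate order quantities."""
    rng = np.random.default_rng(seed)
    demands = demand_sampler(rng, n_scenarios)
    candidates = np.linspace(0, demands.max(), 200)
    # sample-average objective for every candidate order quantity q
    obj = [cost * q - price * np.minimum(q, demands).mean() for q in candidates]
    best = int(np.argmin(obj))
    return candidates[best], obj[best]

q_hat, val = saa_newsvendor(cost=1.0, price=3.0,
                            demand_sampler=lambda rng, n: rng.gamma(5.0, 20.0, n))
print(f"SAA order quantity ~ {q_hat:.1f}, approximate optimal objective {val:.1f}")
# The true optimum orders up to the (1 - cost/price) = 2/3 demand quantile.
```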

  16. Determination of mercury in an assortment of dietary supplements using an inexpensive combustion atomic absorption spectrometry technique.

    PubMed

    Levine, Keith E; Levine, Michael A; Weber, Frank X; Hu, Ye; Perlmutter, Jason; Grohse, Peter M

    2005-01-01

    The concentrations of mercury in forty commercially available dietary supplements were determined using a new, inexpensive analysis technique. The method involves thermal decomposition, amalgamation, and detection of mercury by atomic absorption spectrometry with an analysis time of approximately six minutes per sample. The primary cost saving from this approach is that labor-intensive sample digestion is not required prior to analysis, further automating the analytical procedure. As a result, manufacturers and regulatory agencies concerned with monitoring lot-to-lot product quality may find this approach an attractive alternative to the more classical acid-decomposition, cold vapor atomic absorption methodology. Dietary supplement samples analyzed included astragalus, calcium, chromium picolinate, echinacea, ephedra, fish oil, ginger, ginkgo biloba, ginseng, goldenseal, guggul, senna, St John's wort, and yohimbe products. Quality control samples analyzed with the dietary supplements indicated a high level of method accuracy and precision. Ten replicate preparations of a standard reference material (NIST 1573a, tomato leaves) were analyzed, and the average mercury recovery was 109% (2.0% RSD). The method quantitation limit was 0.3 ng, which corresponded to 1.5 ng/g sample. The highest mercury concentration found (123 ng/g) was measured in a concentrated salmon oil sample. When taken as directed by an adult, this product would result in an approximate mercury ingestion of 7 μg per week.

  17. A Tracer Test at the Los Alamos Canyon Weir

    NASA Astrophysics Data System (ADS)

    Levitt, D. G.; Stone, W. J.; Newell, D. L.; Wykoff, D. S.

    2002-12-01

    A low-head weir was constructed in the Los Alamos Canyon to reduce the transport of contaminant-bearing sediment caused by fire-enhanced runoff off Los Alamos National Laboratory (LANL) property towards the Rio Grande following the May 2000 Cerro Grande fire at Los Alamos, New Mexico. Fractured basalt was exposed in the channel by grading during construction of the weir, and water temporarily ponds behind the weir following periods of runoff. In order to monitor any downward transport of contaminants into fractured basalt, and potentially downward to the regional ground water, three boreholes (one vertical, one at 43 degrees, and one at 34 degrees from horizontal) were installed for environmental monitoring. The boreholes penetrate to depths ranging from approximately 9 to 82 m below the weir floor. The two angled boreholes are fitted with flexible FLUTe liners with resistance sensors to measure relative moisture content and absorbent sampling pads for contaminant and environmental tracer sampling within the vadose zone. The two angled boreholes are also monitored for relative changes in moisture content by neutron logging. The vertical borehole penetrates three perched water zones and is equipped with four screens and sampling ports. In April 2002, a tracer test was initiated with the application of a 0.2 M (16,000 ppm) solution of potassium bromide (KBr) onto the weir floor. The tracer experiment was intended to provide data on travel times through the complex hydrogeologic media of fractured basalt. A precipitation and runoff event in June 2002 resulted in approximately 0.61 m of standing water behind the weir. If the KBr and flood waters were well mixed, the concentration of KBr in the flood waters was approximately 24 ppm. Bromide was detected in the absorbent membrane in the 43 degree hole at concentrations up to 2 ppm. Resistance sensors in the 43 degree borehole detected moisture increases within 3 days at a depth of 27 m, indicating an average wetting front velocity of 8.9 m per day in the vadose zone. Increases in bromide concentrations were detected in water samples from two of the four sampling ports in the vertical well within 10 days of the precipitation event, indicating an average wetting front velocity of 5.5 m per day to the sample port at a depth of 55 m below the weir floor. Increases in bromide concentrations were detected at the bottom port of the vertical well at a depth of 78 m below the weir floor within 21 days, indicating an average wetting front velocity of 3.7 m per day. Modeling of this tracer test data will improve our understanding of: the impact of the fire on ground-water quality; the impact of the weir on ground-water quality; surface water/ground water interactions; and the hydraulic properties of the Cerros del Rio basalts underlying the eastern Pajarito Plateau.

  18. Neutral degradates of chloroacetamide herbicides: occurrence in drinking water and removal during conventional water treatment.

    PubMed

    Hladik, Michelle L; Bouwer, Edward J; Roberts, A Lynn

    2008-12-01

    Treated drinking water samples from 12 water utilities in the Midwestern United States were collected during Fall 2003 and Spring 2004 and were analyzed for selected neutral degradates of chloroacetamide herbicides, along with related compounds. Target analytes included 20 neutral chloroacetamide degradates, six ionic chloroacetamide degradates, four parent chloroacetamide herbicides, three triazine herbicides, and two neutral triazine degradates. In the fall samples, 17 of 20 neutral chloroacetamide degradates were detected in the finished drinking water, while 19 of 20 neutral chloroacetamide degradates were detected in the spring. Median concentrations for the neutral chloroacetamide degradates were approximately 2-60ng/L during both sampling periods. Concentrations measured in the fall samples of treated water were nearly the same as those measured in source waters, despite the variety of treatment trains employed. Significant removals (average of 40% for all compounds) were only found in the spring samples at those utilities that employed activated carbon.

  19. Investigation of ultrashort-pulsed laser on dental hard tissue

    NASA Astrophysics Data System (ADS)

    Uchizono, Takeyuki; Awazu, Kunio; Igarashi, Akihiro; Kato, Junji; Hirai, Yoshito

    2007-02-01

    Ultrashort-pulsed lasers (USPL) can ablate various materials precisely and with little thermal effect. In laser dentistry, several researchers have studied USPL ablation of dental hard tissues to avoid the cracking and carbonized layers generated by irradiation with conventional lasers such as Er:YAG and CO2 lasers. We investigated the effectiveness of USPL ablation of dental hard tissues. In this study, a Ti:sapphire laser was used as the USPL. The laser parameters were a pulse duration of 130 fs, a wavelength of 800 nm, a repetition rate of 1 kHz, and average power densities of 90-360 W/cm2. Bovine root dentin plates and crown enamel plates were irradiated with the USPL at 1 mm/s using a moving stage. The irradiated samples were analyzed by SEM, EDX, FTIR and a roughness meter. In all irradiated samples, the cavity margins and walls were extremely sharp and steep. In the irradiated dentin samples, the surfaces showed open dentin tubules and no smear layer. The Ca/P ratio measured by EDX and the optical spectrum measured by FTIR showed no change between irradiated and non-irradiated samples. These results confirmed that the USPL could ablate dental hard tissue precisely and non-thermally. In addition, the ablation depths were approximately 10 μm, 20 μm, and 60 μm at 90, 180, and 360 W/cm2, respectively. Therefore, the ablation depth by USPL depends on the average power density, and the depth of precise, non-thermal ablation can potentially be controlled by adjusting the irradiated average power density.

  20. Nonpoint Pollution Discharge Permit Testing and Control Strategies at Naval Air Station Whidbey Island.

    DTIC Science & Technology

    1991-01-01

    the permit. Monthly maximum and average test results are submitted to the USEPA with an approximation of the weekly flow rate. The quantity of flow is... flow rate. The storm flow data and drainage system hydraulic capacity are being reviewed by Sajan, Inc., Seattle. Figure 2. Visible Soil Staining at... approach is to collect composite samples of the flow, which will reduce fluctuations and allow a more accurate determination of total loadings with

  1. Long-range pulselength scaling of 351nm laser damage thresholds

    NASA Astrophysics Data System (ADS)

    Foltyn, S. R.; Jolin, L. J.

    1986-12-01

    In a series of experiments at 351 nm with pulselengths of 9, 26, 54, and 625 ns, it was found that laser damage thresholds increased as (pulselength)^x, and that the exponent averaged 0.36 and ranged, for different samples, from 0.23 to 0.48. Similar results were obtained when only catastrophic damage was considered. Samples included Al2O3/SiO2 in both AR and HR multilayers, HR's of Sc2O3/SiO2 and HfO2/SiO2, and an Al-on-Pyrex mirror; 9 ns thresholds were between 0.2 and 5.6 J/cm2. When these data were compared with a wide range of other results - for wavelengths from 0.25 to 10.6 microns and pulselengths down to 4 ps - a remarkably consistent picture emerged. Damage thresholds, on average, increase approximately as the cube root of pulselength from picoseconds to nearly a microsecond, and do so regardless of wavelength or material under test.
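    For reference, the reported scaling can be restated as a simple power law (a schematic summary of the fit quoted above, not an additional result):

    $$F_{\mathrm{th}}(\tau) \approx F_{\mathrm{th}}(\tau_0)\,\left(\tau/\tau_0\right)^{x}, \qquad x \approx 0.36\ \text{(0.23-0.48 across samples)},$$

    with the cross-study trend corresponding to $F_{\mathrm{th}} \propto \tau^{1/3}$.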

  2. The independent roles of cardiorespiratory fitness and sedentary time on chronic conditions and Body Mass Index in older adults.

    PubMed

    Stathokostas, L; Dogra, S; Paterson, D H

    2015-10-01

    The aim of this paper was to examine the independent influence of cardiorespiratory fitness and sedentary behavior on chronic disease incidence and body composition in older adults. A sample of 292 community-dwelling men and women (mean age 69.3±8.1 years) underwent maximal treadmill testing and completed questionnaires relating to their leisure-time physical activity, sedentary time, and health. The average VO2 of the sample was approximately 21 ml·kg^-1·min^-1, and the average sedentary time was over 3 hours per day. Cardiorespiratory fitness was found to be a stronger predictor of the number of chronic conditions and BMI than either total physical activity or sedentary time. Those with higher cardiorespiratory fitness had fewer chronic conditions and a lower BMI. No such associations were seen for either total physical activity levels or sedentary time. Cardiorespiratory fitness is thus a stronger predictor of health among older adults, which further highlights the importance of promoting public health guidelines for cardiorespiratory fitness.

  3. Highly transparent Tb3Al5O12 magneto-optical ceramics sintered from co-precipitated powders with sintering aids

    NASA Astrophysics Data System (ADS)

    Dai, Jiawei; Pan, Yubai; Xie, Tengfei; Kou, Huamin; Li, Jiang

    2018-04-01

    Highly transparent terbium aluminum garnet (Tb3Al5O12, TAG) magneto-optical ceramics were fabricated from co-precipitated nanopowders with tetraethoxysilane (TEOS) as sintering aid by vacuum sintering combined with hot isostatic pressing (HIP) post-treatment. The ball milled TAG powder shows better dispersity than the as-synthesized powder, and its average particle size is about 80 nm. For the ceramic sample pre-sintered at 1720 °C for 20 h with HIP post-treated at 1700 °C for 3 h, the in-line transmittance exceeds 76% in the region of 400-1580nm (except the absorption band), reaching a maximum value of 81.8% at the wavelength of 1390 nm. The microstructure of the TAG ceramic is homogeneous and its average grain size is approximately 19.7 μm. The Verdet constant of the sample is calculated to be -182.7 rad·T-1·m-1 at room temperature.

  4. Preliminary report on a population that received a heavy exposure to methyl mercury

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clarkson, T.W.; Smith, J.C.; Bakir, F.

    1973-01-01

    An epidemic of methyl mercury poisoning due to the consumption of homemade bread prepared from wheat treated with a methyl mercury fungicide occurred in Iraq in the winter of 1971-1972, with 6530 cases admitted to hospitals. Four hundred fifty-nine died in hospitals. Observations on 16 patients over a period of 60 days indicated a median clearance half-time from blood of approximately 70 days. Concentrations of total mercury in milk averaged 5% of the mercury in simultaneously collected samples of whole blood. Concentrations of total mercury in urine samples did not correlate with concentrations of mercury in blood. Inorganic mercury accounted for the following average percentages of total mercury: 22% in plasma, 40% in milk, and 73% in urine. Studies of dose-response relationships indicated that toxic effects of methyl mercury became clinically detectable at body burdens in the range of 0.05-0.8 mg Hg/kg body weight. 8 references, 4 figures.
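    The quoted half-time implies first-order (exponential) clearance; as an illustrative restatement rather than a formula from the report, the blood concentration would follow

    $$C(t) = C(0)\,e^{-kt}, \qquad k = \frac{\ln 2}{t_{1/2}} \approx \frac{0.693}{70\ \mathrm{d}} \approx 0.01\ \mathrm{d^{-1}}.$$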

  5. Experimental Analysis of Heat Transfer Characteristics and Pressure Drop through Screen Regenerative Heat Exchangers

    DTIC Science & Technology

    1993-12-01

    of fluid; T1, initial temperature of matrix and fluid; Tf1, average inlet temperature after the step change; Tii, average inlet temperature before the step...respectively, of the regenerator. The horizontal distances shown with Tf1, Tj, and T,2 illustrate the time interval for which the average values were...temperature was not a true step function, the investigator made an approximation. The approximation was based on an average temperature. Tf1 was the

  6. Preliminary report on part of the Oat Hill quicksilver mine, Mayacmas district, Napa County, California

    USGS Publications Warehouse

    Fix, Philip Forsyth

    1955-01-01

    Oat Hill quicksilver mine, located in the Mayacmas district of northern California, and credited with having produced more than 160,000 flasks of quicksilver, was sampled cooperatively by the Bureau of Mines and the Geological Survey during 1944. Twenty-eight diamond drill holes totaling 8,120 feet were drilled by the Bureau of Mines in four of the six principal veins to sample virgin low-grade reserves and stope fill, and reserves in the other two veins were estimated from existing underground workings and by inferences from drill holes in nearby veins. The writer estimates a total of 10,220 flasks of quicksilver in indicated and inferred reserves totaling 320,000 tons. Indicated reserves minable under 1943 conditions are estimated at 1,960 flasks of quicksilver in 75,000 tons averaging 3.0 lbs Hg per ton. Inferred reserves minable under 1943 conditions are estimated at 4,640 flasks of quicksilver in 109,920 tons averaging about 3.2 lbs Hg per ton. Inferred reserves believed minable only under economic conditions much more favorable than even those of 1943 are estimated at 2,620 flasks of quicksilver in 135,080 tons averaging a little less than 1.5 lbs Hg per ton. About two-thirds of the indicated reserves are accessible in underground workings. All other reserves are estimated approximately without access underground. Several areas not sampled may possibly contain reserves.

  7. Assessment of airborne asbestos exposure during the servicing and handling of automobile asbestos-containing gaskets.

    PubMed

    Blake, Charles L; Dotson, G Scott; Harbison, Raymond D

    2006-07-01

    Five test sessions were conducted to assess asbestos exposure during the removal or installation of asbestos-containing gaskets on vehicles. All testing took place within an operative automotive repair facility involving passenger cars and a pickup truck ranging in vintage from the late 1960s through the 1970s. A professional mechanic performed all shop work, including engine disassembly and reassembly, gasket manipulation, and parts cleaning. Bulk sample analysis of removed gaskets through polarized light microscopy (PLM) revealed asbestos fiber concentrations ranging between 0 and 75%. Personal and area air samples were collected and analyzed using National Institute for Occupational Safety and Health (NIOSH) methods 7400 [phase contrast microscopy (PCM)] and 7402 [transmission electron microscopy (TEM)]. Among all air samples collected, approximately 21% (n = 11) contained chrysotile fibers. The mean PCM and phase contrast microscopy equivalent (PCME) 8-h time-weighted average (TWA) concentrations for these samples were 0.0031 fibers per cubic centimeter (f/cc) and 0.0017 f/cc, respectively. Based on these findings, automobile mechanics who worked with asbestos-containing gaskets may have been exposed to airborne asbestos concentrations approximately 100 times lower than the current Occupational Safety and Health Administration (OSHA) Permissible Exposure Limit (PEL) of 0.1 f/cc.
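    As an aside, an 8-h TWA of the kind reported above is computed by time-weighting the individual sample concentrations over the full shift. The sketch below is a generic illustration with made-up numbers, not data or code from the study.

```python
# Hypothetical illustration of an 8-h TWA calculation; the concentrations
# (f/cc) and durations (minutes) below are invented, not the study's data.
samples = [(0.004, 90), (0.002, 150), (0.001, 120)]  # (fiber concentration, minutes)

sampled_minutes = sum(t for _, t in samples)
# Any unsampled remainder of the 8-h (480 min) shift is assumed to be zero exposure.
twa_8h = sum(c * t for c, t in samples) / 480.0

print(f"sampled time: {sampled_minutes} min, 8-h TWA: {twa_8h:.4f} f/cc")
```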

  8. The global topography of Bennu: altimetry, photoclinometry, and processing

    NASA Astrophysics Data System (ADS)

    Perry, M. E.; Barnouin, O. S.; Daly, M. G.; Seabrook, J.; Palmer, E. E.; Gaskell, R. W.; Craft, K. L.; Roberts, J. H.; Philpott, L.; Asad, M. Al; Johnson, C. L.; Nair, A. H.; Espiritu, R. C.; Nolan, M. C.; Lauretta, D. S.

    2017-09-01

    The Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer (OSIRIS-REx) mission will spend two years observing (101955) Bennu and will then return pristine samples of carbonaceous material from the asteroid [1]. Launched in September 2016, OSIRIS-REx arrives at Bennu in August 2018, acquires a sample in July 2020, and returns the sample to Earth in September 2023. The instruments onboard OSIRIS-REx will measure the physical and chemical properties of this B-class asteroid, a subclass within the larger group of C-complex asteroids that might be organic-rich. At approximately 500 m in average diameter [2], Bennu is sufficiently large to retain substantial regolith, and as an Apollo asteroid with a low inclination (6°), it is one of the most accessible primitive near-Earth asteroids.

  9. Aluminum and Phthalates in Calcium Gluconate: Contribution From Glass and Plastic Packaging.

    PubMed

    Yokel, Robert A; Unrine, Jason M

    2017-01-01

    Aluminum contamination of parenteral nutrition solutions has been documented for 3 decades. It can result in elevated blood, bone, and whole body aluminum levels associated with neurotoxicity, reduced bone mass and mineral content, and perhaps hepatotoxicity. The primary aluminum source among parenteral nutrition components is glass-packaged calcium gluconate, in which aluminum concentration in the past 3 decades has averaged approximately 4000 μg/L, compared with <200 μg/L in plastic container-packaged calcium gluconate. A concern about plastic packaging is leaching of plasticizers, including phthalates, which have the potential to cause endocrine (male reproductive system) disruption and neurotoxicity. Aluminum was quantified in samples collected periodically for more than 2 years from 3 calcium gluconate sources used to prepare parenteral nutrition solutions; 2 packaged in glass (from France and the United States) and 1 in plastic (from Germany); in a recently released plastic-packaged solution (from the United States); and in the 2 glass containers. Phthalate concentration was determined in selected samples of each product and leachate of the plastic containers. The initial aluminum concentration was approximately 5000 μg/L in the 2 glass-packaged products and approximately 20 μg/L in the plastic-packaged product, and increased approximately 30%, 50%, and 100% in 2 years, respectively. The aluminum concentration in a recently released Calcium Gluconate Injection USP was approximately 320 μg/L. Phthalates were not detected in any calcium gluconate solutions or leachates. Plastic packaging greatly reduces the contribution of aluminum to parenteral nutrition solutions from calcium gluconate compared with the glass-packaged product.

  10. Cost analysis of a patient navigation system to increase screening colonoscopy adherence among urban minorities.

    PubMed

    Jandorf, Lina; Stossel, Lauren M; Cooperman, Julia L; Graff Zivin, Joshua; Ladabaum, Uri; Hall, Diana; Thélémaque, Linda D; Redd, William; Itzkowitz, Steven H

    2013-02-01

    Patient navigation (PN) is being used increasingly to help patients complete screening colonoscopy (SC) to prevent colorectal cancer. At their large, urban academic medical center with an open-access endoscopy system, the authors previously demonstrated that PN programs produced a colonoscopy completion rate of 78.5% in a cohort of 503 patients (predominantly African Americans and Latinos with public health insurance). Very little is known about the direct costs of implementing PN programs. The objective of the current study was to perform a detailed cost analysis of PN programs at the authors' institution from an institutional perspective. In 2 randomized controlled trials, average-risk patients who were referred for SC by primary care providers were recruited for PN between May 2008 and May 2010. Patients were randomized to 1 of 4 PN groups. The cost of PN and net income to the institution were determined in a cost analysis. Among 395 patients who completed colonoscopy, 53.4% underwent SC alone, 30.1% underwent colonoscopy with biopsy, and 16.5% underwent snare polypectomy. Accounting for the average contribution margins of each procedure type, the total revenue was $95,266.00. The total cost of PN was $14,027.30. Net income was $81,238.70. In a model sample of 1000 patients, net income at the completion rate achieved with PN (approximately 80%) was compared with net income at the historic PN program rate (approximately 65%) and at the national average (approximately 50%); relative to these, the current PN program generated additional net incomes of $35,035.50 and $44,956.00, respectively. PN among minority patients with mostly public health insurance generated additional income to the institution, mainly because of increased colonoscopy completion rates. Copyright © 2012 American Cancer Society.
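    The headline figures above can be checked with a line of arithmetic; the sketch below simply reproduces the reported revenue-minus-cost calculation, with all values taken from the abstract.

```python
# Quick check of the reported arithmetic (figures from the abstract above).
total_revenue = 95_266.00   # contribution margins across 395 completed colonoscopies
pn_cost       = 14_027.30   # total cost of the patient navigation program
net_income    = total_revenue - pn_cost
print(f"Net income: ${net_income:,.2f}")   # expected: $81,238.70
```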

  11. Numbers, biomass and cultivable diversity of microbial populations relate to depth and borehole-specific conditions in groundwater from depths of 4-450 m in Olkiluoto, Finland.

    PubMed

    Pedersen, Karsten; Arlinger, Johanna; Eriksson, Sara; Hallbeck, Anna; Hallbeck, Lotta; Johansson, Jessica

    2008-07-01

    Microbiology, chemistry and dissolved gas in groundwater from Olkiluoto, Finland, were analysed over 3 years; samples came from 16 shallow observation tubes and boreholes from depths of 3.9-16.2 m and 14 deep boreholes from depths of 35-742 m. The average total number of cells (TNC) was 3.9 x 10(5) cells per ml in the shallow groundwater and 5.7 x 10(4) cells per ml in the deep groundwater. There was a significant correlation between the amount of biomass, analysed as ATP concentration, and TNC. ATP concentration also correlated with the stacked output of anaerobic most probable number cultivations of nitrate-, iron-, manganese- and sulphate-reducing bacteria, and acetogenic bacteria and methanogens. The numbers and biomass varied at most by approximately three orders of magnitude between boreholes, and TNC and ATP were positively related to the concentration of dissolved organic carbon. Two depth zones were found where the numbers, biomass and diversity of the microbial populations peaked. Shallow groundwater down to a depth of 16.2 m on average contained more biomass and cultivable microorganisms than did deep groundwater, except in a zone at a depth of approximately 300 m where the average biomass and number of cultivable microorganisms approached those of shallow groundwater. Starting at a depth of approximately 300 m, there were steep gradients of decreasing sulphate and increasing methane concentrations with depth; together with the peaks in biomass and sulphide concentration at this depth, these suggest that anaerobic methane oxidation may be a significant process at depth in Olkiluoto.

  12. Structure and texture analysis of PVC foils by neutron diffraction.

    PubMed

    Kalvoda, L; Dlouhá, M; Vratislav, S

    2010-01-01

    Crystalline order of molded and then bi-axially stretched foils prepared from atactic PVC resin is investigated by means of wide-angle neutron diffraction (WAND). The observed high-resolution WAND patterns of all samples are dominated by a sharp maximum corresponding to the inter-planar distance 0.52 nm. Two weaker maxima are also resolved at 0.62 and 0.78 nm. Intensities of the peaks vary with deformation ratios of the samples and their diffraction position. Average size of the coherently scattering domains is estimated as approximately 4-8 nm. Based on the experimental data, a novel model of crystalline order of atactic PVC is proposed. Copyright 2009 Elsevier Ltd. All rights reserved.

  13. Spacelab J air filter debris analysis

    NASA Technical Reports Server (NTRS)

    Obenhuber, Donald C.

    1993-01-01

    Filter debris from the Spacelab module SLJ of STS-49 was analyzed for microbial contamination. Debris from the cabin and avionics filters was collected by Kennedy Space Center personnel on 1 Oct. 1992, approximately 5 days postflight. The concentration of microorganisms found was similar to that of previous Spacelab missions, averaging 7.4E+4 CFU/mL for the avionics filter debris and 4.5E+6 CFU/mL for the cabin filter debris. A similar diversity of bacterial types was found in the two filters. Of the 13 different bacterial types identified from the cabin and avionics samples, 6 were common to both filters. The overall analysis of these samples as compared to those of previous missions shows no significant differences.

  14. Rejection of fluorescence background in resonance and spontaneous Raman microspectroscopy.

    PubMed

    Smith, Zachary J; Knorr, Florian; Pagba, Cynthia V; Wachsmann-Hogiu, Sebastian

    2011-05-18

    Raman spectroscopy is often plagued by a strong fluorescent background, particularly for biological samples. If a sample is excited with a train of ultrafast pulses, a system that can temporally separate spectrally overlapping signals on a picosecond timescale can isolate promptly arriving Raman scattered light from late-arriving fluorescence light. Here we discuss the construction and operation of a complex nonlinear optical system that uses all-optical switching in the form of a low-power optical Kerr gate to isolate Raman and fluorescence signals. A single 808 nm laser with 2.4 W of average power and 80 MHz repetition rate is split, with approximately 200 mW of 808 nm light being converted to < 5 mW of 404 nm light sent to the sample to excite Raman scattering. The remaining unconverted 808 nm light is then sent to a nonlinear medium where it acts as the pump for the all-optical shutter. The shutter opens and closes in 800 fs with a peak efficiency of approximately 5%. Using this system we are able to successfully separate Raman and fluorescence signals at an 80 MHz repetition rate using pulse energies and average powers that remain biologically safe. Because the system has no spare capacity in terms of optical power, we detail several design and alignment considerations that aid in maximizing the throughput of the system. We also discuss our protocol for obtaining the spatial and temporal overlap of the signal and pump beams within the Kerr medium, as well as a detailed protocol for spectral acquisition. Finally, we report a few representative results of Raman spectra obtained in the presence of strong fluorescence using our time-gating system.

  15. Evaluation of a new monochloramine generation system for controlling Legionella in building hot water systems.

    PubMed

    Duda, Scott; Kandiah, Sheena; Stout, Janet E; Baron, Julianne L; Yassin, Mohamed; Fabrizio, Marie; Ferrelli, Juliet; Hariri, Rahman; Wagener, Marilyn M; Goepfert, John; Bond, James; Hannigan, Joseph; Rogers, Denzil

    2014-11-01

    To evaluate the efficacy of a new monochloramine generation system for control of Legionella in a hospital hot water distribution system. A 495-bed tertiary care hospital in Pittsburgh, Pennsylvania. The hospital has 12 floors covering approximately 78,000 m2. The hospital hot water system was monitored for a total of 29 months, including a 5-month baseline sampling period prior to installation of the monochloramine system and 24 months of surveillance after system installation (postdisinfection period). Water samples were collected for microbiological analysis (Legionella species, Pseudomonas aeruginosa, Stenotrophomonas maltophilia, Acinetobacter species, nitrifying bacteria, heterotrophic plate count [HPC] bacteria, and nontuberculous mycobacteria). Chemical parameters monitored during the investigation included monochloramine, chlorine (free and total), nitrate, nitrite, total ammonia, copper, silver, lead, and pH. A significant reduction in Legionella distal site positivity was observed between the pre- and postdisinfection periods, with positivity decreasing from an average of 53% (baseline) to an average of 9% after monochloramine application (P < 0.05). Although geometric mean HPC concentrations decreased by approximately 2 log colony-forming units per milliliter during monochloramine treatment, we did not observe significant changes in other microbial populations. This is the first evaluation in the United States of a commercially available monochloramine system installed on a hospital hot water system for Legionella disinfection, and it demonstrated a significant reduction in Legionella colonization. Significant increases in microbial populations or other negative effects previously associated with monochloramine use in large municipal cold water systems were not observed.

  16. Impact of Drilling Operations on Lunar Volatiles Capture: Thermal Vacuum Tests

    NASA Technical Reports Server (NTRS)

    Kleinhenz, Julie E.; Paulsen, Gale; Zacny, Kris; Smith, Jim

    2015-01-01

    In Situ Resource Utilization (ISRU) enables future planetary exploration by using local resources to supply mission consumables. This idea of 'living off the land' has the potential to reduce mission cost and risk. On the Moon, water has been identified as a potential resource (for life support or propellant) at the lunar poles, where it exists as ice in the subsurface. However, the depth and content of this resource have yet to be confirmed on the ground; only remote-detection data exist. The upcoming Resource Prospector mission (RP) will 'ground-truth' the water using a rover, drill, and the RESOLVE science package. As the 2020 planned mission date nears, component-level hardware is being tested in relevant lunar conditions (thermal vacuum). In August 2014, a series of drilling tests was performed using the Honeybee Robotics Lunar Prospecting Drill inside a 'dirty' thermal vacuum chamber at the NASA Glenn Research Center. The drill used a unique auger design to capture and retain the lunar regolith simulant. The goal of these tests was to investigate volatiles (water) loss during drilling and sample transfer to a sample crucible in order to validate this regolith sampling method. Twelve soil samples were captured over the course of two tests at pressures of 10^-5 Torr and ambient temperatures between -80 °C and -20 °C. Each sample was obtained from a depth of 40 cm to 50 cm within a cryogenically frozen bed of NU-LHT-3M lunar regolith simulant doped with 5 wt% water. Upon acquisition, each sample was transferred and hermetically sealed inside a crucible. The samples were later baked out to determine water wt% and, in turn, volatile loss, following ASTM standard practices. Of the twelve tests, four sealed properly and lost an average of 30% of their available water during drilling and transfer. The variability in the results correlated well with ambient temperature (the lower the temperature, the lower the volatile loss), and the trend agreed with the sublimation rates at the same temperature. Moisture retention also correlated with the quantity of sample: a larger amount of material resulted in less water loss. The drilling process took an average of 10 minutes to capture and transfer each sample. The drilling power was approximately 20 W with a weight on bit of approximately 30 N. The bit temperature indicated little heat input into the formation during the drilling process.

  17. Detailed characterisation of the incident neutron beam on the TOSCA spectrometer

    NASA Astrophysics Data System (ADS)

    Pinna, Roberto S.; Rudić, Svemir; Capstick, Matthew J.; McPhail, David J.; Pooley, Daniel E.; Howells, Gareth D.; Gorini, Giuseppe; Fernandez-Alonso, Felix

    2017-10-01

    We report a detailed characterisation of the incident neutron beam on the TOSCA spectrometer. A bespoke time-of-flight neutron monitor has been designed, constructed and used to perform extensive spatially resolved measurements of the absolute neutron flux and its underlying time structure at the instrument sample position. The obtained data give a quantitative understanding of the current instrument beyond neutronic simulations and provide a baseline in order to assess the performance of the upgraded instrument. At an average proton current-on-target of 153 μA (ISIS Target Station 1; at the time of measurements) we have found that the wavelength-integrated neutron flux (from 0.28 Å to 4.65 Å) at the position of the TOSCA instrument sample (spatially averaged across the 3 × 3 cm^2 surface centred around the (0,0) position) is approximately 1.2 × 10^6 neutrons cm^-2 s^-1, while the whole beam has a homogeneous distribution across the 3.0 × 3.5 cm^2 sample surface. The spectra reproduced the well-known shape of the neutrons moderated by the room temperature water moderator and exhibit a neutron flux of 7.3 × 10^5 neutrons cm^-2 s^-1 Å^-1 at 1 Å.

  18. Quantitation of 47 human tear proteins using high resolution multiple reaction monitoring (HR-MRM) based-mass spectrometry.

    PubMed

    Tong, Louis; Zhou, Xi Yuan; Jylha, Antti; Aapola, Ulla; Liu, Dan Ning; Koh, Siew Kwan; Tian, Dechao; Quah, Joanne; Uusitalo, Hannu; Beuerman, Roger W; Zhou, Lei

    2015-02-06

    Tear proteins are intimately related to the pathophysiology of the ocular surface. Many recent studies have demonstrated that the tear is an accessible fluid for studying eye diseases and biomarker discovery. This study describes a high resolution multiple reaction monitoring (HR-MRM) approach for developing assays for quantification of biologically important tear proteins. Human tear samples were collected from 1000 subjects with no eye complaints (411 male, 589 female, average age: 55.5±14.5years) after obtaining informed consent. Tear samples were collected using Schirmer's strips and pooled into a single global control sample. Quantification of proteins was carried out by selecting "signature" peptides derived by trypsin digestion. A 1-h nanoLC-MS/MS run was used to quantify the tear proteins in HR-MRM mode. Good reproducibility of signal intensity (using peak areas) was demonstrated for all 47 HR-MRM assays with an average coefficient of variation (CV%) of 4.82% (range: 1.52-10.30%). All assays showed consistent retention time with a CV of less than 0.80% (average: 0.57%). HR-MRM absolute quantitation of eight tear proteins was demonstrated using stable isotope-labeled peptides. In this study, we demonstrated for the first time the technique to quantify 47 human tear proteins in HR-MRM mode using approximately 1μl of human tear sample. These multiplexed HR-MRM-based assays show great promise of further development for biomarker validation in human tear samples. Both discovery-based and targeted quantitative proteomics can be achieved in a single quadrupole time-of-flight mass spectrometer platform (TripleTOF 5600 system). Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Characterization of air contaminants formed by the interaction of lava and sea water.

    PubMed

    Kullman, G J; Jones, W G; Cornwell, R J; Parker, J E

    1994-05-01

    We made environmental measurements to characterize contaminants generated when basaltic lava from Hawaii's Kilauea volcano enters sea water. This interaction of lava with sea water produces large clouds of mist (LAZE). Island winds occasionally directed the LAZE toward the adjacent village of Kalapana and the Hawaii Volcanoes National Park, creating health concerns. Environmental samples were taken to measure airborne concentrations of respirable dust, crystalline silica and other mineral compounds, fibers, trace metals, inorganic acids, and organic and inorganic gases. The LAZE contained quantifiable concentrations of hydrochloric acid (HCl) and hydrofluoric acid (HF); HCl was predominant. HCl and HF concentrations were highest in dense plumes of LAZE near the sea. The HCl concentration at this sampling location averaged 7.1 ppm; this exceeds the current occupational exposure ceiling of 5 ppm. HF was detected in nearly half the samples, but all concentrations were <1 ppm. Sulfur dioxide was detected in one of four short-term indicator tube samples at approximately 1.5 ppm. Airborne particulates were composed largely of chloride salts (predominantly sodium chloride). Crystalline silica concentrations were below detectable limits, less than approximately 0.03 mg/m3 of air. Settled dust samples showed a predominance of glass flakes and glass fibers. Airborne fibers were detected at quantifiable levels in 1 of 11 samples. These fibers were composed largely of hydrated calcium sulfate. These findings suggest that individuals should avoid concentrated plumes of LAZE near its origin to prevent overexposure to inorganic acids, specifically HCl.

  20. Matrix Extension and Multilaboratory Validation of Arsenic Speciation Method EAM §4.10 to Include Wine.

    PubMed

    Tanabe, Courtney K; Hopfer, Helene; Ebeler, Susan E; Nelson, Jenny; Conklin, Sean D; Kubachka, Kevin M; Wilson, Robert A

    2017-05-24

    A multilaboratory validation (MLV) was performed to extend the U.S. Food and Drug Administration's (FDA) analytical method Elemental Analysis Manual (EAM) §4.10, High Performance Liquid Chromatography-Inductively Coupled Plasma-Mass Spectrometric Determination of Four Arsenic Species in Fruit Juice, to include wine. Several method modifications were examined to optimize the method for the analysis of dimethylarsinic acid, monomethylarsonic acid, arsenate (AsV), and arsenite (AsIII) in various wine matrices with a range of ethanol concentrations by liquid chromatography-inductively coupled plasma-mass spectrometry. The optimized method was used for the analysis of five wines of different classifications (red, white, sparkling, rosé, and fortified) by three laboratories. Additionally, the samples were fortified in duplicate at levels of approximately 5, 10, and 30 μg kg-1 and analyzed by each participating laboratory. The combined average fortification recoveries of dimethylarsinic acid, monomethylarsonic acid, and inorganic arsenic (iAs; the sum of AsV and AsIII) in these samples were 101, 100, and 100%, respectively. To further demonstrate the method, 46 additional wine samples were analyzed. The total As levels of all the wines analyzed in this study were between 1.0 and 38.2 μg kg-1. The overall average mass balance, based on the sum of the species recovered from the chromatographic separation compared to the total As measured, was 89%, with a range of 51-135%. In the 51 analyzed samples, iAs accounted for an average of 91% of the sum of the species, with a range of 37-100%.

  1. Establishment of gold-quartz standard GQS-1

    USGS Publications Warehouse

    Millard, Hugh T.; Marinenko, John; McLane, John E.

    1969-01-01

    A homogeneous gold-quartz standard, GQS-1, was prepared from a heterogeneous gold-bearing quartz by chemical treatment. The concentration of gold in GQS-1 was determined by both instrumental neutron activation analysis and radioisotope dilution analysis to be 2.61 ± 0.10 parts per million. Analysis of 10 samples of the standard by both instrumental neutron activation analysis and radioisotope dilution analysis failed to reveal heterogeneity within the standard. The precision of the analytical methods, expressed as standard error, was approximately 0.1 part per million. The analytical data were also used to estimate the average size of gold particles. The chemical treatment apparently reduced the average diameter of the gold particles by at least an order of magnitude and increased the concentration of gold grains by a factor of at least 4,000.

  2. Pollutant deposition via dew in urban and rural environment, Cracow, Poland

    NASA Astrophysics Data System (ADS)

    Muskała, Piotr; Sobik, Mieczysław; Błaś, Marek; Polkowska, Żaneta; Bokwa, Anita

    2015-01-01

    This study is a comparative analysis of dew in rural and urban environments. Dew samples were collected between May and October 2009 at two reference stations in southern Poland: Cracow and Gaik-Brzezowa. The investigation included a comparison of the volume and chemistry of the collected samples. Due to its formation mechanisms, dew is a good indicator of air pollution. The following parameters were analyzed in the 159 collected samples: pH, electric conductivity, concentrations of formaldehyde and phenols, concentrations of the NH4+, Ca2+, K+, Na+, and Mg2+ cations and the NO2-, NO3-, SO42-, Cl-, F-, and PO43- anions. The frequency of dew was approximately the same in urban and rural conditions, reaching 43% of the measurement period. Dew intensity, expressed by volume, was on average two times larger in the rural environment than in urban conditions. Urban land use was recognized as the main factor reducing dew intensity at the urban station in comparison to the rural one. Furthermore, the intensity of dew depended on synoptic-scale air circulation at both measurement sites. As expected, samples collected in Cracow were much more polluted than those from Gaik-Brzezowa. The average TIC (Total Ionic Content) was approximately 50% higher at the urban station. The pH at the rural station was more acidic. NO3- anions and Ca2+ cations were predominant at both measurement sites; however, the share of Ca2+ in Cracow was higher. NO3- indicates pollution emitted by traffic and industrial sources. The concentrations of the analytes at both stations, like the volume, depended on the air circulation direction. For Gaik-Brzezowa the highest TIC was observed mainly within southern circulation, while for Cracow the highest TIC was noted within both northern and southern circulation. In general, the rural station represented background pollution for the whole region, while pollution in Cracow depended more on local urban sources such as transport and industry.

  3. Approximating lens power.

    PubMed

    Kaye, Stephen B

    2009-04-01

    To provide a scalar measure of refractive error, based on geometric lens power through principal, orthogonal, and oblique meridians, that is not limited to the paraxial and sag height approximations. A function is derived to model sections through the principal meridian of a lens, followed by rotation of the section through orthogonal and oblique meridians. Average focal length is determined using the definition for the average of a function. Average univariate power in the principal meridian (including spherical aberration) can be computed from the average of a function over the angle of incidence as determined by the parameters of the given lens, or adequately computed from an integrated series function. Average power through orthogonal and oblique meridians can be similarly determined using the derived formulae. The widely used computation for measuring refractive error, the spherical equivalent, introduces non-constant approximations, leading to a systematic bias. The equations proposed provide a good univariate representation of average lens power and are not subject to a systematic bias. They are particularly useful for the analysis of aggregate data, correlating with biological treatment variables, and for developing analyses which require a scalar equivalent representation of refractive power.
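    As a schematic illustration of "the average of a function" invoked above (the paper's own expressions, which involve the full lens parameters, are not reproduced here), the mean power over angles of incidence up to $\theta_{\max}$ would take the form

    $$\bar{P} = \frac{1}{\theta_{\max}} \int_{0}^{\theta_{\max}} P(\theta)\, d\theta.$$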

  4. Observations of the interplanetary magnetic field between 0.46 and 1 A.U. by the Mariner 10 spacecraft. Ph.D. Thesis - Catholic Univ. of Am.

    NASA Technical Reports Server (NTRS)

    Behannon, K. W.

    1976-01-01

    Almost continuous measurement of the interplanetary magnetic field (IMF) at a sampling rate of 25 vectors/sec was performed by the magnetic field experiment onboard the Mariner 10 spacecraft during the period November 3, 1973 to April 14, 1974, comprising approximately 5-2/3 solar rotations and extending in radial distance from the sun from 1 to 0.46 AU. A clearly discernible two-sector pattern of field polarity was observed during the last 3-1/2 months of the period, with the dominant polarity toward the sun below the solar equatorial plane. Two compound high-speed solar wind streams were also present during this period, one in each magnetic field sector. Relative fluctuations of the field in magnitude and direction were found to have large time variations, but on average the relative magnitude fluctuations were approximately constant over the range of heliocentric distance covered while the relative directional fluctuations showed a slight decrease on average with increasing distance. The occurrence rate of directional discontinuities was also found to decrease with increasing radial distance from the sun.

  5. Agricultural chemicals at the outlet of a shallow carbonate aquifer

    USGS Publications Warehouse

    Felton, G.K.

    1996-01-01

    A groundwater catchment located in Woodford and Jessamine Counties in the Inner Bluegrass of Kentucky was instrumented to develop long-term flow and water quality data. The land uses on this 1 620-ha catchment consist of approximately 59% grasses (beef farms, horse farms, and a golf course); 16% row crops; 6% orchard; 13% forest; and 6% residential. Water samples were analyzed twice a week for Ca++, Mg++, Na+, Cl-, HCO3-, SO4=, NO3-, total solids, suspended solids, fecal coliforms, fecal streptococci, and triazines. Flow rate and average ambient temperature were also recorded. No strong linear relationship was found between chemical concentrations and other parameters. The transient nature of the system was emphasized by one event that drastically deviated from the others. Pesticide data were summarized, and the 'flushing' phenomenon attributed to karst systems was discussed. The total solids content in the spring was consistent at approximately 2.06 mg/L. Fecal bacteria contamination was well above drinking water limits (fecal coliform and fecal streptococci averages were 1 700 and 4 300 colony-forming units/100 mL, respectively), and the temporal variation in bacterial contamination was not linked to any other variable.

  6. Structure of high alumina content Al2O3-SiO2 composition glasses.

    PubMed

    Weber, Richard; Sen, Sabyasachi; Youngman, Randall E; Hart, Robert T; Benmore, Chris J

    2008-12-25

    The structure of binary aluminosilicate glasses containing 60-67 mol % Al2O3 was investigated using high-resolution 27Al NMR and X-ray and neutron diffraction. The glasses were made by aerodynamic levitation of molten oxides. The 67% alumina composition required a cooling rate of approximately 1600 °C s^-1 to form glass from submillimeter-sized samples. NMR results show that the glasses contain aluminum in 4-, 5-, and 6-fold coordination in the approximate ratio 4:5:1. The average Al coordination increases from 4.57 to 4.73 as the fraction of octahedral Al increases with alumina content. The diffraction results on the 67% composition are consistent with a disordered Al framework with Al ions in a range of coordination environments that are substantially different from those found in the equilibrium crystalline phases. Analysis of the neutron and X-ray structure factors yields an average bond angle of 125 +/- 4 degrees between an Al ion and the adjoining cation via a bridging oxygen. We propose that the structure of the glass is a "transition state" between the alumina-rich liquid and the equilibrium mullite phase, which are dominated by 4- and 6-coordinated aluminum ions, respectively.

  7. A Lagrangian View of Stratospheric Trace Gas Distributions

    NASA Technical Reports Server (NTRS)

    Schoeberl, M. R.; Sparling, L.; Dessler, A.; Jackman, C. H.; Fleming, E. L.

    1998-01-01

    As a result of photochemistry, some relationship between the stratospheric age-of-air and the amount of tracer contained within an air sample is expected. The existence of such a relationship allows inferences about transport history to be made from observations of chemical tracers. This paper lays down the conceptual foundations for the relationship between age and tracer amount, developed within a Lagrangian framework. In general, the photochemical loss depends not only on the age of the parcel but also on its path. We show that, under the "average path approximation," path variations are less important than parcel age. The average path approximation then allows us to develop a formal relationship between the age spectrum and the tracer spectrum. Using the relation between the tracer and age spectra, tracer-tracer correlations can be interpreted as resulting from mixing which connects parts of the single-path photochemistry curve, which is formed purely from the action of photochemistry on an irreducible parcel. This geometric interpretation of mixing gives rise to constraints on trace gas correlations, and explains why some observations do not fall on rapid mixing curves. This effect is seen in the ATMOS observations.
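    For orientation only, one standard idealized form of such an age-tracer relationship, written for a spatially uniform first-order loss rate $\lambda$ (the paper's path-dependent formulation generalizes this), is

    $$\chi = \chi_0 \int_{0}^{\infty} G(t)\, e^{-\lambda t}\, dt, \qquad \int_{0}^{\infty} G(t)\, dt = 1,$$

    where $G(t)$ is the age spectrum and $\chi_0$ is the tracer mixing ratio at the entry point into the stratosphere.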

  8. Atmospheric Sb in the Arctic during the past 16,000 years: Responses to climate change and human impacts

    NASA Astrophysics Data System (ADS)

    Krachler, Michael; Zheng, Jiancheng; Fisher, David; Shotyk, William

    2008-03-01

    Applying strict clean room procedures and sector field inductively coupled plasma mass spectrometry (ICP-MS) methods, concentrations of Sb and Sc were determined in 57 sections of a 170.6-m-long ice core drilled on Devon Island, Arctic Canada, in 1999, providing a record of atmospheric Sb extending back 15,800 years. Natural background concentrations of Sb and Sc established during the period between 1300 years BP and 10,590 years BP averaged 0.08 ± 0.03 pg/g (N = 18) and 0.44 ± 0.20 pg/g (N = 17), respectively. Scandium, a conservative reference element, was used as a surrogate for mineral dust inputs. The Sb/Sc ratio of 0.13 ± 0.07 in these ancient ice samples is comparable to the corresponding ratio of 0.09 ± 0.03 in peat samples from Switzerland from circa 6000 to 9000 years BP, indicating that this natural background ratio might have a much broader validity. The natural background flux of Sb (0.7 ± 0.5 ng/m2/a) in the Arctic was approximately 500 times lower than that established in central Europe using peat cores. For comparison with background values, modern Sb fluxes calculated using 45 samples from a 5-m snow pit dug on Devon Island in 2004, reflecting 10 years of snow accumulation, yielded an average deposition rate of 340 ± 270 ng/m2/a (range: 20-1240 ng/m2/a) with pronounced accumulation of Sb during winter periods when air masses reaching the Arctic predominantly come from Eurasia. These data reveal that approximately 99.8% of the Sb deposited in the Arctic today originates from anthropogenic activities. Modern Sb enrichment factors averaged 25 (range: 8-121). The ice core provides evidence of Sb contamination dating from Phoenician/Greek, Roman, and medieval lead mining and smelting in Europe. Moreover, the ice core data indicate that anthropogenic sources of Sb have continuously dominated the atmospheric inputs to the Arctic for at least 700 years.
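    The quoted anthropogenic share is consistent with a simple comparison of the fluxes given above, assuming the anthropogenic fraction is estimated as the excess of the modern deposition flux over the natural background flux:

    $$f_{\mathrm{anthrop}} \approx 1 - \frac{0.7\ \mathrm{ng\,m^{-2}\,a^{-1}}}{340\ \mathrm{ng\,m^{-2}\,a^{-1}}} \approx 0.998 \approx 99.8\%.$$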

  9. Theoretical model for VITA-educed coherent structures in the wall region of a turbulent boundary layer

    NASA Technical Reports Server (NTRS)

    Landahl, Marten T.

    1988-01-01

    Experiments on wall-bounded shear flows (channel flows and boundary layers) have indicated that the turbulence in the region close to the wall exhibits a characteristic intermittently formed pattern of coherent structures. For a quantitative study of coherent structures it is necessary to make use of conditional sampling. One particularly successful sampling technique is the Variable Integration Time Averaging technique (VITA) first explored by Blackwelder and Kaplan (1976). In this, an event is assumed to occur when the short time variance exceeds a certain threshold multiple of the mean square signal. The analysis presented removes some assumptions in the earlier models in that the effects of pressure and viscosity are taken into account in an approximation based on the assumption that the near-wall structures are highly elongated in the streamwise direction. The appropriateness of this is suggested by the observations but is also self consistent with the results of the model which show that the streamwise dimension of the structure grows with time, so that the approximation should improve with the age of the structure.

  10. Oxygen diffusion in nanocrystalline yttria-stabilized zirconia: the effect of grain boundaries.

    PubMed

    De Souza, Roger A; Pietrowski, Martha J; Anselmi-Tamburini, Umberto; Kim, Sangtae; Munir, Zuhair A; Martin, Manfred

    2008-04-21

    The transport of oxygen in dense samples of yttria-stabilized zirconia (YSZ), of average grain size d approximately 50 nm, has been studied by means of 18O/16O exchange annealing and secondary ion mass spectrometry (SIMS). Oxygen diffusion coefficients (D*) and oxygen surface exchange coefficients (k*) were measured for temperatures 673

  11. An international marine-atmospheric {sup 222}Rn measurement intercomparison in Bermuda. Part 2: Results for the participating laboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colle, R.; Unterweger, M.P.; Hutchinson, J.M.R.

    1996-01-01

    As part of an international measurement intercomparison of instruments used to measure atmospheric 222Rn, four participating laboratories made nearly simultaneous measurements of 222Rn activity concentration in commonly sampled, ambient air over approximately a 2 week period, and three of these four laboratories participated in the measurement comparison of 14 introduced samples with known, but undisclosed (blind), 222Rn activity concentrations. The exercise was conducted in Bermuda in October 1991. The 222Rn activity concentrations in ambient Bermudian air over the course of the intercomparison ranged from a few hundredths of a Bq·m-3 to about 2 Bq·m-3, while the standardized sample additions covered a range from approximately 2.5 Bq·m-3 to 35 Bq·m-3. The overall uncertainty in the latter concentrations was in the general range of 10%, approximating a 3 standard deviation uncertainty interval. The results of the intercomparison indicated that two of the laboratories were in very good agreement with the standard additions, almost within expected statistical variations. These same two laboratories, however, at lower ambient concentrations, exhibited a systematic difference with an averaged offset of roughly 0.3 Bq·m-3. The third laboratory participating in the measurement of standardized sample additions was systematically low by about 65% to 70% with respect to the standard additions, which was also confirmed in their ambient air concentration measurements. The fourth laboratory, participating in only the ambient measurement part of the intercomparison, was also systematically low by at least 40% with respect to the first two laboratories.

  12. Comparing methods for modelling spreading cell fronts.

    PubMed

    Markham, Deborah C; Simpson, Matthew J; Maini, Philip K; Gaffney, Eamonn A; Baker, Ruth E

    2014-07-21

    Spreading cell fronts play an essential role in many physiological processes. Classically, models of this process are based on the Fisher-Kolmogorov equation; however, such continuum representations are not always suitable as they do not explicitly represent behaviour at the level of individual cells. Additionally, many models examine only the large time asymptotic behaviour, where a travelling wave front with a constant speed has been established. Many experiments, such as a scratch assay, never display this asymptotic behaviour, and in these cases the transient behaviour must be taken into account. We examine the transient and the asymptotic behaviour of moving cell fronts using techniques that go beyond the continuum approximation via a volume-excluding birth-migration process on a regular one-dimensional lattice. We approximate the averaged discrete results using three methods: (i) mean-field, (ii) pair-wise, and (iii) one-hole approximations. We discuss the performance of these methods, in comparison to the averaged discrete results, for a range of parameter space, examining both the transient and asymptotic behaviours. The one-hole approximation, based on techniques from statistical physics, is not capable of predicting transient behaviour but provides excellent agreement with the asymptotic behaviour of the averaged discrete results, provided that cells are proliferating fast enough relative to their rate of migration. The mean-field and pair-wise approximations give indistinguishable asymptotic results, which agree with the averaged discrete results when cells are migrating much more rapidly than they are proliferating. The pair-wise approximation performs better in the transient region than does the mean-field, despite having the same asymptotic behaviour. Our results show that each approximation only works in specific situations, thus we must be careful to use a suitable approximation for a given system, otherwise inaccurate predictions could be made. Copyright © 2014 Elsevier Ltd. All rights reserved.
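    A volume-excluding birth-migration process of the kind analysed above is straightforward to simulate and average over realisations. The sketch below is a minimal illustration, not the authors' code; the lattice size, motility and proliferation probabilities, and initial condition are arbitrary assumptions.

```python
import random

# Minimal sketch of a volume-excluding birth-migration process on a 1-D
# lattice, averaged over realisations. All parameter values are illustrative.
L = 200          # lattice sites
P_MOVE = 1.0     # motility probability per agent per time step
P_PROLIF = 0.05  # proliferation probability per agent per time step
STEPS = 100
REALISATIONS = 50

def simulate():
    occ = [1 if i < 20 else 0 for i in range(L)]   # initially occupied block
    for _ in range(STEPS):
        agents = [i for i, o in enumerate(occ) if o]
        random.shuffle(agents)
        for i in agents:
            if not occ[i]:
                continue                            # agent already moved away
            # attempted migration to a random nearest neighbour; aborted if
            # the target site is occupied (volume exclusion)
            if random.random() < P_MOVE:
                j = i + random.choice((-1, 1))
                if 0 <= j < L and not occ[j]:
                    occ[i], occ[j] = 0, 1
                    i = j
            # attempted proliferation: daughter placed on an empty neighbour
            if random.random() < P_PROLIF:
                j = i + random.choice((-1, 1))
                if 0 <= j < L and not occ[j]:
                    occ[j] = 1
    return occ

# averaged discrete results: mean occupancy (density profile) over realisations
density = [0.0] * L
for _ in range(REALISATIONS):
    occ = simulate()
    for i, o in enumerate(occ):
        density[i] += o / REALISATIONS

print([round(d, 2) for d in density[:40]])
```

    A continuum or pair-wise approximation would then be compared against this averaged density profile in both the transient and long-time regimes.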

  13. Sub-sampling genetic data to estimate black bear population size: A case study

    USGS Publications Warehouse

    Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.

    2007-01-01

    Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.
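    To illustrate the kind of sub-sampling experiment described above, the sketch below simulates heterogeneous capture histories, discards a fraction of the samples, and applies the simple Chao lower-bound estimator (one member of the Mh[CHAO] family). It is a hypothetical illustration, not the study's data or code, and every parameter value is invented.

```python
import random
from collections import Counter

def chao_estimate(capture_counts):
    """Chao lower-bound estimate N_hat = S + f1^2 / (2*f2) from capture frequencies.
    capture_counts: dict individual_id -> number of times captured (>= 1)."""
    freqs = Counter(capture_counts.values())
    S, f1, f2 = len(capture_counts), freqs.get(1, 0), freqs.get(2, 0)
    if f2 == 0:
        return float(S)                 # estimator undefined; fall back to S
    return S + f1 ** 2 / (2 * f2)

def subsample(histories, keep_fraction, rng):
    """Randomly discard hair samples to mimic a reduced genotyping budget."""
    kept = {}
    for ind, n in histories.items():
        k = sum(1 for _ in range(n) if rng.random() < keep_fraction)
        if k > 0:
            kept[ind] = k
    return kept

rng = random.Random(1)
# simulate 300 bears with heterogeneous capture probabilities over 8 sessions
true_N, sessions = 300, 8
histories = {}
for ind in range(true_N):
    p = rng.uniform(0.05, 0.35)         # individual heterogeneity in capture prob.
    n = sum(1 for _ in range(sessions) if rng.random() < p)
    if n > 0:
        histories[ind] = n

for frac in (1.0, 0.75, 0.5, 0.25):
    est = chao_estimate(subsample(histories, frac, rng))
    print(f"keep {frac:>4.0%} of samples -> N_hat = {est:6.1f} (true N = {true_N})")
```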

  14. Radionuclides in milk of dairy heifers raised on forages harvested from phosphatic clay soils on reclaimed mined land

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staples, C.R.; Umana, R.; Hayen, M.J.

    1994-07-01

    Alfalfa (AR; Medicago sativa L.) and corn (CSR; Zea mays L.) were grown in phosphatic clay soils on phosphate-mined reclaimed land in central Florida. Corn (CSC) also was grown on unmined land and served as a control forage. Upon harvesting, plants were chopped and ensiled. Concentrations of {sup 226}Ra averaged 2.44, 0.26 and 0.15; {sup 210}Pb averaged 1.04, 0.63, and 0.52; and {sup 210}Po averaged 1.59, 0.59, and 1.26 Bq kg{sup -1} DM for AR, CSR, and CSC, respectively. These forages were fed separately to Holstein dairy replacement heifers (Bos taurus) (n=15 per forage) from approximately 9 to 25 momore » of age. Heifers gave birth to calves at approximately 24 mo of age. Samples of milk were collected on d 1, 15, and 30 of lactation and analyzed for radionuclides. Averaged across sampling days, heifers fed AR had greater milk concentrations of {sup 226}Ra compared with those fed CSR (0.27 vs. 0.22 Bq kg{sup -1} DM; P < 0.10), which, in turn, had greater milk concentrations compared with heifers fed CSC (0.22 vs. 0.13 Bq kg{sup -1} DM; P < 0.05). Heifers fed AR also had greater milk concentrations of {sup 210}Po compared with heifers fed CSR (0.58 vs. 0.30 Bq kg{sup -1} DM; P < 0.10), but values of CSR-fed heifers were not different from CSC-fed heifers (0.45 Bq kg{sup -1} DM). Lead-210 was greater in milk from heifers fed CSR compared with those fed AR or CSC (1.38 vs. 0.94 and 0.92 Bq kg{sup -1} DM; P < 0.13), respectively. Plasma S and Cu concentrations suggested subclinical molybdenosis in heifers fed AR. However, all heifers grew at an acceptable rate, conceived normally, had normal gestation periods, gave high quality colostrum at calving, and produced similar amounts of milk. 17 refs., 9 tabs.« less

  15. Communication system with adaptive noise suppression

    NASA Technical Reports Server (NTRS)

    Kozel, David (Inventor); Devault, James A. (Inventor); Birr, Richard B. (Inventor)

    2007-01-01

    A signal-to-noise ratio dependent adaptive spectral subtraction process eliminates noise from noise-corrupted speech signals. The process first pre-emphasizes the frequency components of the input sound signal which contain the consonant information in human speech. Next, a signal-to-noise ratio is determined and a spectral subtraction proportion adjusted appropriately. After spectral subtraction, low amplitude signals can be squelched. A single microphone is used to obtain both the noise-corrupted speech and the average noise estimate. This is done by determining if the frame of data being sampled is a voiced or unvoiced frame. During unvoiced frames an estimate of the noise is obtained. A running average of the noise is used to approximate the expected value of the noise. Spectral subtraction may be performed on a composite noise-corrupted signal, or upon individual sub-bands of the noise-corrupted signal. Pre-averaging of the input signal's magnitude spectrum over multiple time frames may be performed to reduce musical noise.
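    A minimal sketch of SNR-dependent spectral subtraction with a running noise estimate is shown below, assuming frame-based FFT processing in which quiet (presumed unvoiced) frames update the noise average. The frame size, thresholds, and subtraction-factor schedule are illustrative assumptions, not values from the system described above.

```python
import numpy as np

def spectral_subtract(x, frame=256, hop=128):
    """Sketch of SNR-dependent spectral subtraction with a running noise estimate."""
    x = np.asarray(x, dtype=float)
    window = np.hanning(frame)
    noise_mag = None          # running average of the noise magnitude spectrum
    out = np.zeros(len(x))
    for start in range(0, len(x) - frame, hop):
        seg = x[start:start + frame] * window
        spec = np.fft.rfft(seg)
        mag, phase = np.abs(spec), np.angle(spec)

        if noise_mag is None:
            noise_mag = mag.copy()

        # crude voiced/unvoiced decision: quiet frames refresh the noise estimate
        frame_snr_db = 10 * np.log10(np.mean(mag**2) / (np.mean(noise_mag**2) + 1e-12))
        if frame_snr_db < 3.0:
            noise_mag = 0.9 * noise_mag + 0.1 * mag

        # subtract proportionally more noise when the frame SNR is poor
        alpha = np.clip(4.0 - 0.15 * frame_snr_db, 1.0, 5.0)
        clean_mag = np.maximum(mag - alpha * noise_mag, 0.05 * mag)  # spectral floor

        clean = np.fft.irfft(clean_mag * np.exp(1j * phase), n=frame)
        out[start:start + frame] += clean * window   # overlap-add resynthesis
    return out

# usage (hypothetical): enhanced = spectral_subtract(noisy_speech_samples)
```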

  16. Characterization of complexity in the electroencephalograph activity of Alzheimer's disease based on fuzzy entropy.

    PubMed

    Cao, Yuzhen; Cai, Lihui; Wang, Jiang; Wang, Ruofan; Yu, Haitao; Cao, Yibin; Liu, Jing

    2015-08-01

    In this paper, experimental neurophysiologic recording and statistical analysis are combined to investigate the nonlinear characteristics and the cognitive function of the brain. Fuzzy approximate entropy and fuzzy sample entropy are applied to characterize model-based simulated series and electroencephalograph (EEG) series of Alzheimer's disease (AD). The effectiveness and advantages of these two kinds of fuzzy entropy, including stronger relative consistency and robustness, are first verified using the simulated EEG series generated by the alpha rhythm model. Furthermore, in order to detect abnormal irregularity and chaotic behavior in the AD brain, complexity features based on these two fuzzy entropies are extracted in the delta, theta, alpha, and beta bands. It is demonstrated that, due to the introduction of fuzzy set theory, the fuzzy entropies could better distinguish the EEG signals of AD from those of normal subjects than could approximate entropy and sample entropy. Moreover, the entropy values of AD are significantly decreased in the alpha band, particularly in the temporal brain region, such as electrodes T3 and T4. In addition, fuzzy sample entropy could achieve larger group differences across brain regions and a higher average classification accuracy of 88.1% with a support vector machine classifier. The obtained results indicate that fuzzy sample entropy may be a powerful tool to characterize the complexity abnormalities of AD, which could be helpful in further understanding of the disease.
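    As an illustration of the fuzzy sample entropy measure used in the record above, the sketch below implements the common exponential-membership (Chen-style) definition for a 1-D signal. The embedding dimension m, tolerance r, and gradient order n are the usual defaults, not necessarily the settings used in the study.

```python
import numpy as np

def fuzzy_sample_entropy(x, m=2, r=None, n=2):
    """Fuzzy sample entropy (FuzzyEn) of a 1-D signal; higher = more irregular."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)

    def phi(dim):
        # embedded vectors with their local mean removed (fuzzy "shape" matching)
        templates = np.array([x[i:i + dim] for i in range(len(x) - m)])
        templates = templates - templates.mean(axis=1, keepdims=True)
        # Chebyshev distances between all pairs of templates
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        sim = np.exp(-(d ** n) / r)          # fuzzy membership degree
        np.fill_diagonal(sim, 0.0)           # exclude self-matches
        count = len(templates)
        return sim.sum() / (count * (count - 1))

    return -np.log(phi(m + 1) / phi(m))

# usage: a regular signal scores low, a noisy one scores higher
rng = np.random.default_rng(0)
print(fuzzy_sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 1000))))  # low
print(fuzzy_sample_entropy(rng.standard_normal(1000)))                 # higher
```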

  17. Characterization of complexity in the electroencephalograph activity of Alzheimer's disease based on fuzzy entropy

    NASA Astrophysics Data System (ADS)

    Cao, Yuzhen; Cai, Lihui; Wang, Jiang; Wang, Ruofan; Yu, Haitao; Cao, Yibin; Liu, Jing

    2015-08-01

    In this paper, experimental neurophysiologic recording and statistical analysis are combined to investigate the nonlinear characteristics and the cognitive function of the brain. Fuzzy approximate entropy and fuzzy sample entropy are applied to characterize model-based simulated series and electroencephalograph (EEG) series of Alzheimer's disease (AD). The effectiveness and advantages of these two kinds of fuzzy entropy, including stronger relative consistency and robustness, are first verified using the simulated EEG series generated by the alpha rhythm model. Furthermore, in order to detect abnormal irregularity and chaotic behavior in the AD brain, complexity features based on these two fuzzy entropies are extracted in the delta, theta, alpha, and beta bands. It is demonstrated that, due to the introduction of fuzzy set theory, the fuzzy entropies could better distinguish the EEG signals of AD from those of normal subjects than could approximate entropy and sample entropy. Moreover, the entropy values of AD are significantly decreased in the alpha band, particularly in the temporal brain region, such as electrodes T3 and T4. In addition, fuzzy sample entropy could achieve larger group differences across brain regions and a higher average classification accuracy of 88.1% with a support vector machine classifier. The obtained results indicate that fuzzy sample entropy may be a powerful tool to characterize the complexity abnormalities of AD, which could be helpful in further understanding of the disease.

  18. A revised radiation package of G-packed McICA and two-stream approximation: Performance evaluation in a global weather forecasting model

    NASA Astrophysics Data System (ADS)

    Baek, Sunghye

    2017-07-01

    For more efficient and accurate computation of radiative flux, improvements have been achieved in two aspects of the integration of the radiative transfer equation: over space and over angle. First, the treatment of the Monte Carlo independent column approximation (McICA) is modified with a focus on efficiency, using a reduced number of random samples ("G-packed") within a reconstructed and unified radiation package. The original McICA accounts for 20% of the radiation CPU time in the Global/Regional Integrated Model system (GRIMs). The CPU time consumption of McICA is reduced by 70% without compromising accuracy. Second, parameterizations of shortwave two-stream approximations are revised to reduce errors with respect to the 16-stream discrete ordinate method. The delta-scaled two-stream approximation (TSA) is used in almost every general circulation model (GCM) but contains systematic errors that overestimate forward-peak scattering as solar elevation decreases. These errors are alleviated by adjusting the parameterizations of each scattering element—aerosol, liquid, ice and snow cloud particles. Parameterizations are determined with 20,129 atmospheric columns of the GRIMs data and tested with 13,422 independent data columns. The result shows that the root-mean-square error (RMSE) over all atmospheric layers is decreased by 39% on average without a significant increase in computational time. The revised TSA, developed and validated with a separate one-dimensional model, is implemented in GRIMs for mid-term numerical weather forecasting. Monthly averaged global forecast skill scores are unchanged with the revised TSA, but the temperature at lower levels of the atmosphere (pressure ≥ 700 hPa) is slightly increased (< 0.5 K) with corrected atmospheric absorption.
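
    For context, the delta scaling referred to above rescales a layer's optical properties before the two-stream solution is applied, to account for the strong forward-scattering peak. The sketch below shows the standard delta-scaling relations with the common delta-Eddington choice f = g^2; it does not reproduce the revised parameterizations developed in the paper.

      def delta_scale(tau, omega, g, f=None):
          """Standard delta-scaling of layer optical properties.

          tau   : optical thickness
          omega : single-scattering albedo
          g     : asymmetry parameter
          f     : forward-scattering fraction (defaults to g**2, the delta-Eddington choice)
          """
          if f is None:
              f = g * g
          tau_p = (1.0 - omega * f) * tau
          omega_p = (1.0 - f) * omega / (1.0 - omega * f)
          g_p = (g - f) / (1.0 - f)
          return tau_p, omega_p, g_p

      # example: a cloud layer with strong forward scattering
      print(delta_scale(tau=10.0, omega=0.999, g=0.85))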

  19. Characterization of exposures to airborne nanoscale particles during friction stir welding of aluminum.

    PubMed

    Pfefferkorn, Frank E; Bello, Dhimiter; Haddad, Gilbert; Park, Ji-Young; Powell, Maria; McCarthy, Jon; Bunker, Kristin Lee; Fehrenbacher, Axel; Jeon, Yongho; Virji, M Abbas; Gruetzmacher, George; Hoover, Mark D

    2010-07-01

    Friction stir welding (FSW) is considered one of the most significant developments in joining technology over the last half century. Its industrial applications are growing steadily and so are the number of workers using this technology. To date, there are no reports on airborne exposures during FSW. The objective of this study was to investigate possible emissions of nanoscale (<100 nm) and fine (<1 microm) aerosols during FSW of two aluminum alloys in a laboratory setting and characterize their physicochemical composition. Several instruments measured size distributions (5 nm to 20 microm) with 1-s resolution, lung deposited surface areas, and PM(2.5) concentrations at the source and at the breathing zone (BZ). A wide range aerosol sampling system positioned at the BZ collected integrated samples in 12 stages (2 nm to 20 microm) that were analyzed for several metals using inductively coupled plasma mass spectrometry. Airborne aerosol was directly collected onto several transmission electron microscope grids and the morphology and chemical composition of collected particles were characterized extensively. FSW generates high concentrations of ultrafine and submicrometer particles. The size distribution was bimodal, with maxima at approximately 30 and approximately 550 nm. The mean total particle number concentration at the 30 nm peak was relatively stable at approximately 4.0 x 10(5) particles cm(-3), whereas the arithmetic mean counts at the 550 nm peak varied between 1500 and 7200 particles cm(-3), depending on the test conditions. The BZ concentrations were lower than the source concentrations by 10-100 times at their respective peak maxima and showed higher variability. The daylong average metal-specific concentrations were 2.0 (Zn), 1.4 (Al), and 0.24 (Fe) microg m(-3); the estimated average peak concentrations were an order of magnitude higher. Potential for significant exposures to fine and ultrafine aerosols, particularly of Al, Fe, and Zn, during FSW may exist, especially in larger scale industrial operations.

  20. Multiphoton near-infrared femtosecond laser pulse-induced DNA damage with and without the photosensitizer proflavine.

    PubMed

    Shafirovich, V; Dourandin, A; Luneva, N P; Singh, C; Kirigin, F; Geacintov, N E

    1999-03-01

    The excitation of pBR322 supercoiled plasmid DNA with intense near-IR 810 nm fs laser pulses by a simultaneous multiphoton absorption mechanism results in single-strand breaks after treatment of the irradiated samples with Micrococcus luteus UV endonuclease. This enzyme cleaves DNA strands at sites of cyclobutane dimers that are formed by the simultaneous absorption of three (or more) 810 nm IR photons (pulse width approximately 140 fs, 76 MHz pulse repetition rate, average power output focused through a 10x microscope objective of approximately 1.2 MW/cm2). Direct single-strand breaks (without treatment with M. luteus) were not observed under these conditions. However, in the presence of 6 microM of the intercalator proflavine (PF), both direct single- and double-strand breaks are observed under conditions where substantial fractions of undamaged supercoiled DNA molecules are still present. The fraction of direct double-strand breaks is 30 +/- 5% of all measurable strand cleavage events, is independent of dosage (up to 6.4 GJ/cm2) and is proportional to I^n, where I is the average power/area of the 810 nm fs laser pulses and n = 3 +/- 1. The nicking of two DNA strands in the immediate vicinity of the excited PF molecules gives rise to this double-strand cleavage. In contrast, excitation of the same samples under low-power, single-photon absorption conditions (approximately 400-500 nm) gives rise predominantly to single-strand breaks, but some double-strand breaks are observed at the higher dosages. Thus, single-photon excitation with 400-500 nm light and multiphoton activation of PF by near-IR fs laser pulses produce different distributions of single- and double-strand breaks. These results suggest that DNA strand cleavage originates from unrelaxed, higher excited states when PF is excited by simultaneous IR multiphoton absorption processes.

  1. On the construction of a time base and the elimination of averaging errors in proxy records

    NASA Astrophysics Data System (ADS)

    Beelaerts, V.; De Ridder, F.; Bauwens, M.; Schmitz, N.; Pintelon, R.

    2009-04-01

    Proxies are sources of climate information which are stored in natural archives (e.g. ice-cores, sediment layers on ocean floors and animals with calcareous marine skeletons). Measuring these proxies produces very short records and mostly involves sampling solid substrates, which is subject to the following two problems: Problem 1: Natural archives are equidistantly sampled at a distance grid along their accretion axis. Starting from these distance series, a time series needs to be constructed, as comparison of different data records is only meaningful on a time grid. The time series will be non-equidistant, as the accretion rate is non-constant. Problem 2: A typical example of sampling solid substrates is drilling. Because of the dimensions of the drill, the holes drilled will not be infinitesimally small. Consequently, samples are not taken at a point in distance, but rather over a volume in distance. This holds for most sampling methods in solid substrates. As a consequence, when the continuous proxy signal is sampled, it will be averaged over the volume of the sample, resulting in an underestimation of the amplitude. Whether this averaging effect is significant depends on the volume of the sample and the variations of interest of the proxy signal. Starting from the measured signal, the continuous signal needs to be reconstructed in order to eliminate these averaging errors. The aim is to provide an efficient identification algorithm to identify the non-linearities in the distance-time relationship, called time base distortions (TBD), and to correct for the averaging effects. Because this is a parametric method, an assumption about the proxy signal needs to be made: the proxy record on a time base is assumed to be harmonic; this is a natural assumption because natural archives often exhibit a seasonal cycle. In a first approach the averaging effects are assumed to act in one direction only, i.e. the direction of the axis on which the measurements were performed. The measured, averaged proxy signal is modeled as ȳ(n,θ) = (δ/Δ) ∫_{n-Δ/(2δ)}^{n+Δ/(2δ)} y(m,θ) dm, where m is the position, x(m) = Δm, θ are the unknown parameters, and y(m,θ) is the proxy signal to be identified (the proxy signal as found in the natural archive), modeled as y(m,θ) = A_0 + Σ_{k=1}^{H} [A_k sin(kω t(m)) + A_{k+H} cos(kω t(m))], with t(m) = m T_S + g(m) T_S. Here T_S = 1/f_S is the sampling period, f_S the sampling frequency, and g(m) the unknown time base distortion (TBD). In this work a spline approximation of the TBD is chosen: g(m) = Σ_l b_l φ_l(m), where b is the vector of unknown time base distortion parameters and φ is a set of splines. The estimates of the unknown parameters were obtained with a nonlinear least squares algorithm. The vessel density measured in the mangrove tree R. mucronata was used to illustrate the method. Vessel density is a proxy for rainfall in tropical regions. The proxy data on the newly constructed time base showed a yearly periodicity, as expected, and the correction for the averaging effect increased the amplitude by 11.18%.
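
    A rough sketch of the kind of nonlinear least-squares identification described above is given below. It assumes a known number of harmonics and angular frequency, uses a simple piecewise-linear basis as a stand-in for the spline basis, and omits the finite-sample-volume (averaging) correction for brevity; all names and values are illustrative, not those of the paper.

      import numpy as np
      from scipy.optimize import least_squares

      def hat_basis(n_samples, n_nodes):
          """Piecewise-linear 'hat' functions standing in for the spline basis phi_l(m)."""
          m = np.arange(n_samples)
          nodes = np.linspace(0, n_samples - 1, n_nodes)
          width = nodes[1] - nodes[0]
          return np.clip(1.0 - np.abs(m[:, None] - nodes[None, :]) / width, 0.0, 1.0)

      def model(params, m, H, basis, Ts, omega):
          """Harmonic proxy signal evaluated on a distorted time base t(m)."""
          A0 = params[0]
          A = params[1:1 + 2 * H]                  # sine/cosine amplitudes
          b = params[1 + 2 * H:]                   # TBD coefficients
          g = basis @ b                            # g(m) = sum_l b_l * phi_l(m)
          t = m * Ts + g * Ts                      # distorted time base
          y = A0
          for k in range(1, H + 1):
              y = y + A[k - 1] * np.sin(k * omega * t) + A[H + k - 1] * np.cos(k * omega * t)
          return y

      # toy usage: recover the harmonic amplitudes from synthetic distorted data
      rng = np.random.default_rng(1)
      N, H, Ts = 200, 1, 1.0
      omega = 2 * np.pi / 12.0                     # assume an annual cycle, 12 samples per year
      basis = hat_basis(N, 6)
      m = np.arange(N)
      true = model(np.r_[0.0, 1.0, 0.3, 0.5 * rng.standard_normal(6)], m, H, basis, Ts, omega)
      data = true + 0.05 * rng.standard_normal(N)

      p0 = np.zeros(1 + 2 * H + basis.shape[1])
      fit = least_squares(lambda p: model(p, m, H, basis, Ts, omega) - data, p0)
      print("estimated harmonic amplitudes:", fit.x[1:1 + 2 * H])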

  2. Lower molar and incisor displacement associated with mandibular remodeling.

    PubMed

    Baumrind, S; Bravo, L A; Ben-Bassat, Y; Curry, S; Korn, E L

    1997-01-01

    The purpose of this study was to quantify the amount of alveolar modeling at the apices of the mandibular incisor and first molar specifically associated with appositional and resorptive changes on the lower border of the mandible during growth and treatment. Cephalometric data from superimpositions on anterior cranial base, mandibular implants of the Björk type, and anatomical "best fit" of mandibular border structures were integrated using a recently developed strategy, which is described. Data were available at annual intervals between 8.5 and 15.5 years for a previously described sample of approximately 30 children with implants. The average magnitudes of the changes at the root apices of the mandibular first molar and central incisor associated with modeling/remodeling of the mandibular border and symphysis were unexpectedly small. At the molar apex, mean values approximated zero in both anteroposterior and vertical directions. At the incisor apex, mean values approximated zero in the anteroposterior direction and averaged less than 0.15 mm/year in the vertical direction. Standard deviations were roughly equal for the molar and the incisor in both the anteroposterior and vertical directions. Dental displacement associated with surface modeling plays a smaller role in final tooth position in the mandible than in the maxilla. It may also be reasonably inferred that anatomical best-fit superimpositions made in the absence of implants give a more complete picture of hard tissue turnover in the mandible than they do in the maxilla.

  3. Mineralogy and instrumental neutron activation analysis of seven National Bureau of Standards and three Instituto de Pesquisas Tecnologicas clay reference samples

    USGS Publications Warehouse

    Hosterman, John W.; Flanagan, F.J.; Bragg, Anne; Doughten, M.W.; Filby, R.H.; Grimm, Catherine; Mee, J.S.; Potts, P.J.; Rogers, N.W.

    1987-01-01

    The concentrations of 3 oxides and 29 elements in 7 National Bureau of Standards (NBS) and 3 Instituto de Pesquisas Tecnológicas (IPT) reference clay samples were determined by instrumental neutron activation analysis. The analytical work was designed to test the homogeneity of constituents in three new NBS reference clays, NBS-97b, NBS-98b, and NBS-679. The analyses of variance of 276 sets of data for these three standards show that the constituents are distributed homogeneously among bottles of samples for 94 percent of the sets of data. Three of the reference samples (NBS-97, NBS-97a, and NBS-97b) are flint clays; four of the samples (NBS-98, NBS-98a, NBS-98b, and IPT-32) are plastic clays, and three of the samples (NBS-679, IPT-28, and IPT-42) are miscellaneous clays (both sedimentary and residual). Seven clays are predominantly kaolinite; the other three clays contain illite and kaolinite in the approximate ratio 3:2. Seven clays contain quartz as the major nonclay mineral. The mineralogy of the flint and plastic clays from Missouri (NBS-97a and NBS-98a) differs markedly from that of the flint and plastic clays from Pennsylvania (NBS-97, NBS-97b, NBS-98, and NBS-98b). The flint clay NBS-97 has higher average chromium, hafnium, lithium, and zirconium contents than its replacement, reference sample NBS-97b. The differences between the plastic clay NBS-98 and its replacement, NBS-98b, are not as pronounced. The trace element contents of the flint and plastic clays from Missouri, NBS-97a and NBS-98a, differ significantly from those of the clays from Pennsylvania, especially the average rare earth element (REE) contents. The trace element contents of clay sample IPT-32 differ from those of the other plastic clays. IPT-28 and IPT-42 have some average trace element contents that differ not only between these two samples but also from all the other clays. IPT-28 has the highest summation of the average REE contents of the 10 samples. The uranium content of NBS-98a, 46 parts per million, is very much higher than that of the other clays. Plots of average REE contents of the flint and plastic clays, normalized to chondritic abundances, show that the clays from Missouri differ from the same types of clay from Pennsylvania. The plot of REE contents for the miscellaneous clays shows that the normalized means for the elements lanthanum through samarium for IPT-28 are much greater than those for the other miscellaneous clays. The means for the elements europium through lutetium are similar for all three miscellaneous clays.

  4. Environmental and health consequences of depleted uranium use in the 1991 Gulf War.

    PubMed

    Bem, Henryk; Bou-Rabee, Firyal

    2004-03-01

    Depleted uranium (DU) is a by-product of the 235U radionuclide enrichment processes for nuclear reactors or nuclear weapons. DU in the metallic form has high density and hardness as well as pyrophoric properties, which makes it superior to the classical tungsten armour-piercing munitions. Military use of DU has recently been a subject of considerable concern, not only to radioecologists but also to the general public, in terms of possible health hazards arising from its radioactivity and chemical toxicity. In this review, the results of uranium content measurements in different environmental samples performed by the authors in Kuwait after the Gulf War are presented, with a discussion of possible environmental and health effects for the local population. It was found that the uranium concentration in the surface soil samples ranged from 0.3 to 2.5 microg g(-1) with an average value of 1.1 microg g(-1), much lower than the world average value of 2.8 microg g(-1). The solid fallout samples showed similar concentrations, varying from 0.3 to 1.7 microg g(-1) (average 1.47 microg g(-1)). Only the average concentration of U in solid particulate matter in surface air, equal to 0.24 ng g(-1), was higher than the usually observed values of approximately 0.1 ng g(-1), but this was caused by the high dust concentration in the air in that region. Calculated on the basis of these measurements, the exposure to uranium of the Kuwaiti and southern Iraqi population does not differ from the world average estimate. Therefore, the widely circulated reports in newspapers and on the Internet (see for example: [CADU NEWS, 2003. http://www.cadu.org.uk/news/index.htm (3-13)]) concerning dramatic health deterioration among Iraqi citizens should not be linked directly to their exposure to DU after the Gulf War.

  5. Comprehensive Genomic Profiling of Esthesioneuroblastoma Reveals Additional Treatment Options.

    PubMed

    Gay, Laurie M; Kim, Sungeun; Fedorchak, Kyle; Kundranda, Madappa; Odia, Yazmin; Nangia, Chaitali; Battiste, James; Colon-Otero, Gerardo; Powell, Steven; Russell, Jeffery; Elvin, Julia A; Vergilio, Jo-Anne; Suh, James; Ali, Siraj M; Stephens, Philip J; Miller, Vincent A; Ross, Jeffrey S

    2017-07-01

    Esthesioneuroblastoma (ENB), also known as olfactory neuroblastoma, is a rare malignant neoplasm of the olfactory mucosa. Despite surgical resection combined with radiotherapy and adjuvant chemotherapy, ENB often relapses with rapid progression. Current multimodality, nontargeted therapy for relapsed ENB is of limited clinical benefit. We queried whether comprehensive genomic profiling (CGP) of relapsed or refractory ENB can uncover genomic alterations (GA) that could identify potential targeted therapies for these patients. CGP was performed on formalin-fixed, paraffin-embedded sections from 41 consecutive clinical cases of ENBs using a hybrid-capture, adaptor ligation based next-generation sequencing assay to a mean coverage depth of 593X. The results were analyzed for base substitutions, insertions and deletions, select rearrangements, and copy number changes (amplifications and homozygous deletions). Clinically relevant GA (CRGA) were defined as GA linked to drugs on the market or under evaluation in clinical trials. A total of 28 ENBs harbored GA, with a mean of 1.5 GA per sample. Approximately half of the ENBs (21, 51%) featured at least one CRGA, with an average of 1 CRGA per sample. The most commonly altered gene was TP53 (17%), with GA in PIK3CA, NF1, CDKN2A, and CDKN2C occurring in 7% of samples. We report comprehensive genomic profiles for 41 ENB tumors. CGP revealed potential new therapeutic targets, including targetable GA in the mTOR, CDK and growth factor signaling pathways, highlighting the clinical value of genomic profiling in ENB. Comprehensive genomic profiling of 41 relapsed or refractory ENBs reveals recurrent alterations or classes of mutation, including amplification of tyrosine kinases encoded on chromosome 5q and mutations affecting genes in the mTOR/PI3K pathway. Approximately half of the ENBs (21, 51%) featured at least one clinically relevant genomic alteration (CRGA), with an average of 1 CRGA per sample. The most commonly altered gene was TP53 (17%), and alterations in PIK3CA, NF1, CDKN2A, or CDKN2C were identified in 7% of samples. Responses to treatment with the kinase inhibitors sunitinib, everolimus, and pazopanib are presented in conjunction with tumor genomics. © AlphaMed Press 2017.

  6. Customer exposure to MTBE, TAME, C6 alkyl methyl ethers, and benzene during gasoline refueling.

    PubMed

    Vainiotalo, S; Peltonen, Y; Ruonakangas, A; Pfäffli, P

    1999-02-01

    We studied customer exposure during refueling by collecting air samples from customers' breathing zone. The measurements were carried out during 4 days in summer 1996 at two Finnish self-service gasoline stations with "stage I" vapor recovery systems. The 95-RON (research octane number) gasoline contained approximately 2.7% methyl tert-butyl ether (MTBE), approximately 8.5% tert-amyl methyl ether (TAME), approximately 3.2% C6 alkyl methyl ethers (C6 AMEs), and 0.75% benzene. The individual exposure concentrations showed a wide log-normal distribution, with low exposures being the most frequent. In over 90% of the samples, the concentration of MTBE was higher (range <0.02-51 mg/m3) than that of TAME. The MTBE values were well below the short-term (15 min) threshold limits set for occupational exposure (250-360 mg/m3). At station A, the geometric mean concentrations in individual samples were 3.9 mg/m3 MTBE and 2.2 mg/m3 TAME. The corresponding values at station B were 2.4 and 1.7 mg/m3, respectively. The average refueling (sampling) time was 63 sec at station A and 74 sec at station B. No statistically significant difference was observed in customer exposures between the two service stations. The overall geometric means (n = 167) for an adjusted 1-min refueling time were 3.3 mg/m3 MTBE and 1.9 mg/m3 TAME. Each day an integrated breathing zone sample was also collected, corresponding to an arithmetic mean of 20-21 refuelings. The overall arithmetic mean concentrations in the integrated samples (n = 8) were 0.90 mg/m3 for benzene and 0.56 mg/m3 for C6 AMEs calculated as a group. Mean MTBE concentrations in ambient air (a stationary point in the middle of the pump island) were 0.16 mg/m3 for station A and 0.07 mg/m3 for station B. The mean ambient concentrations of TAME, C6 AMEs, and benzene were 0.031 mg/m3, approximately 0.005 mg/m3, and approximately 0.01 mg/m3, respectively, at both stations. The mean wind speed was 1.4 m/sec and the mean air temperature was 21 °C. Of the gasoline refueled during the study, 75% was 95 grade and 25% was 98/99 grade, with an oxygenate (MTBE) content of 12.2%.

  7. Customer exposure to MTBE, TAME, C6 alkyl methyl ethers, and benzene during gasoline refueling.

    PubMed Central

    Vainiotalo, S; Peltonen, Y; Ruonakangas, A; Pfäffli, P

    1999-01-01

    We studied customer exposure during refueling by collecting air samples from customers' breathing zone. The measurements were carried out during 4 days in summer 1996 at two Finnish self-service gasoline stations with "stage I" vapor recovery systems. The 95-RON (research octane number) gasoline contained approximately 2.7% methyl tert-butyl ether (MTBE), approximately 8.5% tert-amyl methyl ether (TAME), approximately 3.2% C6 alkyl methyl ethers (C6 AMEs), and 0.75% benzene. The individual exposure concentrations showed a wide log-normal distribution, with low exposures being the most frequent. In over 90% of the samples, the concentration of MTBE was higher (range <0.02-51 mg/m3) than that of TAME. The MTBE values were well below the short-term (15 min) threshold limits set for occupational exposure (250-360 mg/m3). At station A, the geometric mean concentrations in individual samples were 3.9 mg/m3 MTBE and 2.2 mg/m3 TAME. The corresponding values at station B were 2.4 and 1.7 mg/m3, respectively. The average refueling (sampling) time was 63 sec at station A and 74 sec at station B. No statistically significant difference was observed in customer exposures between the two service stations. The overall geometric means (n = 167) for an adjusted 1-min refueling time were 3.3 mg/m3 MTBE and 1.9 mg/m3 TAME. Each day an integrated breathing zone sample was also collected, corresponding to an arithmetic mean of 20-21 refuelings. The overall arithmetic mean concentrations in the integrated samples (n = 8) were 0.90 mg/m3 for benzene and 0.56 mg/m3 for C6 AMEs calculated as a group. Mean MTBE concentrations in ambient air (a stationary point in the middle of the pump island) were 0.16 mg/m3 for station A and 0.07 mg/m3 for station B. The mean ambient concentrations of TAME, C6 AMEs, and benzene were 0.031 mg/m3, approximately 0.005 mg/m3, and approximately 0.01 mg/m3, respectively, at both stations. The mean wind speed was 1.4 m/sec and the mean air temperature was 21 °C. Of the gasoline refueled during the study, 75% was 95 grade and 25% was 98/99 grade, with an oxygenate (MTBE) content of 12.2%. PMID:9924009

  8. Do it yourself: optical spectrometer for physics undergraduate instruction in nanomaterial characterization

    NASA Astrophysics Data System (ADS)

    Yeti Nuryantini, Ade; Cahya Septia Mahen, Ea; Sawitri, Asti; Wahid Nuryadin, Bebeh

    2017-09-01

    In this paper, we report on a homemade optical spectrometer using diffraction grating and image processing techniques. This device was designed to produce spectral images that could then be processed by measuring signal strength (pixel intensity) to obtain the light source, transmittance, and absorbance spectra of the liquid sample. The homemade optical spectrometer consisted of: (i) a white LED as a light source, (ii) a cuvette or sample holder, (iii) a slit, (iv) a diffraction grating, and (v) a CMOS camera (webcam). In this study, various concentrations of a carbon nanoparticle (CNP) colloid were used in the particle size sample test. Additionally, a commercial optical spectrometer and a transmission electron microscope (TEM) were used to characterize the optical properties and morphology of the CNPs, respectively. The data obtained using the homemade optical spectrometer, commercial optical spectrometer, and TEM showed similar results and trends. Lastly, the calculation and measurement of CNP size were performed using the effective mass approximation (EMA) and TEM. These data showed that the average nanoparticle sizes were approximately 2.4 nm and 2.5 ± 0.3 nm, respectively. This research provides new insights into the development of a portable, simple, and low-cost optical spectrometer that can be used in nanomaterial characterization for physics undergraduate instruction.

  9. Scalable randomized benchmarking of non-Clifford gates

    NASA Astrophysics Data System (ADS)

    Cross, Andrew; Magesan, Easwar; Bishop, Lev; Smolin, John; Gambetta, Jay

    Randomized benchmarking is a widely used experimental technique to characterize the average error of quantum operations. Benchmarking procedures that scale to enable characterization of n-qubit circuits rely on efficient procedures for manipulating those circuits and, as such, have been limited to subgroups of the Clifford group. However, universal quantum computers require additional, non-Clifford gates to approximate arbitrary unitary transformations. We define a scalable randomized benchmarking procedure over n-qubit unitary matrices that correspond to protected non-Clifford gates for a class of stabilizer codes. We present efficient methods for representing and composing group elements, sampling them uniformly, and synthesizing corresponding poly(n)-sized circuits. The procedure provides experimental access to two independent parameters that together characterize the average gate fidelity of a group element. We acknowledge support from ARO under Contract W911NF-14-1-0124.

  10. Rates of spontaneous mutation among RNA viruses.

    PubMed Central

    Drake, J W

    1993-01-01

    Simple methods are presented to estimate rates of spontaneous mutation from mutant frequencies and population parameters in RNA viruses. Published mutant frequencies yield a wide range of mutation rates per genome per replication, mainly because mutational targets have usually been small and, thus, poor samples of the mutability of the average base. Nevertheless, there is a clear central tendency for lytic RNA viruses (bacteriophage Q beta, poliomyelitis, vesicular stomatitis, and influenza A) to display rates of spontaneous mutation of approximately 1 per genome per replication. This rate is some 300-fold higher than previously reported for DNA-based microbes. Lytic RNA viruses thus mutate at a rate close to the maximum value compatible with viability. Retroviruses (spleen necrosis, murine leukemia, Rous sarcoma), however, mutate at an average rate about an order of magnitude lower than lytic RNA viruses. PMID:8387212

  11. Methodology to determine the extent of anaerobic digestion, composting and CH4 oxidation in a landfill environment.

    PubMed

    Obersky, Lizanne; Rafiee, Reza; Cabral, Alexandre R; Golding, Suzanne D; Clarke, William P

    2018-06-01

    An examination of the processes contributing to the production of landfill greenhouse gas (GHG) emissions is required, as the actual level to which waste degrades anaerobically and aerobically beneath covers has not been differentiated. This paper presents a methodology to distinguish between the rate of anaerobic digestion (r(AD)), composting (r(COM)) and CH4 oxidation (r(OX)) in a landfill environment, by means of a system of mass balances developed for molecular species (CH4, CO2) and stable carbon isotopes (δ13C-CO2 and δ13C-CH4). The technique was applied at two sampling locations on a sloped area of landfill. Four sampling rounds were performed over an 18 month period after a 1.0 m layer of fresh waste and 30-50 cm of silty clay loam had been placed over the area. Static chambers were used to measure the flux of the molecular and isotope species at the surface and soil gas probes were used to collect gas samples at depths of approximately 0.5, 1.0 and 1.5 m. Mass balances were based on the surface flux and the concentration of the molecular and isotopic species at the deepest sampling depth. The sensitivity of calculated rates was considered by randomly varying stoichiometric and isotopic parameters by ±5% to generate at least 500 calculations of r(OX), r(AD) and r(COM) for each location in each sampling round. The resulting average values of r(AD) and r(COM) indicated anaerobic digestion and composting were equally dominant at both locations. Average values of r(COM) ranged from 9.8 to 44.5 g CO2 m(-2) d(-1) over the four sampling rounds, declining monotonically at one site and rising then falling at the other. Average values of r(AD) ranged from 10.6 to 45.3 g CO2 m(-2) d(-1). Although the highest average r(AD) value occurred in the initial sampling round, all subsequent r(AD) values fell between 10 and 20 g CO2 m(-2) d(-1). r(OX) had the smallest activity contribution at both sites, with averages ranging from 1.6 to 8.6 g CO2 m(-2) d(-1). This study has demonstrated that for an interim cover, composting and anaerobic digestion of shallow landfill waste can occur simultaneously. Copyright © 2018 Elsevier Ltd. All rights reserved.
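
    The mass-balance idea described above can be sketched as a small linear system relating measured fluxes and isotopic composition to the three process rates. The stoichiometry and isotopic end-member values below are placeholders for illustration only, not the paper's calibrated parameters.

      import numpy as np

      # Measured quantities (illustrative numbers, not the paper's data)
      F_CH4, F_CO2 = 0.4, 1.4          # surface fluxes, mol m^-2 d^-1
      d13C_CO2_flux = -21.86           # measured d13C of emitted CO2, permil

      # Assumed end-member d13C of CO2 produced by each process (placeholders)
      d_AD, d_COM, d_OX = -25.0, -22.0, -5.0

      # Unknowns x = [r_AD, r_COM, r_OX]; each balance is one row of A @ x = b,
      # assuming anaerobic digestion yields CH4 and CO2 roughly 1:1 and oxidation
      # converts CH4 to CO2.
      A = np.array([
          [0.5, 0.0, -1.0],                    # CH4:  0.5*r_AD - r_OX          = F_CH4
          [0.5, 1.0,  1.0],                    # CO2:  0.5*r_AD + r_COM + r_OX  = F_CO2
          [0.5 * d_AD, d_COM, d_OX],           # 13C:  CO2 isotope mixing balance
      ])
      b = np.array([F_CH4, F_CO2, d13C_CO2_flux * F_CO2])

      r_AD, r_COM, r_OX = np.linalg.solve(A, b)
      print(f"r_AD={r_AD:.2f}, r_COM={r_COM:.2f}, r_OX={r_OX:.2f} (mol m^-2 d^-1)")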

  12. A workforce survey of Australian osteopathy: analysis of a nationally-representative sample of osteopaths from the Osteopathy Research and Innovation Network (ORION) project.

    PubMed

    Adams, Jon; Sibbritt, David; Steel, Amie; Peng, Wenbo

    2018-05-10

    Limited information is available regarding the profile and clinical practice characteristics of the osteopathy workforce in Australia. This paper reports such information by analysing data from a nationally-representative sample of Australian osteopaths. Data was obtained from a workforce survey of Australian osteopathy, investigating the characteristics of the practitioner, their practice, clinical management features and perceptions regarding research. The survey questionnaire was distributed to all registered osteopaths across Australia in 2016 as part of the Osteopathy Research and Innovation Network (ORION) project. A total of 992 Australian osteopaths participated in this study, representing a response rate of 49.1%. The average age of the participants was 38.0 years, with 58.1% being female and the majority holding a Bachelor or higher degree qualification related to the osteopathy profession. Approximately 80.0% of the osteopaths were practicing in an urban area, with most osteopaths working in multi-practitioner locations, having referral relationships with a range of health care practitioners, managing patients with a number of musculoskeletal disorders, and providing multi-modal treatment options. A total of 3.9 million patients were estimated to consult with osteopaths every year, and an average of approximately 3.0 million hours was spent delivering osteopathy services per year. Further research is required to provide rich, in-depth examination of a range of osteopathy workforce issues, which will help ensure safe, effective patient care to all receiving and providing treatments as part of the broader Australian health system.

  13. Smoking cessation and subsequent weight change.

    PubMed

    Robertson, Lindsay; McGee, Rob; Hancox, Robert J

    2014-06-01

    People who quit smoking tend to gain more weight over time than those who continue to smoke. Previous research using clinical samples of smokers suggests that quitters typically experience a weight gain of approximately 5 kg in the year following smoking cessation, but these studies may overestimate the extent of weight gain in the general population. The existing population-based research in this area has some methodological limitations. We assessed a cohort of individuals born in Dunedin, New Zealand, in 1972-1973 at regular intervals from age 15 to 38 years. We used multiple linear regression analysis to investigate the association between smoking cessation between ages 21 and 38 years and subsequent change in body mass index (BMI) and weight, controlling for baseline BMI, socioeconomic status, physical activity, alcohol use, and parity (women). Smoking status and outcome data were available at baseline and at follow-up for 914 study members. People who smoked at age 21 and who had quit by age 38 had a BMI on average 1.5 kg/m(2) greater than those who continued to smoke at age 38. This equated to a weight gain of approximately 5.7 kg in men and 5.1 kg in women above that of continuing smokers. However, the weight gain between ages 21 and 38 among quitters was not significantly different from that of never-smokers. The amount of long-term weight gained after quitting smoking is likely to be lower than previous estimates based on research with clinical samples. On average, quitters do not experience greater weight gain than never-smokers.

  14. The neutron guide upgrade of the TOSCA spectrometer

    NASA Astrophysics Data System (ADS)

    Pinna, Roberto S.; Rudić, Svemir; Parker, Stewart F.; Armstrong, Jeff; Zanetti, Matteo; Škoro, Goran; Waller, Simon P.; Zacek, Daniel; Smith, Clive A.; Capstick, Matthew J.; McPhail, David J.; Pooley, Daniel E.; Howells, Gareth D.; Gorini, Giuseppe; Fernandez-Alonso, Felix

    2018-07-01

    The primary flightpath of the TOSCA indirect geometry neutron spectrometer has been upgraded with a high-m, 14.636 m (including 0.418 m of air gaps) neutron guide composed of ten sections in order to boost the neutron flux at the sample position. The upgraded incident neutron beam has been characterised with the help of the time-of-flight neutron monitor; the beam profile and the gain in the neutron flux data are presented. At an average proton current-on-target of 160 μA and proton energy of 800 MeV (ISIS Target Station 1; at the time of the measurements) we have found that the wavelength-integrated neutron flux (from 0.28 Å to 4.65 Å) at the position of the TOSCA instrument sample (spatially averaged across a 3.0 × 3.0 cm^2 surface centred around the (0,0) position) is approximately 2.11 × 10^7 neutrons cm^-2 s^-1, while the gain in the neutron flux is as much as 46-fold for neutrons with a wavelength of 2.5 Å. The instrument's excellent spectral resolution and low spectral background have been preserved upon the upgrade. The much improved count rate allows faster measurements where useful data of hydrogen-rich samples can be recorded within minutes, as well as experiments involving smaller samples that were not possible in the past.

  15. Average focal length and power of a section of any defined surface.

    PubMed

    Kaye, Stephen B

    2010-04-01

    To provide a method to allow calculation of the average focal length and power of a lens through a specified meridian of any defined surface, not limited to the paraxial approximations. University of Liverpool, Liverpool, United Kingdom. Functions were derived to model back-vertex focal length and representative power through a meridian containing any defined surface. Average back-vertex focal length was based on the definition of the average of a function, using the angle of incidence as an independent variable. Univariate functions allowed determination of average focal length and power through a section of any defined or topographically measured surface of a known refractive index. These functions incorporated aberrations confined to the section. The proposed method closely approximates the average focal length, and by inference power, of a section (meridian) of a surface to a single or scalar value. It is not dependent on the paraxial and other nonconstant approximations and includes aberrations confined to that meridian. A generalization of this method to include all orthogonal and oblique meridians is needed before a comparison with measured wavefront values can be made. Copyright (c) 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
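
    For context, the "average of a function" used above is the usual integral definition, f_avg = (1/(b - a)) ∫ f. The sketch below applies that definition to a placeholder focal-length curve along one meridian; the curve and the thin-lens-in-air power conversion are illustrative assumptions, not the paper's surface model.

      import numpy as np
      from scipy.integrate import quad

      def average_of(f, a, b):
          """Average value of a function over [a, b]: (1/(b-a)) * integral of f."""
          val, _ = quad(f, a, b)
          return val / (b - a)

      # Placeholder back-vertex focal-length curve f(theta) in mm along one meridian
      f = lambda theta: 22.0 - 0.8 * np.sin(theta) ** 2
      theta_max = np.deg2rad(30.0)                 # illustrative range of incidence angles
      f_avg = average_of(f, 0.0, theta_max)
      print(f"average focal length: {f_avg:.3f} mm, average power: {1000.0 / f_avg:.2f} D")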

  16. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach

    PubMed Central

    Boitard, Simon; Rodríguez, Willy; Jay, Flora; Mona, Stefano; Austerlitz, Frédéric

    2016-01-01

    Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles. PMID:26943927
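
    The two classes of summary statistics named above (the folded allele frequency spectrum and the average zygotic linkage disequilibrium in physical-distance bins) can be computed from an unphased 0/1/2 genotype matrix roughly as follows. This is an illustrative sketch, not the PopSizeABC code; the toy random genotypes are for demonstration only.

      import numpy as np

      def folded_afs(genotypes):
          """Folded AFS from a (snps x individuals) genotype matrix coded 0/1/2."""
          n_chrom = 2 * genotypes.shape[1]
          counts = genotypes.sum(axis=1)
          minor = np.minimum(counts, n_chrom - counts)   # unpolarized: use minor allele
          return np.bincount(minor, minlength=n_chrom // 2 + 1)

      def mean_zygotic_ld(genotypes, positions, bins):
          """Average r^2 between genotype columns, grouped by physical-distance bins."""
          r2_sums = np.zeros(len(bins) - 1)
          n_pairs = np.zeros(len(bins) - 1)
          corr = np.corrcoef(genotypes)                  # SNP-by-SNP genotype correlation
          for i in range(len(positions)):
              for j in range(i + 1, len(positions)):
                  d = positions[j] - positions[i]
                  k = np.searchsorted(bins, d) - 1
                  if 0 <= k < len(bins) - 1:
                      r2_sums[k] += corr[i, j] ** 2
                      n_pairs[k] += 1
          return r2_sums / np.maximum(n_pairs, 1)

      # toy usage with random genotypes (illustration only)
      rng = np.random.default_rng(2)
      G = rng.integers(0, 3, size=(50, 20))              # 50 SNPs, 20 diploid individuals
      pos = np.sort(rng.integers(0, 1_000_000, size=50))
      print(folded_afs(G))
      print(mean_zygotic_ld(G, pos, bins=np.array([0, 10_000, 100_000, 1_000_000])))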

  17. Characterization of carbonaceous species of ambient PM2.5 in Beijing, China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fumo Yang; Kebin He; Yongliang Ma

    2005-07-01

    One-week integrated fine particulate matter (i.e., particles <2.5 μm in diameter; PM2.5) samples were collected continuously with a low-flow rate sampler at a downtown site (Chegongzhuang) and a residential site (Tsinghua University) in Beijing between July 1999 and June 2000. The annual average concentrations of organic carbon (OC) and elemental carbon (EC) at the urban site were 23.9 and 8.8 μg m(-3), much higher than those in some cities with serious air pollution. Similar weekly variations of OC and EC concentrations were found for the two sampling sites with higher concentrations in the winter and autumn. The highest weekly variations of OC and EC occurred in the winter, suggesting that combustion sources for space heating were important contributors to carbonaceous particles, along with a significant impact from variable meteorological conditions. High emissions coupled with unfavorable meteorological conditions led to the maximum weekly carbonaceous concentration the week of November 18-25, 1999. The weekly mass ratios of OC:EC ranged between 2 and 4 for most samples and averaged 2.9, probably suggesting that secondary OC (SOC) is present most weeks. The range of contemporary carbon fraction, based on the C14 analyses of eight samples collected in 2001, is 0.330-0.479. Estimated SOC accounted for approximately 38% of the total OC at the two sites. Average OC and EC concentrations at Tsinghua University were 25% and 18%, respectively, higher than those at Chegongzhuang, which could be attributed to different local emissions of primary carbonaceous particles and gaseous precursors of SOC, as well as different summer photochemical intensities between the two locations. Main carbonaceous sources are from coal combustion, vehicles and cooking. 44 refs., 5 figs., 2 tabs.

  18. Characterization of carbonaceous species of ambient PM2.5 in Beijing, China.

    PubMed

    Yang, Fumo; He, Kebin; Ma, Yongliang; Zhang, Qiang; Cadle, Steven H; Chan, Tai; Mulawa, Patricia A

    2005-07-01

    One-week integrated fine particulate matter (i.e., particles <2.5 microm in diameter; PM2.5) samples were collected continuously with a low-flow rate sampler at a downtown site (Chegongzhuang) and a residential site (Tsinghua University) in Beijing between July 1999 and June 2000. The annual average concentrations of organic carbon (OC) and elemental carbon (EC) at the urban site were 23.9 and 8.8 microg m(-3), much higher than those in some cities with serious air pollution. Similar weekly variations of OC and EC concentrations were found for the two sampling sites with higher concentrations in the winter and autumn. The highest weekly variations of OC and EC occurred in the winter, suggesting that combustion sources for space heating were important contributors to carbonaceous particles, along with a significant impact from variable meteorological conditions. High emissions coupled with unfavorable meteorological conditions led to the max weekly carbonaceous concentration the week of November 18-25, 1999. The weekly mass ratios of OC:EC ranged between 2 and 4 for most samples and averaged 2.9, probably suggesting that secondary OC (SOC) is present most weeks. The range of contemporary carbon fraction, based on the C14 analyses of eight samples collected in 2001, is 0.330-0.479. Estimated SOC accounted for approximately 38% of the total OC at the two sites. Average OC and EC concentrations at Tsinghua University were 25% and 18%, respectively, higher than those at Chegongzhuang, which could be attributed to different local emissions of primary carbonaceous particles and gaseous precursors of SOC, as well as different summer photochemical intensities between the two locations.

  19. Determination of H2O and CO2 concentrations in fluid inclusions in minerals using laser decrepitation and capacitance manometer analysis

    NASA Technical Reports Server (NTRS)

    Yonover, R. N.; Bourcier, W. L.; Gibson, E. K.

    1985-01-01

    Water and carbon dioxide concentrations within individual and selected groups of fluid inclusions in quartz were analyzed by using laser decrepitation and quantitative capacitance manometer determination. The useful limit of detection (calculated as ten times the typical background level) is about 5 x 10(-10) mol of H2O and 5 x 10(-11) mol of CO2; this H2O content translates into an aqueous fluid inclusion approximately 25 micrometers in diameter. CO2/H2O determinations for 38 samples (100 separate measurements) have a range of H2O amounts of 5.119 x 10(-9) to 1.261 x 10(-7) mol; CO2 amounts of 7.216 x 10(-10) to 1.488 x 10(-8) mol, and CO2/H2O mole ratios of 0.011 to 1.241. Replicate mole ratio determinations of CO2/H2O for three identical (?) clusters of inclusions in quartz have an average mole ratio of 0.0305 +/- 0.0041 (1 sigma). Our method offers much promise for analysis of individual fluid inclusions, is sensitive, is selective when the laser energy is not so great as to melt the mineral (laser pits approximately 50 micrometers in diameter), and permits rapid analysis (approximately 1 h per sample analysis).
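
    The quantitative step behind the capacitance manometer determinations above reduces to the ideal-gas relation n = PV/(RT): the pressure rise in a known extraction volume at a known temperature gives the amount of gas released. The volume and pressure values in this sketch are illustrative assumptions, not the instrument's actual parameters.

      R = 8.314462618          # J mol^-1 K^-1

      def moles_from_pressure(p_pa, volume_m3, temp_k):
          """Ideal-gas estimate of the amount of gas released into a known volume."""
          return p_pa * volume_m3 / (R * temp_k)

      # Illustrative: a 2.5e-2 Pa pressure rise in a 50 cm^3 extraction volume at 298 K
      n_h2o = moles_from_pressure(2.5e-2, 50e-6, 298.0)
      print(f"{n_h2o:.2e} mol")   # ~5e-10 mol, around the stated useful H2O detection limit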

  20. Scale Dependence of Cirrus Horizontal Heterogeneity Effects on TOA Measurements. Part I; MODIS Brightness Temperatures in the Thermal Infrared

    NASA Technical Reports Server (NTRS)

    Fauchez, Thomas; Platnick, Steven; Meyer, Kerry; Cornet, Celine; Szczap, Frederic; Varnai, Tamas

    2017-01-01

    This paper presents a study on the impact of cirrus cloud heterogeneities on MODIS simulated thermal infrared (TIR) brightness temperatures (BTs) at the top of the atmosphere (TOA) as a function of spatial resolution from 50 meters to 10 kilometers. A realistic 3-D (three-dimensional) cirrus field is generated by the 3DCLOUD model (average optical thickness of 1.4, cloud-top and base altitudes at 10 and 12 kilometers, respectively, consisting of aggregate column crystals with D_eff = 20 microns), and 3-D thermal infrared radiative transfer (RT) is simulated with the 3DMCPOL (3-D Monte Carlo Polarized) code. According to previous studies, differences between 3-D BT computed from a heterogeneous pixel and 1-D (one-dimensional) RT computed from a homogeneous pixel are considered dependent at nadir on two effects: (i) the optical thickness horizontal heterogeneity, leading to the plane-parallel homogeneous bias (PPHB); and (ii) horizontal radiative transport (HRT), leading to the independent pixel approximation error (IPAE). A single but realistic cirrus case is simulated and, as expected, the PPHB mainly impacts the low-spatial-resolution results (above approximately 250 meters), with averaged values of up to 5-7 K, while the IPAE mainly impacts the high-spatial-resolution results (below approximately 250 meters), with average values of up to 1-2 K. A sensitivity study has been performed in order to extend these results to various cirrus optical thicknesses and heterogeneities by sampling the cirrus in several ranges of parameters. For four optical thickness classes and four optical heterogeneity classes, we have found that, for nadir observations, the spatial resolution at which the combination of PPHB and HRT effects is the smallest falls between 100 and 250 meters. These spatial resolutions thus appear to be the best choice to retrieve cirrus optical properties with the smallest cloud-heterogeneity-related total bias in the thermal infrared. For off-nadir observations, the average total effect is increased and the minimum is shifted to coarser spatial resolutions.
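
    The plane-parallel homogeneous bias discussed above arises because radiative quantities depend nonlinearly on optical thickness, so applying the radiative calculation to the pixel-mean optical thickness differs from averaging the sub-pixel results (Jensen's inequality). The toy sketch below illustrates only this mechanism, with an assumed emissivity model and brightness temperatures treated as if they mixed linearly; it is not the 3DMCPOL calculation and its numbers are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)

      # Toy heterogeneous cirrus pixel: gamma-distributed optical thickness, mean ~1.4
      tau = rng.gamma(shape=1.0, scale=1.4, size=100_000)

      def toy_bt(tau, bt_surface=290.0, bt_cloud=225.0):
          """Toy nadir brightness temperature with cloud emissivity 1 - exp(-tau)."""
          emiss = 1.0 - np.exp(-tau)
          return (1.0 - emiss) * bt_surface + emiss * bt_cloud

      bt_of_mean = toy_bt(tau.mean())      # homogeneous (plane-parallel) pixel
      mean_of_bt = toy_bt(tau).mean()      # average over the heterogeneous sub-pixels
      # Negative value: the homogeneous pixel looks colder than the true average
      print(f"PPH-style bias (homogeneous minus heterogeneous): {bt_of_mean - mean_of_bt:.2f} K")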

  1. In-Situ Data for Microphysical Retrievals: TC4, 2007

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mace, Gerald

    This data set is derived from measurements collected in situ by the NASA DC8 during the Tropical Cloud Climate Composition Coupling Experiment (TC4) that was conducted during July and August, 2007 (Toon et al., 2010). During this experiment the DC8 was based in San Jose, Costa Rica and sampled clouds in the maritime region of the Eastern Pacific and adjoining continental areas. The primary objective of the DC8 during this deployment was to sample ice clouds associated with convective activity. While the vast majority of the data are from ice-phase clouds that have recent association with convection, other types of clouds such as boundary layer clouds and active convection were also sampled and are represented in this data set. The derived data set, as compiled in this delivery, includes approximately 15,000 5-second averaged measurements collected by the NASA DC8.

  2. Atomistic study of two-level systems in amorphous silica

    NASA Astrophysics Data System (ADS)

    Damart, T.; Rodney, D.

    2018-01-01

    Internal friction is analyzed in an atomic-scale model of amorphous silica. The potential energy landscape of more than 100 glasses is explored to identify a sample of about 700 two-level systems (TLSs). We discuss the properties of TLSs, particularly their energy asymmetry and barrier as well as their deformation potential, computed as longitudinal and transverse averages of the full deformation potential tensors. The discrete sampling is used to predict dissipation in the classical regime. Comparison with experimental data shows a better agreement with poorly relaxed thin films than well relaxed vitreous silica, as expected from the large quench rates used to produce numerical glasses. The TLSs are categorized in three types that are shown to affect dissipation in different temperature ranges. The sampling is also used to discuss critically the usual approximations employed in the literature to represent the statistical properties of TLSs.

  3. Prospective Molecular Profiling of Melanoma Metastases Suggests Classifiers of Immune Responsiveness

    PubMed Central

    Wang, Ena; Miller, Lance D.; Ohnmacht, Galen A.; Mocellin, Simone; Perez-Diez, Ainhoa; Petersen, David; Zhao, Yingdong; Simon, Richard; Powell, John I.; Asaki, Esther; Alexander, H. Richard; Duray, Paul H.; Herlyn, Meenhard; Restifo, Nicholas P.; Liu, Edison T.; Rosenberg, Steven A.; Marincola, Francesco M.

    2008-01-01

    We amplified RNAs from 63 fine needle aspiration (FNA) samples from 37 s.c. melanoma metastases from 25 patients undergoing immunotherapy for hybridization to a 6108-gene human cDNA chip. By prospectively following the history of the lesions, we could correlate transcript patterns with clinical outcome. Cluster analysis revealed a tight relationship among autologous synchronously sampled tumors compared with unrelated lesions (average Pearson's r = 0.83 and 0.7, respectively, P < 0.0003). As reported previously, two subgroups of metastatic melanoma lesions were identified that, however, had no predictive correlation with clinical outcome. Ranking of gene expression data from pretreatment samples identified ∼30 genes predictive of clinical response (P < 0.001). Analysis of their annotations denoted that approximately half of them were related to T-cell regulation, suggesting that immune responsiveness might be predetermined by a tumor microenvironment conducive to immune recognition. PMID:12097256

  4. Spectral Absorption Properties of Aerosol Particles from 350-2500 nm

    NASA Technical Reports Server (NTRS)

    Martins, J. Vanderlei; Artaxo, Paulo; Kaufman, Yoram J.; Castanho, Andrea D.; Remer, Lorraine A.

    2009-01-01

    The aerosol spectral absorption efficiency (alpha_a, in square meters per gram) is measured over an extended wavelength range (350-2500 nm) using an improved calibrated and validated reflectance technique and applied to urban aerosol samples from Sao Paulo, Brazil and from a site in Virginia, Eastern US, that experiences transported urban/industrial aerosol. The average alpha_a values (approximately 3 square meters per gram at 550 nm) for Sao Paulo samples are 10 times larger than the alpha_a values obtained for aerosols in Virginia. Sao Paulo aerosols also show evidence of enhanced UV absorption in selected samples, probably associated with organic aerosol components. This extra UV absorption can double the absorption efficiency observed from black carbon alone, therefore reducing by up to 50% the surface UV fluxes, with important implications for climate, UV photolysis rates, and remote sensing from space.

  5. Tracer-Test Planning Using the Efficient Hydrologic Tracer ...

    EPA Pesticide Factsheets

    Hydrological tracer testing is the most reliable diagnostic technique available for establishing flow trajectories and hydrologic connections and for determining basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowledge of the basic hydraulic and geometric parameters desired and the appropriate tracer mass to release. A new efficient hydrologic tracer-test design (EHTD) methodology has been developed that combines basic measured field parameters (e.g., discharge, distance, cross-sectional area) in functional relationships that describe solute-transport processes related to flow velocity and time of travel. The new method applies these initial estimates for time of travel and velocity to a hypothetical continuously stirred tank reactor as an analog for the hydrologic flow system to develop initial estimates for tracer concentration and axial dispersion, based on a preset average tracer concentration. Root determination of the one-dimensional advection-dispersion equation (ADE) using the preset average tracer concentration then provides a theoretical basis for an estimate of necessary tracer mass.Application of the predicted tracer mass with the hydraulic and geometric parameters in the ADE allows for an approximation of initial sample-collection time and subsequent sample-collection frequency where a maximum of 65 samples were determined to be
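
    A sketch of the general idea described above (choosing a tracer mass so that a preset average concentration is met at the sampling station) is shown below, using the instantaneous-release solution of the one-dimensional advection-dispersion equation and root determination for the mass. All field parameters are illustrative assumptions; this is not the EHTD code.

      import numpy as np
      from scipy.integrate import quad
      from scipy.optimize import brentq

      # Illustrative field parameters (not from the EHTD reports)
      Q = 0.5                 # discharge, m^3/s
      A = 1.2                 # cross-sectional area, m^2
      x = 500.0               # distance to the sampling station, m
      v = Q / A               # mean velocity, m/s
      D = 2.0                 # longitudinal dispersion coefficient, m^2/s
      C_avg_target = 1e-5     # preset average tracer concentration, kg/m^3 (10 ppb)

      def conc(t, M):
          """1-D ADE solution for an instantaneous release of mass M (kg)."""
          return M / (A * np.sqrt(4 * np.pi * D * t)) * np.exp(-(x - v * t) ** 2 / (4 * D * t))

      def avg_conc(M, t1, t2):
          """Time-averaged concentration at the station over the window [t1, t2]."""
          val, _ = quad(lambda t: conc(t, M), t1, t2)
          return val / (t2 - t1)

      # Average over a window bracketing the expected arrival time x / v, then
      # find the mass whose average concentration matches the preset target.
      t_peak = x / v
      t1, t2 = 0.5 * t_peak, 2.0 * t_peak
      mass = brentq(lambda M: avg_conc(M, t1, t2) - C_avg_target, 1e-6, 1e3)
      print(f"estimated tracer mass: {mass:.3f} kg")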

  6. EFFICIENT HYDROLOGICAL TRACER-TEST DESIGN (EHTD ...

    EPA Pesticide Factsheets

    Hydrological tracer testing is the most reliable diagnostic technique available for establishing flow trajectories and hydrologic connections and for determining basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowledge of the basic hydraulic and geometric parameters desired and the appropriate tracer mass to release. A new efficient hydrologic tracer-test design (EHTD) methodology has been developed that combines basic measured field parameters (e.g., discharge, distance, cross-sectional area) in functional relationships that describe solute-transport processes related to flow velocity and time of travel. The new method applies these initial estimates for time of travel and velocity to a hypothetical continuously stirred tank reactor as an analog for the hydrologic flow system to develop initial estimates for tracer concentration and axial dispersion, based on a preset average tracer concentration. Root determination of the one-dimensional advection-dispersion equation (ADE) using the preset average tracer concentration then provides a theoretical basis for an estimate of necessary tracer mass.Application of the predicted tracer mass with the hydraulic and geometric parameters in the ADE allows for an approximation of initial sample-collection time and subsequent sample-collection frequency where a maximum of 65 samples were determined to

  7. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    NASA Astrophysics Data System (ADS)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
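
    One way to see the weighted-average form described above is to regress squared innovations on ensemble sample variance over an archive of (observation-minus-forecast, ensemble-variance) pairs: the slope plays the role of the weight on the flow-dependent ensemble variance and the intercept the role of the weight on the climatological variance. The sketch below uses synthetic data and ignores observation error; it is illustrative, not the authors' derivation.

      import numpy as np

      rng = np.random.default_rng(4)

      # Synthetic archive of (observation-minus-forecast, ensemble-variance) pairs
      n = 20_000
      true_var = rng.gamma(shape=2.0, scale=0.5, size=n)       # unknown true error variance
      innov = rng.normal(0.0, np.sqrt(true_var))               # o - f, mean zero
      ens_var = true_var * rng.chisquare(10, size=n) / 10      # noisy ensemble sample variance

      # Regress squared innovations on ensemble variance:
      #   E[(o-f)^2 | s^2] ~= a + b*s^2, mirroring (1-w)*sigma2_clim + w*s^2
      X = np.column_stack([np.ones(n), ens_var])
      a, b = np.linalg.lstsq(X, innov ** 2, rcond=None)[0]

      sigma2_clim = np.var(innov)        # climatological forecast error variance
      w_ens = b                          # weight on the flow-dependent (ensemble) part
      w_static = a / sigma2_clim         # weight on the static/climatological part
      print(f"ensemble weight ~ {w_ens:.2f}, static weight ~ {w_static:.2f}")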

  8. Project 57 Air Monitoring Report: October 1, 2013, through December 31, 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mizell, Steve A.; Nikolich, George; McCurdy, Greg

    On April 24, 1957, the Atomic Energy Commission (AEC, now the Department of Energy [DOE]) conducted the Project 57 safety experiment in western Emigrant Valley north east of the Nevada National Security Site (NNSS, formerly the Nevada Test Site) on lands withdrawn by the Department of Defense (DoD) for the Nevada Test and Training Range (NTTR). The test was undertaken to develop (1) a means of estimating plutonium distribution resulting from a nonnuclear detonation; (2) biomedical evaluation techniques for use in plutonium-laden environments; (3) methods of surface decontamination; and (4) instruments and field procedures for prompt estimation of alpha contaminationmore » (Shreve, 1958). Although the test did not result in the fission of nuclear materials, it did disseminate plutonium across the land surface. Following the experiment, the AEC fenced the contaminated area and returned control of the surrounding land to the DoD. Various radiological surveys have been performed in the area and in 2007, the DOE expanded the demarked contamination area by posting signs 200 to 400 feet (60 to 120 meters) outside of the original fence. Plutonium in soil is thought to attach preferentially to smaller particles. Therefore, redistribution of soil particulates by wind (dust) is the mechanism most likely to transport plutonium beyond the boundary of the Project 57 contamination area. In 2011, DRI installed two instrumentation towers to measure radiological, meteorological, and dust conditions. The monitoring activity was implemented to determine if radionuclide contamination was detectable in samples of airborne dust and characterize meteorological and environmental parameters that influence dust transport. Collected data also permits comparison of radiological conditions at the Project 57 monitoring stations to conditions observed at Community Environmental Monitoring Program (CEMP) stations around the NTTR. Biweekly samples of airborne particulates are submitted for laboratory assessment of gross alpha and gross beta radioactivity and for determination of gamma-emitting radionuclides. Annual average gross alpha values at the Project 57 monitoring stations are in the same range as the highest two values reported for the CEMP stations surrounding the NTTR. Annual average gross beta values at the Project 57 monitoring stations are slightly higher than the lowest value reported for the CEMP stations surrounding the NTTR. Gamma spectroscopy analyses on samples collected from the Project 57 stations identified only naturally occurring radionuclides. No manmade radionuclides were detected. Thermoluminescent dosimeters (TLDs) indicated that the average annual radioactivity dose at the monitoring stations is higher than the dose determined at surrounding CEMP stations but approximately half of the estimated national average dose received by the general public as a result of exposure to natural sources. The TLDs at the Project 57 monitoring stations are exposed to both natural sources (terrestrial and cosmic) and radioactive releases from the Project 57 contamination area. These comparisons show that the gross alpha, gross beta, and gamma spectroscopy levels at the Project 57 monitoring stations are similar to levels observed at the CEMP stations but that the average annual dose rate is higher than at the CEMP stations. Winds in excess of approximately 15 mph begin to generate dust movement by saltation (migration of sand at the ground surface) or direct suspension in the air. 
Saltated sand, PM10 (inhalable) dust, and PM2.5 (fine particulate dust) exhibit an approximately exponential increase with increasing wind speed. The greatest concentrations of dust occur for winds exceeding 20 mph. During the reporting period, winds in excess of 20 mph occurred approximately 1.6 percent of the time. Preliminary assessment of individual wind events suggests that dust generation is highly variable, likely because of the influence of other meteorological and environmental parameters. Although winds sufficient to generate significant amounts of dust occur at the Project 57 site, they are infrequent and of short duration. Additionally, the potential for wind transport of dust is dependent on other parameters whose influence has not yet been assessed.

  9. Indoor, outdoor, and regional summer and winter concentrations of PM10, PM2.5, SO4(2)-, H+, NH4+, NO3-, NH3, and nitrous acid in homes with and without kerosene space heaters.

    PubMed Central

    Leaderer, B P; Naeher, L; Jankun, T; Balenger, K; Holford, T R; Toth, C; Sullivan, J; Wolfson, J M; Koutrakis, P

    1999-01-01

    Twenty-four-hour samples of PM10 (mass of particles with aerodynamic diameter < or = 10 microm), PM2.5 (mass of particles with aerodynamic diameter < or = 2.5 microm), particle strong acidity (H+), sulfate (SO42-), nitrate (NO3-), ammonia (NH3), nitrous acid (HONO), and sulfur dioxide were collected inside and outside of 281 homes during winter and summer periods. Measurements were also conducted during summer periods at a regional site. A total of 58 homes of nonsmokers were sampled during the summer periods and 223 homes were sampled during the winter periods. Seventy-four of the homes sampled during the winter reported the use of a kerosene heater. All homes sampled in the summer were located in southwest Virginia. All but 20 homes sampled in the winter were also located in southwest Virginia; the remainder of the homes were located in Connecticut. For homes without tobacco combustion, the regional air monitoring site (Vinton, VA) appeared to provide a reasonable estimate of concentrations of PM2.5 and SO42- during summer months outside and inside homes within the region, even when a substantial number of the homes used air conditioning. Average indoor/outdoor ratios for PM2.5 and SO42- during the summer period were 1.03 +/- 0.71 and 0.74 +/- 0.53, respectively. The indoor/outdoor mean ratio for sulfate suggests that on average approximately 75% of the fine aerosol indoors during the summer is associated with outdoor sources. Kerosene heater use during the winter months, in the absence of tobacco combustion, results in substantial increases in indoor concentrations of PM2.5, SO42-, and possibly H+, as compared to homes without kerosene heaters. During their use, we estimated that kerosene heaters added, on average, approximately 40 microg/m3 of PM2.5 and 15 microg/m3 of SO42- to background residential levels of 18 and 2 microg/m3, respectively. Results from using sulfuric acid-doped Teflon (E.I. Du Pont de Nemours & Co., Wilmington, DE) filters in homes with kerosene heaters suggest that acid particle concentrations may be substantially higher than those measured because of acid neutralization by ammonia. During both the summer and winter periods, ammonia concentrations are an order of magnitude higher indoors than outdoors and appear to result in lower indoor acid particle concentrations. Nitrous acid levels are higher indoors than outdoors during both winter and summer and are substantially higher in homes with unvented combustion sources. PMID:10064553

  10. Uniformly high-order accurate non-oscillatory schemes, 1

    NASA Technical Reports Server (NTRS)

    Harten, A.; Osher, S.

    1985-01-01

    The construction and the analysis of nonoscillatory shock capturing methods for the approximation of hyperbolic conservation laws was begun. These schemes share many desirable properties with total variation diminishing (TVD) schemes, but TVD schemes have at most first order accuracy, in the sense of truncation error, at extrema of the solution. A uniformly second order approximation was constructed, which is nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time. This is achieved via a nonoscillatory piecewise linear reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell.
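
    The reconstruct-evolve-average structure described above can be illustrated with the reconstruction step alone. The sketch below is a minimal piecewise linear reconstruction from cell averages using a minmod-type limited slope; it is a generic non-oscillatory reconstruction shown only for illustration, not the specific uniformly second-order scheme of Harten and Osher, and the grid, profile, and function names are assumptions.

    ```python
    import numpy as np

    def minmod(a, b):
        """Minmod limiter: the argument of smaller magnitude when signs agree, else 0."""
        return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

    def piecewise_linear_reconstruction(u_bar, dx):
        """Limited slopes s_j so that u(x) ~ u_bar_j + s_j*(x - x_j) inside cell j.

        u_bar : array of cell averages (periodic boundary assumed for simplicity).
        """
        forward = (np.roll(u_bar, -1) - u_bar) / dx   # (u_{j+1} - u_j)/dx
        backward = (u_bar - np.roll(u_bar, 1)) / dx   # (u_j - u_{j-1})/dx
        return minmod(forward, backward)

    # Example: reconstruct interface values for a step profile without creating new extrema.
    x = np.linspace(0.0, 1.0, 100, endpoint=False)
    dx = x[1] - x[0]
    u_bar = np.where(x < 0.5, 1.0, 0.0)
    s = piecewise_linear_reconstruction(u_bar, dx)
    u_left = u_bar - 0.5 * dx * s    # value at the left face of each cell
    u_right = u_bar + 0.5 * dx * s   # value at the right face of each cell
    ```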

  11. Numerical solution of the unsteady Navier-Stokes equation

    NASA Technical Reports Server (NTRS)

    Osher, Stanley J.; Engquist, Bjoern

    1985-01-01

    The construction and the analysis of nonoscillatory shock capturing methods for the approximation of hyperbolic conservation laws are discussed. These schemes share many desirable properties with total variation diminishing schemes, but TVD schemes have at most first-order accuracy, in the sense of truncation error, at extrema of the solution. In this paper a uniformly second-order approximation is constructed, which is nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time. This is achieved via a nonoscillatory piecewise linear reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell.

  12. Characterization of cyanophyte biomass in a Bureau of Reclamation reservoir

    USGS Publications Warehouse

    Simon, Nancy S.; Ali, Ahmad Abdul; Samperton, Kyle Michael; Korson, Charles S.; Fischer, Kris; Hughes, Michael L.

    2013-01-01

    The purpose of this study was to characterize the cyanophyte Aphanizomenon flos-aquae (AFA) from Upper Klamath Lake, Oregon, (UKL) and, based on this description, explore uses for AFA, which would have commercial value. AFA samples collected from UKL at eight sites in 2010 during a period of approximately 2 weeks were similar in composition spatially and temporally. 31P nuclear magnetic resonance analysis of the samples indicated that the AFA samples contained a broad range of phosphorus-containing compounds. The largest variation in organic phosphorus compounds was found in a sample collected from Howard Bay compared with samples collected at the sites at Pelican Marina, North Buck Island, Eagle Ridge, Eagle Ridge South, Shoalwater Bay, and Agency Lake South. 31P nuclear magnetic resonance data indicated that the average ratio of inorganic phosphorus (orthophosphate) to organic phosphorus in the AFA samples was approximately 60:40 in extraction solutions of either water or a more rigorous solution of sodium hydroxide plus ethylenediaminetetraacetic acid. This indicates that when AFA cells senesce, die and lyse, cell contents added to the water column contain a broad spectrum of phosphorus-containing compounds, approximately 50 percent of which are organic phosphorus compounds. The organic phosphorus content of AFA is directly and significantly related to the total carbon content of AFA. Total concentrations of the elements Al, Ca, Fe, Mg, Ti, and Zn were similar in all samples, with the exception of elevated iron in the July 27, 2010, sample from Pelican Marina; the concentration of iron in the August 9, 2010, sample from Pelican Marina was indistinguishable from iron in the other AFA samples that were collected. The carbon to nitrogen ratio in all AFA samples that were analyzed was 5.4 plus or minus 0.04, as compared with the Redfield carbon to nitrogen ratio of 6.6, which could be attributed to the large concentrations of nitrogen (protein) in AFA or to optimal growth rate. In UKL there is a concern that microcystin, the toxin produced by microcystis, might be present in what appears to be predominantly AFA in the lake water. Experiments performed as part of this study identified a process that reduces the toxicity of microcystin when it is present in a water slurry containing AFA. The process combines (1) the inhibition of the α,β-unsaturated carbonyl in microcystin with (2) the breakdown of proteins in AFA using the protease activity of plant enzymes. Protease enzymes can break peptide bonds in microcystin, which results in destruction of the cyclic structure of the microcystin polypeptide. Laboratory conditions used in this study resulted in the inactivation of approximately 60 percent of the activity of microcystin.

  13. Determinants of mobile phone output power in a multinational study: implications for exposure assessment.

    PubMed

    Vrijheid, M; Mann, S; Vecchia, P; Wiart, J; Taki, M; Ardoino, L; Armstrong, B K; Auvinen, A; Bédard, D; Berg-Beckhoff, G; Brown, J; Chetrit, A; Collatz-Christensen, H; Combalot, E; Cook, A; Deltour, I; Feychting, M; Giles, G G; Hepworth, S J; Hours, M; Iavarone, I; Johansen, C; Krewski, D; Kurttio, P; Lagorio, S; Lönn, S; McBride, M; Montestrucq, L; Parslow, R C; Sadetzki, S; Schüz, J; Tynes, T; Woodward, A; Cardis, E

    2009-10-01

    The output power of a mobile phone is directly related to its radiofrequency (RF) electromagnetic field strength, and may theoretically vary substantially in different networks and phone use circumstances due to power control technologies. To improve indices of RF exposure for epidemiological studies, we assessed determinants of mobile phone output power in a multinational study. More than 500 volunteers in 12 countries used Global System for Mobile communications software-modified phones (GSM SMPs) for approximately 1 month each. The SMPs recorded date, time, and duration of each call, and the frequency band and output power at fixed sampling intervals throughout each call. Questionnaires provided information on the typical circumstances of an individual's phone use. Linear regression models were used to analyse the influence of possible explanatory variables on the average output power and the percentage call time at maximum power for each call. Measurements of over 60,000 phone calls showed that the average output power was approximately 50% of the maximum, and that output power varied by a factor of up to 2 to 3 between study centres and network operators. Maximum power was used during a considerable proportion of call time (39% on average). Output power decreased with increasing call duration, but showed little variation in relation to reported frequency of use while in a moving vehicle or inside buildings. Higher output powers for rural compared with urban use of the SMP were observed principally in Sweden where the study covered very sparsely populated areas. Average power levels are substantially higher than the minimum levels theoretically achievable in GSM networks. Exposure indices could be improved by accounting for average power levels of different telecommunications systems. There appears to be little value in gathering information on circumstances of phone use other than use in very sparsely populated regions.

  14. Conformer lifetimes of ethyl cyanoformate from exchange-averaged rotational spectra.

    PubMed

    True, Nancy S

    2009-06-25

    Ethyl cyanoformate exists as a mixture of two conformers but displays three R-branch a-type band series in its rotational spectrum. Simulations with population fractions 0.37 at 210 K and 0.70 at 297 K undergoing conformer exchange with average conformer lifetimes shorter than approximately 40 ps at approximately 210 K and shorter than approximately 37 ps at 297 K reproduce the experimental spectra between 26.5 and 38 GHz, the exchanging species accounting for the third set of bands. The upper-limit lifetimes are 1 order of magnitude longer than RRKM theory predictions and the population fractions are consistent with the total population with energy above 700 cm(-1), approximately twice the conformer interconversion barrier height. Model calculations indicate that extensive K-sublevel mixing in individual molecular eigenstates can produce the large population and the narrow distribution of the rotational-constant sum, B + C, consistent with the observed exchange-averaged band series.

  15. Exposure of miners to diesel exhaust particulates in underground nonmetal mines.

    PubMed

    Cohen, H J; Borak, J; Hall, T; Sirianni, G; Chemerynski, S

    2002-01-01

    A study was initiated to examine worker exposures in seven underground nonmetal mines and to examine the precision of the National Institute for Occupational Safety and Health (NIOSH) 5040 sampling and analytical method for diesel exhaust that has recently been adopted for compliance monitoring by the Mine Safety and Health Administration (MSHA). Approximately 1000 air samples using cyclones were taken on workers and in areas throughout the mines. Results indicated that worker exposures were consistently above the MSHA final limit of 160 micrograms/m3 (time-weighted average; TWA) for total carbon as determined by the NIOSH 5040 method and greater than the proposed American Conference of Governmental Industrial Hygienists TLV limit of 20 micrograms/m3 (TWA) for elemental carbon. A number of difficulties were documented when sampling for diesel exhaust using organic carbon: high and variable blank values from filters, a high variability (+/- 20%) from duplicate punches from the same sampling filter, a consistent positive interference (+26%) when open-faced monitors were sampled side-by-side with cyclones, poor correlation (r2 = 0.38) to elemental carbon levels, and an interference from limestone that could not be adequately corrected by acid-washing of filters. The sampling and analytical precision (relative standard deviation) was approximately 11% for elemental carbon, 17% for organic carbon, and 11% for total carbon. A hypothesis is presented and supported with data that gaseous organic carbon constituents of diesel exhaust adsorb not only onto the submicron elemental carbon particles found in diesel exhaust but also onto mining ore dusts. Such mining dusts are mostly nonrespirable and should not be considered equivalent to submicron diesel particulates in their potential for adverse pulmonary effects. It is recommended that size-selective sampling be employed, rather than open-faced monitoring, when using the NIOSH 5040 method.

  16. Progression of ash canopy thinning and dieback outward from the initial infestation of emerald ash borer (Coleoptera: Buprestidae) in southeastern Michigan.

    PubMed

    Smitley, David; Davis, Terrance; Rebek, Eric

    2008-10-01

    Our objective was to characterize the rate at which ash (Fraxinus spp.) trees decline in areas adjacent to the leading edge of visible ash canopy thinning due to emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae). Trees in southeastern Michigan were surveyed from 2003 to 2006 for canopy thinning and dieback by comparing survey trees with a set of 11 standard photographs. Freeways stemming from Detroit in all directions were used as survey transects. Between 750 and 1,100 trees were surveyed each year. A rapid method of sampling populations of emerald ash borer was developed by counting emerald ash borer emergence holes with binoculars and then felling trees to validate binocular counts. Approximately 25% of the trees surveyed for canopy thinning in 2005 and 2006 also were sampled for emerald ash borer emergence holes using binoculars. Regression analysis indicates that 41-53% of the variation in ash canopy thinning can be explained by the number of emerald ash borer emergence holes per tree. Emerald ash borer emergence holes were found at every site where ash canopy thinning averaged > 40%. In 2003, ash canopy thinning averaged 40% at a distance of 19.3 km from the epicenter of the emerald ash borer infestation in Canton. By 2006, the point at which ash trees averaged 40% canopy thinning had increased to a distance of 51.2 km away from Canton. Therefore, the point at which ash trees averaged 40% canopy thinning, a state of decline clearly visible to the average person, moved outward at a rate of 10.6 km/yr during this period.

  17. Radar investigation of asteroids

    NASA Technical Reports Server (NTRS)

    Ostro, S. J.

    1983-01-01

    For 80 Sappho, 356 Liguria, 694 Ekard, and 2340 Hathor, data were taken simultaneously in the same sense of circular polarization as transmitted (SC) as well as in the opposite (OC) sense. Graphs show the average OC and SC radar echo power spectra smoothed to a resolution of EFB Hz and plotted against Doppler frequency. Radar observations of the peculiar object 2201 Oljato reveal an unusual set of echo power spectra. The albedo and polarization ratio remain fairly constant, but the bandwidths range from approximately 0.8 Hz to 1.4 Hz and the spectral shapes vary dramatically. Echo characteristics within any one date's approximately 2.5-hr observation period do not fluctuate very much. Laboratory measurements of the radar frequency electrical properties of particulate metal-plus-silicate mixtures can be combined with radar albedo estimates to constrain the bulk density and metal weight fraction in a hypothetical asteroid regolith having the same particle size distribution as lab samples.

  18. Landauer-Büttiker and Thouless Conductance

    NASA Astrophysics Data System (ADS)

    Bruneau, L.; Jakšić, V.; Last, Y.; Pillet, C.-A.

    2015-08-01

    In the independent electron approximation, the average (energy/charge/entropy) current flowing through a finite sample connected to two electronic reservoirs can be computed by scattering theoretic arguments which lead to the famous Landauer-Büttiker formula. Another well-known formula has been proposed by Thouless on the basis of a scaling argument. The Thouless formula relates the conductance of the sample to the width of the spectral bands of the infinite crystal obtained by periodic juxtaposition of the sample. In this spirit, we define Landauer-Büttiker crystalline currents by extending the Landauer-Büttiker formula to a setup where the sample is replaced by a periodic structure whose unit cell is the sample. We argue that these crystalline currents are closely related to the Thouless currents. For example, the crystalline heat current is bounded above by the Thouless heat current, and this bound saturates iff the coupling between the reservoirs and the sample is reflectionless. Our analysis leads to a rigorous derivation of the Thouless formula from the first principles of quantum statistical mechanics.
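
    As a concrete illustration of the scattering-theoretic formula mentioned above, the sketch below evaluates the two-terminal Landauer-Büttiker charge current I = (e/h) ∫ T(E) [f_L(E) - f_R(E)] dE for a toy Lorentzian transmission function. The transmission model, bias, and temperature are assumptions chosen only to make the example runnable; the paper's crystalline and Thouless currents are not reproduced here.

    ```python
    import numpy as np

    kB = 8.617333262e-5   # Boltzmann constant, eV/K
    e = 1.602176634e-19   # elementary charge, C
    h = 4.135667696e-15   # Planck constant, eV*s

    def fermi(E, mu, T):
        """Fermi-Dirac occupation at energy E (eV), chemical potential mu (eV), temperature T (K)."""
        return 1.0 / (1.0 + np.exp((E - mu) / (kB * T)))

    def landauer_current(transmission, mu_L, mu_R, T, E_grid):
        """Two-terminal Landauer-Büttiker charge current (amperes, per spin channel)."""
        integrand = transmission(E_grid) * (fermi(E_grid, mu_L, T) - fermi(E_grid, mu_R, T))
        return (e / h) * np.trapz(integrand, E_grid)

    # Toy transmission: a single Lorentzian resonance (purely illustrative).
    def T_resonance(E, E0=0.0, gamma=0.05):
        return gamma**2 / ((E - E0)**2 + gamma**2)

    E = np.linspace(-1.0, 1.0, 4001)
    I = landauer_current(T_resonance, mu_L=+0.05, mu_R=-0.05, T=77.0, E_grid=E)
    print(f"current ~ {I:.3e} A per spin channel")
    ```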

  19. Field-based evaluations of sampling techniques to support long-term monitoring of riparian ecosystems along wadeable streams on the Colorado Plateau

    USGS Publications Warehouse

    Scott, Michael L.; Reynolds, Elizabeth W.

    2007-01-01

    Compared to 5-m by 20-m tree quadrats, belt transects were shown to provide similar estimates of stand structure (stem density and stand basal area) in less than 30 percent of the time. Further, for the streams sampled, there were no statistically significant differences in stem density and basal area estimates between 10-m and 20-m belt transects, and the smaller belts took approximately half the time to sample. There was, however, high variance associated with estimates of stand structure for infrequently occurring stems, such as large, relict or legacy riparian trees. Legacy riparian trees occurred in limited numbers at all sites sampled. A reach-scale population census of these trees indicated that the 10-m belt transects tended to underestimate both stem density and basal area for these riparian forest elements and that a complete reach-scale census of legacy trees averaged less than one hour per site.

  20. The Third SeaWiFS HPLC Analysis Round-Robin Experiment (SeaHARRE-3)

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B.; VanHeukelem, Laurei; Thomas, Crystal S.; Claustre, Herve; Ras, Josephine; Schluter, Louise; Clementson, Lesley; vanderLinde, Dirk; Eker-Develi, Elif; Berthon, Jean-Francois; hide

    2009-01-01

    Seven international laboratories specializing in the determination of marine pigment concentrations using high performance liquid chromatography (HPLC) were intercompared using in situ samples and a mixed pigment sample. The field samples were collected primarily from oligotrophic waters, although mesotrophic and eutrophic waters were also sampled to create a dynamic range in chlorophyll concentration spanning approximately two orders of magnitude (0.020-1.366 mg m^-3). The intercomparisons were used to establish the following: a) the uncertainties in quantitating individual pigments and higher-order variables (sums, ratios, and indices); b) the reduction in uncertainties as a result of applying quality assurance (QA) procedures; c) the importance of establishing a properly defined referencing system in the computation of uncertainties; d) the analytical benefits of performance metrics; and e) the utility of a laboratory mix in understanding method performance. In addition, the remote sensing requirements for the in situ determination of total chlorophyll a were investigated to determine whether or not the average uncertainty for this measurement is being satisfied.

  1. Radioelements and their occurrence with secondary minerals in heated and unheated tuff at the Nevada Test Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flexser, S.; Wollenberg, H.A.

    1992-06-01

    Samples of devitrified welded tuff near and away from the site of a heater test in Rainier Mesa were examined with regard to whole-rock radioelement abundances, microscopic distribution of U, and oxygen isotope ratios. Whole-rock U averages between 4 and 5 ppm, and U is concentrated at higher levels in secondary opaque minerals as well as in accessory grains. U in primary and secondary sites is most commonly associated with Mn phases, which average approximately 30 ppm U in more uraniferous occurrences. This average is consistent and apparently unaffected by proximity to the heater. The Mn phases differ compositionally from Mn minerals in other NTS tuffs, usually containing abundant Fe, Ti, and sometimes Ce, and are often poorly crystalline. Oxygen isotope ratios show some depletion in δ18O in tuff samples very close to the heater; this depletion is consistent with isotopic exchange between the tuff and interstitial water, but it may also reflect original heterogeneity in isotopic ratios of the tuff unrelated to the heater test. Seismic properties of several tuff samples were measured. Significant differences correlating with distance from the heater occur in P- and S-wave amplitudes; these may be due to loss of bound water. Seismic velocities are nearly constant and indicate a lack of significant microcracking. The absence of clearer signs of heater-induced U mobilization or isotopic variations may be due to the short duration of the heater test, and to insufficient definition of pre-heater-test heterogeneities in the tuff.

  2. Rugometric and microtopographic non-invasive inspection in dental-resin composites and zirconia ceramics

    NASA Astrophysics Data System (ADS)

    Fernández-Oliveras, Alicia; Costa, Manuel F. M.; Pecho, Oscar E.; Rubiño, Manuel; Pérez, María. M.

    2013-11-01

    Surface properties are essential for a complete characterization of biomaterials. In restorative dentistry, the study of the surface properties of materials meant to replace dental tissues in an irreversibly diseased tooth is important to avoid harmful changes in future treatments. We have experimentally analyzed the surface characterization parameters of two different types of dental-resin composites and pre-sintered and sintered zirconia ceramics. We studied two shades of both composite types and two sintered zirconia ceramics: colored and uncolored. Moreover, a surface treatment was applied to one specimen of each dental-resin composite. All the samples were submitted to rugometric and microtopographic non-invasive inspection with the MICROTOP.06.MFC laser microtopographer in order to gather meaningful statistical parameters such as the average roughness (Ra), the root-mean-square deviation (Rq), the skewness (Rsk), and the kurtosis of the surface height distribution (Rku). For a comparison of the different biomaterials, the uncertainties associated with the surface parameters were also determined. With respect to Ra and Rq, significant differences between the composite shades were found. Among the dental resins, the nanocomposite presented the highest values and, for the zirconia ceramics, the pre-sintered sample registered the lowest ones. The composite performance may have been due to cluster-formation variations. Except for the composites with the surface treatment, the sample surfaces had approximately a normal distribution of heights. The surface treatment applied to the composites increased the average roughness and moved the height distribution farther away from the normal distribution. The zirconia-sintering process resulted in higher average roughness without affecting the height distribution.

  3. Study on coal char ignition by radiant heat flux.

    NASA Astrophysics Data System (ADS)

    Korotkikh, A. G.; Slyusarskiy, K. V.

    2017-11-01

    A study of coal char ignition by a continuous CO2 laser was carried out. Char samples of T-grade bituminous coal and 2B-grade lignite were studied using a CO2-laser ignition setup. Ignition delay times were determined at ambient conditions over a heat flux density range of 90-200 W/cm2. The average ignition delay time for the lignite samples was approximately 2 times lower than for the bituminous coal samples; this difference was larger in the high heat flux region and smaller in the low heat flux region. The kinetic constants for the overall oxidation reaction were determined using an analytic solution of a simplified one-dimensional heat transfer equation with a radiant heat transfer boundary condition. The activation energy for the lignite char was found to be approximately 20% lower than that of the bituminous coal char.

  4. Free energy landscape from path-sampling: application to the structural transition in LJ38

    NASA Astrophysics Data System (ADS)

    Adjanor, G.; Athènes, M.; Calvo, F.

    2006-09-01

    We introduce a path-sampling scheme that allows equilibrium state-ensemble averages to be computed by means of a biased distribution of non-equilibrium paths. This non-equilibrium method is applied to the case of the 38-atom Lennard-Jones atomic cluster, which has a double-funnel energy landscape. We calculate the free energy profile along the Q4 bond orientational order parameter. At high or moderate temperature the results obtained using the non-equilibrium approach are consistent with those obtained using conventional equilibrium methods, including parallel tempering and Wang-Landau Monte Carlo simulations. At lower temperatures, the non-equilibrium approach becomes more efficient in exploring the relevant inherent structures. In particular, the free energy agrees with the predictions of the harmonic superposition approximation.
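
    The paper's biased path-sampling scheme is not spelled out in this abstract, so the sketch below only illustrates the generic idea of recovering an equilibrium free-energy difference from non-equilibrium work samples via the exponential (Jarzynski-type) average; the synthetic Gaussian work distribution is an assumption used purely for illustration.

    ```python
    import numpy as np

    def jarzynski_free_energy(work, beta):
        """Estimate dF from non-equilibrium work samples via dF = -(1/beta) ln <exp(-beta W)>.

        A log-sum-exp form is used for numerical stability.
        """
        w = -beta * np.asarray(work)
        wmax = w.max()
        return -(wmax + np.log(np.mean(np.exp(w - wmax)))) / beta

    # Synthetic example: Gaussian work distribution with mean 2.0 and std 1.0 (units of kT, beta = 1).
    # For Gaussian work the exact answer is <W> - beta*var(W)/2 = 2.0 - 0.5 = 1.5.
    rng = np.random.default_rng(0)
    work = rng.normal(loc=2.0, scale=1.0, size=200_000)
    print(jarzynski_free_energy(work, beta=1.0))   # ~1.5 for large sample sizes
    ```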

  5. Early harvest and ensilage of forage sorghum infected with ergot (Claviceps africana) reduces the risk of livestock poisoning.

    PubMed

    Blaney, B J; Ryley, M J; Boucher, B D

    2010-08-01

    Sorghum ergot produces dihydroergosine (DHES) and related alkaloids, which cause hyperthermia in cattle. Proportions of infected panicles (grain heads), leaves and stems were determined in two forage sorghum crops extensively infected 2 to 4 weeks prior to sampling and the panicles were assayed for DHES. Composite samples from each crop, plus a third grain variety crop, were coarsely chopped and half of each sealed in plastic buckets for 6 weeks to simulate ensilation. The worst-infected panicles contained up to 55 mg DHES/kg, but dilution reduced average concentrations of DHES in crops to approximately 1 mg/kg, a relatively safe level for cattle. Ensilation significantly (P = 0.043) reduced mean DHES concentrations from 0.85 to 0.46 mg/kg.

  6. Detection of ɛ-ergodicity breaking in experimental data—A study of the dynamical functional sensibility

    NASA Astrophysics Data System (ADS)

    Loch-Olszewska, Hanna; Szwabiński, Janusz

    2018-05-01

    The ergodicity breaking phenomenon has already been in the area of interest of many scientists, who tried to uncover its biological and chemical origins. Unfortunately, testing ergodicity in real-life data can be challenging, as sample paths are often too short for approximating their asymptotic behaviour. In this paper, the authors analyze the minimal lengths of empirical trajectories needed for claiming the ɛ-ergodicity based on two commonly used variants of an autoregressive fractionally integrated moving average model. The dependence of the dynamical functional on the parameters of the process is studied. The problem of choosing proper ɛ for ɛ-ergodicity testing is discussed with respect to especially the variation of the innovation process and the data sample length, with a presentation on two real-life examples.
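
    A minimal sketch of the kind of statistic involved: the dynamical-functional approach compares a time average of exp(i(X(t+n) - X(t))) along a single trajectory with the corresponding ensemble average, and the two should agree for an ergodic process once the trajectory is long enough. The Brownian-type trajectories and the lag below are assumptions for illustration only; the ARFIMA models and the specific ε-ergodicity criterion of the paper are not reproduced.

    ```python
    import numpy as np

    def time_avg_functional(x, lag):
        """Time average of exp(i*(x[t+lag] - x[t])) along one trajectory."""
        return np.mean(np.exp(1j * (x[lag:] - x[:-lag])))

    def ensemble_avg_functional(trajs, lag):
        """Ensemble average of exp(i*(x[lag] - x[0])) over many trajectories."""
        trajs = np.asarray(trajs)
        return np.mean(np.exp(1j * (trajs[:, lag] - trajs[:, 0])))

    # Illustrative ergodic example: cumulative sums of Gaussian noise (Brownian-like walks).
    rng = np.random.default_rng(1)
    trajs = np.cumsum(rng.normal(size=(500, 5000)), axis=1)
    lag = 1
    single = time_avg_functional(trajs[0], lag)
    ensemble = ensemble_avg_functional(trajs, lag)
    print(abs(single - ensemble))   # small for an ergodic process and a long trajectory
    ```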

  7. Detection of ε-ergodicity breaking in experimental data-A study of the dynamical functional sensibility.

    PubMed

    Loch-Olszewska, Hanna; Szwabiński, Janusz

    2018-05-28

    The ergodicity breaking phenomenon has already been in the area of interest of many scientists, who tried to uncover its biological and chemical origins. Unfortunately, testing ergodicity in real-life data can be challenging, as sample paths are often too short for approximating their asymptotic behaviour. In this paper, the authors analyze the minimal lengths of empirical trajectories needed for claiming the ε-ergodicity based on two commonly used variants of an autoregressive fractionally integrated moving average model. The dependence of the dynamical functional on the parameters of the process is studied. The problem of choosing proper ε for ε-ergodicity testing is discussed with respect to especially the variation of the innovation process and the data sample length, with a presentation on two real-life examples.

  8. A comparative analysis of double-crested cormorant diets from stomachs and pellets from two Lake Ontario colonies

    USGS Publications Warehouse

    Johnson, James H.; Ross, Robert M.; McCullough, Russell D.; Mathers, Alastair

    2010-01-01

    Double-crested cormorant (Phalacrocorax auritus) diets were compared using evidence from the stomachs of shot birds and from regurgitated pellets at High Bluff Island and Little Galloo Island, Lake Ontario. The highest similarity in diets determined by stomach and pellet analyses occurred when both samples were collected on the same day. Diet overlap dropped substantially between the two methods when collection periods were seven to ten days apart, which suggested differences in prey availability between the two periods. Since the average number of fish recovered in pellets was significantly higher than that in stomachs, use of pellets to determine fish consumption of double-crested cormorants may be more valid than stomach analysis because pellet contents represent an integrated sample of food consumed over approximately 24 hours.

  9. Simultaneous in situ electron temperature comparisons using Alouette 2 probe and plasma resonance data

    NASA Technical Reports Server (NTRS)

    Benson, R. F.

    1973-01-01

    The electron temperatures deduced from Alouette 2 diffuse resonance observations are compared with the temperature obtained from the Alouette 2 cylindrical electrostatic probe experiment using data from 5 mid-to-high latitude telemetry stations. The probe temperature is consistently higher than the diffuse resonance temperature. The average difference ranged from approximately 10% to 40% with the lower values occurring at the lowest altitudes sampled (near 500 km) and at high latitudes (dip latitude greater than 55 deg), and the larger values occurring at high altitudes and lower latitudes. The discrepancy appears to be of geophysical origin since it is dependent on the location of the data sample. The present observations support the view that the often observed radar backscatter - probe electron temperature discrepancy is also of geophysical origin.

  10. Software algorithm and hardware design for real-time implementation of new spectral estimator

    PubMed Central

    2014-01-01

    Background Real-time spectral analyzers can be difficult to implement for PC computer-based systems because of the potential for high computational cost and algorithm complexity. In this work a new spectral estimator (NSE) is developed for real-time analysis and compared with the discrete Fourier transform (DFT). Method Clinical data in the form of 216 fractionated atrial electrogram sequences were used as inputs. The sample rate for acquisition was 977 Hz, or approximately 1 millisecond between digital samples. Real-time NSE power spectra were generated for 16,384 consecutive data points. The same data sequences were used for spectral calculation using a radix-2 implementation of the DFT. The NSE algorithm was also developed for implementation as a real-time spectral analyzer electronic circuit board. Results The average interval for a single real-time spectral calculation in software was 3.29 μs for NSE versus 504.5 μs for DFT. Thus for real-time spectral analysis, the NSE algorithm is approximately 150× faster than the DFT. Over a 1 millisecond sampling period, the NSE algorithm had the capability to spectrally analyze a maximum of 303 data channels, while the DFT algorithm could only analyze a single channel. Moreover, for the 8 second sequences, the NSE spectral resolution in the 3-12 Hz range was 0.037 Hz while the DFT spectral resolution was only 0.122 Hz. The NSE was also found to be implementable as a standalone spectral analyzer board using approximately 26 integrated circuits at a cost of approximately $500. The software files used for analysis are included as a supplement; please see Additional files 1 and 2. Conclusions The NSE real-time algorithm has low computational cost and complexity, and is implementable in both software and hardware for 1 millisecond updates of multichannel spectra. The algorithm may be helpful to guide radiofrequency catheter ablation in real time. PMID:24886214
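
    The NSE algorithm itself is not specified in this abstract, so the sketch below reproduces only the DFT baseline it is compared against: a radix-2 power spectrum of a signal sampled at 977 Hz, whose spectral resolution is the sampling rate divided by the window length (about 0.12 Hz for a roughly 8 s window, matching the order of the 0.122 Hz figure quoted above). The synthetic two-tone signal is an assumption.

    ```python
    import numpy as np

    fs = 977.0   # sampling rate from the abstract, Hz
    n = 8192     # ~8.4 s window; a radix-2 length for the FFT
    t = np.arange(n) / fs

    # Synthetic stand-in for a fractionated electrogram: two tones plus noise (assumption).
    rng = np.random.default_rng(0)
    x = np.sin(2 * np.pi * 5.3 * t) + 0.5 * np.sin(2 * np.pi * 8.1 * t) + 0.3 * rng.normal(size=n)

    spectrum = np.abs(np.fft.rfft(x)) ** 2          # one-sided DFT power spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    print(f"DFT spectral resolution: {fs / n:.3f} Hz")   # ~0.119 Hz for this window length
    band = (freqs >= 3.0) & (freqs <= 12.0)
    print(f"dominant frequency in 3-12 Hz band: {freqs[band][np.argmax(spectrum[band])]:.2f} Hz")
    ```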

  11. Microstructure of the irradiated U3Si2/Al silicide dispersion fuel

    NASA Astrophysics Data System (ADS)

    Gan, J.; Keiser, D. D.; Miller, B. D.; Jue, J.-F.; Robinson, A. B.; Madden, J. W.; Medvedev, P. G.; Wachs, D. M.

    2011-12-01

    The silicide dispersion fuel U3Si2/Al is recognized as the best-performing fuel for many nuclear research and test reactors with up to 4.8 gU/cm3 fuel loading. An irradiated U3Si2/Al dispersion fuel (235U ~ 75%) from the high-flux side of a fuel plate (U0R040) from the Reduced Enrichment for Research and Test Reactors (RERTR)-8 test was characterized using transmission electron microscopy (TEM). The fuel was irradiated in the Advanced Test Reactor (ATR) for 105 days. The average irradiation temperature and fission density of the U3Si2 fuel particles for the TEM sample are estimated to be approximately 110 °C and 5.4 × 10^27 f/m^3. The characterization was performed using a 200-kV TEM. The U/Si ratio for the fuel particle and the (Si + Al)/U ratio for the fuel-matrix-interaction layer are approximately 1.1 and 4-10, respectively. The estimated average diameter, number density, and volume fraction of small bubbles (<1 μm) in the fuel particle are ~94 nm, 1.05 × 10^20 m^-3, and ~11%, respectively. The results and their implications for the performance of the U3Si2/Al silicide dispersion fuel are discussed.

  12. Energy demands in competitive soccer.

    PubMed

    Bangsbo, J

    1994-01-01

    In elite outfield players, the average work rate during a soccer match, as estimated from variables such as heart rate, is approximately 70% of maximal oxygen uptake (VO2 max). This corresponds to an energy production of approximately 5700 kJ (1360 kcal) for a person weighing 75 kg with a VO2 max of 60 ml kg-1 min-1. Aerobic energy production appears to account for more than 90% of total energy consumption. Nevertheless, anaerobic energy production plays an essential role during soccer matches. During intensive exercise periods of a game, creatine phosphate, and to a lesser extent the stored adenosine triphosphate, are utilized. Both compounds are partly restored during a subsequent prolonged rest period. In blood samples taken after top-class soccer matches, the lactate concentration averages 3-9 mM, and individual values frequently exceed 10 mM during match-play. Furthermore, the adenosine diphosphate degradation products (ammonia/ammonium, hypoxanthine, and uric acid) are elevated in the blood during soccer matches. Thus, the anaerobic energy systems are heavily taxed during periods of match-play. Glycogen in the working muscle seems to be the most important substrate for energy production during soccer matches. However, muscle triglycerides, blood free fatty acids and glucose are also used as substrates for oxidative metabolism in the muscles.
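
    A quick check of the ~5700 kJ figure quoted above, assuming roughly 90 minutes of play and the standard caloric equivalent of oxygen of about 20.1 kJ per litre of O2; neither assumption is stated in the abstract.

    ```python
    # Rough check of the ~5700 kJ figure (assumes ~90 min of play and ~20.1 kJ per litre O2).
    vo2max_ml_per_kg_min = 60.0
    body_mass_kg = 75.0
    fraction_of_vo2max = 0.70
    match_minutes = 90.0       # assumption; not stated in the abstract
    kj_per_litre_o2 = 20.1     # approximate caloric equivalent of oxygen

    o2_litres = fraction_of_vo2max * vo2max_ml_per_kg_min * body_mass_kg * match_minutes / 1000.0
    energy_kj = o2_litres * kj_per_litre_o2
    print(f"{o2_litres:.0f} L O2  ->  {energy_kj:.0f} kJ (~{energy_kj / 4.184:.0f} kcal)")
    ```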

  13. Sub-1GHz wireless sensing and control instruments for green house farming system

    NASA Astrophysics Data System (ADS)

    Wardana, I. N. K.; Ciptayani, P. I.; Suranata, I. W. A.

    2018-01-01

    Radio-frequency-enabled devices were developed to make wireless data gathering and instrument control possible for a greenhouse. This research used the 915 MHz radio frequency band, also known as the ISM (industrial, scientific, and medical) band. To carry out the experiments, three main devices were developed: node sensors (NoSe), node actuators (NoAc), and a gateway. According to the communication range test, the devices can transmit flawlessly up to 43 meters in a harsh environment (Non-Line of Sight, or Non-LoS). The range increased dramatically in an open field (Line of Sight, or LoS), where the maximum achievable range was up to 280 meters. The RSSI (Received Signal Strength Indication) was recorded for both LoS and Non-LoS measurements. Approximately 500 data samples were transmitted, transferred approximately every 200 ms. In the Non-LoS scenario, RSSI ranged from -74 dB to -96 dB with an average of -82 dB. Better performance was shown in the LoS measurement, where RSSI varied from -67 dB to -89 dB with an average of -76 dB. Based on these results, this technology has great prospects as an option for greenhouse wireless sensing and control.

  14. Simultaneous measurement of acoustic and streaming velocities in a standing wave using laser Doppler anemometry.

    PubMed

    Thompson, Michael W; Atchley, Anthony A

    2005-04-01

    Laser Doppler anemometry (LDA) with burst spectrum analysis (BSA) is used to study the acoustic streaming generated in a cylindrical standing-wave resonator filled with air. The air column is driven sinusoidally at a frequency of approximately 310 Hz and the resultant acoustic-velocity amplitudes are less than 1.3 m/s at the velocity antinodes. The axial component of fluid velocity is measured along the resonator axis, across the diameter, and as a function of acoustic amplitude. The velocity signals are postprocessed using the Fourier averaging method [Sonnenberger et al., Exp. Fluids 28, 217-224 (2000)]. Equations are derived for determining the uncertainties in the resultant Fourier coefficients. The time-averaged velocity-signal components are seen to be contaminated by significant errors due to the LDA/BSA system. In order to avoid these errors, the Lagrangian streaming velocities are determined using the time-harmonic signal components and the arrival times of the velocity samples. The observed Lagrangian streaming velocities are consistent with Rott's theory [N. Rott, Z. Angew. Math. Phys. 25, 417-421 (1974)], indicating that the dependence of viscosity on temperature is important. The onset of streaming is observed to occur within approximately 5 s after switching on the acoustic field.

  15. Companions to isolated elliptical galaxies: revisiting the Bothun-Sullivan (1977) sample using the NASA/IPAC extragalactic database

    NASA Technical Reports Server (NTRS)

    Madore, B. F.; Freedman, W. L.; Bothun, G. D.

    2002-01-01

    We investigate the number of physical companion galaxies for a sample of relatively isolated elliptical galaxies. The NASA/IPAC Extragalactic Database (NED) has been used to reinvestigate the incidence of satellite galaxies for a sample of 34 elliptical galaxies, first investigated by Bothun & Sullivan (1977) using a visual inspection of Palomar Sky Survey prints out to a projected search radius of 75 kpc. We have repeated their original investigation using data cataloged in NED. Nine of these ellipticals appear to be members of galaxy clusters; the remaining sample of 25 galaxies reveals an average of +1.0 +/- 0.5 apparent companions per galaxy within a projected search radius of 75 kpc, in excess of two equal-area comparison regions displaced by 150-300 kpc. This is nearly an order of magnitude larger than the +0.12 +/- 0.42 companions/galaxy found by Bothun & Sullivan for the identical sample. Making use of published radial velocities, mostly available since the completion of the Bothun-Sullivan study, identifies the physical companions and gives a somewhat lower estimate of +0.4 companions per elliptical. This is still a factor of 3x larger than the original statistical study, but given the incomplete and heterogeneous nature of the survey redshifts in NED, it still yields a firm lower limit on the number (and identity) of physical companions. An expansion of the search radius out to 300 kpc, again restricted to sampling only those objects with known redshifts in NED, gives another lower limit of 4.3 physical companions per galaxy. (Excluding five elliptical galaxies in the Fornax cluster, this average drops to 3.5 companions per elliptical.) These physical companions are individually identified and listed, and the ensemble-averaged radial density distribution of these associated galaxies is presented. For the ensemble, the radial density distribution is found to have a fall-off consistent with approximately R^-0.5 out to approximately 150 kpc. For non-Fornax cluster companions the fall-off continues out to the 300-kpc limit of the survey. The velocity dispersion of these companions is found to be constant with projected radial distance from the central elliptical, holding at a value of approximately +/- 300-350 km/sec overall.

  16. A filament of energetic particles near the high-latitude dawn magnetopause

    NASA Technical Reports Server (NTRS)

    Lui, A. T. Y.; Williams, D. J.; Mcentire, R. W.; Christon, S. P.; Jacquey, C.; Angelopoulos, V.; Yamamoto, T.; Kokubun, S.; Frank, L. A.; Ackerson, K. L.

    1994-01-01

    The Geotail satellite detected a filament of tailward-streaming energetic particles spatially separated from the boundary layer of energetic particles at the high-latitude dawn magnetopause at a downstream distance of approximately 80 R_E on October 27, 1992. During this event, the composition and charge states of energetic ions at energies above approximately 10 keV show a significant intermixing of ions from solar wind and ionospheric sources. Detailed analysis leads to the deduction that the filament was moving southward towards the neutral sheet at an average speed of approximately 80 km/s, implying an average duskward electric field of approximately 1 mV/m. Its north-south dimension was approximately 1 R_E and it was associated with an earthward-directed field-aligned current of approximately 5 mA/m. The filament was separated from the energetic particle boundary layer straddling the magnetopause by approximately 0.8 R_E and was inferred to be detached from the boundary layer at downstream distances beyond approximately 70 R_E in the distant tail.

  17. Numerical investigation of the relationship between magnetic stiffness and minor loop size in the HTS levitation system

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Li, Chengshan

    2017-10-01

    The effect of minor loop size on the magnetic stiffness has received little attention in experimental and theoretical studies of high temperature superconductor (HTS) magnetic levitation systems. In this work, we numerically investigate the average magnetic stiffness obtained from minor loop traverses Δz (or Δx) varying from 0.1 mm to 2 mm in the zero field cooling and field cooling regimes, respectively. Approximate values of the magnetic stiffness at zero traverse are obtained by linear extrapolation. Compared with the average magnetic stiffness obtained from any given minor loop traverse, these extrapolated values are not always close to the average magnetic stiffness produced by the smallest minor loops. The relative deviation ranges of the average magnetic stiffness obtained from the commonly used minor loop traverses (1 or 2 mm) are presented as ratios of the extrapolated values to the average stiffness for different moving processes and two typical cooling conditions. The results show that most of the average magnetic stiffness values are strongly influenced by the minor loop size, which indicates that magnetic stiffness obtained from a single minor loop traverse Δz or Δx of, for example, 1 or 2 mm can involve a large deviation.
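
    A minimal sketch of the linear-extrapolation step described above: average stiffness values obtained from minor loops of several traverse sizes are fitted linearly and extrapolated to zero traverse. The stiffness numbers below are hypothetical placeholders, not results from the paper.

    ```python
    import numpy as np

    # Average stiffness estimated from minor loops of different traverse sizes (hypothetical values).
    traverse_mm = np.array([0.1, 0.25, 0.5, 1.0, 2.0])
    stiffness_n_per_mm = np.array([12.1, 11.6, 10.8, 9.5, 7.4])

    # Linear fit k(dz) ~ a*dz + k0 and extrapolation to zero traverse.
    a, k0 = np.polyfit(traverse_mm, stiffness_n_per_mm, deg=1)
    print(f"extrapolated zero-traverse stiffness: {k0:.2f} N/mm")
    print(f"relative deviation of the 1 mm-loop estimate: {(stiffness_n_per_mm[3] - k0) / k0:.1%}")
    ```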

  18. CS and IOS approximations for fine structure transitions in Na(²P)-He(¹S) collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fitz, D.E.; Kouri, D.J.

    1980-11-15

    The l-average CS and IOS approximations are extended to treat fine structure transitions in ²P atom-¹S atom scattering. Calculations of degeneracy-averaged probabilities and differential cross sections for Na(²P) + He(¹S) collisions in the CS and IOS methods agree well with the CC results. The present nonunitarized form of the CS approximation fails to properly predict all of the jm → j'm' cross sections and, in particular, leads to a selection rule forbidding jm → j,-m transitions for half-odd-integer values of j.

  19. The effect of newly induced mutations on the fitness of genotypes and populations of yeast (Saccharomyces cerevisiae).

    PubMed

    Orthen, E; Lange, P; Wöhrmann, K

    1984-12-01

    This paper analyses the fate of artificially induced mutations and their importance to the fitness of populations of the yeast, Saccharomyces cerevisiae, an increasingly important model organism in population genetics. Diploid strains, treated with UV and EMS, were cultured asexually for approximately 540 generations and under conditions where the asexual growth was interrupted by a sexual phase. Growth rates of 100 randomly sampled diploid clones were estimated at the beginning and at the end of the experiment. After the induction of sporulation the growth rates of 100 randomly sampled spores were measured. UV and EMS treatment decreases the average growth rate of the clones significantly but increases the variability in comparison to the untreated control. After selection over approximately 540 generations, variability in growth rates was reduced to that of the untreated control. No increase in mean population fitness was observed. However, the results show that after selection there still exists a large amount of hidden genetic variability in the populations which is revealed when the clones are cultivated in environments other than those in which selection took place. A sexual phase increased the reduction of the induced variability.

  20. An expanded mammal mitogenome dataset from Southeast Asia

    PubMed Central

    Ramos-Madrigal, Jazmín; Peñaloza, Fernando; Liu, Shanlin; Sinding, Mikkel-Holger S.; Patel, Riddhi P.; Martins, Renata; Lenz, Dorina; Fickel, Jörns; Roos, Christian; Shamsir, Mohd Shahir; Azman, Mohammad Shahfiz; Lim, Burton K.; Rossiter, Stephen J.; Wilting, Andreas

    2017-01-01

    Southeast (SE) Asia is one of the most biodiverse regions in the world, and it holds approximately 20% of all mammal species. Despite this, the majority of SE Asia's genetic diversity is still poorly characterized. The growing interest in using environmental DNA to assess and monitor SE Asian species, in particular threatened mammals, has created the urgent need to expand the available reference database of mitochondrial barcode and complete mitogenome sequences. We have partially addressed this need by generating 72 new mitogenome sequences reconstructed from DNA isolated from a range of historical and modern tissue samples. Approximately 55 gigabases of raw sequence were generated. From these data, we assembled 72 complete mitogenome sequences, with an average depth of coverage of ×102.9 and ×55.2 for modern samples and historical samples, respectively. This dataset represents 52 species, of which 30 species had no previous mitogenome data available. The mitogenomes were geotagged to their sampling location, where known, to display a detailed geographical distribution of the species. Our new database of 52 taxa will strongly enhance the utility of environmental DNA approaches for monitoring mammals in SE Asia, as it greatly increases the likelihood that metabarcoding sequencing reads can be assigned to reference sequences. This magnifies the confidence in species detections and thus allows more robust surveys and monitoring programmes of SE Asia's threatened mammal biodiversity. The extensive collections of historical samples from SE Asia in western and SE Asian museums should serve as additional valuable material to further enrich this reference database. PMID:28873965

  1. An expanded mammal mitogenome dataset from Southeast Asia.

    PubMed

    Mohd Salleh, Faezah; Ramos-Madrigal, Jazmín; Peñaloza, Fernando; Liu, Shanlin; Sinding, Mikkel-Holger S; Patel, Riddhi P; Martins, Renata; Lenz, Dorina; Fickel, Jörns; Roos, Christian; Shamsir, Mohd Shahir; Azman, Mohammad Shahfiz; Lim, Burton K; Rossiter, Stephen J; Wilting, Andreas; Gilbert, M Thomas P

    2017-08-01

    Southeast (SE) Asia is one of the most biodiverse regions in the world, and it holds approximately 20% of all mammal species. Despite this, the majority of SE Asia's genetic diversity is still poorly characterized. The growing interest in using environmental DNA to assess and monitor SE Asian species, in particular threatened mammals, has created the urgent need to expand the available reference database of mitochondrial barcode and complete mitogenome sequences. We have partially addressed this need by generating 72 new mitogenome sequences reconstructed from DNA isolated from a range of historical and modern tissue samples. Approximately 55 gigabases of raw sequence were generated. From these data, we assembled 72 complete mitogenome sequences, with an average depth of coverage of ×102.9 and ×55.2 for modern samples and historical samples, respectively. This dataset represents 52 species, of which 30 species had no previous mitogenome data available. The mitogenomes were geotagged to their sampling location, where known, to display a detailed geographical distribution of the species. Our new database of 52 taxa will strongly enhance the utility of environmental DNA approaches for monitoring mammals in SE Asia, as it greatly increases the likelihood that metabarcoding sequencing reads can be assigned to reference sequences. This magnifies the confidence in species detections and thus allows more robust surveys and monitoring programmes of SE Asia's threatened mammal biodiversity. The extensive collections of historical samples from SE Asia in western and SE Asian museums should serve as additional valuable material to further enrich this reference database. © The Author 2017. Published by Oxford University Press.

  2. The azimuthal and radial distributions of HI and H2 in NGC 6946

    NASA Technical Reports Server (NTRS)

    Tacconi-Garman, Linda J.; Young, Judith S.

    1987-01-01

    A study was completed of the atomic and molecular components of the ISM in NGC 6946. The distribution of molecular clouds was determined from a fully sampled CO map of the inner disk using the 14-meter telescope of the FCRAO. The distribution of atomic gas was derived from VLA observations at 40" resolution in the D configuration. When comparing the global CO and HI properties with other components of the galaxy, it was found that the azimuthally averaged radial distributions of CO, H-alpha, radio continuum and blue light all exhibit similar roughly exponential falloffs, while the azimuthally averaged HI surface densities vary by only a factor of 2 out to R = 16 kpc. This indicates that while the H-alpha/CO ratio is approximately constant with radius, the CO/HI ratio decreases by a factor of 30 from the center of the galaxy to R = 10 kpc.

  3. Measurement and Analysis of Porosity in Al-10Si-1Mg Components Additively Manufactured by Selective Laser Melting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Suraj; Cunningham, Ross; Ozturk, Tugce

    Aluminum alloys are candidate materials for weight-critical applications because of their excellent strength- and stiffness-to-weight ratios. However, defects such as voids decrease the strength and fatigue life of these alloys, which can limit the application of Selective Laser Melting. In this study, the average volume fraction, average size, and size distribution of pores in Al-10Si-1Mg samples built using Selective Laser Melting have been characterized. Synchrotron high-energy X-rays were used to perform computed tomography on volumes of order one cubic millimeter with a resolution of approximately 1.5 μm. Substantial variations in the pore size distributions were found as a function of process conditions. Even under conditions that ensured that all locations were melted at least once, a significant number density of pores above 5 μm in diameter was found.
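
    A minimal sketch of how pore volume fraction and equivalent pore diameters can be extracted from a binarized tomography volume using scipy.ndimage connected-component labeling. The synthetic volume, threshold, and pore seeding are assumptions; only the ~1.5 μm voxel size is taken from the abstract, and this is not the authors' analysis pipeline.

    ```python
    import numpy as np
    from scipy import ndimage

    voxel_um = 1.5   # voxel edge length from the abstract, micrometres
    rng = np.random.default_rng(0)

    # Synthetic binarized volume: True marks pore voxels (purely illustrative stand-in for CT data).
    pores = rng.random((200, 200, 200)) > 0.9995
    pores = ndimage.binary_dilation(pores, iterations=2)   # grow isolated seeds into small blobs

    volume_fraction = pores.mean()

    labels, n_pores = ndimage.label(pores)
    voxel_counts = ndimage.sum(pores, labels, index=np.arange(1, n_pores + 1))
    pore_volumes_um3 = voxel_counts * voxel_um**3
    equiv_diameters_um = (6.0 * pore_volumes_um3 / np.pi) ** (1.0 / 3.0)

    print(f"pore volume fraction: {volume_fraction:.4%}")
    print(f"number of pores: {n_pores}, mean equivalent diameter: {equiv_diameters_um.mean():.2f} um")
    print(f"pores above 5 um equivalent diameter: {(equiv_diameters_um > 5.0).sum()}")
    ```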

  4. Bacterial diversity in a glacier foreland of the high Arctic.

    PubMed

    Schütte, Ursel M E; Abdo, Zaid; Foster, James; Ravel, Jacques; Bunge, John; Solheim, Bjørn; Forney, Larry J

    2010-03-01

    Over the past 100 years, Arctic temperatures have increased at almost twice the global average rate. One consequence is the acceleration of glacier retreat, exposing new habitats that are colonized by microorganisms whose diversity and function are unknown. Here, we characterized bacterial diversity along two approximately parallel chronosequences in an Arctic glacier forefield that span six time points following glacier retreat. We assessed changes in phylotype richness, evenness and turnover rate through the analysis of 16S rRNA gene sequences recovered from 52 samples taken from surface layers along the chronosequences. An average of 4500 sequences was obtained from each sample by 454 pyrosequencing. Using parametric methods, it was estimated that bacterial phylotype richness was high, and that it increased significantly from an average of 4000 (at a threshold of 97% sequence similarity) at locations exposed for 5 years to an average of 7050 phylotypes per 0.5 g of soil at sites that had been exposed for 150 years. Phylotype evenness also increased over time, with an evenness of 0.74 for 150 years since glacier retreat reflecting large proportions of rare phylotypes. The bacterial species turnover rate was especially high between sites exposed for 5 and 19 years. The level of bacterial diversity present in this High Arctic glacier foreland was comparable with that found in temperate and tropical soils, raising the question whether global patterns of bacterial species diversity parallel that of plants and animals, which have been found to form a latitudinal gradient and be lower in polar regions compared with the tropics.

  5. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy

    NASA Astrophysics Data System (ADS)

    Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.

    2017-07-01

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose show average global γ (3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
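
    The analytical core of APM is that a Gaussian pencil-beam dose profile averaged over Gaussian setup (or range) uncertainty again has a closed Gaussian form, with the variances adding. The sketch below checks that identity against random sampling for a 1-D lateral profile; the beam width, setup uncertainty, and grid are assumptions, and the full APM machinery (correlation models, variance and fractionation handling) is not reproduced.

    ```python
    import numpy as np

    def gaussian(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

    # Lateral pencil-beam width and Gaussian setup uncertainty (illustrative values in mm).
    sigma_beam = 5.0
    sigma_setup = 2.0
    x = np.linspace(-30.0, 30.0, 601)

    # Closed form: E[dose(x)] is again Gaussian, with variance sigma_beam^2 + sigma_setup^2.
    expected_analytical = gaussian(x, 0.0, np.sqrt(sigma_beam**2 + sigma_setup**2))

    # Monte Carlo reference: average the shifted profile over sampled setup errors.
    rng = np.random.default_rng(42)
    shifts = rng.normal(0.0, sigma_setup, size=5000)
    expected_sampled = np.mean([gaussian(x, s, sigma_beam) for s in shifts], axis=0)

    print(f"max abs difference: {np.max(np.abs(expected_analytical - expected_sampled)):.2e}")
    ```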

  6. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy.

    PubMed

    Wahl, N; Hennig, P; Wieser, H P; Bangert, M

    2017-06-26

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations using importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms; dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviations (expectation values) of dose show average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization were shown to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality to those gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.

  7. Earth's partial pressure of CO2 over the past 100-500 Ma; evidence from Ce anomalies in mostly shallow seas (less than 200 m) as recorded in carbonate sediments, 2

    NASA Technical Reports Server (NTRS)

    Liu, Y.-G.; Reinhardt, J. W.; Schmitt, R. A.

    1993-01-01

    We reported the direct relationship of Ce anomalies recorded in 0.2-119 Ma CaCO3 sediments (Ce(sup A*)) to the Ce anomalies in the parental Pacific deep seawater (Ce(sup A)) and their relationship to atmospheric P(CO2) relative to present P(CO2). We have analyzed continental CaCO3 samples that were deposited in ancient oceans and shallow sea platforms less than 200 m over central USA, central Europe, China, and Saudi-Arabia/Oman. We have plotted Ce(sup A*) over the 75-470 Ma interval. For P(CO2) calculations, we assumed as a reference standard the less than 200 m mixed Pacific Ocean with a Ce(sup A) geometric mean of 0.22 and a range of 0.10-0.43. Because P(CO2) values obtained from reliable deep Pacific Ocean carbonates in the 67-119 Ma interval were similar to the present P(CO2) values, we have drawn a 1.0 ratio for that interval. Although there is considerable scatter among the approximately 150 Ma carbonates, the average Ce(sup A*) value suggests that P(CO2) increased during the early Cretaceous, from 1.0X at approximately 120 Ma to about 1.4X at approximately 150 Ma. At approximately 250 Ma, the average Ce(sup A*) in 13 shallow sea China carbonates agrees well with the single and more reliable approximately 250 Ma China carbonate deposited in deeper open platform. We suggest that P(CO2) ranged from 1.4-1.7X over the Jurassic and Triassic periods. At approximately 280 Ma, three China carbonates deposited in deeper open platforms and therefore considered more reliable are consistent with a European carbonate, which indicate Ce(sup A) and P(CO2) values similar to the present. The minimum at this time corresponds to the great Permo-Carboniferous glaciation. From 280 Ma to 470 Ma, the trend favors increasing Ce(sup A*) and corresponding P(CO2) values between 1.9-2.7X, with a more reliable value closer to 2.7X at 430 Ma because of the unknown higher temperature in the less than 100 m seawater over continental USA which was located just south of the equator at approximately 430 Ma.

  8. Precipitation; ground-water age; ground-water nitrate concentrations, 1995-2002; and ground-water levels, 2002-03 in Eastern Bernalillo County, New Mexico

    USGS Publications Warehouse

    Blanchard, Paul J.

    2004-01-01

    The eastern Bernalillo County study area consists of about 150 square miles and includes all of Bernalillo County east of the crests of the Sandia and Manzanita Mountains. Soil and unconsolidated alluvial deposits overlie fractured and solution-channeled limestone in most of the study area. North of Interstate Highway 40 and east of New Mexico Highway 14, the uppermost consolidated geologic units are fractured sandstones and shales. Average annual precipitation at three long-term National Oceanic and Atmospheric Administration precipitation and snowfall data-collection sites was 14.94 inches at approximately 6,300 feet (Sandia Ranger Station), 19.06 inches at about 7,020 feet (Sandia Park), and 23.07 inches at approximately 10,680 feet (Sandia Crest). The periods of record at these sites are 1933-74, 1939-2001, and 1953-79, respectively. Average annual snowfall during these same periods of record was 27.7 inches at Sandia Ranger Station, 60.8 inches at Sandia Park, and 115.5 inches at Sandia Crest. Seven precipitation data-collection sites were established during December 2000-March 2001. Precipitation during 2001-03 at three U.S. Geological Survey sites ranged from 66 to 94 percent of period-of-record average annual precipitation at corresponding National Oceanic and Atmospheric Administration long-term sites in 2001, from 51 to 75 percent in 2002, and from 34 to 81 percent during January through September 2003. Missing precipitation records for one site resulted in the 34-percent value in 2003. Analyses of concentrations of chlorofluorocarbons CFC-11, CFC-12, and CFC-113 in ground-water samples from nine wells and one spring were used to estimate when the sampled water entered the ground-water system. Apparent ages of ground water ranged from as young as about 10 to 16 years to as old as about 20 to 26 years. Concentrations of dissolved nitrates in samples collected from 24 wells during 2001-02 were similar to concentrations in samples collected from the same wells during 1995, 1997, and (or) 1998. Nitrate concentrations in two wells were larger than the U.S. Environmental Protection Agency primary drinking-water regulation of 10 milligrams per liter in 1998 and in 2001. Ground-water levels were measured during June and July 2002 and during June, July, and August 2003 in 18 monitoring wells. The median change in water level for all 18 wells was a decline of 2.03 feet.

  9. Is routine pathological evaluation of tissue from gynecomastia necessary? A 15-year retrospective pathological and literature review.

    PubMed

    Senger, Jenna-Lynn; Chandran, Geethan; Kanthan, Rani

    2014-01-01

    To reconsider the routine plastic surgical practice of requesting histopathological evaluation of tissue from gynecomastia. The present study was a retrospective histopathological review (15-year period [1996 to 2012]) involving gynecomastia tissue samples received at the pathology laboratory in the Saskatoon Health Region (Saskatchewan). The Laboratory Information System (LIS) identified all specimens using the key search words "gynecomastia", "gynaecomastia", "gynecomazia" and "gynaecomazia". A literature review to identify all cases of incidentally discovered malignancies in gynecomastia tissue specimens over a 15-year period (1996 to present) was undertaken. The 15-year LIS search detected a total of 452 patients that included two cases of pseudogynecomastia (0.4%). Patients' age ranged from five to 92 years and 43% of the cases were bilateral (28% left sided, 29% right sided). The weight of the specimens received ranged from 0.2 g to 1147.2 g. All cases showed no significant histopathological concerns. The number of tissue blocks sampled ranged from one to 42, averaging four blocks/case (approximately $105/case), resulting in a cost of approximately $3,200/year, with a 15-year expenditure of approximately $48,000. The literature review identified a total of 15 incidental findings: ductal carcinoma in situ (12 cases), atypical ductal hyperplasia (two cases) and infiltrating ductal carcinoma (one case). In the context of evidence-based literature, and because no significant pathological findings were detected in this particular cohort of 452 cases with 2178 slides, the authors believe it is time to re-evaluate whether routine histopathological examination of tissue from gynecomastia remains necessary. The current climate of health care budget fiscal restraints warrants reassessment of the current policies and practices of sending tissue samples of gynecomastia incurring negative productivity costs on routine histopathological examination.

  10. Dust Attenuation and H(alpha) Star Formation Rates of Z Approx. 0.5 Galaxies

    NASA Technical Reports Server (NTRS)

    Ly, Chun; Malkan, Matthew A.; Kashikawa, Nobunari; Ota, Kazuaki; Shimasaku, Kazuhiro; Iye, Masanori; Currie, Thayne

    2012-01-01

    Using deep narrow-band and broad-band imaging, we identify 401 z approximately 0.40 and 249 z approximately 0.49 H-alpha line-emitting galaxies in the Subaru Deep Field. Compared to other H-alpha surveys at similar redshifts, our samples are unique since they probe lower H-alpha luminosities, are augmented with multi-wavelength (rest-frame 1000 Å to 1.5 microns) coverage, and a large fraction (20%) of our samples has already been spectroscopically confirmed. Our spectra allow us to measure the Balmer decrement for nearly 60 galaxies with H-beta detected above 5-sigma. The Balmer decrements indicate an average extinction of A(H-alpha) = 0.7 (+1.4/-0.7) mag. We find that the Balmer decrement systematically increases with higher H-alpha luminosities and with larger stellar masses, in agreement with previous studies with sparser samples. We find that the SFRs estimated from modeling the spectral energy distribution (SED) are reliable: we derived an "intrinsic" H-alpha luminosity which is then reddened assuming the color excess from SED modeling. The SED-predicted H-alpha luminosity agrees with H-alpha narrow-band measurements over 3 dex (rms of 0.25 dex). We then use the SED SFRs to test different statistically-based dust corrections for H-alpha and find that adopting one magnitude of extinction is inappropriate: galaxies with lower luminosities are less reddened. We find that the luminosity-dependent dust correction of Hopkins et al. yields consistent results over 3 dex (rms of 0.3 dex). Our comparisons are only possible by assuming that stellar reddening is roughly half of nebular reddening. The strong correspondence argues that with SED modeling, we can derive reliable intrinsic SFRs even in the absence of H-alpha measurements at z approximately 0.5.
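    The Balmer-decrement extinction estimate quoted above follows from a standard relation; as a hedged illustration, the sketch below assumes the Case B intrinsic ratio Hα/Hβ = 2.86 and representative extinction-curve coefficients k(Hα) ≈ 2.53 and k(Hβ) ≈ 3.61, which are common textbook values rather than numbers taken from this record.

```python
import numpy as np

def a_halpha_from_balmer(f_ha, f_hb, k_ha=2.53, k_hb=3.61, intrinsic=2.86):
    """Nebular extinction at H-alpha (mag) from an observed Balmer decrement.

    Assumes a Case B intrinsic H-alpha/H-beta ratio of 2.86 and Cardelli-like
    extinction-curve coefficients; both are adjustable assumptions.
    """
    ebv = 2.5 / (k_hb - k_ha) * np.log10((f_ha / f_hb) / intrinsic)
    return k_ha * max(ebv, 0.0)   # clip unphysical negative reddening to zero

# Example: an observed decrement of ~3.7 gives A(H-alpha) ~ 0.65 mag, of the same
# order as the average extinction of ~0.7 mag reported above.
print(round(a_halpha_from_balmer(3.7, 1.0), 2))
```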

  11. Effects of thermal cycling parameters on residual stresses in alumina scales of CoNiCrAlY and NiCoCrAlY bond coats

    DOE PAGES

    Nordhorn, Christian; Mücke, Robert; Unocic, Kinga A.; ...

    2014-08-20

    In this paper, furnace cycling experiments were performed on free-standing high-velocity oxygen-fuel bond coat samples to investigate the effect of material composition, surface texture, and cycling conditions on the average stresses in the formed oxide scales after cooling. The oxide scale thicknesses were determined by SEM image analyses and information about the stresses was acquired by photo-stimulated luminescence spectroscopy. Additionally, the scale-thickness-dependent stress fields were calculated in finite-element analyses including approximation functions for the surface roughness derived on the basis of profilometry data. The evolution of the average residual stress as a function of oxide scale thickness was subject to stochastic fluctuations predominantly caused by local scale spallations. In comparison to the supplemental modeling results, thermal stresses due to mismatch of thermal expansion coefficients are identified as the main contribution to the residual stresses. Finally, the theoretical results emphasize that analyses of spectroscopic data acquired for average stress investigations of alumina scales rely on detailed information about microstructural features.

  12. Litter mercury deposition in the Amazonian rainforest.

    PubMed

    Fostier, Anne Hélène; Melendez-Perez, José Javier; Richter, Larissa

    2015-11-01

    The objective of this work was to assess the flux of atmospheric mercury transferred to the soil of the Amazonian rainforest by litterfall. Calculations were based on a large survey of published and unpublished data on litterfall and Hg concentrations in litterfall samples from the Amazonian region. Litterfall based on 65 sites located in the Amazon rainforest averaged 8.15 ± 2.25 Mg ha(-1) y(-1). Average Hg concentrations were calculated from nine datasets for fresh tree leaves and ten datasets for litter, and a median concentration of 60.5 ng Hg g(-1) was considered for Hg deposition in litterfall, which averaged 49 ± 14 μg m(-2) yr(-1). This value was used to estimate that in the Amazonian rainforest, litterfall would be responsible for the annual removal of 268 ± 77 Mg of Hg, approximately 8% of the total atmospheric Hg deposition to land. The impact of Amazon deforestation on the Hg biogeochemical cycle is also discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.
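    The areal deposition figure above is a direct product of the litterfall mass flux and the median litter Hg concentration; the short re-derivation below is unit bookkeeping only and introduces no new data.

```python
# Re-derive the litterfall Hg flux from the values quoted in the record.
litterfall_Mg_per_ha_yr = 8.15                                     # average litterfall mass flux
hg_ng_per_g = 60.5                                                 # median litter Hg concentration

litterfall_g_per_m2_yr = litterfall_Mg_per_ha_yr * 1e6 / 1e4       # Mg/ha -> g/m2
hg_flux_ug_per_m2_yr = litterfall_g_per_m2_yr * hg_ng_per_g / 1e3  # ng -> ug

print(round(hg_flux_ug_per_m2_yr, 1))   # ~49.3, consistent with the reported 49 +/- 14 ug m-2 yr-1
```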

  13. Reliability in the location of hindlimb motor representations in Fischer-344 rats: laboratory investigation.

    PubMed

    Frost, Shawn B; Iliakova, Maria; Dunham, Caleb; Barbay, Scott; Arnold, Paul; Nudo, Randolph J

    2013-08-01

    The purpose of the present study was to determine the feasibility of using a common laboratory rat strain for reliably locating cortical motor representations of the hindlimb. Intracortical microstimulation techniques were used to derive detailed maps of the hindlimb motor representations in 6 adult Fischer-344 rats. The organization of the hindlimb movement representation, while variable across individual rats in topographic detail, displayed several commonalities. The hindlimb representation was positioned posterior to the forelimb motor representation and posterolateral to the motor trunk representation. The areal extent of the hindlimb representation across the cortical surface averaged 2.00 ± 0.50 mm(2). Superimposing individual maps revealed an overlapping area measuring 0.35 mm(2), indicating that the location of the hindlimb representation can be predicted reliably based on stereotactic coordinates. Across the sample of rats, the hindlimb representation was found 1.25-3.75 mm posterior to the bregma, with an average center location approximately 2.6 mm posterior to the bregma. Likewise, the hindlimb representation was found 1-3.25 mm lateral to the midline, with an average center location approximately 2 mm lateral to the midline. The location of the cortical hindlimb motor representation in Fischer-344 rats can be reliably located based on its stereotactic position posterior to the bregma and lateral to the longitudinal skull suture at midline. The ability to accurately predict the cortical localization of functional hindlimb territories in a rodent model is important, as such animal models are being increasingly used in the development of brain-computer interfaces for restoration of function after spinal cord injury.

  14. Characterization of air contaminants formed by the interaction of lava and sea water.

    PubMed Central

    Kullman, G J; Jones, W G; Cornwell, R J; Parker, J E

    1994-01-01

    We made environmental measurements to characterize contaminants generated when basaltic lava from Hawaii's Kilauea volcano enters sea water. This interaction of lava with sea water produces large clouds of mist (LAZE). Island winds occasionally directed the LAZE toward the adjacent village of Kalapana and the Hawaii Volcanoes National Park, creating health concerns. Environmental samples were taken to measure airborne concentrations of respirable dust, crystalline silica and other mineral compounds, fibers, trace metals, inorganic acids, and organic and inorganic gases. The LAZE contained quantifiable concentrations of hydrochloric acid (HCl) and hydrofluoric acid (HF); HCl was predominant. HCl and HF concentrations were highest in dense plumes of LAZE near the sea. The HCl concentration at this sampling location averaged 7.1 ppm; this exceeds the current occupational exposure ceiling of 5 ppm. HF was detected in nearly half the samples, but all concentrations were <1 ppm. Sulfur dioxide was detected in one of four short-term indicator tube samples at approximately 1.5 ppm. Airborne particulates were composed largely of chloride salts (predominantly sodium chloride). Crystalline silica concentrations were below detectable limits, less than approximately 0.03 mg/m3 of air. Settled dust samples showed a predominance of glass flakes and glass fibers. Airborne fibers were detected at quantifiable levels in 1 of 11 samples. These fibers were composed largely of hydrated calcium sulfate. These findings suggest that individuals should avoid concentrated plumes of LAZE near its origin to prevent overexposure to inorganic acids, specifically HCl. PMID:8593853

  15. Bioaccumulation of metals by lichens: Uptake of aqueous uranium by Peltigera membranacea as a function of time and pH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haas, J.R.; Bailey, E.H.; Purvis, O.W.

    1998-11-01

    Uranium sorption experiments were carried out at ~25 °C using natural samples of the lichen Peltigera membranacea. Thalli were incubated in solutions containing 100 ppm U for up to 24 h at pH values from 2 to 10. Equilibrium sorption was not observed at less than ~6 h under any pH condition. U sorption was strongest in the pH range 4-5, with maximum sorption occurring at a pH of 4.5 and an incubation time of 24 h. Maximum U uptake by P. membranacea averaged ~42,000 ppm, or ~4.2 wt% U. This appears to represent the highest concentration of biosorbed U, relative to solution U activity, of any lichen reported to date. Investigation of post-experimental lichen tissues using electron probe microanalysis (EPM) reveals that U uptake is spatially heterogeneous within the lichen body, and that U attains very high local concentrations on scattered areas of the upper cortex. Energy dispersive spectroscopic (EDS) analysis reveals that strong U uptake correlates with P signal intensity, suggesting involvement of biomass-derived phosphate ligands or surface functional groups in the uptake process.

  16. Shelf-life extension of vacuum-packaged meat from pheasant (Phasianus colchicus) by lactic acid treatment.

    PubMed

    Pfeifer, Agathe; Smulders, Frans J M; Paulsen, Peter

    2014-07-01

    We investigated the influence of lactic acid treatment of pheasant meat before vacuum-packaged storage of 3, 7, and 10 d at +6°C on microbiota and pH. Breast muscle samples were collected from carcasses of slaughtered as well as from hunted (shot) wild pheasants. Immersion of meat samples in 3% (wt/wt) lactic acid for 60 s produced a significant drop in pH of approximately 0.5 to 0.7 units, which persisted throughout the entire storage period. In parallel, total aerobic counts of such treated and stored samples were on average 1.5 to 1.7 log units lower than in non-acid-treated samples. Similar results were found for Enterobacteriaceae. A significant decrease in pH was measured at d 7 and 10 in the acid-treated samples in comparison with the untreated ones. In summary, the immersion of pheasant breast meat cuts in dilute lactic acid significantly reduced microbiota during vacuum-packed storage, even under slight temperature-abuse conditions. © 2014 Poultry Science Association Inc.

  17. Mercury in the blood and eggs of American kestrels fed methylmercury chloride

    USGS Publications Warehouse

    French, J.B.; Bennett, R.S.; Rossmann, R.

    2010-01-01

    American kestrels (Falco sparverius) were fed diets containing methylmercury chloride (MeHg) at 0, 0.6, 1.7, 2.8, 3.9, or 5.0 μg/g (dry wt) starting approximately eight weeks before the onset of egg laying. Dietary treatment was terminated after 12 to 14 weeks, and unhatched eggs were collected for Hg analysis. Blood samples were collected after four weeks of treatment and at the termination of the study (i.e., 12-14 weeks of treatment). Clutch size decreased at dietary concentrations above 2.8 μg/g. The average total mercury concentration in clutches of eggs and in the second egg laid (i.e., egg B) increased linearly with dietary concentration. Mercury concentrations in egg B were approximately 25% lower than in the first egg laid and similar in concentration to the third egg laid. Mercury concentrations in whole blood and plasma also increased linearly with dietary concentration. Total Hg concentrations in June blood samples were lower than those in April, despite 8 to 10 weeks of additional dietary exposure to MeHg. This is likely because of excretion of Hg into growing flight feathers beginning shortly after the start of egg production. The strongest relationships between Hg concentrations in blood and eggs occurred when we used blood samples collected in April before egg laying and feather molt. © 2010 SETAC.

  18. Aviation-related injury morbidity and mortality: data from U.S. health information systems.

    PubMed

    Baker, Susan P; Brady, Joanne E; Shanahan, Dennis F; Li, Guohua

    2009-12-01

    Information about injuries sustained by survivors of airplane crashes is scant, although some information is available on fatal aviation-related injuries. Objectives of this study were to explore the patterns of aviation-related injuries admitted to U.S. hospitals and relate them to aviation deaths in the same period. The Healthcare Cost and Utilization Project (HCUP) Nationwide Inpatient Sample (NIS) contains information for approximately 20% of all hospital admissions in the United States each year. We identified patients in the HCUP NIS who were hospitalized during 2000-2005 for aviation-related injuries based on the International Classification of Diseases, 9th Revision, codes E840-E844. Injury patterns were also examined in relation to information from multiple-cause-of-death public-use data files 2000-2005. Nationally, an estimated 6080 patients in 6 yr, or 1013 admissions annually (95% confidence interval 894-1133), were hospitalized for aviation-related injuries, based on 1246 patients in the sample. The average hospital stay was 6.3 d and 2% died in hospital. Occupants of non-commercial aircraft accounted for 32% of patients, parachutists for 29%; occupants of commercial aircraft and of unpowered aircraft each constituted 11%. Lower-limb fracture was the most common injury in each category, constituting 27% of the total, followed by head injury (11%), open wound (10%), upper extremity fracture, and internal injury (9%). Among fatalities, head injury (38%) was most prominent. An average of 753 deaths occurred annually; for each death there were 1.3 hospitalizations. Aviation-related injuries result in approximately 1000 hospitalizations each year in the United States, with an in-hospital mortality rate of 2%. The most common injury sustained by aviation crash survivors is lower-limb fracture.

  19. Hypertension prevalence, awareness, treatment, and control and sodium intake in Shandong Province, China: baseline results from Shandong-Ministry of Health Action on Salt Reduction and Hypertension (SMASH), 2011.

    PubMed

    Bi, Zhenqiang; Liang, Xiaofeng; Xu, Aiqiang; Wang, Linghong; Shi, Xiaoming; Zhao, Wenhua; Ma, Jixiang; Guo, Xiaolei; Zhang, Xiaofei; Zhang, Jiyu; Ren, Jie; Yan, Liuxia; Lu, Zilong; Wang, Huicheng; Tang, Junli; Cai, Xiaoning; Dong, Jing; Zhang, Juan; Chu, Jie; Engelgau, Michael; Yang, Quanhe; Hong, Yuling; Wang, Yu

    2014-05-22

    In China, population-based blood pressure levels and prevalence of hypertension are increasing. Meanwhile, sodium intake, a major risk factor for hypertension, is high. In 2011, to develop intervention priorities for a salt reduction and hypertension control project in Shandong Province (population 96 million), a cross-sectional survey was conducted to collect information on sodium intake and hypertension prevalence, awareness, treatment, and control. Complex, multistage sampling methods were used to select a provincial-representative adult sample. Blood pressure was measured and a survey conducted among all participants; condiments were weighed in the household, a 24-hour dietary recall was conducted, and urine was collected. Hypertension was determined by blood pressure measured on a single occasion and self-reported use of antihypertension medications. Overall, 23.4% (95% confidence interval [CI], 20.9%-26.0%) of adults in Shandong were estimated to have hypertension. Among those classified as having hypertension, approximately one-third (34.5%) reported having hypertension, approximately one-fourth (27.5%) reported taking medications, and one-seventh (14.9%) had their blood pressure controlled (<140/<90 mm Hg). Estimated total average daily dietary sodium intake was 5,745 mg (95% CI, 5,428 mg-6,063 mg). Most dietary sodium (80.8%) came from salt and high-salt condiments added during cooking: a sodium intake of 4,640 mg (95% CI, 4,360 mg-4,920 mg). The average daily urinary sodium excretion was 5,398 mg (95% CI, 5,112 mg-5,683 mg). Hypertension and excessive sodium intake in adults are major public health problems in Shandong Province, China.

  20. Aviation-Related Injury Morbidity and Mortality: Data from U.S. Health Information Systems

    PubMed Central

    Baker, Susan P.; Brady, Joanne E.; Shanahan, Dennis F.; Li, Guohua

    2010-01-01

    Introduction Information about injuries sustained by survivors of airplane crashes is scant, although some information is available on fatal aviation-related injuries. Objectives of this study were to explore the patterns of aviation-related injuries admitted to U.S. hospitals and relate them to aviation deaths in the same period. Methods The Healthcare Cost and Utilization Project (HCUP) Nationwide Inpatient Sample (NIS) contains information for approximately 20% of all hospital admissions in the United States each year. We identified patients in the HCUP NIS who were hospitalized during 2000–2005 for aviation-related injuries based on the International Classification of Diseases, 9th Revision, codes E840–E844. Injury patterns were also examined in relation to information from multiple-cause-of-death public-use data files 2000–2005. Results Nationally, an estimated 6080 patients in 6 yr, or 1013 admissions annually (95% confidence interval 894–1133), were hospitalized for aviation-related injuries, based on 1246 patients in the sample. The average hospital stay was 6.3 d and 2% died in hospital. Occupants of noncommercial aircraft accounted for 32% of patients, parachutists for 29%; occupants of commercial aircraft and of unpowered aircraft each constituted 11%. Lower-limb fracture was the most common injury in each category, constituting 27% of the total, followed by head injury (11%), open wound (10%), upper extremity fracture, and internal injury (9%). Among fatalities, head injury (38%) was most prominent. An average of 753 deaths occurred annually; for each death there were 1.3 hospitalizations. Conclusions Aviation-related injuries result in approximately 1000 hospitalizations each year in the United States, with an in-hospital mortality rate of 2%. The most common injury sustained by aviation crash survivors is lower-limb fracture. PMID:20027845

  1. Hypertension Prevalence, Awareness, Treatment, and Control and Sodium Intake in Shandong Province, China: Baseline Results From Shandong–Ministry of Health Action on Salt Reduction and Hypertension (SMASH), 2011

    PubMed Central

    Bi, Zhenqiang; Liang, Xiaofeng; Xu, Aiqiang; Wang, Linghong; Shi, Xiaoming; Zhao, Wenhua; Ma, Jixiang; Guo, Xiaolei; Zhang, Xiaofei; Zhang, Jiyu; Ren, Jie; Yan, Liuxia; Lu, Zilong; Wang, Huicheng; Tang, Junli; Cai, Xiaoning; Dong, Jing; Zhang, Juan; Chu, Jie; Engelgau, Michael; Yang, Quanhe; Hong, Yuling

    2014-01-01

    Introduction In China, population-based blood pressure levels and prevalence of hypertension are increasing. Meanwhile, sodium intake, a major risk factor for hypertension, is high. In 2011, to develop intervention priorities for a salt reduction and hypertension control project in Shandong Province (population 96 million), a cross-sectional survey was conducted to collect information on sodium intake and hypertension prevalence, awareness, treatment, and control. Methods Complex, multistage sampling methods were used to select a provincial-representative adult sample. Blood pressure was measured and a survey conducted among all participants; condiments were weighed in the household, a 24-hour dietary recall was conducted, and urine was collected. Hypertension was determined by blood pressure measured on a single occasion and self-reported use of antihypertension medications. Results Overall, 23.4% (95% confidence interval [CI], 20.9%–26.0%) of adults in Shandong were estimated to have hypertension. Among those classified as having hypertension, approximately one-third (34.5%) reported having hypertension, approximately one-fourth (27.5%) reported taking medications, and one-seventh (14.9%) had their blood pressure controlled (<140/<90 mm Hg). Estimated total average daily dietary sodium intake was 5,745 mg (95% CI, 5,428 mg–6,063 mg). Most dietary sodium (80.8%) came from salt and high-salt condiments added during cooking: a sodium intake of 4,640 mg (95% CI, 4,360 mg–4,920 mg). The average daily urinary sodium excretion was 5,398 mg (95% CI, 5,112 mg–5,683 mg). Conclusion Hypertension and excessive sodium intake in adults are major public health problems in Shandong Province, China. PMID:24854239

  2. Characterization of faulted dislocation loops and cavities in ion irradiated alloy 800H

    NASA Astrophysics Data System (ADS)

    Ulmer, Christopher J.; Motta, Arthur T.

    2018-01-01

    Alloy 800H is a high nickel austenitic stainless steel with good high temperature mechanical properties which is considered for use in current and advanced nuclear reactor designs. The irradiation response of 800H was examined by characterizing samples that had been bulk ion irradiated at the Michigan Ion Beam Laboratory with 5 MeV Fe2+ ions to 1, 10, and 20 dpa at 440 °C. Transmission electron microscopy was used to measure the size and density of both {111} faulted dislocation loops and cavities as functions of depth from the irradiated surface. The faulted loop density increased with dose from 1 dpa up to 10 dpa where it saturated and remained approximately the same until 20 dpa. The faulted loop average diameter decreased between 1 dpa and 10 dpa and again remained approximately constant from 10 dpa to 20 dpa. Cavities were observed after irradiation doses of 10 and 20 dpa, but not after 1 dpa. The average diameter of cavities increased with dose from 10 to 20 dpa, with a corresponding small decrease in density. Cavity denuded zones were observed near the irradiated surface and near the ion implantation peak. To further understand the microstructural evolution of this alloy, FIB lift-out samples from material irradiated in bulk to 1 and 10 dpa were re-irradiated in-situ in their thin-foil geometry with 1 MeV Kr2+ ions at 440 °C at the Intermediate Voltage Electron Microscope. It was observed that the cavities formed during bulk irradiation shrank under thin-foil irradiation in-situ while dislocation loops were observed to grow and incorporate into the dislocation network. The thin-foil geometry used for in-situ irradiation is believed to cause the cavities to shrink.

  3. Crack networks in damaged glass

    NASA Astrophysics Data System (ADS)

    Mallet, Celine; Fortin, Jerome; Gueguen, Yves

    2013-04-01

    We investigate how cracks develop and propagate in synthetic glass samples. Cracks are introduced in the glass by a thermal shock of 300°C. The crack network is documented by optical and electron microscopy on samples that were subjected to the thermal shock only. Samples are cylinders of 80 mm length and 40 mm diameter. Sections were cut along the cylinder axis and perpendicular to it. Using SEM, crack lengths and apertures can be measured. Optical microscopy gives the crack distribution over the entire sample. The sample-average crack length is 3 mm. The average aperture is 6 ± 3 μm. There is, however, a clear difference between the sample core, where the crack network has approximately transversely isotropic symmetry, and the outer ring, where cracks are smaller and more numerous. By measuring the radial P- and S-wave velocities in room conditions before and after the thermal treatment, we can determine the total crack density, which is 0.24. Thermally cracked samples, as described above, were subjected to creep tests. Constant axial stress and lateral stress were applied. Several experiments were performed at different stress values. Samples were saturated for 48 hours (to obtain a homogeneous pore-fluid distribution), and the axial stress was then increased up to 80% of the sample strength. Stress-step tests were performed in order to obtain creep data. The evolution of strain (axial and radial) is measured using strain gages, gap sensors (for the global axial strain) and pore-volume change (for the volumetric strain). Creep data are interpreted as evidence of sub-critical crack growth in the cracked glass samples. The above microstructural observations are used, together with a crack propagation model, to account for the creep behavior. Assuming that (i) the observed volumetric strain rate is due to crack propagation and (ii) the crack aspect ratio is constant, we calculate the creep rate. We obtain estimates of the crack propagation during each 24-hour constant-stress test. In each of these tests, cracks propagate by 0.3 to 0.4 mm. From the initial average crack length of 3 mm, cracks reach a length of 5.8 mm at the end of a complete creep test (with 8 constant-stress steps of 24 hours).

  4. Practical approximations to quantify the impact of time windows and delivery sizes on VMT multi-stop tours.

    DOT National Transportation Integrated Search

    2009-04-01

    This paper studies approximations to the average length of Vehicle Routing Problems (VRP). The approximations are valuable for strategic and : planning analysis of transportation and logistics problems. The research focus is on VRP with varying numbe...
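    The report's own approximation formulas are not reproduced in this abstract. As an illustration only, the sketch below implements one commonly cited continuous approximation of Daganzo type, L ≈ 2·r̄·(n/C) + k·√(n·A), where r̄ is the average line-haul distance, n the number of stops, C the stops served per tour, A the service-region area, and k ≈ 0.57; the formula choice and the example values are assumptions, not results from the report.

```python
import math

def vrp_length_approx(n_stops, area_km2, linehaul_km, stops_per_tour, k=0.57):
    """Daganzo-style continuous approximation of total VRP distance (km).

    Illustrative only; the report's approximations (which account for time
    windows and delivery sizes) are not reproduced here.
    """
    line_haul = 2.0 * linehaul_km * (n_stops / stops_per_tour)   # back-and-forth per tour
    local_tour = k * math.sqrt(n_stops * area_km2)               # TSP-like local detour term
    return line_haul + local_tour

# Example: 500 stops over a 400 km2 region, 20 km average line haul, 25 stops per tour.
print(round(vrp_length_approx(500, 400.0, 20.0, 25), 1))
```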

  5. [The scale and application of the norm of occupational stress on the professionals in Chengdu and Chongqing area].

    PubMed

    Zeng, Fan-Hua; Wang, Zhi-Ming; Wang, Mian-Zhen; Lan, Ya-Jia

    2004-12-01

    To establish the scale of the norm of occupational stress on professionals and put it into practice. T scores were linear transformations of raw scores, derived to have a mean of 50 and a standard deviation of 10. The scale standard of the norm was formulated in line with the principle of normal distribution. (1) For the occupational role questionnaire (ORQ) and personal strain questionnaire (PSQ) scales, high scores suggested significant levels of occupational stress and psychological strain, respectively. T scores ≥70 indicated a strong probability of maladaptive stress, debilitating strain, or both. T scores of 60-69 suggested mild levels of maladaptive stress and strain, and scores of 40-59 were within one standard deviation of the mean and should be interpreted as being within the normal range. T scores < 40 indicated a relative absence of occupational stress or psychological strain. For the personal resources questionnaire (PRQ) scales, high scores indicated highly developed coping resources. T scores < 30 indicated a significant lack of coping resources. T scores of 30-39 suggested mild deficits in coping skills, and scores of 40-59 indicated average coping resources, whereas higher scores (i.e., ≥60) indicated increasingly strong coping resources. (2) This study provided raw score to T-score conversion tables for each OSI-R scale for the total normative sample as well as for gender, and several occupational groups, including professional engineer, professional health care, economic business, financial business, law, education and news. OSI-R profile forms for total normative samples, gender and occupation were also offered according to the conversion tables. The norm of occupational stress can be used as a screening tool, for organizational/occupational assessment, and as a guide to occupational choice and intervention measures.
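    The T-score transformation and interpretive bands described above translate directly into code. In the sketch below, the cut-points follow the text (≥70, 60-69, 40-59, <40 for the ORQ/PSQ scales), while the raw-score mean and standard deviation would come from the normative tables and are therefore left as inputs; the example values are hypothetical.

```python
def t_score(raw, norm_mean, norm_sd):
    """Linear T transformation: mean 50, SD 10 relative to the normative sample."""
    return 50.0 + 10.0 * (raw - norm_mean) / norm_sd

def interpret_orq_psq(t):
    """Interpretive bands for the ORQ/PSQ scales as described in the record."""
    if t >= 70:
        return "strong probability of maladaptive stress/strain"
    if t >= 60:
        return "mild maladaptive stress/strain"
    if t >= 40:
        return "within normal range"
    return "relative absence of occupational stress/strain"

# Hypothetical normative values (mean 25, SD 6) for one ORQ scale.
t = t_score(raw=37, norm_mean=25, norm_sd=6)
print(round(t, 1), "->", interpret_orq_psq(t))   # 70.0 -> strong probability band
```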

  6. Potency of Δ9 -tetrahydrocannabinol and other cannabinoids in cannabis in England in 2016: Implications for public health and pharmacology.

    PubMed

    Potter, David J; Hammond, Kathy; Tuffnell, Shaun; Walker, Christopher; Di Forti, Marta

    2018-04-01

    In 2005 and 2008, studies reported that cannabis in England had become dominated by the sinsemilla (unseeded female) form. The average potency (Δ 9 -tetrahydrocannabinol [THC] content) of this material had doubled over the previous decade. Cannabis resin then circulating contained approximately equal ratios of THC and cannabidiol (CBD), whereas sinsemilla was almost devoid of CBD. Despite raised health concerns regarding sinsemilla use and the development of psychotic disorders, no update on street cannabis potency has been published since 2008. A total of 995 seized cannabis samples were acquired from the same 5 constabulary areas included in the 2005 study. The differing forms were segregated, and a representative 460 samples analyzed to assess their cannabinoid content using gas chromatography. The resultant median sinsemilla potency of 14.2% THC was similar to that observed in 2005 (13.9%). In each case, sinsemilla contained minimal CBD. Compared with 2005, resin had significantly higher mean THC (6.3%) and lower CBD (2.3%) contents (p < 0.0001). Although the average THC concentration in sinsemilla samples across the 5 constabularies has remained stable since 2005, the availability of this potent form of cannabis has further increased. Moreover, the now rarer resin samples show significantly decreased CBD contents and CBD:THC ratios, leaving the United Kingdom's cannabis street market populated by high-potency varieties of cannabis, which may have concerning implications for public health. Copyright © 2018 John Wiley & Sons, Ltd.

  7. A comparison of dioxins, dibenzofurans and coplanar PCBs in uncooked and broiled ground beef, catfish and bacon.

    PubMed

    Schecter, A; Dellarco, M; Päpke, O; Olson, J

    1998-01-01

    The primary source of dioxins (PCDDs), dibenzofurans (PCDFs) and coplanar PCBs for the general population is food, especially meat, fish, and dairy products. However, most data on the levels of these chemicals are from food in the raw or uncooked state. We report here the effect of one type of cooking (broiling) on the levels of PCDDs, PCDFs, and coplanar PCBs in ground beef (hamburger), bacon and catfish. Samples of hamburger, bacon, and catfish were broiled and compared to uncooked samples in order to measure changes in the amounts of dioxins in cooked food. The total amount of PCDD, PCDF, and coplanar PCB TEQ decreased by approximately 50% on average for each portion as a result of broiling the hamburger, bacon and catfish specimens. The mean concentration (pg TEQ/kg, wet weight) of PCDDs, PCDFs, and coplanar PCBs, however, remained the same in the hamburger, increased by 83% in the bacon, and decreased by 34% in the catfish. On average, the total measured concentration (pg/kg) of the congeners of PCDDs, PCDFs, and coplanar PCBs increased 14% in the hamburger, increased 29% in the bacon, and decreased 33% in the catfish.
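    That the total TEQ per portion can fall by roughly half while the concentration stays flat or even rises is a consequence of mass loss during broiling; the toy numbers below are hypothetical and only illustrate the bookkeeping.

```python
# Hypothetical illustration: broiling removes fat and water, so total TEQ per portion
# can drop while the concentration in the remaining food stays the same or rises.
raw_mass_kg, raw_conc_pg_per_kg = 0.20, 1000.0       # hypothetical raw portion
raw_total_pg = raw_mass_kg * raw_conc_pg_per_kg      # 200 pg TEQ in the raw portion

cooked_total_pg = 0.5 * raw_total_pg                 # ~50% of the TEQ lost, as reported
cooked_mass_kg = 0.10                                # hypothetical 50% cooking mass loss

print(cooked_total_pg / cooked_mass_kg)              # 1000.0: total halved, concentration unchanged
```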

  8. Final Work Plan: Phase I investigation at Eustis, Nebraska

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaFreniere, Lorraine M.

    2013-05-01

    The village of Eustis is located in the northeast corner of Frontier County, Nebraska (Figure 1.1), near Interstate 80 and approximately 190 mi west of Lincoln. From 1950 to 1964, the Commodity Credit Corporation (CCC), an agency of the U.S. Department of Agriculture (USDA), operated a grain storage facility at the southeastern edge of Eustis. During this time, commercial grain fumigants containing carbon tetrachloride were in common use to preserve grain in storage. In July 2011, the Nebraska Department of Health and Human Services (NDHHS) calculated a running annual average concentration of carbon tetrachloride in groundwater from one of the Eustis public water supply wells (PWS 70-1) at 5.24 μg/L, exceeding the maximum contaminant level (MCL) of 5.0 μg/L. The running average value was calculated on the basis of results (4.01-6.87 μg/L) from four groundwater sampling events in 2011 for well PWS 70-1 (NDHHS 2011). On January 16, 2012, the village placed well PWS 70-1 on backup/standby status for emergency use only (Village of Eustis 2012). The results of this groundwater sampling are presented here.
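    The compliance determination described here is a running average of quarterly results compared against the MCL. The individual 2011 results are not listed in the record, so the sketch below uses four hypothetical quarterly values chosen within the reported 4.01-6.87 μg/L range so that they reproduce the reported 5.24 μg/L average.

```python
MCL_UG_PER_L = 5.0

# Hypothetical quarterly carbon tetrachloride results for well PWS 70-1 (ug/L);
# only the range (4.01-6.87) and the running average (5.24) are given in the record.
quarterly = [4.01, 4.80, 5.28, 6.87]

running_annual_avg = sum(quarterly) / len(quarterly)
print(round(running_annual_avg, 2), "exceeds MCL:", running_annual_avg > MCL_UG_PER_L)
```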

  9. Prokaryotic community profiling of local algae wastewaters using advanced 16S rRNA gene sequencing.

    PubMed

    Limayem, Alya; Micciche, Andrew; Nayak, Bina; Mohapatra, Shyam

    2018-01-01

    Algae biomass-fed wastewaters are a promising source of lipid and bioenergy manufacture, revealing substantial end-product investment returns. However, wastewaters would contain lytic pathogens carrying drug resistance detrimental to algae yield and environmental safety. This study was conducted to simultaneously decipher through high-throughput advanced Illumina 16S ribosomal RNA (rRNA) gene sequencing, the cultivable and uncultivable bacterial community profile found in a single sample that was directly recovered from the local wastewater systems. Samples were collected from two previously documented sources including anaerobically digested (AD) municipal wastewater and swine wastewater with algae namely Chlorella spp. in addition to control samples, swine wastewater, and municipal wastewater without algae. Results indicated the presence of a significant level of Bacteria in all samples with an average of approximately 95.49% followed by Archaea 2.34%, in local wastewaters designed for algae cultivation. Taxonomic genus identification indicated the presence of Calothrix, Pseudomonas, and Clostridium as the most prevalent strains in both local municipal and swine wastewater samples containing algae with an average of 17.37, 12.19, and 7.84%, respectively. Interestingly, swine wastewater without algae displayed the lowest level of Pseudomonas strains < 0.1%. The abundance of some Pseudomonas species in wastewaters containing algae indicates potential coexistence between these strains and algae microenvironment, suggesting further investigations. This finding was particularly relevant for the earlier documented adverse effects of some nosocomial Pseudomonas strains on algae growth and their multidrug resistance potential, requiring the development of targeted bioremediation with regard to the beneficial flora.

  10. Cortisol Awakening Response in Elite Military Men: Summary Parameters, Stability Measurement, and Effect of Compliance.

    PubMed

    Taylor, Marcus K; Hernández, Lisa M; Fuller, Shiloah A; Sargent, Paul; Padilla, Genieleah A; Harris, Erica

    2016-11-01

    The cortisol awakening response (CAR) holds promise as a clinically important marker of health status. However, CAR research is routinely challenged by its innate complexity, sensitivity to confounds, and methodological inconsistencies. In this unprecedented characterization of CAR in elite military men (N = 58), we established summary parameters, evaluated sampling stability across two consecutive days, and explored the effect of subject compliance. Average salivary cortisol concentrations increased nearly 60% within 30 minutes of waking, followed by a swift recovery to waking values at 60 minutes. Approximately one in six were classified as negative responders (i.e., <0% change from waking to 30-minute postawakening). Three summary parameters of magnitude, as well as three summary parameters of pattern, were computed. Consistent with our hypothesis, summary parameters of magnitude displayed superior stability compared with summary parameters of pattern in the total sample. As expected, compliance with target sampling times was relatively good; average deviations of self-reported morning sampling times in relation to actigraph-derived wake times across both days were within ±5 minutes, and nearly two-thirds of the sample was classified as CAR compliant across both days. Although compliance had equivocal effects on some measures of magnitude, it substantially improved the stability of summary parameters of pattern. The first of its kind, this study established the foundation for a program of CAR research in a profoundly resilient yet chronically stressed population. Building from this, our forthcoming research will evaluate demographic, biobehavioral, and clinical determinants of CAR in this unique population. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.

  11. Focus on Teacher Salaries: An Update on Average Salaries and Recent Legislative Actions in the SREB States.

    ERIC Educational Resources Information Center

    Gaines, Gale F.

    Focused state efforts have helped teacher salaries in Southern Regional Education Board (SREB) states move toward the national average. Preliminary 2000-01 estimates put SREB's average teacher salary at its highest point in 22 years compared to the national average. The SREB average teacher salary is approximately 90 percent of the national…

  12. Assessment of suspended-sediment transport, bedload, and dissolved oxygen during a short-term drawdown of Fall Creek Lake, Oregon, winter 2012-13

    USGS Publications Warehouse

    Schenk, Liam N.; Bragg, Heather M.

    2014-01-01

    The drawdown of Fall Creek Lake resulted in the net transport of approximately 50,300 tons of sediment from the lake during a 6-day drawdown operation, based on computed daily values of suspended-sediment load downstream of Fall Creek Dam and the two main tributaries to Fall Creek Lake. A suspended-sediment budget calculated for 72 days of the study period indicates that as a result of drawdown operations, there was approximately 16,300 tons of sediment deposition within the reaches of Fall Creek and the Middle Fork Willamette River between Fall Creek Dam and the streamgage on the Middle Fork Willamette River at Jasper, Oregon. Bedload samples collected at the station downstream of Fall Creek Dam during the drawdown were primarily composed of medium to fine sands and accounted for an average of 11 percent of the total instantaneous sediment load (also termed sediment discharge) during sample collection. Monitoring of dissolved oxygen at the station downstream of Fall Creek Dam showed an initial decrease in dissolved oxygen concurrent with the sediment release over the span of 5 hours, though the extent of dissolved oxygen depletion is unknown because of extreme and rapid fouling of the probe by the large amount of sediment in transport. Dissolved oxygen returned to background levels downstream of Fall Creek Dam on December 18, 2012, approximately 1 day after the end of the drawdown operation.
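    The deposition estimate quoted above comes from a mass balance over the 72-day budget period: sediment load entering the reach below Fall Creek Dam minus the load passing the downstream Jasper streamgage. The sketch below is only a schematic of that balance, with hypothetical daily-load arrays.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily suspended-sediment loads (tons/day) over a 72-day budget period.
days = 72
load_in = rng.uniform(100.0, 1200.0, size=days)   # e.g., load entering the reach below the dam
load_out = 0.9 * load_in                          # e.g., load passing the downstream Jasper gage

# Mass balance: sediment that entered the reach but did not leave it was deposited.
deposition_tons = float(np.sum(load_in - load_out))
print(round(deposition_tons))
```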

  13. Legendre-tau approximation for functional differential equations. Part 2: The linear quadratic optimal control problem

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1984-01-01

    The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.

  14. Legendre-tau approximation for functional differential equations. II - The linear quadratic optimal control problem

    NASA Technical Reports Server (NTRS)

    Ito, Kazufumi; Teglas, Russell

    1987-01-01

    The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.

  15. Elastic anisotropy of layered rocks: Ultrasonic measurements of plagioclase-biotite-muscovite (sillimanite) gneiss versus texture-based theoretical predictions (effective media modeling)

    NASA Astrophysics Data System (ADS)

    Ivankina, T. I.; Zel, I. Yu.; Lokajicek, T.; Kern, H.; Lobanov, K. V.; Zharikov, A. V.

    2017-08-01

    In this paper we present experimental and theoretical studies on a highly anisotropic layered rock sample characterized by alternating layers of biotite and muscovite (retrogressed from sillimanite) and plagioclase and quartz, respectively. We applied two different experimental methods to determine seismic anisotropy at pressures up to 400 MPa: (1) measurement of P- and S-wave phase velocities on a cube in three foliation-related orthogonal directions and (2) measurement of P-wave group velocities on a sphere in 132 directions. The combination of the spatial distribution of P-wave velocities on the sphere (converted to phase velocities) with S-wave velocities of three orthogonal structural directions on the cube made it possible to calculate the bulk elastic moduli of the anisotropic rock sample. On the basis of the crystallographic preferred orientations (CPOs) of major minerals obtained by time-of-flight neutron diffraction, effective media modeling was performed using different inclusion methods and averaging procedures. The implementation of a nonlinear approximation of the P-wave velocity-pressure relation was applied to estimate the mineral matrix properties and the orientation distribution of microcracks. Comparison of theoretical calculations of elastic properties of the mineral matrix with those derived from the nonlinear approximation showed discrepancies in elastic moduli and P-wave velocities of about 10%. The observed discrepancies between the effective media modeling and ultrasonic velocity data are a consequence of the inhomogeneous structure of the sample and the inability to apply the long-wave approximation. Furthermore, small differences were observed between elastic moduli predicted by the different theoretical models, which include specific fabric characteristics such as crystallographic texture, grain shape and layering. It is shown that the bulk elastic anisotropy of the sample is basically controlled by the CPO of biotite and muscovite and their volume proportions in the layers dominated by phyllosilicate minerals.

  16. Effect of molecular anisotropy on beam scattering measurements

    NASA Technical Reports Server (NTRS)

    Goldflam, R.; Green, S.; Kouri, D. J.; Monchick, L.

    1978-01-01

    Within the energy sudden approximation, the total integral and total differential scattering cross sections are given by the angle average of scattering cross sections computed at fixed rotor orientations. Using this formalism the effect of molecular anisotropy on scattering of He by HCl and by CO is examined. Comparisons with accurate close coupling calculations indicate that this approximation is quite reliable, even at very low collision energies, for both of these systems. Comparisons are also made with predictions based on the spherical average of the interaction. For HCl the anisotropy is rather weak and its main effect is a slight quenching of the oscillations in the differential cross sections relative to predictions of the spherical averaged potential. For CO the anisotropy is much stronger, so that the oscillatory pattern is strongly quenched and somewhat shifted. It appears that the sudden approximation provides a simple yet accurate method for describing the effect of molecular anisotropy on scattering measurements.
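    Schematically, the orientation average used in the energy sudden treatment can be written as below; the notation is ours, not the paper's, and assumes a linear rigid rotor whose orientation relative to the collision frame is described by a single angle γ.

```latex
% Energy sudden approximation: cross sections as averages over the fixed rotor angle gamma.
\sigma_{\mathrm{tot}} \;\approx\; \langle \sigma(\gamma) \rangle
  \;=\; \tfrac{1}{2}\int_{0}^{\pi} \sigma(\gamma)\,\sin\gamma\,\mathrm{d}\gamma,
\qquad
\frac{\mathrm{d}\sigma_{\mathrm{tot}}}{\mathrm{d}\Omega} \;\approx\;
  \tfrac{1}{2}\int_{0}^{\pi} \frac{\mathrm{d}\sigma(\gamma)}{\mathrm{d}\Omega}\,\sin\gamma\,\mathrm{d}\gamma .
```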

  17. Two-layer interfacial flows beyond the Boussinesq approximation: a Hamiltonian approach

    NASA Astrophysics Data System (ADS)

    Camassa, R.; Falqui, G.; Ortenzi, G.

    2017-02-01

    The theory of integrable systems of Hamiltonian PDEs and their near-integrable deformations is used to study evolution equations resulting from vertical-averages of the Euler system for two-layer stratified flows in an infinite two-dimensional channel. The Hamiltonian structure of the averaged equations is obtained directly from that of the Euler equations through the process of Hamiltonian reduction. Long-wave asymptotics together with the Boussinesq approximation of neglecting the fluids’ inertia is then applied to reduce the leading order vertically averaged equations to the shallow-water Airy system, albeit in a non-trivial way. The full non-Boussinesq system for the dispersionless limit can then be viewed as a deformation of this well known equation. In a perturbative study of this deformation, a family of approximate constants of the motion are explicitly constructed and used to find local solutions of the evolution equations by means of hodograph-like formulae.
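    For reference, the dispersionless shallow-water Airy system mentioned here can be written in a standard nondimensional form as below; the scaling and variable names (layer thickness η, layer-averaged velocity u) are our convention, not necessarily the paper's.

```latex
% Shallow-water (Airy) system in nondimensional form:
\eta_t + (\eta\, u)_x = 0, \qquad u_t + u\, u_x + \eta_x = 0 .
```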

  18. A New Closed Form Approximation for BER for Optical Wireless Systems in Weak Atmospheric Turbulence

    NASA Astrophysics Data System (ADS)

    Kaushik, Rahul; Khandelwal, Vineet; Jain, R. C.

    2018-04-01

    Weak atmospheric turbulence condition in an optical wireless communication (OWC) is captured by log-normal distribution. The analytical evaluation of average bit error rate (BER) of an OWC system under weak turbulence is intractable as it involves the statistical averaging of Gaussian Q-function over log-normal distribution. In this paper, a simple closed form approximation for BER of OWC system under weak turbulence is given. Computation of BER for various modulation schemes is carried out using proposed expression. The results obtained using proposed expression compare favorably with those obtained using Gauss-Hermite quadrature approximation and Monte Carlo Simulations.
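    The quantity being approximated, the Gaussian Q-function averaged over a log-normal irradiance, is straightforward to evaluate numerically for comparison. The sketch below computes that average both by Gauss-Hermite quadrature and by Monte Carlo under a simplified normalized-irradiance model of our own; it does not reproduce the paper's new closed-form expression.

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    """Gaussian Q-function."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def avg_ber_gauss_hermite(snr, sigma_x, n_nodes=32):
    """E[Q(snr * I)] with ln I ~ N(-2*sigma_x**2, (2*sigma_x)**2), via Gauss-Hermite.

    The log-amplitude variance sigma_x**2 and the normalization E[I] = 1 are
    modeling assumptions made for this illustration only.
    """
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    irr = np.exp(2.0 * np.sqrt(2.0) * sigma_x * nodes - 2.0 * sigma_x**2)
    return float(np.sum(weights * qfunc(snr * irr)) / np.sqrt(np.pi))

def avg_ber_monte_carlo(snr, sigma_x, n=200_000, seed=0):
    """Same average estimated by direct sampling of the log-normal irradiance."""
    rng = np.random.default_rng(seed)
    ln_i = rng.normal(-2.0 * sigma_x**2, 2.0 * sigma_x, size=n)
    return float(np.mean(qfunc(snr * np.exp(ln_i))))

print(avg_ber_gauss_hermite(snr=3.0, sigma_x=0.25))
print(avg_ber_monte_carlo(snr=3.0, sigma_x=0.25))
```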

  19. Formation of hydrogen peroxide from illuminated polar snows and frozen solutions of model compounds

    NASA Astrophysics Data System (ADS)

    Hullar, Ted; Patten, Kelley; Anastasio, Cort

    2012-08-01

    Hydrogen peroxide (HOOH) is an important trace constituent in snow and ice, including in Arctic and Antarctic ice cores. To better understand the budget of snowpack HOOH, here we examine its production in illuminated snow and ice. To evaluate what types of compounds might be important photochemical sources of HOOH, we first illuminated laboratory ice samples containing 10 different model organic compounds: guaiacol, phenol, syringol, benzoate, formate, octanal, octanoic acid, octanedioic acid, phenylalanine, and mixtures of oxalate with iron (III). Half of these compounds produced little or no HOOH during illumination, but two classes of compounds were very reactive: phenolic compounds (with HOOH production rates of 6-62 nM h-1 per μM of phenolic compound) and mixtures of Fe(III) with a stoichiometric excess of oxalate (with rates of HOOH production as high as 2,000,000 nM h-1 per μM iron). To quantify rates of HOOH production in the environment we also illuminated snow samples collected from the Arctic and Antarctic. The average (±1σ) HOOH production rate in these samples was low, 5.3 ± 5.0 nM h-1, and replicate measurements showed high variability. In some natural samples there was an initial burst of HOOH production (with a rate approximately 10 times higher than the average production rate), followed by reduced rates at subsequent time points. Although our laboratory ice samples reveal that illuminated organics and metal-organic complexes can form HOOH, the low rates of HOOH formation in the Arctic and Antarctic snow samples suggest this process has only a modest impact on the HOOH budget in the snowpack.

  20. Surveys of rice sold in Canada for aflatoxins, ochratoxin A and fumonisins

    PubMed Central

    Bansal, J.; Pantazopoulos, P.; Tam, J.; Cavlovic, P.; Kwong, K.; Turcotte, A.-M.; Lau, B.P.-Y.; Scott, P.M.

    2011-01-01

    Approximately 200 samples of rice (including white, brown, red, black, basmati and jasmine, as well as wild rice) from several different countries, including the United States, Canada, Pakistan, India and Thailand, were analysed for aflatoxins, ochratoxin A (OTA) and fumonisins by separate liquid chromatographic methods in two different years. The mean concentrations for aflatoxin B1 (AFB1) were 0.19 and 0.17 ng g−1 with respective positive incidences of 56% and 43% (≥ the limit of detection (LOD) of 0.002 ng g−1). Twenty-three samples analysed in the second year also contained aflatoxin B2 (AFB2) at levels ≥ the LOD of 0.002 ng g−1. The five most contaminated samples in each year contained 1.44–7.14 ng AFB1 g−1 (year 1) and 1.45–3.48 ng AFB1 g−1 (year 2); they were mostly basmati rice from India and Pakistan and black and red rice from Thailand. The average concentrations of ochratoxin A (OTA) were 0.05 and 0.005 ng g−1 in year 1 and year 2, respectively; incidences of samples containing ≥ the LOD of 0.05 ng g−1 were 43% and 1%, respectively, in the 2 years. All positive OTA results were confirmed by LC-MS/MS. For fumonisins, concentrations of fumonisin B1 (FB1) averaged 4.5 ng g−1 in 15 positive samples (≥0.7 ng g−1) from year 1 (n = 99); fumonisin B2 (FB2) and fumonisin B3 (FB3) were also present (≥1 ng g−1). In the second year there was only one positive sample (14 ng g−1 FB1) out of 100 analysed. All positive FB1 results were confirmed by LC-MS/MS. PMID:21623501

  1. Evaluation of Criteria for the Detection of Fires in Underground Conveyor Belt Haulageways.

    PubMed

    Litton, Charles D; Perera, Inoka Eranda

    2012-07-01

    Large-scale experiments were conducted in an above-ground gallery to simulate typical fires that develop along conveyor belt transport systems within underground coal mines. In the experiments, electrical strip heaters, embedded ~5 cm below the top surface of a large mass of coal rubble, were used to ignite the coal, producing an open flame. The flaming coal mass subsequently ignited 1.83-meter-wide conveyor belts located approximately 0.30 m above the coal surface. Gas samples were drawn through an averaging probe located approximately 20 m downstream of the coal for continuous measurement of CO, CO2, and O2 as the fire progressed through the stages of smoldering coal, flaming coal, and flaming conveyor belt. Also located approximately 20 m from the fire origin and approximately 0.5 m below the roof of the gallery were two commercially available smoke detectors, a light obscuration meter, and a sampling probe for measurement of total mass concentration of smoke particles. Located upstream of the fire origin and also along the wall of the gallery at approximately 14 m and 5 m upstream were two video cameras capable of both smoke and flame detection. During the experiments, alarm times of the smoke detectors and video cameras were measured while the smoke obscuration and total smoke mass were continually measured. Twelve large-scale experiments were conducted using three different types of fire-resistant conveyor belts and four air velocities for each belt. The air velocities spanned the range from 1.0 m/s to 6.9 m/s. The results of these experiments are compared to previous large-scale results obtained using a smaller fire gallery and much narrower (1.07-m) conveyor belts to determine if the fire detection criteria previously developed (1) remained valid for the wider conveyor belts. Although some differences between these and the previous experiments did occur, the results, in general, compare very favorably. Differences are duly noted and their impact on fire detection discussed.

  2. Major and trace element composition of copiapite-group minerals and coexisting water from the Richmond mine, Iron Mountain, California

    USGS Publications Warehouse

    Jamieson, H.E.; Robinson, C.; Alpers, Charles N.; McCleskey, R. Blaine; Nordstrom, D. Kirk; Peterson, Ronald C.

    2005-01-01

    Copiapite-group minerals of the general formula AR4(SO4)6(OH)2·nH2O, where A is predominantly Mg, Fe2+, or 0.67Al3+, R is predominantly Fe3+, and n is typically 20, are among several secondary hydrous Fe sulfates occurring in the inactive mine workings of the massive sulfide deposit at Iron Mountain, CA, a USEPA Superfund site that produces extremely acidic drainage. Samples of copiapite-group minerals, some with coexisting water, were collected from the Richmond mine. Approximately 200 mL of brownish pore water with a pH of -0.9 were extracted through centrifugation from a 10-L sample of moist copiapite-group minerals taken from pyritic muck piles. The pore water is extremely rich in ferric iron (Fe3+ = 149 g L-1, FeT = 162 g L-1) and has a density of 1.52 g mL-1. The composition of the pore water is interpreted in the context of published phase relations in the Fe2O3-SO3-H2O system and previous work on the chemistry of extremely acid mine waters and associated minerals in the Richmond mine. Two distinct members of the copiapite mineral group were identified in the samples with coexisting water: (1) abundant magnesiocopiapite consisting of platy crystals 10 to 50 μm and (2) minor aluminocopiapite present as smaller platy crystals that form spheroidal aggregates. The average composition (n=5) of the magnesiocopiapite is (Mg0.90Fe2+0.17Zn0.02Cu0.01)Σ1.10(Fe3+3.83Al0.09)Σ3.92(SO4)6.00(OH)1.96·20H2O. Bulk compositions determined by digestion and wet-chemical analysis are consistent with the microanalytical results. These results suggest that magnesiocopiapite is the least soluble member of the copiapite group under the prevailing conditions. Micro-PIXE analysis indicates that the copiapite-group minerals in this sample sequester Zn (average 1420 ppm), with lesser amounts of Cu (average 270 ppm) and As (average 64 ppm). © 2004 Elsevier B.V. All rights reserved.

  3. Growth, chamber building rate and reproduction time of Palaeonummulites venosus (Foraminifera) under natural conditions

    NASA Astrophysics Data System (ADS)

    Kinoshita, Shunichi; Eder, Wolfgang; Wöger, Julia; Hohenegger, Johann; Briguglio, Antonino

    2017-12-01

    We investigated the symbiont-bearing benthic foraminifer Palaeonummulites venosus to determine the chamber building rate (CBR), test diameter increase rate (DIR), reproduction time and longevity using the 'natural laboratory' approach. This is based on the decomposition of monthly obtained frequency distributions of chamber number and test diameter into normally distributed components. Test measurements were taken using MicroCT. The shift of the mean and standard deviation of component parameters during the 15-month investigation period was used to calculate Michaelis-Menten functions applied to estimate the averaged CBR and DIR under natural conditions. The individual dates of birth were estimated using the inverse averaged CBR and the inverse DIR fitted by the individual chamber number or the individual test diameter at the sampling date. Distributions of frequencies and densities (i.e., frequency divided by sediment weight) based on both CBR and DIR revealed continuous reproduction throughout the year with two peaks, a stronger one in June determined as the onset of the summer generation (generation 1) and a weaker one in November determined as the onset of the winter generation (generation 2). This reproduction scheme explains the presence of small and large specimens in the same sample. Longevity, calculated as the maximum difference in days between the individual's birth date and the sampling date, is approximately 1.5 yr, an estimation obtained by using both CBR and DIR.
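
    As a hedged sketch of the kind of calculation described above (fitting a Michaelis-Menten growth function to component means and inverting it to date individual specimens), the code below uses invented age and chamber-number values; the functional form is the one named in the abstract, but the data and fitted parameters are purely illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(t, n_max, k):
        """Mean chamber number as a function of age t (days): N(t) = n_max * t / (k + t)."""
        return n_max * t / (k + t)

    def invert_age(n, n_max, k):
        """Invert N(t) to estimate age (days since birth) from an observed chamber count."""
        return k * n / (n_max - n)

    # Illustrative monthly component means (age in days, mean chamber number); not the study's data.
    age_days = np.array([30, 60, 90, 150, 240, 360, 450], dtype=float)
    mean_chambers = np.array([12, 22, 30, 41, 52, 61, 65], dtype=float)

    params, _ = curve_fit(michaelis_menten, age_days, mean_chambers, p0=[80.0, 120.0])
    n_max_fit, k_fit = params
    print(f"fitted n_max = {n_max_fit:.1f}, k = {k_fit:.1f} days")

    # Estimated age of a specimen with 45 chambers, i.e. its birth date relative to the sampling date.
    print(f"estimated age of a 45-chamber test: {invert_age(45, n_max_fit, k_fit):.0f} days")
    ```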

  4. Mars Global Reference Atmospheric Model (Mars-GRAM) and Database for Mission Design

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Duvall, Aleta; Johnson, D. L.

    2003-01-01

    Mars Global Reference Atmospheric Model (Mars-GRAM 2001) is an engineering-level Mars atmosphere model widely used for many Mars mission applications. From 0-80 km, it is based on the NASA Ames Mars General Circulation Model (MGCM), while above 80 km it is based on the Mars Thermospheric General Circulation Model. Mars-GRAM 2001 and MGCM use surface topography from the Mars Global Surveyor Mars Orbiting Laser Altimeter. Validation studies are described comparing Mars-GRAM with Mars Global Surveyor Radio Science (RS) and Thermal Emission Spectrometer (TES) data. RS data from 2480 profiles were used, covering latitudes 75 deg S to 72 deg N, surface to approximately 40 km, for seasons ranging from areocentric longitude of Sun (Ls) = 70-160 deg and 265-310 deg. RS data spanned a range of local times, mostly 0-9 hours and 18-24 hours. For interests in aerocapture and precision landing, comparisons concentrated on atmospheric density. At a fixed height of 20 km, RS density varied by about a factor of 2.5 over ranges of latitudes and Ls values observed. Evaluated at matching positions and times, these comparisons show average RS/Mars-GRAM density ratios were generally 1 ± 0.05, except at heights above approximately 25 km and latitudes above approximately 50 deg N. Average standard deviation of the RS/Mars-GRAM density ratio was 6%. TES data were used covering surface to approximately 40 km, over more than a full Mars year (February, 1999 - June, 2001, just before start of a Mars global dust storm). Depending on season, TES data covered latitudes 85 deg S to 85 deg N. Most TES data were concentrated near local times 2 hours and 14 hours. Observed average TES/Mars-GRAM density ratios were generally 1 ± 0.05, except at high altitudes (15-30 km, depending on season) and high latitudes (greater than 45 deg N), or at most altitudes in the southern hemisphere at Ls approximately 90 and 180 deg. Compared to TES averages for a given latitude and season, TES data had average density standard deviation about the mean of approximately 2.5% for all data, or approximately 1-4%, depending on time of day and dust optical depth. Average standard deviation of the TES/Mars-GRAM density ratio was 8.9% for local time 2 hours and 7.1% for local time 14 hours. Thus the standard deviation of the observed TES/Mars-GRAM density ratio, evaluated at matching positions and times, is about three times the standard deviation of TES data about the TES mean value at a given position and season.

  5. Assessing the Utilization of Total Ankle Replacement in the United States.

    PubMed

    Reddy, Sudheer; Koenig, Lane; Demiralp, Berna; Nguyen, Jennifer T; Zhang, Qian

    2017-06-01

    Total ankle arthroplasty (TAR) has been shown to be a cost-effective procedure relative to conservative management and ankle arthrodesis. Although its use has grown considerably over the last 2 decades, it is less common than arthrodesis. The purpose of this investigation was to analyze the cost and utilization of TAR across hospitals. Our analytical sample consisted of Medicare claims data from 2011 and 2012 for Inpatient Prospective Payment System hospitals. Outcome variables of interest were the likelihood of a hospital performing TAR, the volume of TAR cases, TAR hospital costs, and hospital profit margins. Data from the 2010 Cost Report and Medicare inpatient claims were utilized to compute average margins for TAR cases and overall hospital margins. TAR cost was calculated based on the all-payer cost-to-charge ratio for each hospital in the Cost Report. Nationwide Inpatient Sample data were used to generate descriptive statistics on all TAR patients across payers. Medicare participants accounted for 47.5% of the overall population of TAR patients. Average implant cost was $13,034, accounting for approximately 70% of the total all-payer cost. Approximately one-third of hospitals were profitable with respect to primary TAR. Profitable hospitals had lower total costs and higher payments, leading to a difference in profit of approximately $11,000 from TAR surgeries between profitable and nonprofitable hospitals. No difference was noted with respect to length of stay or number of cases performed between profitable and nonprofitable hospitals. TAR surgeries were more likely to take place in large and major teaching hospitals. Among hospitals performing at least 1 TAR, the margin on TAR cases was positively associated with the total number of TARs performed by a hospital. There is an overall significant financial burden associated with performing TAR, with many health systems failing to demonstrate profitability despite its increased utilization. While additional factors such as improved patient outcomes may be driving utilization of TAR, financial barriers may exist that can affect utilization of TAR across health systems. Level III, comparative study.

  6. Light absorption and the photoformation of hydroxyl radical and singlet oxygen in fog waters

    NASA Astrophysics Data System (ADS)

    Kaur, R.; Anastasio, C.

    2017-09-01

    The atmospheric aqueous-phase is a rich medium for chemical transformations of organic compounds, in part via photooxidants generated within the drops. Here we measure light absorption, photoformation rates and steady-state concentrations of two photooxidants - hydroxyl radical (•OH) and singlet molecular oxygen (1O2*) - in 8 illuminated fog waters from Davis, California and Baton Rouge, Louisiana. Mass absorption coefficients for dissolved organic compounds (MACDOC) in the samples are large, with typical values of 10,000-15,000 cm2 g-C-1 at 300 nm, and absorption extends to wavelengths as long as 450-600 nm. While nitrite and nitrate together account for an average of only 1% of light absorption, they account for an average of 70% of •OH photoproduction. Mean •OH photoproduction rates in fogs at the two locations are very similar, with an overall mean of 1.2 (±0.7) μM h-1 under Davis winter sunlight. The mean (±1σ) lifetime of •OH is 1.6 (±0.6) μs, likely controlled by dissolved organic compounds. Including calculated gas-to-drop partitioning of •OH, the average aqueous concentration of •OH is approximately 2 × 10-15 M (midday during Davis winter), with aqueous reactions providing approximately one-third of the hydroxyl radical source. At this concentration, calculated lifetimes of aqueous organics are on the order of 10 h for compounds with •OH rate constants of 1 × 1010 M-1 s-1 or higher (e.g., substituted phenols such as syringol (6.4 h) and guaiacol (8.4 h)), and on the order of 100 h for compounds with rate constants near 1 × 109 M-1 s-1 (e.g., isoprene oxidation products such as glyoxal (152 h), glyoxylic acid (58 h), and pyruvic acid (239 h)). Steady-state concentrations of 1O2* are approximately 100 times higher than those of •OH, in the range of (0.1-3.0) × 10-13 M. Since 1O2* is a more selective oxidant than •OH, it will only react appreciably with electron-rich species such as dimethyl furan (lifetime of 2.0 h) and substituted polycyclic aromatic hydrocarbons (e.g., 9,10-dimethylbenz[a]anthracene with a lifetime of 0.7 h). Comparing our current Davis samples with Davis fogs collected in the late 1990s shows a decrease in dissolved organic carbon content, similar mass absorption coefficients, lower •OH concentrations, but very similar 1O2* concentrations.
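
    The lifetimes quoted above follow from a simple pseudo-first-order relation, tau = 1/(k [oxidant]). The short sketch below reproduces that arithmetic for the aqueous hydroxyl-radical concentration reported in the abstract; the two rate constants are the order-of-magnitude values named in the text, not compound-specific measurements.

    ```python
    # Pseudo-first-order lifetime of an aqueous organic with respect to an oxidant:
    #   tau = 1 / (k_ox * [oxidant]), converted from seconds to hours.
    OH_CONC = 2e-15  # M, approximate midday aqueous hydroxyl radical concentration from the abstract

    def lifetime_hours(k_ox, oxidant_conc):
        return 1.0 / (k_ox * oxidant_conc) / 3600.0

    # Fast .OH reactions (k ~ 1e10 M-1 s-1) give lifetimes of order 10 h ...
    print(f"k = 1e10 M-1 s-1 -> tau ~ {lifetime_hours(1e10, OH_CONC):.0f} h")
    # ... while slower reactions (k ~ 1e9 M-1 s-1) give lifetimes of order 100 h.
    print(f"k = 1e9  M-1 s-1 -> tau ~ {lifetime_hours(1e9, OH_CONC):.0f} h")
    ```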

  7. Annual replenishment of bed material by sediment transport in the Wind River near Riverton, Wyoming

    USGS Publications Warehouse

    Smalley, M.L.; Emmett, W.W.; Wacker, A.M.

    1994-01-01

    The U.S. Geological Survey, in cooperation with the Wyoming Department of Transportation, conducted a study during 1985-87 to determine the annual replenishment of sand and gravel along a point bar in the Wind River near Riverton, Wyoming. Hydraulic-geometry relations determined from streamflow measurements; streamflow characteristics determined from 45 years of record at the study site; and analyses of suspended-sediment, bedload, and bed-material samples were used to describe river transport characteristics and to estimate the annual replenishment of sand and gravel. The Wind River is a perennial, snowmelt-fed stream. Average daily discharge at the study site is about 734 cubic feet per second, and bankfull discharge (recurrence interval about 1.5 years) is about 5,000 cubic feet per second. At bankfull discharge, the river is about 136 feet wide and has an average depth of about 5.5 feet and average velocity of about 6.7 feet per second. Stream slope is about 0.0010 foot per foot. Bed material sampled on the point bar before the 1986 high flows ranged from sand to cobbles, with a median diameter of about 22 millimeters. Data for sediment samples collected during water year 1986 were used to develop regression equations between suspended-sediment load and water discharge and between bedload and water discharge. Average annual suspended-sediment load was computed to be about 561,000 tons per year using the regression equation in combination with flow-duration data. The regression equation for estimating bedload was not used; instead, average annual bedload was computed as 1.5 percent of average annual suspended load, about 8,410 tons per year. This amount of bedload material is estimated to be in temporary storage along a reach containing seven riffles--a length of approximately 1 river mile. On the basis of bedload material sampled during the 1986 high flows, about 75 percent (by weight) is sand (2 millimeters in diameter or finer); median particle size is about 0.5 millimeter. About 20 percent (by weight) is medium gravel to small cobbles--12.7 millimeters (0.5 inch) or coarser. The bedload moves slowly (about 0.03 percent of the water speed) and briefly (about 10 percent of the time). The average travel distance of a median-sized particle is about 1 river mile per year. The study results indicate that the average replenishment rate of bedload material coarser than 12.7 millimeters is about 1,500 to 2,000 tons (less than 1,500 cubic yards) per year. Finer material (0.075 to 6.4 millimeters in diameter) is replenished at about 4,500 to 5,000 cubic yards per year. The total volume of potentially usable material would average about 6,000 cubic yards per year.
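
    As a hedged illustration of the general approach described above (combining a sediment rating curve with flow-duration data to estimate an average annual load), the sketch below uses invented rating-curve coefficients and an invented flow-duration table, chosen only so the totals land near the magnitudes quoted in the abstract; only the 734 ft3/s average discharge, the approximately 5,000 ft3/s bankfull discharge, and the 1.5 percent bedload fraction are taken from the abstract.

    ```python
    # Hypothetical rating curve Qs = a * Q**b (tons/day vs. ft3/s); not the study's regression.
    a, b = 3e-5, 2.3

    # Hypothetical flow-duration table: (representative discharge in ft3/s, fraction of time in that class).
    flow_duration = [
        (200, 0.30),
        (734, 0.40),    # long-term average daily discharge from the abstract
        (2000, 0.20),
        (5000, 0.09),   # approximate bankfull discharge from the abstract
        (9000, 0.01),
    ]

    # Average annual suspended load: duration-weighted daily loads summed over the year.
    suspended_load = sum(a * q ** b * frac * 365 for q, frac in flow_duration)  # tons/year
    bedload = 0.015 * suspended_load  # bedload taken as 1.5 percent of suspended load, as in the abstract

    print(f"average annual suspended load ~ {suspended_load:,.0f} tons/yr")
    print(f"average annual bedload        ~ {bedload:,.0f} tons/yr")
    ```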

  8. Preparation of 'near' homogeneous samples for the analysis of matrix-assisted laser desorption/ionisation processes

    NASA Astrophysics Data System (ADS)

    Allwood, D. A.; Perera, I. K.; Perkins, J.; Dyer, P. E.; Oldershaw, G. A.

    1996-11-01

    Highly uniform thin films of samples for matrix-assisted laser desorption/ionisation (MALDI) have been fabricated by depositing a saturated solution of ferulic acid onto a soda lime glass disc and crushing with polished aluminium, the films covering large areas of the substrate and having a thickness of 45-60 μm. The effects that different substrates and crushing materials, as well as sample concentration and sample recrystallisation, have on these films have been examined by scanning electron microscopy. Such films have been shown to have a lower threshold fluence for matrix ion detection than standard dried-droplet samples, the reduction being approximately 15% for three of the five matrices analysed. An explanation for this is proposed in terms of crushed samples possessing a greater average energy per unit volume coupled to them by the laser due to their improved surface uniformity. Furthermore, samples that are dried at refrigerated temperatures (~2.25°C) are shown to have a much improved macroscopic uniformity over samples dried at room temperature. Refrigerated and crushed MALDI samples yield analyte ions with good spot-to-spot and pulse-to-pulse reproducibility, and both preparation steps appear to improve the resolution of spectra obtained with a time-of-flight mass spectrometer.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Midura, R.J.; McQuillan, D.J.; Benham, K.J.

    The rat osteosarcoma cell line (UMR 106-01) synthesizes and secretes relatively large amounts of a sulfated glycoprotein into its culture medium (approximately 240 ng/10(6) cells/day). This glycoprotein was purified, and amino-terminal sequence analysis identified it as bone sialoprotein (BSP). (35S)Sulfate, (3H)glucosamine, and (3H)tyrosine were used as metabolic precursors to label the BSP. Sulfate esters were found on N- and O-linked oligosaccharides and on tyrosine residues, with about half of the total tyrosines in the BSP being sulfated. The proportion of 35S activity in tyrosine-O-sulfate (approximately 70%) was greater than that in N-linked (approximately 20%) and O-linked (approximately 10%) oligosaccharides. From the deduced amino acid sequence for rat BSP, the results indicate that on average approximately 12 tyrosine residues, approximately 3 N-linked, and approximately 2 O-linked oligosaccharides are sulfated/molecule. The carboxyl-terminal quarter of the BSP probably contains most, if not all, of the sulfated tyrosine residues because this region of the polypeptide contains the necessary requirements for tyrosine sulfation. Oligosaccharide analyses indicated that for every N-linked oligosaccharide on the BSP, there are also approximately 2 hexa-, approximately 5 tetra-, and approximately 2 trisaccharides O-linked to serine and threonine residues. On average, the BSP synthesized by UMR 106-01 cells would contain a total of approximately 3 N-linked and approximately 25 of the above O-linked oligosaccharides. This large number of oligosaccharides is in agreement with the known carbohydrate content (approximately 50%) of the BSP.

  10. Analysis of the variation in OCT measurements of a structural bottleneck for eye-brain transfer of visual information from 3D volumes of the optic nerve head, PIMD-Average [0;2π]

    NASA Astrophysics Data System (ADS)

    Söderberg, Per G.; Malmberg, Filip; Sandberg-Melin, Camilla

    2016-03-01

    The present study aimed to analyze the clinical usefulness of the thinnest cross section of the nerve fibers in the optic nerve head averaged over the circumference of the optic nerve head. 3D volumes of the optic nerve head of the same eye were captured at two different visits spaced in time by 1-4 weeks, in 13 subjects diagnosed with early to moderate glaucoma. At each visit 3 volumes containing the optic nerve head were captured independently with a Topcon OCT-2000 system. In each volume, the average shortest distance between the inner surface of the retina and the central limit of the pigment epithelium around the optic nerve head circumference, PIMD-Average [0;2π], was determined semiautomatically. The measurements were analyzed with an analysis of variance for estimation of the variance components for subjects, visits, volumes and semi-automatic measurements of PIMD-Average [0;2π]. It was found that the variance for subjects was on the order of five times the variance for visits, and the variance for visits was on the order of five times the variance for volumes. The variance for semi-automatic measurements of PIMD-Average [0;2π] was 3 orders of magnitude lower than the variance for volumes. A 95% confidence interval for mean PIMD-Average [0;2π] was estimated as 1.00 ± 0.13 mm (d.f. = 12). The variance estimates indicate that PIMD-Average [0;2π] is not suitable for comparison between a one-time estimate in a subject and a population reference interval. Cross-sectional independent group comparisons of PIMD-Average [0;2π] averaged over subjects will require inconveniently large sample sizes. However, cross-sectional independent group comparison of averages of the within-subject difference between baseline and follow-up can be made with reasonable sample sizes. Assuming a loss rate of 0.1 PIMD-Average [0;2π] per year and 4 visits per year, it was found that approximately 18 months of follow-up is required before a significant change of PIMD-Average [0;2π] can be observed with a power of 0.8. This is shorter than what has been observed both for HRT measurements and automated perimetry measurements with a similar observation rate. It is concluded that PIMD-Average [0;2π] has the potential to detect deterioration of glaucoma more quickly than currently available primary diagnostic instruments. To increase the efficiency of PIMD-Average [0;2π] further, the variation among visits within subject has to be reduced.
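
    As a hedged illustration of how such nested variance components translate into a confidence interval for the mean, the sketch below uses made-up component values chosen only to respect the reported ratios (subjects about five times visits, visits about five times volumes); it is not the study's actual ANOVA, and the numbers are assumptions.

    ```python
    import numpy as np
    from scipy import stats

    # Illustrative nested variance components for PIMD-Average [0;2pi], in mm^2,
    # roughly following the reported ratios (subjects ~5x visits, visits ~5x volumes).
    var_subjects = 0.050
    var_visits = 0.010
    var_volumes = 0.002

    n_subjects, n_visits, n_volumes = 13, 2, 3

    # Variance of a subject mean built from n_visits visits with n_volumes volumes per visit.
    var_subject_mean = var_visits / n_visits + var_volumes / (n_visits * n_volumes)

    # Standard error of the grand mean across subjects and a 95% CI (d.f. = n_subjects - 1).
    se_grand_mean = np.sqrt((var_subjects + var_subject_mean) / n_subjects)
    t_crit = stats.t.ppf(0.975, df=n_subjects - 1)
    mean_pimd = 1.00  # mm, from the abstract
    print(f"95% CI: {mean_pimd:.2f} +/- {t_crit * se_grand_mean:.2f} mm")
    ```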

  11. Behavioral-Physiological Effects of Red Phosphorous Smoke Inhalation on Two Wildlife Species. Task 1. Inhalation Equipment Development/Ambient CO evaluation/Aerosol Distribution and Air Quality Study

    DTIC Science & Technology

    1987-12-01

    occurred in only negligible quantities. Carbon monoxide was found to occur in measurable amounts during practically all burns, with average readings of from ... generation. This involved a total of 64 RP/BR burns at target concentrations of 0.4, 1.5, and 3.0 mg/l and 3.0, 4.5, and 6.0 mg/l with air-flow rates of ... 500 and 250 l/min, respectively. Each burn lasted approximately 1 h and 45 min. Spatial uniformity of RP/BR concentration was assessed by sampling

  12. Critical currents of Nb3Sn wires for the US-DPC coil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takayasu, M.; Gung, C.Y.; Steeves, M.M.

    1991-03-01

    This paper evaluates the critical current of titanium-alloyed internal-tin, jelly-roll Nb3Sn wire for use in the US-DPC coil. It was confirmed from 14 randomly selected samples that the critical-current values were uniform and consistent: the non-copper critical-current density was approximately 700 A/mm2 at 10 T and 4.2 K, in agreement with expectations. A 27-strand cable-in-conduit conductor (CICC) using the low-thermal-coefficient-of-expansion superalloy Incoloy 905 yielded a critical current 5--7% below the average value of the single-strand data.

  13. Potency trends of delta9-THC and other cannabinoids in confiscated marijuana from 1980-1997.

    PubMed

    ElSohly, M A; Ross, S A; Mehmedic, Z; Arafat, R; Yi, B; Banahan, B F

    2000-01-01

    The analysis of 35,312 cannabis preparations confiscated in the USA over a period of 18 years for delta-9-tetrahydrocannabinol (delta9-THC) and other major cannabinoids is reported. Samples were identified as cannabis, hashish, or hash oil. Cannabis samples were further subdivided into marijuana (loose material, kilobricks and buds), sinsemilla, Thai sticks and ditchweed. The data showed that more than 82% of all confiscated samples were in the marijuana category for every year except 1980 (61%) and 1981 (75%). The potency (concentration of delta9-THC) of marijuana samples rose from less than 1.5% in 1980 to approximately 3.3% in 1983 and 1984, then fluctuated around 3% till 1992. Since 1992, the potency of confiscated marijuana samples has continuously risen, going from 3.1% in 1992 to 4.2% in 1997. The average concentration of delta9-THC in all cannabis samples showed a gradual rise from 3% in 1991 to 4.47% in 1997. Hashish and hash oil, on the other hand, showed no specific potency trends. Other major cannabinoids [cannabidiol (CBD), cannabinol (CBN), and cannabichromene (CBC)] showed no significant change in their concentration over the years.

  14. Programmable noise bandwidth reduction by means of digital averaging

    NASA Technical Reports Server (NTRS)

    Poklemba, John J. (Inventor)

    1993-01-01

    Predetection noise bandwidth reduction is effected by a pre-averager capable of digitally averaging the samples of an input data signal over two or more symbols, the averaging interval being defined by the input sampling rate divided by the output sampling rate. As the averaged sample is clocked to a suitable detector at a much slower rate than the input signal sampling rate, the noise bandwidth at the input to the detector is reduced, the input to the detector having an improved signal-to-noise ratio as a result of the averaging process, and the rate at which such subsequent processing must operate is correspondingly reduced. The pre-averager forms a data filter having an output sampling rate of one sample per symbol of received data. More specifically, selected ones of a plurality of samples accumulated over two or more symbol intervals are output in response to clock signals at a rate of one sample per symbol interval. The pre-averager includes circuitry for weighting digitized signal samples using stored finite impulse response (FIR) filter coefficients. A method according to the present invention is also disclosed.
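
    A minimal sketch of the pre-averaging idea described above: accumulate input samples over each symbol interval and clock out one averaged sample per symbol, so the detector runs at the symbol rate with a reduced noise bandwidth. This boxcar version ignores the patent's stored FIR weighting; the oversampling factor, signal model, and noise level are illustrative assumptions.

    ```python
    import numpy as np

    def pre_average(samples, samples_per_symbol):
        """Average blocks of input samples so the output rate is one sample per symbol.
        This is an equal-weight (boxcar) stand-in for the weighted FIR averaging described above."""
        n_symbols = len(samples) // samples_per_symbol
        blocks = samples[: n_symbols * samples_per_symbol].reshape(n_symbols, samples_per_symbol)
        return blocks.mean(axis=1)

    # Example: BPSK-like symbols oversampled 16x with additive Gaussian noise.
    rng = np.random.default_rng(0)
    symbols = rng.choice([-1.0, 1.0], size=1000)
    oversampled = np.repeat(symbols, 16) + rng.normal(scale=1.0, size=16_000)

    detector_input = pre_average(oversampled, 16)
    # Averaging N = 16 samples improves the predetection SNR by roughly 10*log10(16) ~ 12 dB.
    print("symbol error rate:", np.mean(np.sign(detector_input) != symbols))
    ```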

  15. Samples in applied psychology: over a decade of research in review.

    PubMed

    Shen, Winny; Kiger, Thomas B; Davies, Stacy E; Rasch, Rena L; Simon, Kara M; Ones, Deniz S

    2011-09-01

    This study examines sample characteristics of articles published in Journal of Applied Psychology (JAP) from 1995 to 2008. At the individual level, the overall median sample size over the period examined was approximately 173, which is generally adequate for detecting the average magnitude of effects of primary interest to researchers who publish in JAP. Samples using higher units of analyses (e.g., teams, departments/work units, and organizations) had lower median sample sizes (Mdn ≈ 65), yet were arguably robust given typical multilevel design choices of JAP authors despite the practical constraints of collecting data at higher units of analysis. A substantial proportion of studies used student samples (~40%); surprisingly, median sample sizes for student samples were smaller than working adult samples. Samples were more commonly occupationally homogeneous (~70%) than occupationally heterogeneous. U.S. and English-speaking participants made up the vast majority of samples, whereas Middle Eastern, African, and Latin American samples were largely unrepresented. On the basis of study results, recommendations are provided for authors, editors, and readers, which converge on 3 themes: (a) appropriateness and match between sample characteristics and research questions, (b) careful consideration of statistical power, and (c) the increased popularity of quantitative synthesis. Implications are discussed in terms of theory building, generalizability of research findings, and statistical power to detect effects. PsycINFO Database Record (c) 2011 APA, all rights reserved

  16. keV-Scale sterile neutrino sensitivity estimation with time-of-flight spectroscopy in KATRIN using self-consistent approximate Monte Carlo

    NASA Astrophysics Data System (ADS)

    Steinbrink, Nicholas M. N.; Behrens, Jan D.; Mertens, Susanne; Ranitzsch, Philipp C.-O.; Weinheimer, Christian

    2018-03-01

    We investigate the sensitivity of the Karlsruhe Tritium Neutrino Experiment (KATRIN) to keV-scale sterile neutrinos, which are promising dark matter candidates. Since the active-sterile mixing would lead to a second component in the tritium β-spectrum with a weak relative intensity of order sin^2θ ≲ 10^-6, additional experimental strategies are required to extract this small signature and to eliminate systematics. A possible strategy is to run the experiment in an alternative time-of-flight (TOF) mode, yielding differential TOF spectra in contrast to the integrating standard mode. In order to estimate the sensitivity from a reduced sample size, a new analysis method, called self-consistent approximate Monte Carlo (SCAMC), has been developed. The simulations show that an ideal TOF mode would be able to achieve a statistical sensitivity of sin^2θ ≈ 5 × 10^-9 at 1σ, improving on the standard mode by approximately a factor of two. This relative benefit grows significantly if additional exemplary systematics are considered. A possible implementation of the TOF mode with existing hardware, called gated filtering, is investigated, which, however, comes at the price of a reduced average signal rate.

  17. Studies of porous anodic alumina using spin echo scattering angle measurement

    NASA Astrophysics Data System (ADS)

    Stonaha, Paul

    The properties of the neutron make it a useful tool for scattering experiments. We have developed a method, dubbed SESAME, in which specially designed magnetic fields encode the scattering signal of a neutron beam into the beam's average Larmor phase. A geometry is presented that delivers the correct Larmor phase (to first order), and it is shown that reasonable variations of the geometry do not significantly affect the net Larmor phase. The solenoids are designed using an analytic approximation. Comparison of this approximate function with finite element calculations and Hall probe measurements confirms its validity, allowing for fast computation of the magnetic fields. The coils were built and tested in-house on the NBL-4 instrument, a polarized neutron reflectometer whose construction is another major portion of this work. Neutron scattering experiments using the solenoids are presented, and the scattering signal from porous anodic alumina is investigated in detail. A model using the Born Approximation is developed and compared against the scattering measurements. Using the model, we define the necessary degree of alignment of such samples in a SESAME measurement, and we show how the signal retrieved using SESAME is sensitive to the range of detectable momentum transfer.

  18. Enrichment of Thorium (Th) and Lead (Pb) in the early Galaxy

    NASA Astrophysics Data System (ADS)

    Aoki, Wako; Honda, Satoshi

    2010-03-01

    We have been determining abundances of Th, Pb and other neutron-capture elements in metal-deficient cool giant stars to constrain the enrichment of heavy elements by the r- and s-processes. Our current sample covers the metallicity range between [Fe/H] = -2.5 and -1.0. (1) The abundance ratios of Pb/Fe and Pb/Eu of most of our stars are approximately constant, and no increase of these ratios with increasing metallicity is found. This result suggests that the Pb abundances of our sample are determined by the r-process with no or little contribution of the s-process. (2) The Th/Eu abundance ratios of our sample show no significant scatter, and the average is lower by 0.2 dex in the logarithmic scale than the solar-system value. This result indicates that the actinides production by the r-process does not show large dispersion, even though r-process models suggest high sensitivity of the actinides production to the nucleosynthesis environment.

  19. The Fourth SeaWiFS HPLC Analysis Round-Robin Experiment (SeaHARRE-4)

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B.; Thomas, Crystal S.; van Heukelem, Laurie; Schlueter, Louise; Russ, Mary E.; Ras, Josephine; Claustre, Herve; Clementson, Lesley; Canuti, Elisabetta; Berthon, Jean-Francois

    2010-01-01

    Ten international laboratories specializing in the determination of marine pigment concentrations using high performance liquid chromatography (HPLC) were intercompared using in situ samples and a mixed pigment sample. Although prior Sea-viewing Wide Field-of-view Sensor (SeaWiFS) High Performance Liquid Chromatography (HPLC) Round-Robin Experiment (SeaHARRE) activities conducted in open-ocean waters covered a wide dynamic range in productivity, and some of the samples were collected in the coastal zone, none of the activities involved exclusively coastal samples. Consequently, SeaHARRE-4 was organized and executed as a strictly coastal activity and the field samples were collected from primarily eutrophic waters within the coastal zone of Denmark. The more restrictive perspective limited the dynamic range in chlorophyll concentration to approximately one and a half orders of magnitude (previous activities covered more than two orders of magnitude). The method intercomparisons were used for the following objectives: a) estimate the uncertainties in quantitating individual pigments and higher-order variables formed from sums and ratios; b) confirm if the chlorophyll a accuracy requirements for ocean color validation activities (approximately 25%, although 15% would allow for algorithm refinement) can be met in coastal waters; c) establish the reduction in uncertainties as a result of applying QA procedures; d) show the importance of establishing a properly defined referencing system in the computation of uncertainties; e) quantify the analytical benefits of performance metrics, and f) demonstrate the utility of a laboratory mix in understanding method performance. In addition, the remote sensing requirements for the in situ determination of total chlorophyll a were investigated to determine whether or not the average uncertainty for this measurement is being satisfied.

  20. Correspondence of verbal descriptor and numeric rating scales for pain intensity: an item response theory calibration.

    PubMed

    Edelen, Maria Orlando; Saliba, Debra

    2010-07-01

    Assessing pain intensity in older adults is critical and challenging. There is debate about the most effective way to ask older adults to describe their pain severity, and clinicians vary in their preferred approaches, making comparison of pain intensity scores across settings difficult. A total of 3,676 residents from 71 community nursing homes across eight states were asked about pain presence. The 1,960 residents who reported pain within the past 5 days (53% of total, 70% female; age: M = 77.9, SD = 12.4) were included in analyses. Those who reported pain were also asked to provide a rating of pain intensity using either a verbal descriptor scale (VDS; mild, moderate, severe, and very severe and horrible), a numeric rating scale (NRS; 0 = no pain to 10 = worst pain imaginable), or both. We used item response theory (IRT) methods to identify the correspondence between the VDS and the NRS response options by estimating item parameters for these and five additional pain items. The sample reported moderate amounts of pain on average. Examination of the IRT location parameters for the pain intensity items indicated the following approximate correspondence: VDS mild approximately NRS 1-4, VDS moderate approximately NRS 5-7, VDS severe approximately NRS 8-9, and VDS very severe, horrible approximately NRS 10. This IRT calibration provides a crosswalk between the two response scales so that either can be used in practice depending on the preference of the clinician and respondent.
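
    For readers who want to harmonize scores collected on the two scales, the reported correspondence can be written as a simple lookup. The sketch below encodes exactly the mapping stated in the abstract and nothing more; the IRT parameter estimation itself is not reproduced.

    ```python
    # Crosswalk between numeric rating scale (NRS, 0-10) scores and verbal descriptor
    # scale (VDS) categories, following the approximate correspondence reported above.
    def nrs_to_vds(nrs: int) -> str:
        if nrs == 0:
            return "no pain"
        if 1 <= nrs <= 4:
            return "mild"
        if 5 <= nrs <= 7:
            return "moderate"
        if 8 <= nrs <= 9:
            return "severe"
        if nrs == 10:
            return "very severe, horrible"
        raise ValueError("NRS score must be an integer from 0 to 10")

    print([nrs_to_vds(n) for n in range(11)])
    ```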

  1. 77 FR 22615 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-16

    .... The Commission estimates that approximately 209 broker-dealers will spend an average of 87 hours annually to comply with this rule. Thus, the total compliance burden is approximately 18,200 burden-hours...

  2. 77 FR 29394 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-17

    .... The Commission estimates that approximately 209 broker-dealers will spend an average of 87 hours annually to comply with the rule. Thus, the total compliance burden is approximately 18,183 burden-hours...

  3. Drying of Floodplain Forests Associated with Water-Level Decline in the Apalachicola River, Florida - Interim Results, 2006

    USGS Publications Warehouse

    Darst, Melanie R.; Light, Helen M.

    2007-01-01

    Floodplain forests of the Apalachicola River, Florida, are drier in composition today (2006) than they were before 1954, and drying is expected to continue for at least the next 50 years. Drier forest composition is probably caused by water-level declines that occurred as a result of physical changes in the main channel after 1954 and decreased flows in spring and summer months since the 1970s. Forest plots sampled from 2004 to 2006 were compared to forests sampled in the late 1970s (1976-79) using a Floodplain Index (FI) based on species dominance weighted by the Floodplain Species Category, a value that represents the tolerance of tree species to inundation and saturation in the floodplain and consequently, the typical historic floodplain habitat for that species. Two types of analyses were used to determine forest changes over time: replicate plot analysis comparing present (2004-06) canopy composition to late 1970s canopy composition at the same locations, and analyses comparing the composition of size classes of trees on plots in late 1970s and in present forests. An example of a size class analysis would be a comparison of the composition of the entire canopy (all trees greater than 7.5 cm (centimeter) diameter at breast height (dbh)) to the composition of the large canopy tree size class (greater than or equal to 25 cm dbh) at one location. The entire canopy, which has a mixture of both young and old trees, is probably indicative of more recent hydrologic conditions than the large canopy, which is assumed to have fewer young trees. Change in forest composition from the pre-1954 period to approximately 2050 was estimated by combining results from three analyses. The composition of pre-1954 forests was represented by the large canopy size class sampled in the late 1970s. The average FI for canopy trees was 3.0 percent drier than the average FI for the large canopy tree size class, indicating that the late 1970s forests were 3.0 percent drier than pre-1954 forests. The change from the late 1970s to the present was based on replicate plot analysis. The composition of 71 replicate plots sampled from 2004 to 2006 averaged 4.4 percent drier than forests sampled in the late 1970s. The potential composition of future forests (2050 or later) was estimated from the composition of the present subcanopy tree size class (less than 7.5 cm and greater than or equal to 2.5 cm dbh), which contains the greatest percentage of young trees and is indicative of recent hydrologic conditions. Subcanopy trees are the driest size class in present forests, with FIs averaging 31.0 percent drier than FIs for all canopy trees. Based on results from all three sets of data, present floodplain forests average 7.4 percent drier in composition than pre-1954 forests and have the potential to become at least 31.0 percent drier in the future. An overall total change in floodplain forests to an average composition 38.4 percent drier than pre-1954 forests is expected within approximately 50 years. The greatest effects of water-level decline have occurred in tupelo-cypress swamps where forest composition has become at least 8.8 percent drier in 2004-06 than in pre-1954 years. This change indicates that a net loss of swamps has already occurred in the Apalachicola River floodplain, and further losses are expected to continue over the next 50 years. Drying of floodplain forests will result in some low bottomland hardwood forests changing in composition to high bottomland hardwood forests. 
    The composition of high bottomland hardwoods will also change, although periodic flooding is still occurring and will continue to limit most of the floodplain to bottomland hardwood species that are adapted to at least short periods of inundation and saturation.

  4. Chip-LC-MS for label-free profiling of human serum.

    PubMed

    Horvatovich, Peter; Govorukhina, Natalia I; Reijmers, Theo H; van der Zee, Ate G J; Suits, Frank; Bischoff, Rainer

    2007-12-01

    The discovery of biomarkers in easily accessible body fluids such as serum is one of the most challenging topics in proteomics, requiring highly efficient separation and detection methodologies. Here, we present the application of a microfluidics-based LC-MS system (chip-LC-MS) to the label-free profiling of immunodepleted, trypsin-digested serum in comparison to conventional capillary LC-MS (cap-LC-MS). Both systems proved to have a repeatability of approximately 20% RSD for peak area, all sample preparation steps included, while repeatability of the LC-MS part by itself was less than 10% RSD for the chip-LC-MS system. Importantly, the chip-LC-MS system had twice the resolution in the LC dimension and resulted in a lower average charge state of the tryptic peptide ions generated in the ESI interface when compared to cap-LC-MS, while requiring approximately 30 times less sample (~5 pmol). In order to characterize both systems for their capability to find discriminating peptides in trypsin-digested serum samples, five out of ten individually prepared, identical sera were spiked with horse heart cytochrome c. A comprehensive data processing methodology was applied, including 2-D smoothing, resolution reduction, peak picking, time alignment, and matching of the individual peak lists to create an aligned peak matrix amenable for statistical analysis. Statistical analysis by supervised classification and variable selection showed that both LC-MS systems could discriminate the two sample groups. However, the chip-LC-MS system allowed 55% of the overall signal to be assigned to selected peaks, against 32% for the cap-LC-MS system.
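
    The data-processing pipeline described above ends with matching individual peak lists into an aligned peak matrix. The sketch below shows a deliberately simplified, hypothetical version of just that matching step (greedy nearest-neighbour matching within m/z and retention-time tolerances); the tolerances and peak values are placeholders, and the study's actual alignment algorithm is not reproduced here.

    ```python
    import numpy as np

    def match_peaks(reference, sample, mz_tol=0.02, rt_tol=0.5):
        """Greedily match (m/z, RT, intensity) peaks in `sample` to `reference` peaks within
        the given tolerances; returns one intensity per reference peak (0 if unmatched).
        A much-simplified stand-in for the peak-list matching step described above."""
        intensities = np.zeros(len(reference))
        used = set()
        for i, (mz_r, rt_r, _) in enumerate(reference):
            best, best_dist = None, None
            for j, (mz_s, rt_s, inten) in enumerate(sample):
                if j in used or abs(mz_s - mz_r) > mz_tol or abs(rt_s - rt_r) > rt_tol:
                    continue
                dist = abs(mz_s - mz_r) / mz_tol + abs(rt_s - rt_r) / rt_tol
                if best_dist is None or dist < best_dist:
                    best, best_dist = j, dist
            if best is not None:
                used.add(best)
                intensities[i] = sample[best][2]
        return intensities

    # Placeholder peak lists: (m/z, retention time in min, intensity).
    reference = [(500.25, 20.1, 1.0e5), (623.40, 35.6, 4.0e4)]
    sample = [(500.26, 20.3, 9.5e4), (623.43, 35.2, 3.8e4), (710.10, 40.0, 1.2e4)]
    print(match_peaks(reference, sample))
    ```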

  5. Role of intestinal microbiota in transformation of bismuth and other metals and metalloids into volatile methyl and hydride derivatives in humans and mice.

    PubMed

    Michalke, Klaus; Schmidt, Annette; Huber, Britta; Meyer, Jörg; Sulkowski, Margareta; Hirner, Alfred V; Boertz, Jens; Mosel, Frank; Dammann, Philip; Hilken, Gero; Hedrich, Hans J; Dorsch, Martina; Rettenmeier, Albert W; Hensel, Reinhard

    2008-05-01

    The present study shows that feces samples of 14 human volunteers and isolated gut segments of mice (small intestine, cecum, and large intestine) are able to transform metals and metalloids into volatile derivatives ex situ during anaerobic incubation at 37 degrees C and neutral pH. Human feces and the gut of mice exhibit highly productive mechanisms for the formation of the toxic volatile derivative trimethylbismuth [(CH3)3Bi] at rather low concentrations of bismuth (0.2 to 1 μmol kg(-1) [dry weight]). An increase of bismuth up to 2 to 14 mmol kg(-1) (dry weight) upon a single (human volunteers) or continuous (mouse study) administration of colloidal bismuth subcitrate resulted in an average increase of the derivatization rate from approximately 4 pmol h(-1) kg(-1) (dry weight) to 2,100 pmol h(-1) kg(-1) (dry weight) in human feces samples and from approximately 5 pmol h(-1) kg(-1) (dry weight) to 120 pmol h(-1) kg(-1) (dry weight) in mouse gut samples, respectively. The upshift of the bismuth content also led to an increase of derivatives of other elements (such as arsenic, antimony, and lead in human feces or tellurium and lead in the murine large intestine). The assumption that the gut microbiota plays a dominant role for these transformation processes, as indicated by the production of volatile derivatives of various elements in feces samples, is supported by the observation that the gut segments of germfree mice are unable to transform administered bismuth to (CH3)3Bi.

  6. Force-momentum-based self-guided Langevin dynamics: A rapid sampling method that approaches the canonical ensemble

    NASA Astrophysics Data System (ADS)

    Wu, Xiongwu; Brooks, Bernard R.

    2011-11-01

    The self-guided Langevin dynamics (SGLD) is a method to accelerate conformational searching. This method is unique in that it selectively enhances and suppresses molecular motions based on their frequency to accelerate conformational searching without modifying energy surfaces or raising temperatures. It has been applied to studies of many long-time-scale events, such as protein folding. Recent progress in the understanding of the conformational distribution in SGLD simulations makes SGLD also an accurate method for quantitative studies. The SGLD partition function provides a way to convert the SGLD conformational distribution to the canonical ensemble distribution and to calculate ensemble average properties through reweighting. Based on the SGLD partition function, this work presents a force-momentum-based self-guided Langevin dynamics (SGLDfp) simulation method to directly sample the canonical ensemble. This method includes interaction forces in its guiding force to compensate for the perturbation caused by the momentum-based guiding force so that it can approximately sample the canonical ensemble. Using several example systems, we demonstrate that SGLDfp simulations can approximately maintain the canonical ensemble distribution and significantly accelerate conformational searching. With optimal parameters, SGLDfp and SGLD simulations can cross energy barriers of more than 15 kT and 20 kT, respectively, at rates similar to those at which LD simulations cross energy barriers of 10 kT. The SGLDfp method is size extensive and works well for large systems. For studies where preserving accessible conformational space is critical, such as free energy calculations and protein folding studies, SGLDfp is an efficient approach to search and sample the conformational space.
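
    As a rough illustration of the guiding idea only (not the SGLDfp force-momentum scheme or its reweighting), the toy sketch below runs one-dimensional Langevin dynamics with an extra force proportional to a low-pass-filtered momentum average; the potential, parameters, and unit mass are all invented for illustration.

    ```python
    import numpy as np

    def guided_langevin_1d(force, n_steps=50_000, dt=0.01, gamma=1.0, kT=1.0,
                           guide_strength=0.2, avg_time=10.0, seed=0):
        """Toy 1-D Langevin integrator (unit mass) with a guiding force proportional to a
        running (low-pass filtered) average of the momentum. This only illustrates the
        guiding idea; it is not the SGLDfp algorithm described in the abstract."""
        rng = np.random.default_rng(seed)
        x, p, p_avg = 0.0, 0.0, 0.0
        decay = dt / avg_time
        traj = np.empty(n_steps)
        for i in range(n_steps):
            p_avg = (1.0 - decay) * p_avg + decay * p      # slow, low-frequency motion estimate
            guide = guide_strength * gamma * p_avg         # guiding force boosts slow motions
            noise = np.sqrt(2.0 * gamma * kT * dt) * rng.normal()
            p += (force(x) - gamma * p + guide) * dt + noise
            x += p * dt
            traj[i] = x
        return traj

    # Double-well potential U(x) = (x^2 - 1)^2, so force = -dU/dx = -4x(x^2 - 1).
    traj = guided_langevin_1d(lambda x: -4.0 * x * (x * x - 1.0))
    print("fraction of time in right well:", np.mean(traj > 0))
    ```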

  7. Emission of 2-methyl-3-buten-2-ol by pines: A potentially large natural source of reactive carbon to the atmosphere

    NASA Astrophysics Data System (ADS)

    Harley, Peter; Fridd-Stroud, Verity; Greenberg, James; Guenther, Alex; Vasconcellos, Pérola

    1998-10-01

    High rates of emission of 2-methyl-3-buten-2-ol (MBO) were measured from needles of several pine species. Emissions of MBO in the light were 1 to 2 orders of magnitude higher than emissions of monoterpenes and, in contrast to monoterpene emissions from pines, were absent in the dark. MBO emissions were strongly dependent on incident light, behaving similarly to net photosynthesis. Emission rates of MBO increased exponentially with temperature up to approximately 35°C. Above approximately 42°C, emission rates declined rapidly. Emissions could be modeled using existing algorithms for isoprene emission. We propose that emissions of MBO from lodgepole and ponderosa pine are the primary source of high concentrations of this compound, averaging 1-3 ppbv, found in ambient air samples collected in Colorado at an isolated mountain site approximately 3050 m above sea level. Subsequent field studies in a ponderosa pine plantation in California confirmed high MBO emissions, which averaged 25 μg C g-1 h-1 for 1-year-old needles, corrected to 30°C and photon flux of 1000 μmol m-2 s-1. A total of 34 pine species growing at Eddy Arboretum in Placerville, California, were investigated, of which 11 exhibited high emissions of MBO (>5 μg C g-1 h-1), and 6 emitted small but detectable amounts. All the emitting species are of North American origin, and most are restricted to western North America. These results indicate that MBO emissions from pines may constitute a significant source of reactive carbon, and a significant source of acetone, to the atmosphere, particularly in the western United States.
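
    The abstract states that MBO emissions could be modeled with existing isoprene emission algorithms. As a hedged illustration, the sketch below implements Guenther-type light and temperature activity factors with the commonly published isoprene coefficient values; those coefficients, the basal-emission normalization, and the example conditions are assumptions here, not parameters fitted in this study.

    ```python
    import numpy as np

    # Guenther-type light (C_L) and temperature (C_T) activity factors of the kind used for
    # isoprene, applied as E = E_s * C_L * C_T with E_s the basal emission at 30 C and
    # 1000 umol m-2 s-1 photon flux.
    R = 8.314  # J mol-1 K-1

    def light_factor(ppfd, alpha=0.0027, c_l1=1.066):
        # Saturating light response; ppfd in umol m-2 s-1.
        return alpha * c_l1 * ppfd / np.sqrt(1.0 + alpha ** 2 * ppfd ** 2)

    def temperature_factor(t_k, t_s=303.0, c_t1=95_000.0, c_t2=230_000.0, t_m=314.0):
        # Exponential increase with temperature followed by a decline above ~41 C.
        num = np.exp(c_t1 * (t_k - t_s) / (R * t_s * t_k))
        den = 1.0 + np.exp(c_t2 * (t_k - t_m) / (R * t_s * t_k))
        return num / den

    basal_emission = 25.0  # ug C g-1 h-1, 1-year-old ponderosa needles (value from the abstract)
    for temp_c, ppfd in [(20.0, 500.0), (30.0, 1000.0), (35.0, 1500.0)]:
        e = basal_emission * light_factor(ppfd) * temperature_factor(temp_c + 273.15)
        print(f"T = {temp_c} C, PPFD = {ppfd}: E ~ {e:.1f} ug C g-1 h-1")
    ```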

  8. Investigating organic matter in Fanno Creek, Oregon, Part 2 of 3: sources, sinks, and transport of organic matter with fine sediment

    USGS Publications Warehouse

    Keith, Mackenzie K.; Sobieszczyk, Steven; Goldman, Jami H.; Rounds, Stewart A.

    2014-01-01

    Organic matter (OM) is abundant in Fanno Creek, Oregon, USA, and has been tied to a variety of water-quality concerns, including periods of low dissolved oxygen downstream in the Tualatin River, Oregon. The key sources of OM in Fanno Creek and other Tualatin River tributaries have not been fully identified, although isotopic analyses from previous studies indicated a predominantly terrestrial source. This study investigates the role of fine sediment erosion and deposition (mechanisms and spatial patterns) in relation to OM transport. Geomorphic mapping within the Fanno Creek floodplain shows that a large portion (approximately 70%) of the banks are eroding or subject to erosion, likely as a result of the imbalance caused by anthropogenic alteration. Field measurements of long- and short-term bank erosion average 4.2 cm/year and average measurements of deposition for the watershed are 4.8 cm/year. The balance between average annual erosion and deposition indicates an export of 3,250 metric tons (tonnes, t) of fine sediment to the Tualatin River—about twice the average annual export of 1,880 t of sediment at a location 2.4 km from the creek’s mouth calculated from suspended sediment load regressions from continuous turbidity data and suspended sediment samples. Carbon content from field samples of bank material, combined with fine sediment export rates, indicates that about 29–67 t of carbon, or about 49–116 t of OM, from bank sediment may be exported to the Tualatin River from Fanno Creek annually, an estimate that is a lower bound because it does not account for the mass wasting of organic-rich O and A soil horizons that enter the stream.

  9. Simulated-use validation of a sponge ATP method for determining the adequacy of manual cleaning of endoscope channels.

    PubMed

    Alfa, Michelle J; Olson, Nancy

    2016-05-04

    The objective of this study was to validate the relative light unit (RLU) cut-off for adequate cleaning of flexible colonoscopes for an ATP (adenosine tri-phosphate) test kit that used a sponge channel collection method. This was a simulated-use study. The instrument channel segment of a flexible colonoscope was soiled with ATS (artificial test soil) containing approximately 8 Log10 Enterococcus faecalis and Pseudomonas aeruginosa/mL. Full cleaning, partial cleaning and no cleaning were evaluated for ATP, protein and bacterial residuals. Channel samples were collected using a sponge device to assess residual RLUs. Parallel colonoscopes inoculated and cleaned in the same manner were sampled using the flush method to quantitatively assess protein and bacterial residuals. The protein and viable count benchmarks for adequate cleaning were <6.4 μg/cm(2) and <4 Log10 cfu/cm(2). Over the course of the study, the negative controls for the instrument channel remained low, with on average 14 RLUs, 0.04 μg/cm(2) protein and 0.025 Log10 cfu/cm(2). Partial cleaning resulted in an average of 6601 RLUs, 3.99 μg/cm(2), 5.25 Log10 cfu/cm(2) E. faecalis and 4.48 Log10 cfu/cm(2) P. aeruginosa. After full cleaning, the average RLU was 29 (range 7-71 RLUs) and the average protein, E. faecalis and P. aeruginosa residuals were 0.23 μg/cm(2), 0.79 and 1.61 Log10 cfu/cm(2), respectively. The validated cut-off for acceptable manual cleaning was set at ≤100 RLUs for the sponge-collected channel ATP test kit.

  10. Investigating organic matter in Fanno Creek, Oregon, Part 2 of 3: Sources, sinks, and transport of organic matter with fine sediment

    NASA Astrophysics Data System (ADS)

    Keith, Mackenzie K.; Sobieszczyk, Steven; Goldman, Jami H.; Rounds, Stewart A.

    2014-11-01

    Organic matter (OM) is abundant in Fanno Creek, Oregon, USA, and has been tied to a variety of water-quality concerns, including periods of low dissolved oxygen downstream in the Tualatin River, Oregon. The key sources of OM in Fanno Creek and other Tualatin River tributaries have not been fully identified, although isotopic analyses from previous studies indicated a predominantly terrestrial source. This study investigates the role of fine sediment erosion and deposition (mechanisms and spatial patterns) in relation to OM transport. Geomorphic mapping within the Fanno Creek floodplain shows that a large portion (approximately 70%) of the banks are eroding or subject to erosion, likely as a result of the imbalance caused by anthropogenic alteration. Field measurements of long- and short-term bank erosion average 4.2 cm/year and average measurements of deposition for the watershed are 4.8 cm/year. The balance between average annual erosion and deposition indicates an export of 3,250 metric tons (tonnes, t) of fine sediment to the Tualatin River-about twice the average annual export of 1,880 t of sediment at a location 2.4 km from the creek's mouth calculated from suspended sediment load regressions from continuous turbidity data and suspended sediment samples. Carbon content from field samples of bank material, combined with fine sediment export rates, indicates that about 29-67 t of carbon, or about 49-116 t of OM, from bank sediment may be exported to the Tualatin River from Fanno Creek annually, an estimate that is a lower bound because it does not account for the mass wasting of organic-rich O and A soil horizons that enter the stream.

  11. Factors associated with contraceptive ideation among urban men in Nigeria.

    PubMed

    Babalola, Stella; Kusemiju, Bola; Calhoun, Lisa; Corroon, Meghan; Ajao, Bolanle

    2015-08-01

    To determine factors influencing the readiness of urban Nigerian men to adopt contraceptive methods. The data were derived from a cross-sectional household survey conducted in Ibadan and Kaduna between September and November 2012. The sample included 2358 men from both cities. An ideation framework was constructed and a multilevel analysis performed to identify factors associated with positive thinking about contraception. Correlates of ideation operated at the individual, household, and community levels. There is considerable cluster-level variability in ideation score. The key correlates included exposure to family planning promotion campaigns, education, age, religion, marital status, and community norms. Compared with no education, high education is associated with an approximately 6.7-point increase in ideation score (P<0.001). Men with a high level of NURHI program exposure had an average ideation score that was about 3.4 points higher than for their peers with no exposure (P<0.001). The ideation score for Muslims was lower by approximately 1.7 points, on average, than for Christians (P<0.001). A comprehensive strategy of communication and behavior change activities surrounding contraceptive use should be tailored to meet the needs of specific groups of men. Community-level interventions designed to mobilize community members and change social norms that hinder the spread of ideational characteristics that favor contraceptive use should be part of this comprehensive strategy. Copyright © 2015. Published by Elsevier Ireland Ltd.

  12. Oxyanion flux characterization using passive flux meters: development and field testing of surfactant-modified granular activated carbon.

    PubMed

    Lee, Jimi; Rao, P S C; Poyer, Irene C; Toole, Robyn M; Annable, M D; Hatfield, K

    2007-07-17

    We report here on the extension of Passive Flux Meter (PFM) applications for measuring fluxes of oxyanions in groundwater, and present results for laboratory and field studies. Granular activated carbon, with and without impregnated silver (GAC and SI-GAC, respectively), was modified with a cationic surfactant, hexadecyltrimethylammonium (HDTMA), to enhance the anion exchange capacity (AEC). Langmuir isotherm sorption maxima for oxyanions measured in batch experiments were in the following order: perchlorate>chromate>selenate, consistent with their selectivity. Linear sorption isotherms for several alcohols suggest that surfactant modification of GAC and SI-GAC reduced (approximately 30-45%) sorption of alcohols by GAC. Water and oxyanion fluxes (perchlorate and chromate) measured by deploying PFMs packed with surfactant-modified GAC (SM-GAC) or surfactant-modified, silver-impregnated GAC (SM-SI-GAC) in laboratory flow chambers were in close agreement with the imposed fluxes. The use of SM-SI-GAC as a PFM sorbent was evaluated at a field site with perchlorate contamination of a shallow unconfined aquifer. PFMs packed with SM-SI-GAC were deployed in three existing monitoring wells with a perchlorate concentration range of approximately 2.5 to 190 mg/L. PFM-measured, depth-averaged, groundwater fluxes ranged from 1.8 to 7.6 cm/day, while depth-averaged perchlorate fluxes varied from 0.22 to 1.7 g/m2/day. Groundwater and perchlorate flux distributions measured in two PFM deployments closely matched each other. Depth-averaged Darcy fluxes measured with PFMs were in line with an estimate from a borehole dilution test, but much smaller than those based on hydraulic conductivity and head gradients; this is likely due to flow divergence caused by well-screen clogging. Flux-averaged perchlorate concentrations measured with PFM deployments matched concentrations in groundwater samples taken from one well, but not in two other wells, pointing to the need for additional field testing. Use of the surfactant-modified GACs for measuring fluxes of other anions of environmental interest is discussed.

  13. Weak limit of the three-state quantum walk on the line

    NASA Astrophysics Data System (ADS)

    Falkner, Stefan; Boettcher, Stefan

    2014-07-01

    We revisit the one-dimensional discrete time quantum walk with three states and the Grover coin, the simplest model that exhibits localization in a quantum walk. We derive analytic expressions for the localization and a long-time approximation for the entire probability density function (PDF). We find the possibility of asymmetric localization, to the extreme that it vanishes completely on one side of the initial site. We also connect the time-averaged approximation of the PDF found by Inui et al. [Phys. Rev. E 72, 056112 (2005), 10.1103/PhysRevE.72.056112] to a spatial average of the walk. We show that this smoothed approximation predicts moments of the real PDF accurately.

  14. Recruitment for Occupational Research: Using Injured Workers as the Point of Entry into Workplaces

    PubMed Central

    Koehoorn, Mieke; Trask, Catherine M.; Teschke, Kay

    2013-01-01

    Objective To investigate the feasibility, costs and sample representativeness of a recruitment method that used workers with back injuries as the point of entry into diverse working environments. Methods Workers' compensation claims were used to randomly sample workers from five heavy industries and to recruit their employers for ergonomic assessments of the injured worker and up to 2 co-workers. Results The final study sample included 54 workers from the workers’ compensation registry and 72 co-workers. This sample of 126 workers was based on an initial random sample of 822 workers with a compensation claim, or a ratio of 1 recruited worker to approximately 7 sampled workers. The average recruitment cost was CND$262/injured worker and CND$240/participating worksite including co-workers. The sample was representative of the heavy industry workforce, and was successful in recruiting the self-employed (8.2%), workers from small employers (<20 workers, 38.7%), and workers from diverse working environments (49 worksites, 29 worksite types, and 51 occupations). Conclusions The recruitment rate was low but the cost per participant reasonable and the sample representative of workers in small worksites. Small worksites represent a significant portion of the workforce but are typically underrepresented in occupational research despite having distinct working conditions, exposures and health risks worthy of investigation. PMID:23826387

  15. Evaluation of kinetic uncertainty in numerical models of petroleum generation

    USGS Publications Warehouse

    Peters, K.E.; Walters, C.C.; Mankiewicz, P.J.

    2006-01-01

    Oil-prone marine petroleum source rocks contain type I or type II kerogen having Rock-Eval pyrolysis hydrogen indices greater than 600 or 300-600 mg hydrocarbon/g total organic carbon (HI, mg HC/g TOC), respectively. Samples from 29 marine source rocks worldwide that contain mainly type II kerogen (HI = 230-786 mg HC/g TOC) were subjected to open-system programmed pyrolysis to determine the activation energy distributions for petroleum generation. Assuming a burial heating rate of 1°C/m.y. for each measured activation energy distribution, the calculated average temperature for 50% fractional conversion of the kerogen in the samples to petroleum is approximately 136 ± 7°C, but the range spans about 30°C (approximately 121-151°C). Fifty-two outcrop samples of thermally immature Jurassic Oxford Clay Formation were collected from five locations in the United Kingdom to determine the variations of kinetic response for one source rock unit. The samples contain mainly type I or type II kerogens (HI = 230-774 mg HC/g TOC). At a heating rate of 1°C/m.y., the calculated temperatures for 50% fractional conversion of the Oxford Clay kerogens to petroleum differ by as much as 23°C (127-150°C). The data indicate that kerogen type, as defined by hydrogen index, is not systematically linked to kinetic response, and that default kinetics for the thermal decomposition of type I or type II kerogen can introduce unacceptable errors into numerical simulations. Furthermore, custom kinetics based on one or a few samples may be inadequate to account for variations in organofacies within a source rock. We propose three methods to evaluate the uncertainty contributed by kerogen kinetics to numerical simulations: (1) use the average kinetic distribution for multiple samples of source rock and the standard deviation for each activation energy in that distribution; (2) use source rock kinetics determined at several locations to describe different parts of the study area; and (3) use a weighted-average method that combines kinetics for samples from different locations in the source rock unit by giving the activation energy distribution for each sample a weight proportional to its Rock-Eval pyrolysis S2 yield (hydrocarbons generated by pyrolytic degradation of organic matter). Copyright © 2006. The American Association of Petroleum Geologists. All rights reserved.
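
    Methods (1) and (3) reduce to simple averaging over per-sample activation-energy distributions. The sketch below illustrates that bookkeeping with made-up inputs: the energy grid, the fractional distributions, and the S2 yields are placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical inputs: each row is one sample's fractional distribution over a
# common activation-energy grid (kcal/mol), plus that sample's Rock-Eval S2 yield.
energies = np.arange(48, 60, 2.0)                      # placeholder energy grid
distributions = np.array([
    [0.05, 0.15, 0.40, 0.25, 0.10, 0.05],
    [0.10, 0.20, 0.35, 0.20, 0.10, 0.05],
    [0.02, 0.10, 0.30, 0.35, 0.15, 0.08],
])                                                     # each row sums to 1
s2_yields = np.array([8.0, 15.0, 30.0])                # placeholder S2 yields (mg HC/g rock)

# Method (3): weight each sample's distribution by its S2 yield.
weights = s2_yields / s2_yields.sum()
weighted_avg = weights @ distributions

# Method (1): plain average plus the per-energy standard deviation.
plain_avg = distributions.mean(axis=0)
plain_std = distributions.std(axis=0, ddof=1)

for E, w, m, s in zip(energies, weighted_avg, plain_avg, plain_std):
    print(f"Ea={E:4.0f} kcal/mol  weighted={w:.3f}  mean={m:.3f}  sd={s:.3f}")
```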

  16. Rapid evaluation of high-performance systems

    NASA Astrophysics Data System (ADS)

    Forbes, G. W.; Ruoff, J.

    2017-11-01

    System assessment for design often involves averages, such as rms wavefront error, that are estimated by ray tracing through a sample of points within the pupil. Novel general-purpose sampling and weighting schemes are presented and it is also shown that optical design can benefit from tailored versions of these schemes. It turns out that the type of Gaussian quadrature that has long been recognized for efficiency in this domain requires about 40-50% more ray tracing to attain comparable accuracy to generic versions of the new schemes. Even greater efficiency gains can be won, however, by tailoring such sampling schemes to the optical context where azimuthal variation in the wavefront is generally weaker than the radial variation. These new schemes are special cases of what is known in the mathematical world as cubature. Our initial results also led to the consideration of simpler sampling configurations that approximate the newfound cubature schemes. We report on the practical application of a selection of such schemes and make observations that aid in the discovery of novel cubature schemes relevant to optical design of systems with circular pupils.
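
    As a concrete (if generic) illustration of quadrature-based pupil averaging, the sketch below estimates the mean and rms of a test wavefront over a circular pupil using Gauss-Legendre nodes in the squared radius and equally spaced azimuthal samples. It is not one of the paper's tailored cubature schemes, and the test wavefront is a placeholder.

```python
import numpy as np

def pupil_mean(wavefront, n_rad=4, n_azi=8):
    """Average wavefront(r, theta) over the unit disk using Gauss-Legendre
    nodes in r**2 and equally spaced azimuthal samples."""
    x, w = np.polynomial.legendre.leggauss(n_rad)   # nodes/weights on [-1, 1]
    u = 0.5 * (x + 1.0)                             # map to u = r**2 on [0, 1]
    wu = 0.5 * w                                    # mapped weights (sum to 1)
    r = np.sqrt(u)
    theta = 2.0 * np.pi * np.arange(n_azi) / n_azi
    R, T = np.meshgrid(r, theta, indexing="ij")     # (n_rad, n_azi) grids
    return np.sum(wu[:, None] * wavefront(R, T)) / n_azi

# Placeholder wavefront: defocus plus a little astigmatism (Zernike-like terms).
wf = lambda r, t: 0.3 * (2.0 * r**2 - 1.0) + 0.1 * r**2 * np.cos(2.0 * t)

mean = pupil_mean(wf)
rms = np.sqrt(pupil_mean(lambda r, t: (wf(r, t) - mean) ** 2))
print(f"pupil mean = {mean:.6f}, rms wavefront error = {rms:.6f}")
```

    Tailored schemes of the kind the paper describes would go further, e.g. by using fewer azimuthal than radial samples when the azimuthal variation is weak.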

  17. The soft x-ray properties of a complete sample of optically selected quasars. 1: First results

    NASA Technical Reports Server (NTRS)

    Laor, Ari; Fiore, Fabrizio; Elvis, Martin; Wilkes, Belinda J.; Mcdowell, Jonathan C.

    1994-01-01

    We present the results of ROSAT position sensitive proportional counter (PSPC) observations of 10 quasars. These objects are part of our ROSAT program to observe a complete sample of optically selected quasars. This sample includes all 23 quasars from the bright quasar survey with a redshift z ≤ 0.400 and a Galactic H I column density N_H I < 1.9 × 10^20 cm^-2. These selection criteria, combined with the high sensitivity and improved energy resolution of the PSPC, allow us to determine the soft (approximately 0.2-2 keV) X-ray spectra of quasars with about an order of magnitude higher precision compared with earlier soft X-ray observations. The following main results are obtained: Strong correlations are suggested between the soft X-ray spectral slope α_x and the following emission line parameters: H beta Full Width at Half Maximum (FWHM), L([O III]), and the Fe II/H beta flux ratio. These correlations imply the following: (1) The quasar's environment is likely to be optically thin down to approximately 0.2 keV. (2) In most objects α_x varies by less than approximately 10% on timescales shorter than a few years. (3) α_x might be a useful absolute luminosity indicator in quasars. (4) The Galactic He I and H I column densities are well correlated. Most spectra are well characterized by a simple power law, with no evidence for either significant absorption excess or emission excess at low energies, to within approximately 30%. We find a mean value of α_x = -1.50 ± 0.40, which is consistent with other ROSAT observations of quasars. However, this average is significantly steeper than suggested by earlier soft X-ray observations with the Einstein IPC. The 0.3 keV flux in our sample can be predicted to better than a factor of 2 once the 1.69 μm flux is given. This implies that the X-ray variability power spectra of quasars flatten out between f ≈ 10^-5 and f ≈ 10^-8 Hz. A steep α_x is mostly associated with a weak hard X-ray component, relative to the near-IR and optical emission, rather than a strong soft excess, and the scatter in the normalized 0.3 keV flux is significantly smaller than the scatter in the normalized 2 keV flux. This argues against either thin or thick accretion disks as the origin of the soft X-ray emission. Further possible implications of the results found here are briefly discussed.

  18. States' Average College Tuition.

    ERIC Educational Resources Information Center

    Eglin, Joseph J., Jr.; And Others

    This report presents statistical data on trends in tuition costs from 1980-81 through 1995-96. The average tuition for in-state undergraduate students of 4-year public colleges and universities for academic year 1995-96 was approximately 8.9 percent of median household income. This figure was obtained by dividing the students' average annual…

  19. Impact of Satellite Viewing-Swath Width on Global and Regional Aerosol Optical Thickness Statistics and Trends

    NASA Technical Reports Server (NTRS)

    Colarco, P. R.; Kahn, R. A.; Remer, L. A.; Levy, R. C.

    2014-01-01

    We use the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite aerosol optical thickness (AOT) product to assess the impact of reduced swath width on global and regional AOT statistics and trends. Along-track and across-track sampling strategies are employed, in which the full MODIS data set is sub-sampled with various narrow-swath (approximately 400-800 km) and single pixel width (approximately 10 km) configurations. Although view-angle artifacts in the MODIS AOT retrieval confound direct comparisons between averages derived from different sub-samples, careful analysis shows that with many portions of the Earth essentially unobserved, spatial sampling introduces uncertainty in the derived seasonal-regional mean AOT. These AOT spatial sampling artifacts comprise up to 60% of the full-swath AOT value under moderate aerosol loading, and can be as large as 0.1 in some regions under high aerosol loading. Compared to full-swath observations, narrower swath and single pixel width sampling exhibits a reduced ability to detect AOT trends with statistical significance. On the other hand, estimates of the global, annual mean AOT do not vary significantly from the full-swath values as spatial sampling is reduced. Aggregation of the MODIS data at coarse grid scales (10 deg) shows consistency in the aerosol trends across sampling strategies, with increased statistical confidence, but quantitative errors in the derived trends are found even for the full-swath data when compared to high spatial resolution (0.5 deg) aggregations. Using results of a model-derived aerosol reanalysis, we find consistency in our conclusions about a seasonal-regional spatial sampling artifact in AOT. Furthermore, the model shows that reduced spatial sampling can amount to uncertainty in computed shortwave top-of-atmosphere aerosol radiative forcing of 2-3 W/m^2. These artifacts are lower bounds, as other, unconsidered sampling strategies could perform less well. These results suggest that future aerosol satellite missions having significantly less than full-swath viewing are unlikely to sample the true AOT distribution well enough to obtain the statistics needed to reduce uncertainty in aerosol direct forcing of climate.

  20. Gamma-Weighted Discrete Ordinate Two-Stream Approximation for Computation of Domain Averaged Solar Irradiance

    NASA Technical Reports Server (NTRS)

    Kato, S.; Smith, G. L.; Barker, H. W.

    2001-01-01

    An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
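
    In symbols (notation chosen here for illustration), the domain-averaged irradiance at level z is the plane-parallel two-stream solution weighted by the assumed gamma distribution of unresolved cloud optical depth:

```latex
\bar{F}(z) \;=\; \int_{0}^{\infty} p(\tau)\, F_{2\mathrm{s}}(z;\tau)\,\mathrm{d}\tau ,
\qquad
p(\tau) \;=\; \frac{1}{\Gamma(\nu)} \left(\frac{\nu}{\bar{\tau}}\right)^{\nu}
\tau^{\nu-1} \exp\!\left(-\frac{\nu\tau}{\bar{\tau}}\right)
```

    where F_{2s} is the two-stream irradiance for a homogeneous layer of optical depth τ, \bar{\tau} is the layer-mean optical depth, and ν is a shape parameter controlling the width of the distribution; net horizontal transport of radiation is neglected, as stated above.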

  1. Incidence of depression and anxiety: the Stirling County Study.

    PubMed Central

    Murphy, J M; Olivier, D C; Monson, R R; Sobol, A M; Leighton, A H

    1988-01-01

    Prevalence studies in psychiatric epidemiology out-number incidence investigations by a wide margin. This report gives descriptive information about the incidence of depression and anxiety disorders in a general population. Using data gathered in a 16-year follow-up of an adult sample selected as part of the Stirling County Study (Canada), the incidence of these types of disorders was found to be approximately nine cases per 1,000 persons per year. The data suggest that for every man who became ill for the first time with one of these disorders, three women became ill. Incidence tended to be higher among relatively young persons. These incidence rates are consistent with prevalence rates of approximately 10 per cent to 15 per cent for depression and anxiety disorders aggregated together, given an estimated average duration of illness of about 10 years. It is concluded that these incidence rates are fairly realistic in view of evidence that disorders of these types tend to be chronic. PMID:3258479
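
    The consistency argument is the standard steady-state relation between prevalence, incidence, and duration; plugging in the reported figures:

```latex
\text{prevalence} \;\approx\; \text{incidence} \times \text{mean duration}
\;=\; 0.009\ \mathrm{yr}^{-1} \times 10\ \mathrm{yr} \;\approx\; 9\%
```

    which lies just below the quoted 10 to 15 per cent aggregate prevalence range.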

  2. Serological evidence of arboviral infection and self-reported febrile illness among U.S. troops deployed to Al Asad, Iraq.

    PubMed

    Riddle, M S; Althoff, J M; Earhart, K; Monteville, M R; Yingst, S L; Mohareb, E W; Putnam, S D; Sanders, J W

    2008-05-01

    Understanding the epidemiology of current health threats to deployed U.S. troops is important for medical assessment and planning. As part of a 2004 study among U.S. military personnel deployed to Al Asad Air Base, in the western Anbar Province of Iraq, over 500 subjects were enrolled, provided a blood specimen, and completed a questionnaire regarding history of febrile illness during this deployment (average approximately 4 months in country). This mid-deployment serum was compared to pre-deployment samples (collected approximately 3 months prior to deployment) and evaluated for seroconversion to a select panel of regional arboviral pathogens. At least one episode of febrile illness was reported in 84/504 (17%) of the troops surveyed. Seroconversion was documented in nine (2%) of deployed forces tested, with no association to febrile illness. Self-reported febrile illness was uncommon although often debilitating, and the risk of illness due to arbovirus infections was relatively low.

  3. Effect of High Energy Radiation on Mechanical Properties of Graphite Fiber Reinforced Composites. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Naranong, N.

    1980-01-01

    The flexural strength and average modulus of graphite fiber reinforced composites were tested before and after exposure to 0.5 Mev electron radiation and 1.33 Mev gamma radiation by using a three point bending test (ASTM D-790). The irradiation was conducted on vacuum treated samples. Graphite fiber/epoxy (T300/5208), graphite fiber/polyimide (C6000/PMR 15) and graphite fiber/polysulfone (C6000/P1700) composites after being irradiated with 0.5 Mev electron radiation in vacuum up to 5000 Mrad, show increases in stress and modulus of approximately 12% compared with the controls. Graphite fiber/epoxy (T300/5208 and AS/3501-6), after being irradiated with 1.33 Mev gamma radiation up to 360 Mrads, show increases in stress and modulus of approximately 6% at 167 Mrad compared with the controls. Results suggest that the graphite fiber composites studied should withstand the high energy radiation in a space environment for a considerable time, e.g., over 30 years.

  4. 77 FR 6151 - Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-07

    ... approximately 209 broker-dealers will spend an average of 87 hours annually to comply with this rule. Thus, the total compliance burden is approximately 18,200 burden-hours per year. Written comments are invited on...

  5. Application of fecal near-infrared spectroscopy and nutritional balance software to monitor diet quality and body condition in beef cows grazing Arizona rangeland.

    PubMed

    Tolleson, D R; Schafer, D W

    2014-01-01

    Monitoring the nutritional status of range cows is difficult. Near-infrared spectroscopy (NIRS) of feces has been used to predict diet quality in cattle. When fecal NIRS is coupled with decision support software such as the Nutritional Balance Analyzer (NUTBAL PRO), nutritional status and animal performance can be monitored. Approximately 120 Hereford and 90 CGC composite (50% Red Angus, 25% Tarentaise, and 25% Charolais) cows grazing in a single herd were used in a study to determine the ability of fecal NIRS and NutbalPro to project BCS (1 = thin and 9 = fat) under commercial scale rangeland conditions in central Arizona. Cattle were rotated across the 31,000 ha allotment at 10 to 20 d intervals. Cattle BCS and fecal samples (approximately 500 g) composited from 5 to 10 cows were collected in the pasture approximately monthly at the midpoint of each grazing period. Samples were frozen and later analyzed by NIRS for prediction of diet crude protein (CP) and digestible organic matter (DOM). Along with fecal NIRS predicted diet quality, animal breed type, reproductive status, and environmental conditions were input to the software for each fecal sampling and BCS date. Three different evaluations were performed. First, fecal NIRS and NutbalPro derived BCS was projected forward from each sampling as if it were a "one-time only" measurement. Second, BCS was derived from the average predicted weight change between 2 sampling dates for a given period. Third, inputs to the model were adjusted to better represent local animals and conditions. Fecal NIRS predicted diet quality varied from a minimum of approximately 5% CP and 57% DOM in winter to a maximum of approximately 11% CP and 60% DOM in summer. Diet quality correlated with observed seasonal changes and precipitation events. In evaluation 1, differences in observed versus projected BCS were not different (P > 0.1) between breed types but these values ranged from 0.1 to 1.1 BCS in Herefords and 0.0 to 0.9 in CGC. In evaluation 2, differences in observed versus projected BCS were not different (P > 0.1) between breed types but these values ranged from 0.00 to 0.46 in Hereford and 0.00 to 0.67 in CGC. In evaluation 3, the range of differences between observed and projected BCS was 0.04 to 0.28. The greatest difference in projected versus observed BCS occurred during periods of lowest diet quality. Body condition was predicted accurately enough to be useful in monitoring the nutrition of range beef cows under the conditions of this study.

  6. Chemical activity-based environmental risk analysis of the plasticizer di-ethylhexyl phthalate and its main metabolite mono-ethylhexyl phthalate.

    PubMed

    Gobas, Frank A P C; Otton, S Victoria; Tupper-Ring, Laura F; Crawford, Meara A; Clark, Kathryn E; Ikonomou, Michael G

    2017-06-01

    The present study applies a chemical activity-based approach to: 1) evaluate environmental concentrations of di-ethylhexyl phthalate (DEHP; n = 23 651) and its metabolite mono-ethylhexyl phthalate (MEHP; n = 1232) in 16 environmental media from 1174 studies in the United States, Canada, Europe, and Asia, and in vivo toxicity data from 934 studies in 20 species, as well as in vitro biological activity data from the US Environmental Protection Agency's Toxicity Forecaster and other sources; and 2) conduct a comprehensive environmental risk analysis. The results show that the mean chemical activities of DEHP and MEHP in abiotic environmental samples from locations around the globe are 0.001 and 10^-8, respectively. This indicates that DEHP has reached on average 0.1% of saturation in the abiotic environment. The mean chemical activity of DEHP in biological samples is on average 100-fold lower than that in abiotic samples, likely because of biotransformation of DEHP in biota. Biological responses in both in vivo and in vitro tests occur at chemical activities between 0.01 and 1 for DEHP and between approximately 10^-6 and 10^-2 for MEHP, suggesting a greater potency of MEHP compared with DEHP. Chemical activities of both DEHP and MEHP in biota samples were less than those causing biological responses in the in vitro bioassays, without exception. A small fraction of chemical activities of DEHP in abiotic environmental samples (i.e., 4-8%) and none (0%) for MEHP were within the range of chemical activities associated with observed toxicological responses in the in vivo tests. The present study illustrates the chemical activity approach for conducting risk analyses. Environ Toxicol Chem 2017;36:1483-1492. © 2016 SETAC.

  7. Multilaboratory Validation of First Action Method 2016.04 for Determination of Four Arsenic Species in Fruit Juice by High-Performance Liquid Chromatography-Inductively Coupled Plasma-Mass Spectrometry.

    PubMed

    Kubachka, Kevin; Heitkemper, Douglas T; Conklin, Sean

    2017-07-01

    Before being designated AOAC First Action Official Method 2016.04, the U.S. Food and Drug Administration's method, EAM 4.10 High Performance Liquid Chromatography-Inductively Coupled Plasma-Mass Spectrometric Determination of Four Arsenic Species in Fruit Juice, underwent both a single-laboratory validation and a multilaboratory validation (MLV) study. Three federal and five state regulatory laboratories participated in the MLV study, which is the primary focus of this manuscript. The method was validated for inorganic arsenic (iAs) measured as the sum of the two iAs species arsenite [As(III)] and arsenate [As(V)], dimethylarsinic acid (DMA), and monomethylarsonic acid (MMA) by analyses of 13 juice samples, including three apple juice, three apple juice concentrate, four grape juice, and three pear juice samples. In addition, two water Standard Reference Materials (SRMs) were analyzed. The method LODs and LOQs obtained among the eight laboratories were approximately 0.3 and 2 ng/g, respectively, for each of the analytes and were adequate for the intended purpose of the method. Each laboratory analyzed method blanks, fortified method blanks, reference materials, triplicate portions of each juice sample, and duplicate fortified juice samples (one for each matrix type) at three fortification levels. In general, repeatability and reproducibility of the method were ≤15% RSD for each species present at a concentration >LOQ. The average recovery of fortified analytes for all laboratories ranged from 98 to 104% for iAs, DMA, and MMA across all four juice sample matrixes. The average iAs results for SRMs 1640a and 1643e agreed within the range of 96-98% of certified values for total arsenic.

  8. The Status of Honey Bee Health in Italy: Results from the Nationwide Bee Monitoring Network

    PubMed Central

    Bortolotti, Laura; Granato, Anna; Laurenson, Lynn; Roberts, Katherine; Gallina, Albino; Silvester, Nicholas; Medrzycki, Piotr; Renzi, Teresa; Sgolastra, Fabio; Lodesani, Marco

    2016-01-01

    In Italy a nation-wide monitoring network was established in 2009 in response to significant honey bee colony mortality reported during 2008. The network comprised of approximately 100 apiaries located across Italy. Colonies were sampled four times per year, in order to assess the health status and to collect samples for pathogen, chemical and pollen analyses. The prevalence of Nosema ceranae ranged, on average, from 47–69% in 2009 and from 30–60% in 2010, with strong seasonal variation. Virus prevalence was higher in 2010 than in 2009. The most widespread viruses were BQCV, DWV and SBV. The most frequent pesticides in all hive contents were organophosphates and pyrethroids such as coumaphos and tau-fluvalinate. Beeswax was the most frequently contaminated hive product, with 40% of samples positive and 13% having multiple residues, while 27% of bee-bread and 12% of honey bee samples were contaminated. Colony losses in 2009/10 were on average 19%, with no major differences between regions of Italy. In 2009, the presence of DWV in autumn was positively correlated with colony losses. Similarly, hive mortality was higher in BQCV infected colonies in the first and second visits of the year. In 2010, colony losses were significantly related to the presence of pesticides in honey bees during the second sampling period. Honey bee exposure to poisons in spring could have a negative impact at the colony level, contributing to increase colony mortality during the beekeeping season. In both 2009 and 2010, colony mortality rates were positively related to the percentage of agricultural land surrounding apiaries, supporting the importance of land use for honey bee health. PMID:27182604

  9. The Status of Honey Bee Health in Italy: Results from the Nationwide Bee Monitoring Network.

    PubMed

    Porrini, Claudio; Mutinelli, Franco; Bortolotti, Laura; Granato, Anna; Laurenson, Lynn; Roberts, Katherine; Gallina, Albino; Silvester, Nicholas; Medrzycki, Piotr; Renzi, Teresa; Sgolastra, Fabio; Lodesani, Marco

    2016-01-01

    In Italy a nation-wide monitoring network was established in 2009 in response to significant honey bee colony mortality reported during 2008. The network comprised of approximately 100 apiaries located across Italy. Colonies were sampled four times per year, in order to assess the health status and to collect samples for pathogen, chemical and pollen analyses. The prevalence of Nosema ceranae ranged, on average, from 47-69% in 2009 and from 30-60% in 2010, with strong seasonal variation. Virus prevalence was higher in 2010 than in 2009. The most widespread viruses were BQCV, DWV and SBV. The most frequent pesticides in all hive contents were organophosphates and pyrethroids such as coumaphos and tau-fluvalinate. Beeswax was the most frequently contaminated hive product, with 40% of samples positive and 13% having multiple residues, while 27% of bee-bread and 12% of honey bee samples were contaminated. Colony losses in 2009/10 were on average 19%, with no major differences between regions of Italy. In 2009, the presence of DWV in autumn was positively correlated with colony losses. Similarly, hive mortality was higher in BQCV infected colonies in the first and second visits of the year. In 2010, colony losses were significantly related to the presence of pesticides in honey bees during the second sampling period. Honey bee exposure to poisons in spring could have a negative impact at the colony level, contributing to increase colony mortality during the beekeeping season. In both 2009 and 2010, colony mortality rates were positively related to the percentage of agricultural land surrounding apiaries, supporting the importance of land use for honey bee health.

  10. Methods to assess carbonaceous aerosol sampling artifacts for IMPROVE and other long-term networks.

    PubMed

    Watson, John G; Chow, Judith C; Chen, L W Antony; Frank, Neil H

    2009-08-01

    Volatile organic compounds (VOCs) and semi-volatile organic compounds (SVOCs) adsorb to quartz fiber filters during fine and coarse particulate matter (PM2.5 and PM10, respectively) sampling for thermal/optical carbon analysis that measures organic carbon (OC) and elemental carbon (EC). Particulate SVOCs can evaporate after collection, with a small portion adsorbed within the filter. Adsorbed organic gases are measured as particulate OC, so passive field blanks, backup filters, prefilter organic denuders, and regression methods have been applied to compensate for positive OC artifacts in several long-term chemical speciation networks. Average backup filter OC levels from the Interagency Monitoring of Protected Visual Environments (IMPROVE) network were approximately 19% higher than field blank values. This difference is within the standard deviation of the average and likely results from low SVOC concentrations in the rural to remote environments of most IMPROVE sites. Backup filters from an urban (Fort Meade, MD) site showed twice the OC levels of field blanks. Sectioning backup filters from top to bottom showed nonuniform OC densities within the filter, contrary to the assumption that VOCs and SVOCs on a backup filter equal those on the front filter. This nonuniformity may be partially explained by evaporation and readsorption of vapors in different parts of the front and backup quartz fiber filter owing to temperature, relative humidity, and ambient concentration changes throughout a 24-hr sample duration. OC-PM2.5 regression analysis and organic denuder approaches demonstrate negative sampling artifact from both Teflon membrane and quartz fiber filters.

  11. Isotope hydrology of the Chalk River Laboratories site, Ontario, Canada

    USGS Publications Warehouse

    Peterman, Zell; Neymark, Leonid; King-Sharp, K.J.; Gascoyne, Mel

    2016-01-01

    This paper presents results of hydrochemical and isotopic analyses of groundwater (fracture water) and porewater, and physical property and water content measurements of bedrock core at the Chalk River Laboratories (CRL) site in Ontario. Density and water contents were determined and water-loss porosity values were calculated for core samples. Average and standard deviations of density and water-loss porosity of 50 core samples from four boreholes are 2.73 ± 12 g/cc and 1.32 ± 1.24 percent. Respective median values are 2.68 and 0.83 indicating a positive skewness in the distributions. Groundwater samples from four deep boreholes were analyzed for strontium (87Sr/86Sr) and uranium (234U/238U) isotope ratios. Oxygen and hydrogen isotope analyses and selected solute concentrations determined by CRL are included for comparison. Groundwater from borehole CRG-1 in a zone between approximately +60 and −240 m elevation is relatively depleted in δ18O and δ2H perhaps reflecting a slug of water recharged during colder climatic conditions. Porewater was extracted from core samples by centrifugation and analyzed for major dissolved ions and for strontium and uranium isotopes. On average, the extracted water contains 15 times larger concentration of solutes than the groundwater. 234U/238U and correlation of 87Sr/86Sr with Rb/Sr values indicate that the porewater may be substantially older than the groundwater. Results of this study show that the Precambrian gneisses at Chalk River are similar in physical properties and hydrochemical aspects to crystalline rocks being considered for the construction of nuclear waste repositories in other regions.

  12. KDG218, a nearby ultra-diffuse galaxy

    NASA Astrophysics Data System (ADS)

    Karachentsev, I. D.; Makarova, L. N.; Sharina, M. E.; Karachentseva, V. E.

    2017-10-01

    We present properties of the low-surface-brightness galaxy KDG218 observed with the HST/ACS. The galaxy has a half-light (effective) diameter of a_e = 47″ and a central surface brightness of SB_V(0) = 24.4 mag/arcsec^2. The galaxy remains unresolved with the HST/ACS, which implies a distance of D > 13.1 Mpc and a linear effective diameter of A_e > 3.0 kpc. We notice that KDG218 is most likely associated with a galaxy group around the massive lenticular galaxy NGC4958 at approximately 22 Mpc, or with the Virgo Southern Extension filament at approximately 16.5 Mpc. At these distances, the galaxy is classified as an ultra-diffuse galaxy (UDG) similar to those found in the Virgo, Fornax, and Coma clusters. We also present a sample of 15 UDG candidates in the Local Volume. These sample galaxies have the following mean parameters: 〈D〉 = 5.1 Mpc, 〈A_e〉 = 4.8 kpc, and 〈SB_B(e)〉 = 27.4 mag/arcsec^2. All the local UDG candidates reside near massive galaxies located in regions with a mean stellar mass density (within 1 Mpc) about 50 times greater than the average cosmic density. The local fraction of UDGs does not exceed 1.5% of the Local Volume population. We notice that the presented sample of local UDGs is a heterogeneous one containing irregular, transition, and tidal types, as well as objects consisting of an old stellar population.

  13. Average Emissivity Curve of Batse Gamma-Ray Bursts with Different Intensities

    NASA Technical Reports Server (NTRS)

    Mitrofanov, Igor G.; Litvak, Maxim L.; Briggs, Michael S.; Paciesas, William S.; Pendleton, Geoffrey N.; Preece, Robert D.; Meegan, Charles A.

    1999-01-01

    Six intensity groups with approximately 150 BATSE gamma-ray bursts each are compared using average emissivity curves. Time stretch factors for each of the dimmer groups are estimated with respect to the brightest group, which serves as the reference, taking into account the systematics of counts-produced noise effects and choice statistics. A stretching/intensity anticorrelation is found with good statistical significance during the average back slopes of bursts. A stretch factor of approximately 2 is found between the 150 dimmest bursts, with peak flux less than 0.45 photons/(cm^2 s), and the 147 brightest bursts, with peak flux greater than 4.1 photons/(cm^2 s). On the other hand, while a trend of increasing stretching factor may exist for rise fronts for bursts with decreasing peak flux from greater than 4.1 photons/(cm^2 s) down to 0.7 photons/(cm^2 s), the magnitude of the stretching factor is less than approximately 1.4 and is therefore inconsistent with the stretching factor of the back slope.

  14. Approximation to cutoffs of higher modes of Rayleigh waves for a layered earth model

    USGS Publications Warehouse

    Xu, Y.; Xia, J.; Miller, R.D.

    2009-01-01

    A cutoff defines the long-period termination of a Rayleigh-wave higher mode and is therefore a key characteristic relating higher-mode energy to several material properties of the subsurface. Cutoffs have been used to estimate the shear-wave velocity of an underlying half space of a layered earth model. In this study, we describe a method that replaces the multilayer earth model with a single surface layer overlying the half-space model, accomplished by harmonic averaging of velocities and arithmetic averaging of densities. Numerical comparisons with theoretical models validate the single-layer approximation. Accuracy of this single-layer approximation is best defined by values of the calculated error in the frequency and phase velocity estimate at a cutoff. Our proposed method is intuitively explained using ray theory. Numerical results indicate that a cutoff's frequency is controlled by the averaged elastic properties within the passing depth of Rayleigh waves and the shear-wave velocity of the underlying half space. © Birkhäuser Verlag, Basel 2009.
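
    A minimal sketch of the layer-collapsing step follows, assuming thickness-weighted averages (the paper may weight differently): shear velocities are combined harmonically and densities arithmetically. The layered model values are placeholders.

```python
import numpy as np

# Hypothetical layered model: thickness (m), shear velocity (m/s), density (g/cc).
h   = np.array([2.0, 4.0, 6.0])
vs  = np.array([200.0, 350.0, 500.0])
rho = np.array([1.8, 1.9, 2.0])

w = h / h.sum()                      # thickness weights
vs_eq  = 1.0 / np.sum(w / vs)        # harmonic (slowness) average of velocities
rho_eq = np.sum(w * rho)             # arithmetic average of densities

# This single layer over the original half space is then used to approximate
# the frequency and phase velocity at a higher-mode cutoff.
print(f"equivalent layer: thickness={h.sum():.1f} m, Vs={vs_eq:.1f} m/s, rho={rho_eq:.2f} g/cc")
```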

  15. Accuracy of measurement in electrically evoked compound action potentials.

    PubMed

    Hey, Matthias; Müller-Deile, Joachim

    2015-01-15

    Electrically evoked compound action potentials (ECAP) in cochlear implant (CI) patients are characterized by the amplitude of the N1P1 complex. The measurement of evoked potentials yields a combination of the measured signal with various noise components but for ECAP procedures performed in the clinical routine, only the averaged curve is accessible. To date no detailed analysis of error dimension has been published. The aim of this study was to determine the error of the N1P1 amplitude and to determine the factors that impact the outcome. Measurements were performed on 32 CI patients with either CI24RE (CA) or CI512 implants using the Software Custom Sound EP (Cochlear). N1P1 error approximation of non-averaged raw data consisting of recorded single-sweeps was compared to methods of error approximation based on mean curves. The error approximation of the N1P1 amplitude using averaged data showed comparable results to single-point error estimation. The error of the N1P1 amplitude depends on the number of averaging steps and amplification; in contrast, the error of the N1P1 amplitude is not dependent on the stimulus intensity. Single-point error showed smaller N1P1 error and better coincidence with 1/√(N) function (N is the number of measured sweeps) compared to the known maximum-minimum criterion. Evaluation of N1P1 amplitude should be accompanied by indication of its error. The retrospective approximation of this measurement error from the averaged data available in clinically used software is possible and best done utilizing the D-trace in forward masking artefact reduction mode (no stimulation applied and recording contains only the switch-on-artefact). Copyright © 2014 Elsevier B.V. All rights reserved.
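
    The 1/√N behavior can be illustrated with a generic sketch that propagates the per-sample standard error of an averaged trace into an N1P1-style amplitude. The synthetic template, noise level, and peak picking below are placeholders, not the clinical recording setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sweeps, n_samples = 200, 64
t = np.arange(n_samples)

# Synthetic ECAP-like template (N1 trough then P1 peak) buried in noise.
template = -1.0 * np.exp(-0.5 * ((t - 15) / 4) ** 2) + 0.7 * np.exp(-0.5 * ((t - 30) / 6) ** 2)
sweeps = template + rng.normal(0.0, 2.0, size=(n_sweeps, n_samples))

avg = sweeps.mean(axis=0)
sem = sweeps.std(axis=0, ddof=1) / np.sqrt(n_sweeps)   # per-sample standard error of the mean

i_n1, i_p1 = np.argmin(avg), np.argmax(avg)
n1p1 = avg[i_p1] - avg[i_n1]
# Error of the difference of two (approximately independent) points of the mean trace.
n1p1_err = np.hypot(sem[i_p1], sem[i_n1])
print(f"N1P1 = {n1p1:.2f} +/- {n1p1_err:.2f} (error scales roughly as 1/sqrt(N))")
```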

  16. Detecting spatial patterns of rivermouth processes using a geostatistical framework for near-real-time analysis

    USGS Publications Warehouse

    Xu, Wenzhao; Collingsworth, Paris D.; Bailey, Barbara; Carlson Mazur, Martha L.; Schaeffer, Jeff; Minsker, Barbara

    2017-01-01

    This paper proposes a geospatial analysis framework and software to interpret water-quality sampling data from towed undulating vehicles in near-real time. The framework includes data quality assurance and quality control processes, automated kriging interpolation along undulating paths, and local hotspot and cluster analyses. These methods are implemented in an interactive Web application developed using the Shiny package in the R programming environment to support near-real time analysis along with 2- and 3-D visualizations. The approach is demonstrated using historical sampling data from an undulating vehicle deployed at three rivermouth sites in Lake Michigan during 2011. The normalized root-mean-square error (NRMSE) of the interpolation averages approximately 10% in 3-fold cross validation. The results show that the framework can be used to track river plume dynamics and provide insights on mixing, which could be related to wind and seiche events.
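
    A generic version of the reported skill metric is k-fold cross-validation of the interpolator along the sampling locations. The sketch below uses SciPy's RBFInterpolator as a stand-in for kriging (kriging proper would fit a variogram model), synthetic data, and range-normalization of the RMSE; all of these are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
xy = rng.uniform(0, 1000, size=(300, 2))    # synthetic sampling locations (m)
z = np.sin(xy[:, 0] / 150) + 0.5 * np.cos(xy[:, 1] / 200) + rng.normal(0, 0.1, 300)

k = 3
folds = np.array_split(rng.permutation(len(z)), k)
errors = []
for i in range(k):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    interp = RBFInterpolator(xy[train], z[train], smoothing=1e-3)
    errors.append(interp(xy[test]) - z[test])

errors = np.concatenate(errors)
nrmse = np.sqrt(np.mean(errors ** 2)) / (z.max() - z.min())   # normalized by the data range
print(f"3-fold NRMSE = {100 * nrmse:.1f}%")
```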

  17. Light propagation in tissues with controlled optical properties

    NASA Astrophysics Data System (ADS)

    Tuchin, Valery V.; Maksimova, Irina L.; Zimnyakov, Dmitry A.; Kon, Irina L.; Mavlyutov, Albert H.; Mishin, Alexey A.

    1997-10-01

    Theoretical and computer modeling approaches, such as Mie theory, radiative transfer theory, diffusion wave correlation spectroscopy, and Monte Carlo simulation were used to analyze tissue optics during a process of optical clearing due to refractive index matching. Continuous wave transmittance and forward scattering measurements as well as intensity correlation experiments were used to monitor tissue structural and optical properties. As a control, tissue samples of the human sclera were taken. Osmotically active solutions, such as Trazograph, glucose, and polyethylene glycol, were used as chemicals. A characteristic time response of human scleral optical clearing in the range of 3 to 10 min was determined. The diffusion coefficients describing the permeability of the scleral samples to Trazograph were experimentally estimated; the average value was D_T ≈ (0.9 ± 0.5) × 10^-5 cm^2/s. The results are general and can be used to describe many other fibrous tissues.

  18. Determination of the average orientation of DNA in the octopus sperm Eledone cirrhosa through polarized light scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, D.B.; Maestre, M.F.; McClain, W.M.

    1994-08-20

    The coupled-dipole approximation has been used to model polarized light-scattering data obtained from the sperm of the octopus Eledone cirrhosa. Mueller scattering-matrix elements (which describe how a sample alters the intensity and degree of polarization of scattered light) were measured as a function of angle. The sample was modeled as a helical fiber believed to correspond to a DNA protein complex. It was necessary to propose an inherent anisotropy in the polarizability of the fiber in order to fit the data. The direction of the principal axes of the polarizability was determined by comparing the model with experimental data. The results suggest that the 2-nm DNA fibers are perpendicular to the thick fiber that defines the helical geometry of the octopus sperm head.

  19. Coherent backscattering of light by an inhomogeneous cloud of cold atoms

    NASA Astrophysics Data System (ADS)

    Labeyrie, Guillaume; Delande, Dominique; Müller, Cord A.; Miniatura, Christian; Kaiser, Robin

    2003-03-01

    When a quasiresonant laser beam illuminates an optically thick cloud of laser-cooled rubidium atoms, the average diffuse intensity reflected off the sample is enhanced in a narrow angular range around the direction of exact backscattering. This phenomenon is known as coherent backscattering (CBS). By detuning the laser from resonance, we are able to modify the light scattering mean-free path inside the sample and we record accordingly the variations of the CBS cone shape. We then compare the experimental data with theoretical calculations and Monte Carlo simulations including the effect of the light polarization and of the internal structure of the atoms. We confirm that the internal structure strongly affects the enhancement factor of the cone and we show that the unusual shape of the atomic medium—approximately a spherically-symmetric, Gaussian density profile—strongly affects the width and shape of the cone.

  20. Corona Discharge Suppression in Negative Ion Mode Nanoelectrospray Ionization via Trifluoroethanol Addition.

    PubMed

    McClory, Phillip J; Håkansson, Kristina

    2017-10-03

    Negative ion mode nanoelectrospray ionization (nESI) is often utilized to analyze acidic compounds, from small molecules to proteins, with mass spectrometry (MS). Under high aqueous solvent conditions, corona discharge is commonly observed at emitter tips, resulting in low ion abundances and reduced nESI needle lifetimes. We have successfully reduced corona discharge in negative ion mode by trace addition of trifluoroethanol (TFE) to aqueous samples. The addition of as little as 0.2% TFE increases aqueous spray stability not only in nESI direct infusion, but also in nanoflow liquid chromatography (nLC)/MS experiments. Negative ion mode spray stability with 0.2% TFE is approximately 6× higher than for strictly aqueous samples. Upon addition of 0.2% TFE to the mobile phase of nLC/MS experiments, tryptic peptide identifications increased from 93 to 111 peptides, resulting in an average protein sequence coverage increase of 18%.

  1. Predictive factors of excessive online poker playing.

    PubMed

    Hopley, Anthony A B; Nicki, Richard M

    2010-08-01

    Despite the widespread rise of online poker playing, there is a paucity of research examining potential predictors for excessive poker playing. The aim of this study was to build on recent research examining motives for Texas Hold'em play in students by determining whether predictors of other kinds of excessive gambling apply to Texas Hold'em. Impulsivity, negative mood states, dissociation, and boredom proneness have been linked to general problem gambling and may play a role in online poker. Participants of this study were self-selected online poker players (N = 179) who completed an online survey. Results revealed that participants played an average of 20 hours of online poker a week and approximately 9% of the sample was classified as a problem gambler according to the Canadian Problem Gambling Index. Problem gambling, in this sample, was uniquely predicted by time played, dissociation, boredom proneness, impulsivity, and negative affective states, namely depression, anxiety, and stress.

  2. 77 FR 22615 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-16

    ... approximately 209 broker-dealers will spend an average of 87 hours annually to comply with this rule. Thus, the total compliance burden is approximately 18,200 burden-hours per year. Rule 15g-4 contains record...

  3. Performance testing of NIOSH Method 5524/ASTM Method D-7049-04, for determination of metalworking fluids.

    PubMed

    Glaser, Robert; Kurimo, Robert; Shulman, Stanley

    2007-08-01

    A performance test of NIOSH Method 5524/ASTM Method D-7049-04 for analysis of metalworking fluids (MWF) was conducted. These methods involve determination of the total and extractable weights of MWF samples; extractions are performed using a ternary blend of toluene:dichloromethane:methanol and a binary blend of methanol:water. Six laboratories participated in this study. A preliminary analysis of 20 blank samples was made to familiarize the laboratories with the procedure(s) and to estimate the methods' limits of detection/quantitation (LODs/LOQs). Synthetically generated samples of a semisynthetic MWF aerosol were then collected on tared polytetrafluoroethylene (PTFE) filters and analyzed according to the methods by all participants. Sample masses deposited (approximately 400-500 µg) corresponded to amounts expected in an 8-hr shift at the NIOSH recommended exposure levels (RELs) of 0.4 mg/m^3 (thoracic) and 0.5 mg/m^3 (total particulate). The generator output was monitored with a calibrated laser particle counter. One laboratory significantly underreported the sampled masses relative to the other five labs. A follow-up study compared only gravimetric results of this laboratory with those of two other labs. In the preliminary analysis of blanks, the average LOQs were 0.094 mg for the total weight analysis and 0.136 mg for the extracted weight analyses. For the six-lab study, the average LOQs were 0.064 mg for the total weight analyses and 0.067 mg for the extracted weight analyses. Using ASTM conventions, h and k statistics were computed to determine the degree of consistency of each laboratory with the others. One laboratory experienced problems with precision but not bias. The precision estimates for the remaining five labs were not different statistically (alpha = 0.005) for either the total or extractable weights. For all six labs, the average fraction extracted was ≥ 0.94 (CV = 0.025). Pooled estimates of the total coefficients of variation of analysis were 0.13 for the total weight samples and 0.13 for the extracted weight samples. An overall method bias of -5% was determined by comparing the overall mean concentration reported by the participants to that determined by the particle counter. In the three-lab follow-up study, the nonconsistent lab reported results that were unbiased but statistically less precise than the others; the average LOQ was 0.133 mg for the total weight analyses. It is concluded that aerosolized MWF sampled at concentrations corresponding to either of the NIOSH RELs can generally be shipped unrefrigerated, stored refrigerated up to 7 days, and then analyzed quantitatively and precisely for MWF using the NIOSH/ASTM procedures.

  4. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    NASA Astrophysics Data System (ADS)

    Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl

    2018-06-01

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
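
    As context for the abstract, the baseline it improves on can be written down directly: the double-loop Monte Carlo estimator of the expected information gain, EIG = E[log p(y|θ) − log p(y)], with the evidence p(y) estimated by a brute-force inner loop over prior samples. The sketch below uses a toy one-dimensional nonlinear model with Gaussian noise; the forward model, prior, noise level, and sample sizes are all placeholders. The paper's Laplace-based importance sampler would replace the inner loop.

```python
import numpy as np

rng = np.random.default_rng(2)

def g(theta, d):
    # Placeholder nonlinear forward model evaluated at design d.
    return theta ** 3 * d + theta * np.exp(-np.abs(0.2 - d))

sigma = 0.1          # observation noise standard deviation (assumed)
d = 0.5              # candidate design

def log_like(y, theta, d):
    return -0.5 * ((y - g(theta, d)) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

N, M = 2000, 2000                                  # outer and inner sample sizes
theta_outer = rng.normal(0.0, 1.0, N)              # prior draws for the outer loop
y = g(theta_outer, d) + sigma * rng.normal(size=N) # simulated observations

theta_inner = rng.normal(0.0, 1.0, M)              # prior draws for the evidence estimate
# log p(y_n) ~= logsumexp_m log p(y_n | theta_m) - log M   (brute-force inner loop)
ll_inner = log_like(y[:, None], theta_inner[None, :], d)        # shape (N, M)
log_evidence = np.logaddexp.reduce(ll_inner, axis=1) - np.log(M)

eig = np.mean(log_like(y, theta_outer, d) - log_evidence)
print(f"DLMC estimate of expected information gain at d={d}: {eig:.3f} nats")
```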

  5. Combined target factor analysis and Bayesian soft-classification of interference-contaminated samples: forensic fire debris analysis.

    PubMed

    Williams, Mary R; Sigman, Michael E; Lewis, Jennifer; Pitan, Kelly McHugh

    2012-10-10

    A Bayesian soft classification method combined with target factor analysis (TFA) is described and tested for the analysis of fire debris data. The method relies on analysis of the average mass spectrum across the chromatographic profile (i.e., the total ion spectrum, TIS) from multiple samples taken from a single fire scene. A library of TIS from reference ignitable liquids with assigned ASTM classification is used as the target factors in TFA. The class-conditional distributions of correlations between the target and predicted factors for each ASTM class are represented by kernel functions and analyzed by Bayesian decision theory. The soft classification approach assists in assessing the probability that ignitable liquid residue from a specific ASTM E1618 class is present in a set of samples from a single fire scene, even in the presence of unspecified background contributions from pyrolysis products. The method is demonstrated with sample data sets and then tested on laboratory-scale burn data and large-scale field test burns. The overall performance achieved in laboratory and field tests of the method is approximately 80% correct classification of fire debris samples. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
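
    The scoring pipeline described above can be sketched generically: correlate a library target spectrum with its projection onto the leading factors of the scene's TIS matrix, then convert that correlation into a posterior probability using kernel density estimates of class-conditional correlation distributions. Everything below (spectra, factor count, training correlations, prior) is synthetic and illustrative, not the paper's data or tuning.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
n_mz = 120                                   # number of m/z channels (placeholder)

# Synthetic total ion spectra (TIS): rows = samples from one fire scene.
true_profile = rng.random(n_mz)
samples = np.abs(true_profile + 0.05 * rng.normal(size=(6, n_mz)))

# Target factor analysis step: project a library target spectrum onto the
# subspace spanned by the leading principal components of the sample matrix.
_, _, vt = np.linalg.svd(samples - samples.mean(axis=0), full_matrices=False)
basis = vt[:3]                                                # leading factors
target = np.abs(true_profile + 0.1 * rng.normal(size=n_mz))   # library TIS (placeholder)
predicted = basis.T @ (basis @ target)                        # projection of the target
r = np.corrcoef(target, predicted)[0, 1]                      # target/predicted correlation

# Bayesian soft classification: class-conditional KDEs of such correlations,
# trained here on made-up correlation scores for "class present" vs "absent".
r_present = rng.beta(8, 2, 200)
r_absent = rng.beta(2, 4, 200)
kde_p, kde_a = gaussian_kde(r_present), gaussian_kde(r_absent)
prior = 0.5
post = prior * kde_p(r) / (prior * kde_p(r) + (1 - prior) * kde_a(r))
print(f"correlation r = {r:.3f}, P(class present | r) = {post[0]:.2f}")
```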

  6. Decoherence and surface hopping: When can averaging over initial conditions help capture the effects of wave packet separation?

    NASA Astrophysics Data System (ADS)

    Subotnik, Joseph E.; Shenvi, Neil

    2011-06-01

    Fewest-switches surface hopping (FSSH) is a popular nonadiabatic dynamics method which treats nuclei with classical mechanics and electrons with quantum mechanics. In order to simulate the motion of a wave packet as accurately as possible, standard FSSH requires a stochastic sampling of the trajectories over a distribution of initial conditions corresponding, e.g., to the Wigner distribution of the initial quantum wave packet. Although it is well-known that FSSH does not properly account for decoherence effects, there is some confusion in the literature about whether or not this averaging over a distribution of initial conditions can approximate some of the effects of decoherence. In this paper, we not only show that averaging over initial conditions does not generally account for decoherence, but also explain why it fails to do so. We also show how an apparent improvement in accuracy can be obtained for a fortuitous choice of model problems, even though this improvement is not possible, in general. For a basic set of one-dimensional and two-dimensional examples, we find significantly improved results using our recently introduced augmented FSSH algorithm.

  7. Mechanical properties of reconstituted Australian black coal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasinge, D.; Ranjith, P.G.; Choi, S.K.

    2009-07-15

    Coal is usually highly heterogeneous. Great variation in properties can exist among samples obtained even at close proximity within the same seam or within the same core sample. This makes it difficult to establish a correlation between uniaxial compressive strength (UCS) and point load index for coal. To overcome this problem, a method for making reconstituted samples for laboratory tests was developed. Samples were made by compacting particles of crushed coal mixed with cement and water. These samples were allowed to cure for four days. UCS and point load tests were performed to measure the geomechanical properties of the reconstituted coal. After four days curing, the average UCS was found to be approximately 4 MPa. This technical note outlines some experimental results and correlations that were developed to predict the mechanical properties of the reconstituted black coal samples. By reconstituting the samples from crushed coal, it is hoped that the samples will retain the important mechanical and physicochemical properties of coal, including the swelling, fluid transport, and gas sorption properties of coal. The aim is to be able to produce samples that are homogeneous with properties that are highly reproducible, and the reconstituted coal samples can be used for a number of research areas related to coal, including the long-term safe storage of CO2 in coal seams.

  8. An interplanetary magnetic field ensemble at 1 AU

    NASA Technical Reports Server (NTRS)

    Matthaeus, W. H.; Goldstein, M. L.; King, J. H.

    1985-01-01

    A method for calculating ensemble averages from magnetic field data is described. A data set comprising approximately 16 months of nearly continuous ISEE-3 magnetic field data is used in this study. Individual subintervals of these data, ranging from 15 hours to 15.6 days, comprise the ensemble. The sole condition for including each subinterval in the averages is the degree to which it represents a weakly time-stationary process. Averages obtained by this method are appropriate for a turbulence description of the interplanetary medium. The ensemble average correlation length obtained from all subintervals is found to be 4.9 × 10^11 cm. The average values of the variances of the magnetic field components are in the approximate ratio 8:9:10, where the third component is along the local mean field direction. The correlation lengths and variances are found to have a systematic variation with subinterval duration, reflecting the important role of low-frequency fluctuations in the interplanetary medium.
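
    A correlation length of that kind is typically obtained from the normalized two-point correlation of the fluctuating field within each subinterval, then averaged over the ensemble. The sketch below computes an e-folding correlation time for a single synthetic series and converts it to a length with a nominal solar wind speed; the AR(1) stand-in data, sample spacing, wind speed, and e-folding definition are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
dt = 60.0                                  # sample spacing in seconds (assumed)
n = 5000
b = np.zeros(n)
for i in range(1, n):                      # AR(1) stand-in for one field component
    b[i] = 0.995 * b[i - 1] + rng.normal()

db = b - b.mean()                          # fluctuation about the subinterval mean
acf = np.correlate(db, db, mode="full")[n - 1:]
acf /= acf[0]                              # normalized autocorrelation function

i_e = np.argmax(acf < 1.0 / np.e)          # first lag below 1/e
wind_speed = 400.0e5                       # nominal 400 km/s in cm/s (assumed)
corr_length = i_e * dt * wind_speed        # Taylor-hypothesis conversion to a length
print(f"correlation time ~ {i_e * dt:.0f} s, correlation length ~ {corr_length:.2e} cm")
```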

  9. The status of PhD education in economic, social, and administrative sciences between 2005 and 2008.

    PubMed

    Farley, Joel F; Wang, Chi-Chuan; Blalock, Susan J

    2010-09-10

    To describe the funding, education, enrollment, and graduation patterns from economic, social, and administrative sciences PhD programs in colleges and schools of pharmacy in the United States. Economic, social, and administrative sciences PhD programs were identified from the American Association of Colleges of Pharmacy (AACP) Web site. A 41-item online survey instrument was sent to the director of graduate studies of each identified program. Only programs offering a PhD degree were included in the study. Of the 26 programs surveyed, 20 (77%) provided usable responses to the survey instrument. Approximately 91% of PhD programs guarantee funding to incoming students with an average commitment of 2.9 years. On average, students were paid a stipend of $18,000 per year for commitments to research and teaching assistantships, each averaging approximately 2 years in length. Programs admitted an average of 3.5 students per year and graduated approximately 85% of entering students. The majority of students are non-US citizens and accept either academic or industrial positions after graduation. Most economic, social, and administrative sciences PhD programs guarantee funding to incoming PhD candidates. Programs offering funding packages significantly below the average may be at a competitive disadvantage. It is unclear whether the number of students graduating from PhD programs is adequate to fulfill academic and industrial needs.

  10. [Mercury pollution in cricket in different biotopes suffering from pollution by zinc smelting].

    PubMed

    Zheng, Dong-Mei; Li, Xin-Xin; Luo, Qing

    2012-10-01

    Total mercury contents in cricket bodies were studied in different biotopes in the surroundings of Huludao Zinc Plant to discuss the mercury distribution characteristics in cricket and to reveal the effects of environmental mercury accumulation in short life-cycle insects through comparing cricket with other insect species. The average mercury content in cricket was 0.081 mg x kg(-1), much higher than that at the control sites (0.012 mg x kg(-1) on average) across the different biotopes. Mercury contents were found in the order of cricket head > wing > thorax approximately abdomen > leg. Mercury contents in cricket bodies varied greatly with sample sites. Significant correlation was found between the mercury contents in cricket and the distance from the pollution source as well as the mercury contents in plant stems. No significant correlation was found between the mercury contents in soil and in cricket bodies. Mercury contents in cricket were lower than those in cicadae, and similar to those in other insects with shorter life-cycle periods.

  11. First Neutrino Point-Source Results from the 22 String Icecube Detector

    NASA Astrophysics Data System (ADS)

    Abbasi, R.; Abdou, Y.; Ackermann, M.; Adams, J.; Aguilar, J.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Bolmont, J.; Böser, S.; Botner, O.; Bradley, L.; Braun, J.; Breder, D.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cohen, S.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Day, C. T.; De Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; De Young, T.; Diaz-Velez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Gerhardt, L.; Gladstone, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hasegawa, Y.; Heise, J.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Imlay, R. L.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Klepser, S.; Knops, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Lauer, R.; Leich, H.; Lennarz, D.; Lucke, A.; Lundberg, J.; Lünemann, J.; Madsen, J.; Majumdar, P.; Maruyama, R.; Mase, K.; Matis, H. S.; McParland, C. P.; Meagher, K.; Merck, M.; Mészáros, P.; Middell, E.; Milke, N.; Miyamoto, H.; Mohr, A.; Montaruli, T.; Morse, R.; Movit, S. M.; Münich, K.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; Ono, M.; Panknin, S.; Patton, S.; Pérez de los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Potthoff, N.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Rutledge, D.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Satalecka, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schukraft, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sulanke, K.-H.; Sullivan, G. W.; Swillens, Q.; Taboada, I.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Terranova, C.; Tilav, S.; Tluczykont, M.; Toale, P. A.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; Voigt, B.; Walck, C.; Waldenmaier, T.; Walter, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebusch, C. H.; Wiedemann, A.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, X. W.; Yodh, G.; Ice Cube Collaboration

    2009-08-01

    We present new results of searches for neutrino point sources in the northern sky, using data recorded in 2007-2008 with 22 strings of the IceCube detector (approximately one-fourth of the planned total) and 275.7 days of live time. The final sample of 5114 neutrino candidate events agrees well with the expected background of atmospheric muon neutrinos and a small component of atmospheric muons. No evidence of a point source is found, with the most significant excess of events in the sky at 2.2σ after accounting for all trials. The average upper limit over the northern sky for point sources of muon-neutrinos with an E^{-2} spectrum is E^{2} Φ_{ν_{μ}} < 1.4 × 10^{-11} TeV cm^{-2} s^{-1}, in the energy range from 3 TeV to 3 PeV, improving the previous best average upper limit by the AMANDA-II detector by a factor of 2.

  12. Global lunar crust - Electrical conductivity and thermoelectric origin of remanent magnetism

    NASA Technical Reports Server (NTRS)

    Dyal, P.; Parkin, C. W.; Daily, W. D.

    1977-01-01

    An upper limit is placed on the average crustal conductivity from an investigation of toroidal (V x B) induction in the moon, using ten-minute data intervals of simultaneous lunar orbiting and surface magnetometer data. Crustal conductivity is determined as a function of crust thickness. For an average global crust thickness of about 80 km, the crust surface electrical conductivity is of the order of 10^-8 mho/m. The toroidal-induction results lower the surface-conductivity limit obtained from poloidal-induction results by approximately four orders of magnitude. In addition, a thermoelectric (Seebeck effect) generator model is presented as a magnetic-field source for thermoremanent magnetization of the lunar crust during its solidification and cooling. Magnetic fields from 1000 to 10,000 gammas are calculated for various crater and crustal geometries. Solidified crustal material cooling through the iron Curie temperature in the presence of such ancient lunar fields could have received thermoremanent magnetization consistent with that measured in most returned lunar samples.

  13. Elemental composition of solar energetic particles. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cook, W. R., III

    1981-01-01

    The Low Energy Telescopes on the Voyager spacecraft are used to measure the elemental composition (2 <= Z <= 28) and energy spectra (5 to 15 MeV/nucleon) of solar energetic particles (SEPs) in seven large flare events. Four flare events are selected which have SEP abundance ratios approximately independent of energy/nucleon. The abundances for these events are compared from flare to flare and are compared to solar abundances from other sources: spectroscopy of the photosphere and corona, and solar wind measurements. The four-flare average SEP composition is significantly different from the solar composition determined by photospheric spectroscopy. The average SEP composition is in agreement with solar wind abundance results and with a number of recent coronal abundance measurements. The evidence for a common depletion of oxygen in SEPs, the corona, and the solar wind relative to the photosphere suggests that the SEPs originate in the corona and that both the SEPs and the solar wind sample a coronal composition which is significantly and persistently different from that of the photosphere.

  14. Average size of random polygons with fixed knot topology.

    PubMed

    Matsuda, Hiroshi; Yao, Akihisa; Tsukahara, Hiroshi; Deguchi, Tetsuo; Furuta, Ko; Inami, Takeo

    2003-07-01

    We have evaluated by numerical simulation the average size R(K) of random polygons of fixed knot topology K = 3(1), 3(1)#4(1), and we have confirmed the scaling law R(2)(K) approximately N(2nu(K)) for the number N of polygonal nodes in a wide range, N = 100-2200. The best fit gives 2nu(K) approximately 1.11-1.16 with good fitting curves in the whole range of N. The estimate of 2nu(K) is consistent with the exponent of self-avoiding polygons. In a limited range of N (N greater than or approximately 600), however, we have another fit with 2nu(K) approximately 1.01-1.07, which is close to the exponent of random polygons.
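
    A minimal sketch of the scaling-law fit quoted above: taking logarithms of R(2)(K) approximately N(2nu(K)) turns the exponent 2nu(K) into the slope of a straight line in log-log coordinates. The data below are synthetic placeholders, not the simulation results of the paper.

```python
import numpy as np

# Sketch of the scaling-law fit R^2(K) ~ N^(2*nu): a straight-line fit of
# log R^2 against log N gives 2*nu as the slope.
N = np.array([100, 200, 400, 800, 1600, 2200], dtype=float)
rng = np.random.default_rng(2)
R2 = 0.3 * N**1.14 * np.exp(rng.normal(0.0, 0.02, N.size))   # synthetic data

slope, intercept = np.polyfit(np.log(N), np.log(R2), 1)
print("fitted 2*nu =", slope)   # expected to land near the generating value 1.14
```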

  15. The effect of selenium on spoil suitability as root zone material at Navajo Mine, New Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lane, J.R.; Buchanan, B.A.; Ramsey, T.C.

    1995-09-01

    The root zone suitability limits for spoil Se at Navajo Mine in northwest New Mexico are currently 0.8 ppm total Se and 0.15 ppm hot-water soluble Se. These criteria were largely developed by the Office of Surface Mining using data from the Northern Great Plains. Applying these values, approximately 23% of the spoil volume and 47% of the spoil area sampled at Navajo Mine from 1985 to December 1993 were determined to be unsuitable as root zone material. Secondary Se accumulator plants (Atriplex canescens) growing in both undisturbed and reclaimed areas were randomly sampled for selenium from 1985 to Decembermore » 1993. In most cases the undisturbed soil and reclaimed spoil at these plant sampling sites were sampled for both total and hot-water soluble Se. Selenium values for Atriplex canescens samples collected on the undisturbed sites averaged 0.64 ppm and ranged from 0.20 ppm to 2.5 ppm. Selenium values for the plants growing on spoil ranged from 0.02 ppm to 7.75 ppm and averaged 1.07 ppm. Total and hot-water Se values for spoil averaged 0.66 ppm and 0.06 ppm respectively, and ranged from 0.0 to 14.2 for total Se and 0.0 ppm to 0.72 ppm for hot-water soluble Se. The plant Se values were poorly correlated to both total and hot-water soluble Se values for both soil and spoil. Therefore, predicting suitable guidelines using normal regression techniques was ineffective. Based on background Se levels in native soils, and levels found on reclaimed areas with Atriplex canescens, it is suggested that a total Se level of 2.0 ppm and a hot-water soluble Se level of 0.25 ppm should be used to represent the suitability limits for Se at Navajo Mine. If these Se values are used, it is estimated that less than 1% of the spoil volume would be unsuitable. This volume of spoil seems to be a more accurate estimate of the amount of spoil with unsuitable levels of Se than the estimated 23% using the current guidelines.« less

  16. Radioactivity concentrations in sediments on the coast of the Iranian province of Khuzestan in the Northern Persian Gulf.

    PubMed

    Pourahmad, Jalal; Motallebi, Abbasali; Asgharizadeh, Farid; Eskandari, Gholam Reza; Shafaghi, Bijan

    2008-10-01

    Gamma-ray spectrometric analyses were performed on sediment samples from the coast of Khuzestan province (south west of Iran, neighbor to Iraq and Kuwait) to study the concentration of natural as well as man-made radioactive sources. The coast of Khuzestan, which extends for approximately 400 km is mainly soft areas of mud flats within different ecosystems including river mouth, estuaries, creeps, and small bays. Suspended material from the Iranian rivers including Arvand (Karun), Bahmanshir, Jarrahi, and Zohreh has settled to form these extensive soft areas. Eighty three samples were taken at different points along the coast in undisturbed areas at intervals of about 5 km since Fall 2005 to Winter 2006. Collection was carried out during low-tide, where it was possible to collect sediments from the wet region that was covered by sea water during the high tide. At each of the sample sites, a sampling area of about 1 m(2) was considered. All samples were of a muddy nature, and were left to dry in open air before drying in the oven at 105 degrees C for 2-3 days to remove all water content. The average activity concentration of the radionuclides (226)Ra (30 Bq/Kg), (232)Th (11 Bq/kg), (238)U (18 Bq/kg), and (137)Cs (2.6 Bq/kg) along the shore of Khuzestan reaches are much less than the values commonly assigned as the world average. Nevertheless in case of (40)K which is a long lived naturally occurring radionuclide, the result (481 Bq/kg) was higher than the world average which could be due to a large Kuwaiti oil spill and also fallout and deposition of tremendous amount of fly ashes which resulted from ignited Kuwaiti oil fields during the 2nd Persian Gulf war (1990-91). For man-made (137)Cs and naturally occurring (232)Th, the western and eastern parts of Khuzestan shore showed higher concentrations than the middle part (Khooriat or creeps). For the long lived naturally occurring radionuclide (40)K and Gulf war (238)U (anti armor shells), there were no significant differences (P < 0.05) among the three regions.

  17. General trends of dihedral conformational transitions in a globular protein.

    PubMed

    Miao, Yinglong; Baudry, Jerome; Smith, Jeremy C; McCammon, J Andrew

    2016-04-01

    Dihedral conformational transitions are analyzed systematically in a model globular protein, cytochrome P450cam, to examine their structural and chemical dependences through combined conventional molecular dynamics (cMD), accelerated molecular dynamics (aMD) and adaptive biasing force (ABF) simulations. The aMD simulations are performed at two acceleration levels, using dihedral and dual boost, respectively. In comparison with cMD, aMD samples protein dihedral transitions approximately two times faster on average using dihedral boost, and ∼ 3.5 times faster using dual boost. In the protein backbone, significantly higher dihedral transition rates are observed in the bend, coil, and turn flexible regions, followed by the β bridge and β sheet, and then the helices. Moreover, protein side chains of greater length exhibit higher transition rates on average in the aMD-enhanced sampling. Side chains of the same length (particularly Nχ = 2) exhibit decreasing transition rates with residues when going from hydrophobic to polar, then charged and aromatic chemical types. The reduction of dihedral transition rates is found to be correlated with increasing energy barriers as identified through ABF free energy calculations. These general trends of dihedral conformational transitions provide important insights into the hierarchical dynamics and complex free energy landscapes of functional proteins. © 2016 Wiley Periodicals, Inc.
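
    For readers unfamiliar with the boost used above, the sketch below evaluates the generic accelerated-MD boost potential dV = (E - V)^2 / (alpha + E - V), applied when the potential V falls below a threshold E; the threshold, alpha, and energy values are illustrative assumptions, not the parameters of the reported simulations.

```python
import numpy as np

# Generic accelerated-MD boost (a textbook form, not the study's exact setup):
# when V < E, add dV = (E - V)^2 / (alpha + E - V), which raises energy basins
# and effectively lowers barriers, accelerating dihedral transitions.
def amd_boost(v, e_threshold, alpha):
    v = np.asarray(v, dtype=float)
    dv = np.where(v < e_threshold,
                  (e_threshold - v) ** 2 / (alpha + e_threshold - v),
                  0.0)
    return v + dv

v = np.linspace(-120.0, -60.0, 7)   # illustrative dihedral energies [kcal/mol]
print(amd_boost(v, e_threshold=-80.0, alpha=10.0))
```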

  18. An approach for sample size determination of average bioequivalence based on interval estimation.

    PubMed

    Chiang, Chieh; Hsiao, Chin-Fu

    2017-03-30

    In 1992, the US Food and Drug Administration declared that two drugs demonstrate average bioequivalence (ABE) if the log-transformed mean difference of pharmacokinetic responses lies in (-0.223, 0.223). The most widely used approach for assessing ABE is the two one-sided tests procedure. More specifically, ABE is concluded when a 100(1 - 2α)% confidence interval for the mean difference falls within (-0.223, 0.223). As is well known, bioequivalence studies are usually conducted with a crossover design. However, in the case that the half-life of a drug is long, a parallel design for the bioequivalence study may be preferred. In this study, a two-sided interval estimation - such as Satterthwaite's, Cochran-Cox's, or Howe's approximation - is used for assessing parallel ABE. We show that the asymptotic joint distribution of the lower and upper confidence limits is bivariate normal, and thus the sample size can be calculated based on the asymptotic power so that the confidence interval falls within (-0.223, 0.223). Simulation studies also show that the proposed method achieves sufficient empirical power. A real example is provided to illustrate the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
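
    A minimal sketch of the interval-inclusion rule described above for a parallel design: compute a two-sided 90% confidence interval for the difference of log-transformed means using the Welch/Satterthwaite approximation and conclude ABE if it lies inside (-0.223, 0.223). The sample sizes and data below are hypothetical, and the paper's sample-size formula based on the asymptotic bivariate normal limits is not reproduced here.

```python
import numpy as np
from scipy import stats

# Declare ABE for a parallel design when a two-sided 90% confidence interval
# for the difference of log-transformed means lies entirely in (-0.223, 0.223).
def abe_ci(log_test, log_ref, alpha=0.10):
    n1, n2 = len(log_test), len(log_ref)
    m1, m2 = np.mean(log_test), np.mean(log_ref)
    v1, v2 = np.var(log_test, ddof=1), np.var(log_ref, ddof=1)
    se = np.sqrt(v1 / n1 + v2 / n2)
    # Welch-Satterthwaite degrees of freedom for unequal variances
    df = (v1 / n1 + v2 / n2) ** 2 / ((v1 / n1) ** 2 / (n1 - 1)
                                     + (v2 / n2) ** 2 / (n2 - 1))
    t = stats.t.ppf(1 - alpha / 2, df)
    diff = m1 - m2
    return diff - t * se, diff + t * se

rng = np.random.default_rng(3)
test = rng.normal(0.05, 0.25, 24)   # hypothetical log-PK responses, test drug
ref = rng.normal(0.00, 0.25, 24)    # hypothetical log-PK responses, reference
lo, hi = abe_ci(test, ref)
print("ABE concluded:", -0.223 < lo and hi < 0.223)
```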

  19. General trends of dihedral conformational transitions in a globular protein

    DOE PAGES

    Miao, Yinglong; Baudry, Jerome; Smith, Jeremy C.; ...

    2016-02-15

    In this paper, dihedral conformational transitions are analyzed systematically in a model globular protein, cytochrome P450cam, to examine their structural and chemical dependences through combined conventional molecular dynamics (cMD), accelerated molecular dynamics (aMD) and adaptive biasing force (ABF) simulations. The aMD simulations are performed at two acceleration levels, using dihedral and dual boost, respectively. In comparison with cMD, aMD samples protein dihedral transitions approximately two times faster on average using dihedral boost, and ~3.5 times faster using dual boost. In the protein backbone, significantly higher dihedral transition rates are observed in the bend, coil, and turn flexible regions, followed by the β bridge and β sheet, and then the helices. Moreover, protein side chains of greater length exhibit higher transition rates on average in the aMD-enhanced sampling. Side chains of the same length (particularly Nχ = 2) exhibit decreasing transition rates with residues when going from hydrophobic to polar, then charged and aromatic chemical types. The reduction of dihedral transition rates is found to be correlated with increasing energy barriers as identified through ABF free energy calculations. In conclusion, these general trends of dihedral conformational transitions provide important insights into the hierarchical dynamics and complex free energy landscapes of functional proteins.

  20. Typical and Unusual Properties of Magnetic Clouds during the WIND Era

    NASA Technical Reports Server (NTRS)

    Lepping, R. P.; Berdichevsky, D.; Szabo, A.; Burlaga, L. F.; Thompson, B. J.; Mariani, F.; Lazarus, A. J.; Steinberg, J. T.

    1999-01-01

    A list of 33 magnetic clouds as identified in WIND magnetic field and plasma data has been compiled. The intervals for these events are provided as part of NASA/GSFC, WIND-MFI's Website under the URL http://lepmfi.qsfc.nasa.gov/mfi/mag_cloud publ.html#table The period covered in this study is from early 1995 to November 1998, which primarily occurs in the quiet part of the solar cycle. A force-free, cylindrically symmetric, magnetic field model has been applied to the field data in 1-hour averaged form for all of these events (except one small event where 10 min averages were used) and the resulting fit parameters examined. Each event was assigned a semi-quantitatively determined quality factor (excellent, good, or poor). A set of 28 good or better cases, spanning a surprisingly large range of values for its various properties, was used for further analysis. These properties are, for example, durations, attitudes, sizes, asymmetries, axial field strengths, speeds, and relative impact parameters. They will be displayed and analyzed, along with some related derived quantities, with emphasis on typical vs unusual properties and on the magnetic clouds' relationships to the Sun and to upstream interplanetary shocks, where possible. For example, it is remarkable how narrowly distributed the speeds of these clouds are, and the overall average speed (390 km/s) is less than that normally quoted for the average solar wind speed (420 km/s), despite the fact that many of these clouds are "drivers" of interplanetary shocks. On average, a cloud appears to be a little less symmetric when the spacecraft is able to pass close to the cloud's axis as compared to a farther out passage. The average longitude and latitude (in GSE) of the axes of the clouds are 85 degrees and 8 degrees, respectively, with standard deviations near 40 degrees. Also, the half-yearly averaged axial magnetic flux has approximately tripled, almost monotonically, from about 6 to 17 X 10(exp 29) Mx over the first 3.5 years of consideration, but with a large uncertainty on each of the half-year estimates because of small sampling. If true, this finding implies an approximate tripling of the events' solar fluxes over this period as the cycle goes into solar maximum.

  1. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craft, David, E-mail: dcraft@partners.org; Papp, Dávid; Unkelbach, Jan

    2014-02-15

    Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step.
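
    A conceptual sketch of the leaf-trajectory averaging idea, under the assumption that each trajectory is stored as leaf position versus fractional monitor units (MU); the function and variable names are invented for illustration and this is not the authors' implementation.

```python
import numpy as np

# For sliding-window delivery, a leaf trajectory can be represented as leaf
# position versus fractional MU.  Averaging the positions of matched leaves at
# common MU fractions yields a deliverable plan whose fluence map is (for
# static-beam IMRT) the average of the input fluence maps.
def average_trajectories(trajectories, weights, mu_grid):
    """trajectories: list of (mu_fraction, position) arrays, one per plan;
    weights: navigation weights summing to 1; mu_grid: common MU fractions."""
    resampled = [np.interp(mu_grid, mu, pos) for mu, pos in trajectories]
    return np.tensordot(weights, np.array(resampled), axes=1)

# Toy example: one leaf of two plans, averaged with weights 0.3 / 0.7.
mu_grid = np.linspace(0.0, 1.0, 101)
plan_a = (np.array([0.0, 1.0]), np.array([0.0, 10.0]))   # position [cm] vs MU fraction
plan_b = (np.array([0.0, 1.0]), np.array([2.0, 12.0]))
avg_leaf = average_trajectories([plan_a, plan_b], [0.3, 0.7], mu_grid)
print(avg_leaf[[0, 50, 100]])
```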

  2. Airborne exposures associated with the typical use of an aerosol brake cleaner during vehicle repair work.

    PubMed

    Fries, Michael; Williams, Pamela R D; Ovesen, Jerald; Maier, Andrew

    2018-04-19

    Many petroleum-based products are used for degreasing and cleaning purposes during vehicle maintenance and repairs. Although prior studies have evaluated chemical exposures associated with this type of work, most of these have focused on gasoline and exhaust emissions, with few samples collected solely during the use of an aerosol cleaning product. In this case study, we assess the type of airborne exposures that would be expected from the typical use of an aerosol brake cleaner during vehicle repair work. Eight exposure scenarios were evaluated over a two-day study in which the benzene content of the brake cleaner and potential for dilution ventilation and air flow varied. Both short-term (15 min) and task-based (≥1 hr) charcoal tube samples were collected in the breathing zone and adjacent work area and analyzed for total hydrocarbons (THCs), toluene, and benzene. The majority of personal (N = 48) and area (N = 47) samples had detectable levels of THC and toluene, but no detections of benzene were found. For the personal short-term samples, average airborne concentrations ranged from 3.1 - 61.5 ppm (13.8-217.5 mg/m3) for THC and 2.2 - 44.0 ppm (8.2-162.5 mg/m3) for toluene, depending on the scenario. Compared to the personal short-term samples, average concentrations were generally 2 to 3 times lower for the personal task-based samples and 2 to 5 times lower for the area short-term samples. The highest exposures occurred when the garage bay doors were closed, floor fan was turned off, or greatest amount of brake cleaner was used. These findings add to the limited dataset on this topic and can be used to bound or approximate worker or consumer exposures from use of aerosol cleaning products with similar compositions and use patterns.

  3. Seasonal pattern of anthropogenic salinization in temperate forested headwater streams.

    PubMed

    Timpano, Anthony J; Zipper, Carl E; Soucek, David J; Schoenholtz, Stephen H

    2018-04-15

    Salinization of freshwaters by human activities is of growing concern globally. Consequences of salt pollution include adverse effects to aquatic biodiversity, ecosystem function, human health, and ecosystem services. In headwater streams of the temperate forests of eastern USA, elevated specific conductance (SC), a surrogate measurement for the major dissolved ions composing salinity, has been linked to decreased diversity of aquatic insects. However, such linkages have typically been based on limited numbers of SC measurements that do not quantify intra-annual variation. Effective management of salinization requires tools to accurately monitor and predict salinity while accounting for temporal variability. Toward that end, high-frequency SC data were collected within the central Appalachian coalfield over 4 years at 25 forested headwater streams spanning a gradient of salinity. A sinusoidal periodic function was used to model the annual cycle of SC, averaged across years and streams. The resultant model revealed that, on average, salinity deviated approximately ±20% from annual mean levels across all years and streams, with minimum SC occurring in late winter and peak SC occurring in late summer. The pattern was evident in headwater streams influenced by surface coal mining, unmined headwater reference streams with low salinity, and larger-order salinized rivers draining the study area. The pattern was strongly responsive to varying seasonal dilution as driven by catchment evapotranspiration, an effect that was amplified slightly in unmined catchments with greater relative forest cover. Evaluation of alternative sampling intervals indicated that discrete sampling can approximate the model performance afforded by high-frequency data but model error increases rapidly as discrete sampling intervals exceed 30 days. This study demonstrates that intra-annual variation of salinity in temperate forested headwater streams of Appalachia USA follows a natural seasonal pattern, driven by interactive influences on water quantity and quality of climate, geology, and terrestrial vegetation. Because climatic and vegetation dynamics vary annually in a seasonal, cyclic manner, a periodic function can be used to fit a sinusoidal model to the salinity pattern. The model framework used here is broadly applicable in systems with streamflow-dependent chronic salinity stress. Copyright © 2018 Elsevier Ltd. All rights reserved.
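
    The sketch below fits a sinusoidal periodic function of day-of-year to synthetic specific-conductance (SC) data, so that the fitted amplitude plays the role of the roughly ±20% deviation reported above; the functional form, parameter values, and noise level are assumptions for illustration rather than the study's model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a sinusoidal annual cycle to SC expressed relative to its annual mean:
# SC(doy) = mean_sc * (1 + amplitude * sin(2*pi*doy/365.25 + phase)).
def annual_cycle(doy, amplitude, phase, mean_sc):
    return mean_sc * (1.0 + amplitude * np.sin(2.0 * np.pi * doy / 365.25 + phase))

rng = np.random.default_rng(4)
doy = np.arange(0, 4 * 365, 15.0) % 365.25            # ~15-day sampling over 4 years
sc = 500.0 * (1.0 + 0.2 * np.sin(2.0 * np.pi * doy / 365.25 - 1.2))
sc += rng.normal(0.0, 20.0, sc.size)                  # synthetic measurement noise

params, _ = curve_fit(annual_cycle, doy, sc, p0=[0.1, 0.0, np.mean(sc)])
amplitude, phase, mean_sc = params
print(f"relative amplitude ~ {abs(amplitude):.2f}, annual mean ~ {mean_sc:.0f} uS/cm")
```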

  4. Atmospheric CO2 From Flask Air Samples at 10 Sites in the Scripps Institution of Oceanography (SIO) Air Sampling Network (1957 - 2001) (issued 2004)

    DOE Data Explorer

    Keeling, Charles D. [Univ. of California, San Diego, CA (United States). Scripps Inst. of Oceanography; Whorf, Timothy P. [Univ. of California, San Diego, CA (United States). Scripps Inst. of Oceanography; Blasing, T. J. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Jones, Sonja [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA)

    2004-09-01

    The Carbon Dioxide Research Group, Scripps Institution of Oceanography, University of California, San Diego, has provided this data set, which includes long-term measurements of near-surface atmospheric CO2 concentrations at 10 locations spanning latitudes 82°N to 90°S. Most of the data are based on replicated (collected at the same time and place) flask samples taken at intervals of approximately one week to one month and subsequently subjected to infrared analysis. Periods of record begin in various years, ranging from 1957 (for the South Pole station) to 1985 (for Alert, Canada), and all flask data records except for Christmas Island and Baring Head, New Zealand extend through year 2001. Christmas Island data end with August, 2001 and Baring Head data end with October 2001. Weekly averages of continuous data from Mauna Loa Observatory, Hawaii, are available back to March 1958. Similar weekly averages are also available for La Jolla, California, from November 1972 to October 1975, and for the South Pole from June 1960 to October 1963. These long-term records of atmospheric CO2 concentration complement the continuous records made by SIO, and also complement the long term flask records of the Climate Monitoring and Diagnostics Laboratory of the National Oceanic and Atmospheric Administration. All these data are useful for characterizing seasonal and geographical variations in atmospheric CO2 over several years, and for assessing results of global carbon models.

  5. Implementation of Pilot Protection System for Large Scale Distribution System like The Future Renewable Electric Energy Distribution Management Project

    NASA Astrophysics Data System (ADS)

    Iigaya, Kiyohito

    A robust, fast, and accurate protection system based on the pilot protection concept was developed previously; a few alterations were made to that algorithm to make it faster and more reliable, and it was then applied to smart distribution grids to verify the results. The new 10-sample window method was adapted into the pilot protection program and its performance for the test bed system operation was tabulated. Following that, the hardware results for the same algorithm were compared with the simulation results. The development of the dual-slope percentage differential method, its comparison with the 10-sample average window pilot protection system, and the effects of CT saturation on the pilot protection system are also presented in this thesis. The 10-sample average window pilot protection system was implemented on multiple distribution grids such as Green Hub v4.3, IEEE 34, the LSSS loop, and the modified LSSS loop. Case studies of these multi-terminal models are presented, and the results are also shown in this thesis. The results obtained show that the new algorithm for the previously proposed protection system successfully identifies faults on the test bed, that the hardware and software simulation results match, and that the response time is less than approximately a quarter of a cycle, which is fast compared with present commercial protection systems and satisfies the FREEDM system requirement.
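
    The sketch below is only a loose illustration of a windowed differential (pilot) check: it averages the absolute Kirchhoff current sum over a 10-sample window and compares it with a threshold. The sampling rate, threshold, and overall structure are assumptions and do not reproduce the thesis algorithm.

```python
import numpy as np

# Simplified windowed differential check, assuming each terminal streams
# synchronized current samples and that the currents into a healthy zone sum
# to approximately zero.
def windowed_differential(currents, window=10, threshold=50.0):
    """currents: (n_terminals, n_samples) array of instantaneous currents [A]."""
    residual = currents.sum(axis=0)                    # Kirchhoff sum per sample
    kernel = np.ones(window) / window                  # 10-sample averaging window
    averaged = np.convolve(np.abs(residual), kernel, mode="valid")
    return averaged > threshold                        # True -> trip decision

rng = np.random.default_rng(5)
t = np.arange(0, 0.1, 1.0 / 3840)                      # 64 samples/cycle at 60 Hz
healthy = np.vstack([100 * np.sin(2 * np.pi * 60 * t),
                     -100 * np.sin(2 * np.pi * 60 * t)])
healthy += rng.normal(0, 1.0, healthy.shape)           # measurement noise
print("trip anywhere (healthy):", windowed_differential(healthy).any())
```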

  6. A comparison of methods for deriving solute flux rates using long-term data from streams in the mirror lake watershed

    USGS Publications Warehouse

    Bukaveckas, P.A.; Likens, G.E.; Winter, T.C.; Buso, D.C.

    1998-01-01

    Calculation of chemical flux rates for streams requires integration of continuous measurements of discharge with discrete measurements of solute concentrations. We compared two commonly used methods for interpolating chemistry data (time-averaging and flow-weighting) to determine whether discrepancies between the two methods were large relative to other sources of error in estimating flux rates. Flux rates of dissolved Si and SO4(2-) were calculated from 10 years of data (1981-1990) for the NW inlet and outlet of Mirror Lake and for a 40-day period (March 22 to April 30, 1993) during which we augmented our routine (weekly) chemical monitoring with collection of daily samples. The time-averaging method yielded higher estimates of solute flux during high-flow periods if no chemistry samples were collected corresponding to peak discharge. Concentration-discharge relationships should be used to interpolate stream chemistry during changing flow conditions if chemical changes are large. Caution should be used in choosing the appropriate time-scale over which data are pooled to derive the concentration-discharge regressions because the model parameters (slope and intercept) were found to be sensitive to seasonal and inter-annual variation. Both methods approximated solute flux to within 2-10% for a range of solutes that were monitored during the intensive sampling period. Our results suggest that errors arising from interpolation of stream chemistry data are small compared with other sources of error in developing watershed mass balances.
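
    The sketch below contrasts the two interpolation strategies discussed above on synthetic data: linear interpolation of weekly concentrations in time, versus a concentration-discharge regression used to predict daily concentrations. Variable names, units, and the regression form are illustrative assumptions, not the Mirror Lake procedures.

```python
import numpy as np

# Daily discharge Q is treated as continuous; concentration C is measured
# weekly and must be interpolated before computing the flux sum(Q*C).
def time_averaged_flux(q_daily, c_weekly, sample_days):
    # linear interpolation of concentration in time
    c_daily = np.interp(np.arange(q_daily.size), sample_days, c_weekly)
    return np.sum(q_daily * c_daily)

def flow_weighted_flux(q_daily, c_weekly, sample_days):
    # concentration-discharge regression C = a + b*log(Q), fit on sample days
    b, a = np.polyfit(np.log(q_daily[sample_days]), c_weekly, 1)
    return np.sum(q_daily * (a + b * np.log(q_daily)))

rng = np.random.default_rng(6)
q = 1.0e6 * np.exp(rng.normal(0.0, 0.5, 365))          # synthetic daily discharge [L/day]
sample_days = np.arange(0, 365, 7)                     # weekly sampling dates
c = 2.0 - 0.1 * np.log(q[sample_days] / 1.0e6) + rng.normal(0, 0.05, sample_days.size)

print("time-averaged flux :", time_averaged_flux(q, c, sample_days))
print("flow-weighted flux :", flow_weighted_flux(q, c, sample_days))
```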

  7. Effects of air temperature and velocity on the drying kinetics and product particle size of starch from arrowroot (Maranta arundinacae)

    NASA Astrophysics Data System (ADS)

    Caparanga, Alvin R.; Reyes, Rachael Anne L.; Rivas, Reiner L.; De Vera, Flordeliza C.; Retnasamy, Vithyacharan; Aris, Hasnizah

    2017-11-01

    This study utilized a 3^k factorial design with k = 2 varying factors, namely temperature and air velocity. The effects of temperature and air velocity on the drying rate curves and on the average particle diameter of the arrowroot starch were investigated. Extracted arrowroot starch samples were dried under the designed parameters until constant weight was obtained. The initial moisture content of the arrowroot starch was 49.4%. Higher temperatures correspond to higher drying rates and faster drying times, while the effects of air velocity were approximately negligible. Drying rate is a function of temperature and time. A constant-rate period was not observed in the drying of arrowroot starch. The drying curves were fitted against five mathematical models: Lewis, Page, Henderson and Pabis, Logarithmic, and Midilli. The Midilli model was the best fit for the experimental data since it yielded the highest R2 and the lowest RMSE values for all runs. Scanning electron microscopy (SEM) was used for qualitative analysis and for determination of the average particle diameter of the starch granules. The average particle diameters of the starch granules ranged from 12.06 to 24.60 μm. ANOVA showed that the particle diameters for each run varied significantly from each other, and the Taguchi design showed that high temperatures yield a lower average particle diameter, while high air velocities yield a higher average particle diameter.
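
    A short sketch of fitting the Midilli thin-layer model, MR = a*exp(-k*t^n) + b*t, and scoring it by RMSE and R2 as described above; the drying times and moisture-ratio data are synthetic placeholders rather than the measured arrowroot curves.

```python
import numpy as np
from scipy.optimize import curve_fit

# Midilli thin-layer drying model: moisture ratio MR = a*exp(-k*t^n) + b*t.
def midilli(t, a, k, n, b):
    return a * np.exp(-k * t**n) + b * t

t = np.linspace(0.0, 300.0, 31)                        # drying time [min] (assumed)
rng = np.random.default_rng(7)
mr = midilli(t, 1.0, 0.01, 1.2, -1.0e-4) + rng.normal(0, 0.01, t.size)

popt, _ = curve_fit(midilli, t, mr, p0=[1.0, 0.01, 1.0, 0.0], maxfev=10000)
residuals = mr - midilli(t, *popt)
rmse = np.sqrt(np.mean(residuals**2))
r2 = 1.0 - np.sum(residuals**2) / np.sum((mr - mr.mean())**2)
print(f"RMSE = {rmse:.4f}, R^2 = {r2:.4f}")            # goodness-of-fit criteria
```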

  8. A CANDELS WFC3 Grism Study of Emission-Line Galaxies at Z approximates 2: A mix of Nuclear Activity and Low-Metallicity Star Formation

    NASA Technical Reports Server (NTRS)

    Trump, Jonathan R.; Weiner, Benjamin J.; Scarlata, Claudia; Kocevski, Dale D.; Bell, Eric F.; McGrath, Elizabeth J.; Koo, David C.; Faber, S. M.; Laird, Elise S.; Mozena, Mark

    2011-01-01

    We present Hubble Space Telescope Wide Field Camera 3 slitless grism spectroscopy of 28 emission-line galaxies at z approximates 2, in the GOODS-S region of the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS). The high sensitivity of these grism observations, with > 5-sigma detections of emission lines to f > 2.5 X 10(exp -18) erg/s/square cm, means that the galaxies in the sample are typically approximately 7 times less massive (median M(star) = 10(exp 9.5) M(solar)) than previously studied z approximates 2 emission-line galaxies. Despite their lower mass, the galaxies have [O-III]/H-Beta ratios which are very similar to previously studied z approximates 2 galaxies and much higher than the typical emission-line ratios of local galaxies. The WFC3 grism allows for unique studies of spatial gradients in emission lines, and we stack the two-dimensional spectra of the galaxies for this purpose. In the stacked data the [O-III] emission line is more spatially concentrated than the H-Beta emission line with 98.1% confidence. We additionally stack the X-ray data (all sources are individually undetected), and find that the average L(sub [O-III])/L(sub 0.5-10 keV) ratio is intermediate between typical z approximates 0 obscured active galaxies and star-forming galaxies. Together the compactness of the stacked [O-III] spatial profile and the stacked X-ray data suggest that at least some of these low-mass, low-metallicity galaxies harbor weak active galactic nuclei.

  9. Radon daughter plate-out measurements at SNOLAB for polyethylene and copper

    NASA Astrophysics Data System (ADS)

    Stein, Matthew; Bauer, Dan; Bunker, Ray; Calkins, Rob; Cooley, Jodi; Loer, Ben; Scorza, Silvia

    2018-02-01

    Polyethylene and copper samples were exposed to the underground air at SNOLAB for approximately three months while several environmental factors were monitored. Predictions of the radon-daughter plate-out rate are compared to the resulting surface activities, obtained from high-sensitivity measurements of alpha emissivity using the XIA UltraLo-1800 spectrometer at Southern Methodist University. From these measurements, we determine an average 210Pb plate-out rate of 249 and 423 atoms/day/cm2 for polyethylene and copper, respectively, when exposed to radon activity concentration of 135 Bq/m3 at SNOLAB. A time-dependent model of alpha activity is discussed for these materials placed in similar environmental conditions.

  10. Restorative maintenance retesting of 1977 model year passenger cars in Denver. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, G.T.

    1978-11-01

    The report describes the results of an exhaust emission testing program in which 24 relatively new vehicles sampled in a Restorative Maintenance Program in Denver were retested approximately one year later. Many vehicles had experienced maladjustments and disablements even though the owner reported that he felt his vehicle had been maintained according to the manufacturer's recommendations. Reductions in average emission levels followed the correction of the maladjustment and disablement actions to a point close to those after the prior tune-up. Modest fuel economy improvements were noted this year, probably due to the fact that the vehicles had overcome the 'green engine' effect.

  11. The shape of ion tracks in natural apatite

    NASA Astrophysics Data System (ADS)

    Schauries, D.; Afra, B.; Bierschenk, T.; Lang, M.; Rodriguez, M. D.; Trautmann, C.; Li, W.; Ewing, R. C.; Kluth, P.

    2014-05-01

    Small angle X-ray scattering measurements were performed on natural apatite of different thickness irradiated with 2.2 GeV Au swift heavy ions. The evolution of the track radius along the full ion track length was estimated by considering the electronic energy loss and the velocity of the ions. The shape of the track is nearly cylindrical, widening slightly to a maximum diameter approximately 30 μm before the ions come to rest, followed by a rapid narrowing towards the end within a cigar-like contour. Measurements of average ion track radii in samples of different thicknesses, i.e., containing different sections of the tracks, are in good agreement with the estimated shape.

  12. A compound reconstructed prediction model for nonstationary climate processes

    NASA Astrophysics Data System (ADS)

    Wang, Geli; Yang, Peicai

    2005-07-01

    Based on the idea of climate hierarchy and the theory of state space reconstruction, a local approximation prediction model with a compound structure is built for predicting nonstationary climate processes. By means of this model and data sets consisting of north Indian Ocean sea-surface temperature, the Asian zonal circulation index, and monthly mean precipitation anomalies from 37 observation stations in the Inner Mongolia area of China (IMC), a regional prediction experiment for the winter precipitation of IMC is also carried out. Using the same-sign ratio R between the prediction field and the actual field to measure prediction accuracy, an average R of 63% is obtained over 10 prediction samples.
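
    A tiny sketch of the same-sign ratio R used above as the accuracy measure: the fraction of stations at which the predicted and observed anomalies share the same sign, averaged over prediction cases. The station count matches the text, but the data are synthetic placeholders.

```python
import numpy as np

# Same-sign ratio: fraction of stations where predicted and observed anomalies
# agree in sign, averaged over prediction cases.
def same_sign_ratio(predicted, observed):
    return np.mean(np.sign(predicted) == np.sign(observed))

rng = np.random.default_rng(8)
scores = []
for _ in range(10):                          # 10 prediction samples
    obs = rng.normal(size=37)                # anomaly field at 37 stations
    pred = 0.5 * obs + rng.normal(scale=0.8, size=37)
    scores.append(same_sign_ratio(pred, obs))
print("average R =", np.mean(scores))
```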

  13. Decision-aided ICI mitigation with time-domain average approximation in CO-OFDM

    NASA Astrophysics Data System (ADS)

    Ren, Hongliang; Cai, Jiaxing; Ye, Xin; Lu, Jin; Cao, Quanjun; Guo, Shuqin; Xue, Lin-lin; Qin, Yali; Hu, Weisheng

    2015-07-01

    We introduce and investigate the feasibility of a novel iterative blind phase noise inter-carrier interference (ICI) mitigation scheme for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The ICI mitigation scheme combines frequency-domain symbol decision-aided estimation with a time-average approximation of the ICI phase noise. An additional initial decision process with a suitable threshold is introduced in order to suppress decision error symbols. Our proposed ICI mitigation scheme proves effective in removing the ICI for a simulated CO-OFDM system with 16-QAM modulation format. At slightly higher computational complexity, it outperforms the time-domain average blind ICI (Avg-BL-ICI) algorithm at relatively wide laser linewidths and high OSNR.

  14. Analyzing indicator microorganisms, antibiotic resistant Escherichia coli, and regrowth potential of foodborne pathogens in various organic fertilizers.

    PubMed

    Miller, Cortney; Heringa, Spencer; Kim, Jinkyung; Jiang, Xiuping

    2013-06-01

    This study analyzed various organic fertilizers for indicator microorganisms, pathogens, and antibiotic-resistant Escherichia coli, and evaluated the growth potential of E. coli O157:H7 and Salmonella in fertilizers. A microbiological survey was conducted on 103 organic fertilizers from across the United States. Moisture content ranged from approximately 1% to 86.4%, and the average pH was 7.77. The total aerobic mesophiles ranged from approximately 3 to 9 log colony-forming units (CFU)/g. Enterobacteriaceae populations were in the range of <1 to approximately 7 log CFU/g, while coliform levels varied from <1 to approximately 6 log CFU/g. Thirty samples (29%) were positive for E. coli, with levels reaching approximately 6 log CFU/g. There were no confirmed positives for E. coli O157:H7, Salmonella, or Listeria monocytogenes. The majority of E. coli isolates (n=73), confirmed by glutamate decarboxylase (gad) PCR, were from group B1 (48%) and group A (32%). Resistance to 16 antibiotics was examined for 73 E. coli isolates, with 11 isolates having resistance to at least one antibiotic, 5 isolates to ≥ 2 antibiotics, and 2 isolates to ≥ 10 antibiotics. In the presence of high levels of background aerobic mesophiles, Salmonella and E. coli O157:H7 grew approximately 1 log CFU/g within 1 day of incubation in plant-based compost and fish emulsion-based compost, respectively. With low levels of background aerobic mesophiles, Salmonella grew approximately 2.6, 3.0, 3.0, and 3.2 log CFU/g in blood, bone, and feather meals and the mixed-source fertilizer, respectively, whereas E. coli O157:H7 grew approximately 4.6, 4.0, 4.0, and 4.8 log CFU/g, respectively. Our results revealed that the microbiological quality of organic fertilizers varies greatly, with some fertilizers containing antibiotic resistant E. coli and a few supporting the growth of foodborne pathogens after reintroduction into the fertilizer.

  15. A rat osteogenic cell line (UMR 106-01) synthesizes a highly sulfated form of bone sialoprotein.

    PubMed

    Midura, R J; McQuillan, D J; Benham, K J; Fisher, L W; Hascall, V C

    1990-03-25

    The rat osteosarcoma cell line (UMR 106-01) synthesizes and secretes relatively large amounts of a sulfated glycoprotein into its culture medium (approximately 240 ng/10(6) cells/day). This glycoprotein was purified, and amino-terminal sequence analysis identified it as bone sialoprotein (BSP). [35S]Sulfate, [3H]glucosamine, and [3H]tyrosine were used as metabolic precursors to label the BSP. Sulfate esters were found on N- and O-linked oligosaccharides and on tyrosine residues, with about half of the total tyrosines in the BSP being sulfated. The proportion of 35S activity in tyrosine-O-sulfate (approximately 70%) was greater than that in N-linked (approximately 20%) and O-linked (approximately 10%) oligosaccharides. From the deduced amino acid sequence for rat BSP (Oldberg, A., Franzén, A., and Heinegård, D. (1988) J. Biol. Chem. 263, 19430-19432), the results indicate that on average approximately 12 tyrosine residues, approximately 3 N-linked, and approximately 2 O-linked oligosaccharides are sulfated/molecule. The carboxyl-terminal quarter of the BSP probably contains most, if not all, of the sulfated tyrosine residues because this region of the polypeptide contains the necessary requirements for tyrosine sulfation. Oligosaccharide analyses indicated that for every N-linked oligosaccharide on the BSP, there are also approximately 2 hexa-, approximately 5 tetra-, and approximately 2 trisaccharides O-linked to serine and threonine residues. On average, the BSP synthesized by UMR 106-01 cells would contain a total of approximately 3 N-linked and approximately 25 of the above O-linked oligosaccharides. This large number of oligosaccharides is in agreement with the known carbohydrate content (approximately 50%) of the BSP.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, J.A.; Brasseur, G.P.; Zimmerman, P.R.

    Using the hydroxyl radical field calibrated to the methyl chloroform observations, the globally averaged release of methane and its spatial and temporal distribution were investigated. Two source function models of the spatial and temporal distribution of the flux of methane to the atmosphere were developed. The first model was based on the assumption that methane is emitted as a proportion of net primary productivity (NPP). With the average hydroxyl radical concentration fixed, the methane source term was computed as {approximately}623 Tg CH{sub 4}, giving an atmospheric lifetime for methane of {approximately}8.3 years. The second model identified source regions for methane from rice paddies, wetlands, enteric fermentation, termites, and biomass burning based on high-resolution land use data. This methane source distribution resulted in an estimate of the global total methane source of {approximately}611 Tg CH{sub 4}, giving an atmospheric lifetime for methane of {approximately}8.5 years. The most significant difference between the two models was in the predicted methane fluxes over China and South East Asia, the location of most of the world's rice paddies. Using a recent measurement of the reaction rate of the hydroxyl radical with methane leads to estimates of the global total methane source for SF1 of {approximately}524 Tg CH{sub 4}, giving an atmospheric lifetime of {approximately}10.0 years, and for SF2 of {approximately}514 Tg CH{sub 4}, yielding a lifetime of {approximately}10.2 years.
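
    A back-of-the-envelope sketch of the lifetime bookkeeping implied above: at steady state the atmospheric lifetime equals the burden divided by the total source (which balances the total sink). The burden value used here is an assumption chosen for illustration, not a number reported by the study.

```python
# Steady-state lifetime bookkeeping: lifetime = burden / source.
burden_tg = 5170.0          # assumed global CH4 burden [Tg] (illustrative value)
source_tg_per_yr = 623.0    # source estimate from the first source-function model
lifetime_yr = burden_tg / source_tg_per_yr
print(f"implied lifetime ~ {lifetime_yr:.1f} years")   # ~8.3 years
```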

  17. Assessing environmental DNA detection in controlled lentic systems.

    PubMed

    Moyer, Gregory R; Díaz-Ferguson, Edgardo; Hill, Jeffrey E; Shea, Colin

    2014-01-01

    Little consideration has been given to environmental DNA (eDNA) sampling strategies for rare species. The certainty of species detection relies on understanding false positive and false negative error rates. We used artificial ponds together with logistic regression models to assess the detection of African jewelfish eDNA at varying fish densities (0, 0.32, 1.75, and 5.25 fish/m3). Our objectives were to determine the most effective water stratum for eDNA detection, estimate true and false positive eDNA detection rates, and assess the number of water samples necessary to minimize the risk of false negatives. There were 28 eDNA detections in 324 1-L water samples collected from four experimental ponds. The best-approximating model indicated that eDNA detection in a 1-L sample was 4.86 times more likely for every 2.53 fish/m3 (1 SD) increase in fish density and 1.67 times less likely for every 1.02°C (1 SD) increase in water temperature. The best section of the water column in which to detect eDNA was the surface and, to a lesser extent, the bottom. Although no false positives were detected, the estimated likely number of false positives in samples from ponds that contained fish averaged 3.62. At high densities of African jewelfish, 3-5 L of water provided a >95% probability for the presence/absence of its eDNA. Conversely, at moderate and low densities, the number of water samples necessary to achieve a >95% probability of eDNA detection approximated 42-73 and >100 L, respectively. Potential biases associated with incomplete detection of eDNA could be alleviated via formal estimation of eDNA detection probabilities under an occupancy modeling framework; alternatively, the filtration of hundreds of liters of water may be required to achieve a high (e.g., 95%) level of certainty that African jewelfish eDNA will be detected at low densities (i.e., <0.32 fish/m3 or 1.75 g/m3).
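
    The reported sample numbers follow from a simple cumulative-probability argument, sketched below: if each 1-L sample detects eDNA independently with probability p, then n samples give a detection probability of 1 - (1 - p)^n. The per-sample probabilities below are assumptions chosen only to land in the reported ranges.

```python
import math

# Number of independent 1-L samples needed for a cumulative detection
# probability of at least `target`: n >= log(1 - target) / log(1 - p).
def samples_needed(p, target=0.95):
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))

for label, p in [("high density", 0.60),      # assumed per-sample probabilities
                 ("moderate density", 0.05),
                 ("low density", 0.02)]:
    print(f"{label}: ~{samples_needed(p)} one-litre samples")
```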

  18. On the fractal morphology of combustion-generated soot aggregates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koylu, U.O.

    1995-12-31

    The fractal properties of soot aggregates were investigated using ex-situ and in-situ experimental methods as well as computer simulations. Ex-situ experiments involved thermophoretic sampling and analysis by transmission electron microscopy (TEM), while in-situ measurements employed angular static light scattering and data inversion based on the Rayleigh-Debye-Gans (RDG) approximation. Computer simulations used a sequential algorithm which mimics mass fractal-like structures. Soot from a variety of hydrocarbon-fueled laminar and turbulent nonpremixed flame environments was considered in the present study. The TEM analysis of projected soot images sampled from fuel-rich conditions of buoyant and weakly-buoyant laminar flames indicated that the fractal dimension of soot was relatively independent of position in the flames, fuel type, and flame condition. These measurements yielded an average fractal dimension of 1.8, although other structure parameters such as the primary particle diameters and the number of primary particles in aggregates had a wide range of values. The fractal prefactor (lacunarity) was also measured for soot sampled from the fuel-lean conditions of turbulent flames, accounting for the actual morphology by tilting the samples during TEM analysis. These measurements yielded a fractal dimension of 1.65 and a lacunarity of 8.5, with experimental uncertainties (95% confidence) of 0.08 and 0.5, respectively. Relationships between the actual and projected structure properties of soot were also developed by combining TEM observations with numerical simulations. Practical approximate formulae were suggested to find the radius of gyration of an aggregate from its maximum dimension, and the number of primary particles in an aggregate from its projected area. Finally, the fractal dimension and lacunarity of soot were obtained using light scattering for the same conditions as the above TEM measurements.
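
    For context, the sketch below evaluates the standard mass-fractal relation N = k_f (2 R_g / d_p)^D_f with the fractal dimension and prefactor reported above; the primary-particle diameter and radii of gyration are assumed values for illustration, and this relation is a textbook form rather than the paper's specific projected-image formulae.

```python
import numpy as np

# Standard mass-fractal aggregate relation: N = k_f * (2*R_g / d_p)**D_f,
# giving the number of primary particles N from the radius of gyration R_g.
def n_primaries(r_g_nm, d_p_nm=30.0, k_f=8.5, d_f=1.65):
    # d_p (primary particle diameter) is an assumed typical value; k_f and
    # D_f are the prefactor and fractal dimension quoted in the abstract.
    return k_f * (2.0 * r_g_nm / d_p_nm) ** d_f

for r_g in (50.0, 100.0, 200.0):
    print(f"R_g = {r_g:5.1f} nm  ->  N ~ {n_primaries(r_g):6.1f}")
```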

  19. The fate of wastewater-derived NDMA precursors in the aquatic environment.

    PubMed

    Pehlivanoglu-Mantas, Elif; Sedlak, David L

    2006-03-01

    To assess the stability of precursors of the chloramine disinfection byproduct N-nitrosodimethylamine (NDMA) under conditions expected in effluent-dominated surface waters, effluent samples from four municipal wastewater treatment plants were subjected to chlorination and chloramination followed by incubation in the presence of inocula derived from activated sludge. Samples subjected to free chlorine disinfection showed lower initial concentrations of NDMA precursors than those that were not chlorinated or were disinfected with pre-formed chloramines. For chloraminated and control (unchlorinated) treatments, the concentration of NDMA precursors decreased by an average of 24% over the 30-day incubation in samples from three of the four facilities. At the fourth facility, where samples were collected on three different days, NDMA precursor concentrations decreased by approximately 80% in one sample and decreased by less than 20% in the other two samples. In contrast to the low reactivity of the NDMA precursors, NDMA disappeared within 30 days under the conditions employed in these experiments. These results and measurements made in an effluent-dominated river suggest that although NDMA may be removed after wastewater effluent is discharged, wastewater-derived NDMA precursors could persist long enough to form significant concentrations of NDMA in drinking water treatment plants that use water originating from sources that are subjected to wastewater effluent discharges.

  20. Effect of gamma-irradiation on thermal decomposition kinetics, X-ray diffraction pattern and spectral properties of tris(1,2-diaminoethane)nickel(II)sulphate

    NASA Astrophysics Data System (ADS)

    Jayashri, T. A.; Krishnan, G.; Rema Rani, N.

    2014-12-01

    Tris(1,2-diaminoethane)nickel(II)sulphate was prepared, and characterised by various chemical and spectral techniques. The sample was irradiated with 60Co gamma rays for varying doses. Sulphite ion and ammonia were detected and estimated in the irradiated samples. Non-isothermal decomposition kinetics, X-ray diffraction pattern, Fourier transform infrared spectroscopy, electronic, fast atom bombardment mass spectra, and surface morphology of the complex were studied before and after irradiation. Kinetic parameters were evaluated by integral, differential, and approximation methods. Irradiation enhanced thermal decomposition, lowering thermal and kinetic parameters. The mechanism of decomposition is controlled by R3 function. From X-ray diffraction studies, change in lattice parameters and subsequent changes in unit cell volume and average crystallite size were observed. Both unirradiated and irradiated samples of the complex belong to trigonal crystal system. Decrease in the intensity of the peaks was observed in the infrared spectra of irradiated samples. Electronic spectral studies revealed that the M-L interaction is unaffected by irradiation. Mass spectral studies showed that the fragmentation patterns of the unirradiated and irradiated samples are similar. The additional fragment with m/z 256 found in the irradiated sample is attributed to S8+. Surface morphology of the complex changed upon irradiation.

  1. Is routine pathological evaluation of tissue from gynecomastia necessary? A 15-year retrospective pathological and literature review

    PubMed Central

    Senger, Jenna-Lynn; Chandran, Geethan; Kanthan, Rani

    2014-01-01

    OBJECTIVE: To reconsider the routine plastic surgical practice of requesting histopathological evaluation of tissue from gynecomastia. METHOD: The present study was a retrospective histopathological review (15-year period [1996 to 2012]) involving gynecomastia tissue samples received at the pathology laboratory in the Saskatoon Health Region (Saskatchewan). The Laboratory Information System (LIS) identified all specimens using the key search words “gynecomastia”, “gynaecomastia”, “gynecomazia” and “gynaecomazia”. A literature review to identify all cases of incidentally discovered malignancies in gynecomastia tissue specimens over a 15-year period (1996 to present) was undertaken. RESULTS: The 15-year LIS search detected a total of 452 patients, including two cases of pseudogynecomastia (0.4%). Patients’ ages ranged from five to 92 years and 43% of the cases were bilateral (28% left sided, 29% right sided). The weight of the specimens received ranged from 0.2 g to 1147.2 g. All cases showed no significant histopathological concerns. The number of tissue blocks sampled ranged from one to 42, averaging four blocks/case (approximately $105/case), resulting in a cost of approximately $3,200/year and a 15-year expenditure of approximately $48,000. The literature review identified a total of 15 incidental findings: ductal carcinoma in situ (12 cases), atypical ductal hyperplasia (two cases) and infiltrating ductal carcinoma (one case). CONCLUSIONS: In the context of evidence-based literature, and because no significant pathological findings were detected in this particular cohort of 452 cases with 2178 slides, the authors believe it is time to re-evaluate whether routine histopathological examination of tissue from gynecomastia remains necessary. In the current climate of health care budget restraint, the policy and practice of sending gynecomastia tissue samples for routine histopathological examination, which incurs costs without clear benefit, warrant reassessment. PMID:25114624

  2. Polymerase chain reaction amplification of DNA from aged blood stains: quantitative evaluation of the "suitability for purpose" of four filter papers as archival media.

    PubMed

    Kline, Margaret C; Duewer, David L; Redman, Janette W; Butler, John M; Boyer, David A

    2002-04-15

    In collaboration with the Armed Forces Institute of Pathology's Department of Defense DNA Registry, the National Institute of Standards and Technology recently evaluated the performance of a short tandem repeat multiplex with dried whole blood stains on four different commercially available identification card matrixes. DNA from 70 stains that had been stored for 19 months at ambient temperature was extracted or directly amplified and then processed using routine methods. All four storage media provided fully typeable (qualitatively identical) samples. After standardization, the average among-locus fluorescence intensity (electropherographic peak height or area) provided a suitable metric for quantitative analysis of the relative amounts of amplifiable DNA in an archived sample. The amounts of DNA in Chelex extracts from stains on two untreated high-purity cotton linter pulp papers and a paper treated with a DNA-binding coating were essentially identical. Average intensities for the aqueous extracts from a paper treated with a DNA-releasing coating were somewhat lower but also somewhat less variable than for the Chelex extracts. Average intensities of directly amplified punches of the DNA-binding paper were much larger but somewhat more variable than the Chelex extracts. Approximately 25% of the observed variation among the intensity measurements is shared among the four media and thus can be attributed to intrinsic variation in white blood count among the donors. All of the evaluated media adequately "bank" forensically useful DNA in well-dried whole blood stains for at least 19 months at ambient temperature.

  3. Differences in liquor prices between control state-operated and license-state retail outlets in the United States.

    PubMed

    Siegel, Michael; DeJong, William; Albers, Alison B; Naimi, Timothy S; Jernigan, David H

    2013-02-01

    This study aims to compare the average price of liquor in the United States between retail alcohol outlets in states that have a monopoly ('control' states) and those that do not ('license' states). A cross-sectional study of brand-specific alcohol prices in the United States. We determined the average prices in February 2012 of 74 brands of liquor among the 13 control states that maintain a monopoly on liquor sales at the retail level and among a sample of 50 license-state liquor stores, using their online-available prices. We calculated average prices for 74 brands of liquor by control versus license state. We used a random-effects regression model to estimate differences between control and license state prices, overall and by alcoholic beverage type. We also compared prices between the 13 control states. The overall mean price for the 74 brands was $27.79 in the license states [95% confidence interval (CI): $25.26-30.32] and $29.82 in the control states (95% CI: $26.98-32.66). Based on the random-effects linear regression model, the average liquor price was approximately $2 lower (6.9% lower) in license states. In the United States, monopoly of alcohol retail outlets appears to be associated with slightly higher liquor prices. © 2012 The Authors, Addiction © 2012 Society for the Study of Addiction.
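
    The brand-level comparison described above lends itself to a mixed (random-effects) model. The sketch below is a minimal illustration of that analysis pattern, not the authors' code; the file name, column names, and data layout are assumptions.

```python
# Minimal sketch (not the authors' code): brand-level random-effects model
# comparing liquor prices in control vs. license states.  File and column
# names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: brand (74 brands), state_type ("control"/"license"), price_usd
prices = pd.read_csv("brand_prices.csv")

# A random intercept per brand absorbs brand-to-brand price differences, so the
# fixed effect of state_type estimates the average control-vs-license gap.
model = smf.mixedlm("price_usd ~ state_type", data=prices, groups=prices["brand"])
result = model.fit()
print(result.summary())  # the state_type coefficient is the price difference in USD
```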

  4. Differences in liquor prices between control state-operated and license-state retail outlets in the U.S.

    PubMed Central

    Siegel, Michael; DeJong, William; Albers, Alison B.; Naimi, Timothy S.; Jernigan, David H.

    2012-01-01

    Aims This study aims to compare the average price of liquor in the United States between retail alcohol outlets in states that have a monopoly ('control' states) and those that do not ('license' states). Design A cross-sectional study of brand-specific alcohol prices in the United States. Setting We determined the average prices in February 2012 of 74 brands of liquor among the 13 control states that maintain a monopoly on liquor sales at the retail level and among a sample of 50 license-state liquor stores, using their online-available prices. Measurements We calculated average prices for 74 brands of liquor by control vs. license state. We used a random effects regression model to estimate differences between control and license state prices – overall and by alcoholic beverage type. We also compared prices between the 13 control states. Findings The overall mean price for the 74 brands was $27.79 in the license states (95% confidence interval [CI], $25.26–$30.32) and $29.82 in the control states (95% CI, $26.98–$32.66). Based on the random effects linear regression model, the average liquor price was approximately two dollars lower (6.9% lower) in license states. Conclusions In the United States, monopoly of alcohol retail outlets appears to be associated with slightly higher liquor prices. PMID:22934914

  5. Sampling of illicit drugs for quantitative analysis--part II. Study of particle size and its influence on mass reduction.

    PubMed

    Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L

    2014-01-01

    The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation, using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I), was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study we can generally state that: an analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate an RSD(sampling) in the same region as the RSD(analysis) for a typical quantitative method of analysis for the most common, powdered, illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit drugs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
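
    To see why particle size drives the sampling error discussed above, a simple binomial particle-counting model can be used. The sketch below is purely illustrative and rests on stated assumptions (equal-sized spherical particles, an assumed powder density, drug present as a fixed fraction of the particles); it is not the working group's calculation.

```python
# Illustrative binomial model (an assumption, not the study's calculation):
# treat the analytical sample as N equal particles, a fraction p of which are
# drug particles, and estimate the relative sampling standard deviation.
import math

def sampling_rsd(sample_mg, particle_um, purity, density_g_cm3=1.3):
    d_cm = particle_um * 1e-4
    particle_mg = density_g_cm3 * math.pi * d_cm**3 / 6 * 1000.0  # mg per particle
    n_particles = sample_mg / particle_mg
    p = purity                              # fraction of particles assumed to be drug
    return math.sqrt((1.0 - p) / (n_particles * p))

for particle_um in (600, 300, 200):         # before vs. after comminution
    rsd = sampling_rsd(sample_mg=25, particle_um=particle_um, purity=0.05)
    print(f"{particle_um} um particles: sampling RSD ~ {100 * rsd:.0f}%")
```

    Under these assumptions, shrinking the largest particles from roughly 600 μm to 200-300 μm reduces the sampling RSD of a 25 mg, 5% purity sample from tens of percent to a few percent, which is the regime where it becomes comparable to a typical analytical RSD.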

  6. Evaluation of sodium hydroxide-N-acetyl-l-cysteine and 0.7% chlorhexidine decontamination methods for recovering Mycobacterium tuberculosis from sputum samples: A comparative analysis (The Gambia Experience).

    PubMed

    Gitteh, Ensa; Kweku Otu, Jacob; Jobarteh, Tijan; Mendy, Francis; Faal-Jawara, Isatou Tutty; Ofori-Anyinam, Nana Boatema; Ayorinde, Abigail; Secka, Ousman; Gehre, Florian

    2016-12-01

    To determine the culture yield and time to detection of mycobacterial growth between samples decontaminated using 0.7% chlorhexidine and sodium hydroxide-N-acetyl-l-cysteine (NaOH-NALC) and cultured on the Löwenstein-Jensen (LJ) medium. We also aimed to determine the contamination rate between the 0.7% chlorhexidine and NaOH-NALC decontamination methods. The study was carried out on 68 sputum samples (42 smear positives and 26 smear negatives). Of these 68 samples, 46 were collected from men and 26 from women with an approximate average age of 27 years. All the sputum samples were decontaminated using the standard NaOH-NALC and 0.7% chlorhexidine methods. The concentrates were cultured in parallel on LJ media, and the slopes were read for mycobacterial growth daily for the first 2 weeks and then weekly until week 8. The mycobacterial recovery rate, time to detection, and contamination rate were then compared. The overall recovery rate of mycobacterial growth on samples treated with both decontamination methods and inoculated on LJ media was 51.5% (35/68). Specifically, mycobacterial growth rates on samples treated with 0.7% chlorhexidine and standard NaOH-NALC on LJ media were 61.8% (42/68) and 54.4% (37/68), respectively. However, the growth of Mycobacterium tuberculosis complex was faster on samples treated with 0.7% chlorhexidine than on those treated with NaOH-NALC (average, 32 ± 5 days vs. 33 ± 5.2 days, respectively). The contamination rate on samples treated with 0.7% chlorhexidine was 1.5% (1/68), whereas on those treated with NaOH-NALC, the rate was 4.4% (3/68). The 0.7% chlorhexidine decontamination method is rapid and has a lower contamination rate, with at least comparable mycobacterial recovery, compared with the standard NaOH-NALC method. Therefore, the 0.7% chlorhexidine decontamination method would be an ideal alternative option for decontamination of sputum samples and recovery/isolation of M. tuberculosis in resource-poor countries. Copyright © 2016.

  7. In Search of a Dipole Field during the Plio-Pleistocene

    NASA Astrophysics Data System (ADS)

    Asefaw, H. F.; Tauxe, L.; Staudigel, H.; Shaar, R.; Cai, S.; Cromwell, G.; Behar, N.; Koppers, A. A. P.

    2017-12-01

    A geocentric axial dipole (GAD) field accounts for the majority of the modern field and is assumed to be a good first order approximation for the time averaged ancient field. A GAD field predicts a latitudinal dependence of intensity. Given this relationship, the intensity of the field measured at the North and South poles should be twice as strong as the intensity recorded at the equator. The current paleointensity database, archived at both http://earth.liv.ac.uk/pint/ and http://earthref.org/MagIC, shows no such dependency over the last 5 Myr (e.g. Lawrence et al., 2009, doi: 10.1029/2008GC002072; Cromwell et al., 2015, doi: 10.1002/2014JB011828). In order to investigate whether better experimental protocol or data selection approaches could resolve the problem, we: 1) applied a new data selection protocol (CCRIT) which has recovered historical field values with high precision and accuracy (Cromwell et al., 2015), 2) re-sampled the fine grained tops of lava flows in Antarctica (77.9° S) that were previously studied for paleodirections but failed to meet our strict selection criteria, 3) sampled cinder cones in the Golan Heights (33.08° N), and 4) acquired data from lava flows from the HSDP2 drill core in Hawaii (19.71° N). New and published Ar-Ar dates demonstrate that all the samples formed in the last 5 Myr. We conducted IZZI modified Thellier-Thellier experiments and then calculated paleointensities from the samples that passed a set of strict selection criteria. After applying the CCRIT criteria to our data, we find a time averaged paleointensity of 35.7 ± 6.86 μT in the Golan Heights, 34.5 μT in Hawaii, and 34.22 ± 3.4 μT in Antarctica. New results from Iceland (64° N), published by Cromwell et al. (2015, doi: 10.1002/2014JB011828), also pass the CCRIT criteria and record an average intensity of 33.1 ± 8.3 μT. The average paleointensities from the Golan Heights, Antarctica, Iceland and Hawaii, that span the last 5 Myr and pass the CCRIT criteria, fail to show the variation of intensity with latitude that is expected of an ideal GAD field. The question remains as to why.
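
    For reference, the latitudinal dependence the abstract refers to follows directly from dipole geometry; a standard statement of it (our notation, not taken from the abstract) is:

```latex
% Surface intensity of a geocentric axial dipole at geomagnetic latitude \lambda,
% with F_eq the equatorial intensity; at the poles F = 2 F_eq.
F(\lambda) \;=\; F_{\mathrm{eq}}\,\sqrt{1 + 3\sin^{2}\lambda}
```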

  8. High pressure-assisted transfer of ultraclean chemical vapor deposited graphene

    NASA Astrophysics Data System (ADS)

    Chen, Zhiying; Ge, Xiaoming; Zhang, Haoran; Zhang, Yanhui; Sui, Yanping; Yu, Guanghui; Jin, Zhi; Liu, Xinyu

    2016-03-01

    We develop a high pressure-assisted (approximately 1000 kPa) transfer method to remove polymer residues and effectively reduce damage to the surface of graphene. By introducing an ethanol pre-dehydration technique and optimizing temperature, the graphene surface becomes nearly free of residues, and the quality of graphene is clearly improved when the temperature reaches 140 °C. The graphene obtained using the high pressure-assisted transfer method also exhibits excellent electrical properties with an average sheet resistance of approximately 290 Ω/sq and a mobility of 1210 cm2/(V·s) at room temperature. Sheet resistance and mobility are considerably improved compared with those of the graphene obtained using the normal wet transfer method (average sheet resistance of approximately 510 Ω/sq and mobility of 750 cm2/(V·s)).

  9. Methods for calculating the absolute entropy and free energy of biological systems based on ideas from polymer physics.

    PubMed

    Meirovitch, Hagai

    2010-01-01

    The commonly used simulation techniques, Metropolis Monte Carlo (MC) and molecular dynamics (MD), are of a dynamical type which enables one to sample system configurations i correctly with the Boltzmann probability, P_i^B, while the value of P_i^B is not provided directly; therefore, it is difficult to obtain the absolute entropy, S ≈ -ln P_i^B, and the Helmholtz free energy, F. With a different simulation approach developed in polymer physics, a chain is grown step-by-step with transition probabilities (TPs), and thus their product is the value of the construction probability; therefore, the entropy is known. Because all exact simulation methods are equivalent, i.e. they lead to the same averages and fluctuations of physical properties, one can treat an MC or MD sample as if its members had rather been generated step-by-step. Thus, each configuration i of the sample can be reconstructed (from nothing) by calculating the TPs with which it could have been constructed. This idea applies also to bulk systems such as fluids or magnets. This approach has led earlier to the "local states" (LS) and the "hypothetical scanning" (HS) methods, which are approximate in nature. A recent development is the hypothetical scanning Monte Carlo (HSMC) (or molecular dynamics, HSMD) method which is based on stochastic TPs where all interactions are taken into account. In this respect, HSMC(D) can be viewed as exact and the only approximation involved is due to insufficient MC(MD) sampling for calculating the TPs. The validity of HSMC has been established by applying it first to liquid argon, TIP3P water, self-avoiding walks (SAW), and polyglycine models, where the results for F were found to agree with those obtained by other methods. Subsequently, HSMD was applied to mobile loops of the enzymes porcine pancreatic alpha-amylase and acetylcholinesterase in explicit water, where the difference in F between the bound and free states of the loop was calculated. Currently, HSMD is being extended for calculating the absolute and relative free energies of ligand-enzyme binding. We describe the whole approach and discuss future directions. 2009 John Wiley & Sons, Ltd.
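
    The step-by-step construction idea can be illustrated with a toy example. The sketch below is not the LS/HS/HSMC(D) machinery itself; it grows two-dimensional self-avoiding walks with known transition probabilities (a Rosenbluth-style construction), so the construction probability P of each walk, and hence an estimate of the athermal entropy S/k_B = ln Z, is available directly.

```python
# Toy illustration (not the HSMC/HSMD method): grow 2-D self-avoiding walks
# step by step with known transition probabilities.  The product of the TPs is
# the construction probability P of each walk, so the number of n-step walks
# satisfies Z_n = E[1/P] (dead-ended attempts count as zero weight) and the
# athermal entropy is S/k_B = ln Z_n.
import math
import random

STEPS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def grow_saw(n_steps, rng):
    """Return 1/P for one constructed walk, or None if the walk traps itself."""
    pos, visited = (0, 0), {(0, 0)}
    inv_prob = 1.0
    for _ in range(n_steps):
        free = [(pos[0] + dx, pos[1] + dy) for dx, dy in STEPS
                if (pos[0] + dx, pos[1] + dy) not in visited]
        if not free:
            return None                    # dead end: contributes zero weight
        inv_prob *= len(free)              # TP of the chosen step is 1/len(free)
        pos = rng.choice(free)
        visited.add(pos)
    return inv_prob

rng = random.Random(0)
n_attempts = 50_000
weights = [grow_saw(20, rng) for _ in range(n_attempts)]
z_estimate = sum(w for w in weights if w is not None) / n_attempts
print(f"estimated S/k_B for 20-step SAWs: {math.log(z_estimate):.2f}")
```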

  10. Engineering scale demonstration of a prospective Cast Stone process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cozzi, A.; Fowley, M.; Hansen, E.

    2014-09-30

    This report documents an engineering-scale demonstration with non-radioactive simulants that was performed at SRNL using the Scaled Continuous Processing Facility (SCPF) to fill an 8.5 ft container with simulated Cast Stone grout. The Cast Stone formulation was chosen from the previous screening tests. Legacy salt solution from previous Hanford salt waste testing was adjusted to correspond to the average composition generated from the Hanford Tank Waste Operation Simulator (HTWOS). The dry blend materials, ordinary portland cement (OPC), Class F fly ash, and ground granulated blast furnace slag (GGBFS or BFS), were obtained from Lafarge North America in Pasco, WA. Over three days, the SCPF was used to fill a 1600 gallon container, staged outside the facility, with simulated Cast Stone grout. The container, staged outside the building approximately 60 ft from the SCPF, was instrumented with x-, y-, and z-axis thermocouples to monitor curing temperature. The container was also fitted with two formed core sampling vials. For the operation, the targeted grout production rate was 1.5 gpm. This required a salt solution flow rate of approximately 1 gpm and a premix feed rate of approximately 580 lb/h. During the final day of operation, the dry feed rate was increased to evaluate the ability of the system to handle increased throughput. Although non-steady state operational periods created free surface liquids, no bleed water was observed either before or after operations. The final surface slope at a fill height of 39.5 inches was 1-1.5 inches across the 8.5 foot diameter container, highest at the final fill point and lowest diametrically opposed to the fill point. During processing, grout was collected in cylindrical containers from both the mixer discharge and the discharge into the container. These samples were stored in a humid environment either in a closed box proximal to the container or inside the laboratory. Additional samples collected at these sampling points were analyzed for rheological properties and density. Both the rheological properties (plastic viscosity and yield strength) and density were consistent with previous and later SCPF runs.

  11. Evaluation of Criteria for the Detection of Fires in Underground Conveyor Belt Haulageways

    PubMed Central

    Litton, Charles D.; Perera, Inoka Eranda

    2015-01-01

    Large-scale experiments were conducted in an above-ground gallery to simulate typical fires that develop along conveyor belt transport systems within underground coal mines. In the experiments, electrical strip heaters, imbedded ~5 cm below the top surface of a large mass of coal rubble, were used to ignite the coal, producing an open flame. The flaming coal mass subsequently ignited 1.83-meter-wide conveyor belts located approximately 0.30 m above the coal surface. Gas samples were drawn through an averaging probe located approximately 20 m downstream of the coal for continuous measurement of CO, CO2, and O2 as the fire progressed through the stages of smoldering coal, flaming coal, and flaming conveyor belt. Also located approximately 20 m from the fire origin and approximately 0.5 m below the roof of the gallery were two commercially available smoke detectors, a light obscuration meter, and a sampling probe for measurement of total mass concentration of smoke particles. Located upstream of the fire origin and also along the wall of the gallery at approximately 14 m and 5 m upstream were two video cameras capable of both smoke and flame detection. During the experiments, alarm times of the smoke detectors and video cameras were measured while the smoke obscuration and total smoke mass were continually measured. Twelve large-scale experiments were conducted using three different types of fire-resistant conveyor belts and four air velocities for each belt. The air velocities spanned the range from 1.0 m/s to 6.9 m/s. The results of these experiments are compared to previous large-scale results obtained using a smaller fire gallery and much narrower (1.07-m) conveyor belts to determine if the fire detection criteria previously developed (1) remained valid for the wider conveyor belts. Although some differences between these and the previous experiments did occur, the results, in general, compare very favorably. Differences are duly noted and their impact on fire detection discussed. PMID:26566298

  12. Impact of dehydration on a full body resistance exercise protocol.

    PubMed

    Kraft, Justin A; Green, James M; Bishop, Phillip A; Richardson, Mark T; Neggers, Yasmin H; Leeper, James D

    2010-05-01

    This study examined effects of dehydration on a full body resistance exercise workout. Ten males completed two trials: heat exposed (with 100% fluid replacement) (HE) and dehydration (approximately 3% body mass loss with no fluid replacement) (DEHY) achieved via hot water bath (approximately 39 degrees C). Following HE and DEHY, participants performed three sets to failure (using a predetermined 12-repetition maximum) of bench press, lat pull down, overhead press, barbell curl, triceps press, and leg press with a 2-min recovery between each set and 2 min between exercises. A paired t test showed total repetitions (all sets combined) were significantly lower for DEHY (144.1 +/- 26.6 repetitions) versus HE (169.4 +/- 29.1 repetitions). ANOVAs showed significantly lower repetitions (approximately 1-2 repetitions on average) per exercise for DEHY versus HE (all exercises). Pre-set rate of perceived exertion (RPE) and pre-set heart rate (HR) were significantly higher in DEHY versus HE [approximately 0.6-1.1 units on average in triceps press and leg press, approaching significance in lat pull down (P = 0.14), and approximately 6-13 beats/min on average in bench press, lat pull down, and triceps press, approaching significance for overhead press (P = 0.10)]. The session RPE difference approached significance (DEHY: 8.6 +/- 1.9, HE: 7.4 +/- 2.3) (P = 0.12). Recovery HR was significantly higher for DEHY (116 +/- 15 beats/min) versus HE (105 +/- 13 beats/min). Dehydration (approximately 3%) impaired resistance exercise performance, decreased repetitions, increased perceived exertion, and hindered HR recovery. Results highlight the importance of adequate hydration during full body resistance exercise sessions.

  13. The Origin and Age of Scallop Floodplain Benches from Difficult Run, Fairfax County, Virginia.

    NASA Astrophysics Data System (ADS)

    Scamardo, J. E.; Pizzuto, J. E.; Skalak, K.; Benthem, A.

    2015-12-01

    Sediment is deposited within scallop-shaped erosional scarps that form between trees armoring the banks of Difficult Run, a suburban watershed with a forested riparian zone. These deposits create small (surface area 85 m2, volume 300 m3), low-lying floodplain landforms that this group terms Scallop Floodplain Benches (SFB). It is hypothesized that SFB formed within the past couple of decades, initially as transversal accretion deposits that eventually gained floodplain features dominated by vertical accretion. Stratigraphic data support the interpretation that SFB deposits begin laterally as sand and gravel bars approximately 100 cm thick, and continue to grow by vertical accretion of sand, silt, and clay. As a SFB reaches its maximum height, a distinctive levee develops adjacent to the channel, and fine-grained silt and clay are deposited behind the levee. Core samples to a depth of 118 cm and additional samples from an overbank event that occurred on June 20, 2015 were collected from one of two SFB on Difficult Run near Leesburg Pike. The grain size distribution was measured using a Coulter Counter and activities of Pb-210, Cs-137, and Be-7 were measured using High Purity Germanium Detectors. Cs-137 activities are relatively constant with depth without a well-defined peak, suggesting that the SFB was deposited after 1963. Be-7 is present in the recent flood deposits, but is absent below the surface, suggesting that the SFB deposits are at least several years old. Excess Pb-210 activities decrease exponentially with depth, and can be fit using the Constant Rate of Supply method to determine an average age of approximately 13.5 years for the SFB. Based on this average age, the SFB is storing sediment at a rate of 27 tons/year, which is equal to 0.35% of the annual sediment load of Difficult Run. SFB appear to be a significant component of the sediment storage of Difficult Run and therefore should be considered in the sediment budget.
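
    The Constant Rate of Supply (CRS) calculation mentioned above can be written down compactly. The sketch below uses hypothetical depths, excess Pb-210 activities, bulk densities and slice thicknesses (not the Difficult Run core data) purely to illustrate the bookkeeping.

```python
# Minimal sketch of the Constant Rate of Supply (CRS) Pb-210 model with
# hypothetical core data.  t(z) = (1/lambda) * ln(A_total / A(z)), where A(z)
# is the cumulative excess Pb-210 inventory at and below depth z.
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3          # Pb-210 decay constant, 1/yr

depth_cm  = np.array([5, 15, 25, 35, 45, 55])           # slice mid-points
excess_bq = np.array([80, 55, 38, 26, 18, 12]) / 1e3     # Bq/g, roughly exponential
rho_g_cm3 = np.full(depth_cm.shape, 1.1)                 # dry bulk density
thick_cm  = np.full(depth_cm.shape, 10.0)                # slice thickness

layer_inventory = excess_bq * rho_g_cm3 * thick_cm       # Bq/cm^2 per slice
below = np.cumsum(layer_inventory[::-1])[::-1]           # inventory at/below each slice
total = below[0]

age_yr = np.log(total / below) / LAMBDA_PB210
for z, t in zip(depth_cm, age_yr):
    print(f"depth {z:4.0f} cm: CRS age ~ {t:5.1f} yr")
```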

  14. Spatial distribution and partition of perfluoroalkyl acids (PFAAs) in rivers of the Pearl River Delta, southern China.

    PubMed

    Liu, Baolin; Zhang, Hong; Xie, Liuwei; Li, Juying; Wang, Xinxuan; Zhao, Liang; Wang, Yanping; Yang, Bo

    2015-08-15

    This study investigated the occurrence of perfluoroalkyl acids (PFAAs) in surface water from 67 sampling sites along rivers of the Pearl River Delta in southern China. Sixteen PFAAs, including perfluoroalkyl carboxylic acids (PFCAs, C5-14, C16 and C18) and perfluoroalkyl sulfonic acids (PFSAs, C4, C6, C8 and C10), were determined by high performance liquid chromatography-negative electrospray ionization-tandem mass spectrometry (HPLC/ESI-MS/MS). Total PFAA concentrations (∑PFAAs) in the surface water ranged from 1.53 to 33.5 ng·L(-1) with an average of 7.58 ng·L(-1). Perfluorobutane sulfonic acid (PFBS), perfluorooctanoic acid (PFOA), and perfluorooctane sulfonic acid (PFOS) were the three most abundant PFAAs and on average accounted for 28%, 16% and 10% of ∑PFAAs, respectively. Higher concentrations of ∑PFAAs were found in samples collected from the Jiangmen section of the Xijiang River, the Dongguan section of the Dongjiang River, and the reaches of the Pearl River flowing through cities with very well-developed manufacturing industries. A PCA model was employed to quantitatively calculate the contributions of extracted sources. Factor 1 (72.48% of the total variance) had high loadings for perfluorohexanoic acid (PFHxA), perfluoropentanoic acid (PFPeA), PFBS and PFOS. For factor 2 (10.93% of the total variance), perfluorononanoic acid (PFNA) and perfluoroundecanoic acid (PFUdA) had high loadings. The sorption of PFCAs on suspended particulate matter (SPM) increased by approximately 0.1 log units for each additional CF2 moiety, and the sediment log Kd values were approximately 0.8 log units lower than the SPM log Kd values. In addition, the differences in the partition coefficients were influenced by structural differences among the adsorbents and the influx of fresh river water. These data are essential for modeling the transport and environmental fate of PFAAs. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Measuring spatial and temporal trends of nicotine and alcohol consumption in Australia using wastewater-based epidemiology.

    PubMed

    Lai, Foon Yin; Gartner, Coral; Hall, Wayne; Carter, Steve; O'Brien, Jake; Tscharke, Benjamin J; Been, Frederic; Gerber, Cobus; White, Jason; Thai, Phong; Bruno, Raimondo; Prichard, Jeremy; Kirkbride, K Paul; Mueller, Jochen F

    2018-06-01

    Tobacco and alcohol consumption remain priority public health issues world-wide. As participation in population-based surveys has fallen, it is increasingly challenging to estimate accurately the prevalence of alcohol and tobacco use. Wastewater-based epidemiology (WBE) is an alternative approach for estimating substance use at the population level that does not rely upon survey participation. This study examined spatio-temporal patterns in nicotine (a proxy for tobacco) and alcohol consumption in the Australian population via WBE. Daily wastewater samples (n = 164) were collected at 18 selected wastewater treatment plants across Australia, covering approximately 45% of the total population. Nicotine and alcohol metabolites in the samples were measured using liquid chromatography-tandem mass spectrometry. Daily consumption of nicotine and alcohol and the associated uncertainties were computed using Monte Carlo simulations. Nation-wide daily average and weekly consumption of these two substances were extrapolated using ordinary least squares and mixed-effect models. Nicotine and alcohol consumption was observed in all communities. Consumption of these substances in rural towns was three to four times higher than in urban communities. The spatial consumption pattern of these substances was consistent across the monitoring periods in 2014-15. Nicotine metabolite loads decreased significantly, by 14-25% (P = 0.001-0.008), over 2014-15 in some catchments. Alcohol consumption remained constant over the studied periods. Strong weekly consumption patterns were observed for alcohol but not nicotine. Nation-wide, the daily average consumption per person (aged 15-79 years) was estimated at approximately 2.5 cigarettes and 1.3-2.0 standard drinks (weekday-weekend) of alcohol. These estimates were close to tobacco sales figures and apparent alcohol consumption, respectively. Wastewater-based epidemiology is a feasible method for objectively evaluating the geographic, temporal and weekly profiles of nicotine and alcohol consumption in different communities nationally. © 2018 Society for the Study of Addiction.
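
    The Monte Carlo step mentioned above is conceptually simple: each input of the wastewater back-calculation is drawn from an assumed distribution and the per-capita estimate is recomputed many times. The sketch below uses entirely hypothetical concentrations, flows, excretion fractions and population figures (not the study's parameters) to show the propagation pattern.

```python
# Illustrative Monte Carlo propagation for a wastewater-based estimate
# (hypothetical numbers, not the paper's parameters):
#   mass load          = concentration * daily flow
#   consumption/person = mass load / (excretion fraction * population) * correction
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

conc_ng_L   = rng.normal(1_500, 150, N)          # metabolite concentration, ng/L
flow_L_day  = rng.normal(4.0e7, 4.0e6, N)        # plant inflow, L/day
excretion   = rng.uniform(0.55, 0.75, N)         # fraction of dose excreted
population  = 150_000                            # catchment population (fixed here)
mass_factor = 1.2                                # parent/metabolite mass correction

mg_per_person_day = conc_ng_L * flow_L_day / 1e6 / excretion / population * mass_factor

lo, med, hi = np.percentile(mg_per_person_day, [2.5, 50, 97.5])
print(f"median {med:.2f} mg/person/day (95% interval {lo:.2f}-{hi:.2f})")
```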

  16. Self-consistent approximation beyond the CPA: Part II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaplan, T.; Gray, L.J.

    1981-08-01

    In Part I, Professor Leath has described the substantial efforts to generalize the CPA. In this second part, a particular self-consistent approximation for random alloys developed by Kaplan, Leath, Gray, and Diehl is described. This approximation is applicable to diagonal, off-diagonal and environmental disorder, includes cluster scattering, and yields a translationally invariant and analytic (Herglotz) average Green's function. Furthermore Gray and Kaplan have shown that an approximation for alloys with short-range order can be constructed from this theory.

  17. Approximating natural connectivity of scale-free networks based on largest eigenvalue

    NASA Astrophysics Data System (ADS)

    Tan, S.-Y.; Wu, J.; Li, M.-J.; Lu, X.

    2016-06-01

    It has been recently proposed that natural connectivity can be used to efficiently characterize the robustness of complex networks. The natural connectivity has an intuitive physical meaning and a simple mathematical formulation, which corresponds to an average eigenvalue calculated from the graph spectrum. However, for the scale-free network, a model close to many widely occurring real-world systems, the spectrum is difficult to obtain analytically. In this article, we investigate the approximation of natural connectivity based on the largest eigenvalue in both random and correlated scale-free networks. It is demonstrated that the natural connectivity of scale-free networks can be dominated by the largest eigenvalue, which can be expressed asymptotically and analytically to approximate natural connectivity with small errors. Then we show that the natural connectivity of random scale-free networks increases linearly with the average degree given the scaling exponent and decreases monotonically with the scaling exponent given the average degree. Moreover, it is found that, given the degree distribution, the more assortative a scale-free network is, the more robust it is. Experiments in real networks validate our methods and results.
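
    As a concrete illustration, the sketch below computes the natural connectivity of a synthetic scale-free graph both exactly and via the largest-eigenvalue approximation described in the abstract; the Barabási-Albert generator and the graph size are our own choices, not the article's experimental setup.

```python
# Sketch: natural connectivity of a scale-free graph, exact vs. the
# largest-eigenvalue approximation.
import networkx as nx
import numpy as np
from scipy.special import logsumexp

G = nx.barabasi_albert_graph(n=1000, m=3, seed=1)   # random scale-free network
A = nx.to_numpy_array(G)
eigs = np.linalg.eigvalsh(A)                        # adjacency spectrum
N = len(eigs)

exact  = logsumexp(eigs) - np.log(N)                # ln( (1/N) * sum_i e^{lambda_i} )
approx = eigs.max() - np.log(N)                     # dominated by the largest eigenvalue

print(f"natural connectivity: exact {exact:.3f}, largest-eigenvalue approx {approx:.3f}")
```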

  18. Assessing the Ability of Instantaneous Aircraft and Sonde Measurements to Characterize Climatological Means and Long-Term Trends in Tropospheric Composition

    NASA Technical Reports Server (NTRS)

    Murray, Lee T.; Fiore, Arlene M.

    2014-01-01

    Over four decades of measurements exist that sample the 3-D composition of reactive trace gases in the troposphere from approximately weekly ozone sondes, instrumentation on civil aircraft, and individual comprehensive aircraft field campaigns. An obstacle to using these data to evaluate coupled chemistry-climate models (CCMs), the models used to project future changes in atmospheric composition and climate, is that exact space-time matching between model fields and observations cannot be done, as CCMs generate their own meteorology. Evaluation typically involves averaging over large spatiotemporal regions, which may not reflect a true average due to limited or biased sampling. This averaging approach generally loses information regarding specific processes. Here we aim to identify where discrete sampling may be indicative of long-term mean conditions, using the GEOS-Chem global chemical-transport model (CTM) driven by the MERRA reanalysis to reflect historical meteorology from 2003 to 2012 at 2° by 2.5° resolution. The model has been sampled at the time and location of every ozone sonde profile available from the World Ozone and Ultraviolet Radiation Data Centre (WOUDC), along the flight tracks of the IAGOS/MOZAIC/CARIBIC civil aircraft campaigns, as well as those from over 20 individual field campaigns performed by NASA, NOAA, DOE, NSF, NERC (UK), and DLR (Germany) during the simulation period. Focusing on ozone, carbon monoxide and reactive nitrogen species, we assess where aggregates of the in situ data are representative of the decadal mean vertical, spatial and temporal distributions that would be appropriate for evaluating CCMs. Next, we identically sample a series of parallel sensitivity simulations in which individual emission sources (e.g., lightning, biogenic VOCs, wildfires, US anthropogenic) have been removed one by one, to assess where and when the aggregated observations may offer constraints on these processes within CCMs. Lastly, we show results of an additional 31-year simulation from 1980-2010 of GEOS-Chem driven by the MACCity emissions inventory and MERRA reanalysis at 4° by 5°. We sample the model at every WOUDC sonde and flight track from MOZAIC and NASA field campaigns to evaluate which aggregate observations are statistically reflective of long-term trends over the period.

  19. [Study on the appropriate parameters of automatic full crown tooth preparation for dental tooth preparation robot].

    PubMed

    Yuan, F S; Wang, Y; Zhang, Y P; Sun, Y C; Wang, D X; Lyu, P J

    2017-05-09

    Objective: To further study the most suitable parameters for automatic full crown preparation using an oral clinical micro robot, in order to improve the quality of automated tooth preparation and to lay the foundation for clinical application. Methods: Twenty selected artificial resin teeth were used as sample teeth. The micro robot automatic tooth preparation system for the dental clinic was used to control the picosecond laser beam to complete two-dimensional cutting of the resin tooth samples according to the planned motion path. Using a laser scanning measuring microscope, the cutting depth of each layer was obtained and the average value was calculated to determine the monolayer cutting depth. The three-dimensional (3D) data of the target resin teeth were obtained using an intraoral scanner, and the CAD data of the full-crown tooth preparations were designed with self-developed CAD software. According to the single-layer depth, 11 complete resin teeth in a phantom head were automatically prepared by the robot, which guided the focused laser spot in a layer-by-layer cutting mode, and the accuracy of the resin tooth preparations was evaluated with the software. Using the same method, the monolayer cutting depth parameter for dental hard tissue was obtained, and 15 extracted mandibular and maxillary first molars then underwent automatic full crown preparation. The 3D data of the tooth preparations were obtained with an intraoral scanner, and the software was used to evaluate the accuracy of the preparations. Results: The single-layer cutting depths of the picosecond laser for resin teeth and extracted teeth were (60.0±2.6) and (45.0±3.6) μm, respectively. Using the tooth preparation robot, 11 artificial resin teeth and 15 complete natural teeth were automatically prepared, with average times of (13.0±0.7) and (17.0±1.8) min, respectively. Through software evaluation, the average preparation depth of the occlusal surface of the 11 resin teeth was approximately (2.089±0.026) mm, an error of about (0.089±0.026) mm; the average convergence angle was about 6.56°±0.30°, an error of about 0.56°±0.30°. Compared with the target preparation shape, the average shape error of the 11 resin tooth preparations was about 0.02-0.11 mm. The average preparation depth of the occlusal surface of the 15 natural teeth was approximately (2.097±0.022) mm, an error of about (0.097±0.022) mm; the average convergence angle was about 6.98°±0.35°, an error of about 0.98°±0.35°. Compared with the target preparation shape, the average shape error of the 15 natural tooth preparations was about 0.05-0.17 mm. Conclusions: The experimental results indicate that automatic preparation of resin teeth and natural teeth was completed by the micro robot controlling the picosecond laser according to the respective single-layer cutting depth parameters, and the preparation accuracy met clinical needs. The suitability of the parameters was confirmed.

  20. Beauty is in the ease of the beholding: A neurophysiological test of the averageness theory of facial attractiveness

    PubMed Central

    Trujillo, Logan T.; Jankowitsch, Jessica M.; Langlois, Judith H.

    2014-01-01

    Multiple studies show that people prefer attractive over unattractive faces. But what is an attractive face and why is it preferred? Averageness theory claims that faces are perceived as attractive when their facial configuration approximates the mathematical average facial configuration of the population. Conversely, faces that deviate from this average configuration are perceived as unattractive. The theory predicts that both attractive and mathematically averaged faces should be processed more fluently than unattractive faces, whereas the averaged faces should be processed marginally more fluently than the attractive faces. We compared neurocognitive and behavioral responses to attractive, unattractive, and averaged human faces to test these predictions. We recorded event-related potentials (ERPs) and reaction times (RTs) from 48 adults while they discriminated between human and chimpanzee faces. Participants categorized averaged and high attractive faces as “human” faster than low attractive faces. The posterior N170 (150 – 225 ms) face-evoked ERP component was smaller in response to high attractive and averaged faces versus low attractive faces. Single-trial EEG analysis indicated that this reduced ERP response arose from the engagement of fewer neural resources and not from a change in the temporal consistency of how those resources were engaged. These findings provide novel evidence that faces are perceived as attractive when they approximate a facial configuration close to the population average and suggest that processing fluency underlies preferences for attractive faces. PMID:24326966

  1. Martian tidal pressure and wind fields obtained from the Mariner 9 infrared spectroscopy experiment

    NASA Technical Reports Server (NTRS)

    Pirraglia, J. A.; Conrath, B. J.

    1973-01-01

    Using temperature fields derived from the Mariner 9 infrared spectroscopy experiment, the Martian atmospheric tidal pressure and wind fields are calculated. Temperature as a function of local time, latitude, and atmospheric pressure level is obtained by secular and longitudinal averaging of the data. The resulting temperature field is approximated by a spherical harmonic expansion, retaining one symmetric and one asymmetric term for wavenumber zero and wavenumber one. Vertical averaging of the linearized momentum and continuity equations results in an inhomogeneous tidal equation for surface pressure fluctuations with the driving function related to the temperature field through the geopotential function and the hydrostatic equation. Solutions of the tidal equation show a diurnal fractional pressure amplitude approximately equal to one half of the vertically averaged diurnal fractional temperature amplitude.
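
    The closing result can be stated compactly (our notation; the angle brackets denote the vertical average used in the abstract):

```latex
% Diurnal fractional surface-pressure amplitude in terms of the vertically
% averaged diurnal fractional temperature amplitude.
\frac{\delta p}{\bar{p}} \;\approx\; \frac{1}{2}\left\langle \frac{\delta T}{\bar{T}} \right\rangle
```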

  2. Martian tidal pressure and wind fields obtained from the Mariner 9 infrared spectroscopy experiment

    NASA Technical Reports Server (NTRS)

    Pirraglia, J. A.; Conrath, B. J.

    1974-01-01

    Using temperature fields derived from the Mariner 9 infrared spectroscopy experiment, the Martian atmospheric tidal pressure and wind fields are calculated. Temperature as a function of local time, latitude, and atmospheric pressure level is obtained by secular and longitudinal averaging of the data. The resulting temperature field is approximated by a spherical harmonic expansion, retaining one symmetric and one asymmetric term each for wavenumber zero and wavenumber one. Vertical averaging of the linearized momentum and continuity equations results in an inhomogeneous tidal equation for surface pressure fluctuations with the driving function related to the temperature field through the geopotential function and the hydrostatic equation. Solutions of the tidal equation show a diurnal fractional pressure amplitude approximately equal to one-half the vertically averaged diurnal fractional temperature amplitude.

  3. The Mean Southern Italian Children IQ Is Not Particularly Low: A Reply to R. Lynn (2010)

    ERIC Educational Resources Information Center

    Cornoldi, Cesare; Belacchi, Carmen; Giofre, David; Martini, Angela; Tressoldi, Patrizio

    2010-01-01

    Working with data from the PISA study (OECD, 2007), Lynn (2010) has argued that individuals from South Italy average an IQ approximately 10 points lower than individuals from North Italy, and has gone on to put forward a series of conclusions on the relationship between average IQ, latitude, average stature, income, etc. The present paper…

  4. Midsouth Pulpwood Prices, 1987

    Treesearch

    John S. Vissage

    1990-01-01

    In 1987, the average price per cord of Midsouth pulpwood was $47.47, an increase of less than 1 percent from the 1986 price. The average price per green ton of chipped residues decreased less than 1 percent to $21.64. The average price of other residues remained at $10.25 per green ton. The total expenditure for pulpwood in the Midsouth increased approximately 1...

  5. Impact of critical anion soil solution concentration on aluminum activity in alpine tundra soil (Andrew Evans, Jr., Michael B. Jacobs, and Jason R. Janke, Metropolitan State University of Denver)

    NASA Astrophysics Data System (ADS)

    Evans, A.

    2015-12-01

    Soil solution anionic composition can impact both plant and microbial activity in alpine tundra soils by altering biochemical cycling within the soil, either through base cation leaching, or shifts in aluminum controlling solid phases. Although anions play a critical role in the aqueous speciation of metals, relatively few high altitude field studies have examined their impact on aluminum controlling solid phases and aluminum speciation in soil water. For this study, thirty sampling sites were selected on Trail Ridge Road in Rocky Mountain National Park, Estes Park, CO, and sampled during July, the middle of the growing season. Sampling elevations ranged from approximately 3560 to 3710 m. Soil samples were collected to a depth of 15.24 cm, and the anions were extracted using a 2:1 D.I. water to soil ratio. Filtered extracts were analyzed using IC and ICP-MS. Soil solution NO3- concentrations were significantly higher for sampling locations east of Iceberg Pass (EIBP) (mean = 86.94 ± 119.8 mg/L) compared to locations west of Iceberg Pass (WIBP) (mean 1.481 ± 2.444 mg/L). Both F- and PO43- soil solution concentrations, 0.533 and 0.440 mg/L, respectively, were substantially lower for sampling sites located EIBP, while locations WIBP averaged 0.773 and 0.829 mg/L, respectively, for F- and PO43-. Sulfate concentration averaged 3.869 ± 3.059 mg/L for locations EIBP, and 3.891 ± 3.197 mg/L for locations WIBP. Geochemical modeling of Al3+ in the soil solution indicated that a suite of aluminum hydroxyl sulfate minerals controlled Al3+ activity in the alpine tundra soil, with shifts between controlling solid phases occurring in the presence of elevated F- concentrations.

  6. Airborne concentrations of benzene due to diesel locomotive exhaust in a roundhouse.

    PubMed

    Madl, Amy K; Paustenbach, Dennis J

    2002-12-13

    Concentrations of airborne benzene due to diesel exhaust from a locomotive were measured during a worst-case exposure scenario in a roundhouse. To understand the upper bound human health risk due to benzene, an electromotive diesel and a General Electric four-cycle turbo locomotive were allowed to run for four 30-min intervals during an 8-h workshift in a roundhouse. Full-shift and 1-h airborne concentrations of benzene were measured in the breathing zone of surrogate locomotive repairmen over the 8-h workshift on 2 consecutive days. In addition, carbon monoxide was measured continuously; elemental carbon (a surrogate for diesel exhaust) was sampled with full-shift area samples; and nitrogen dioxide/nitric oxide was sampled using full-shift and 15-min (nitrogen dioxide only) area samples. Peak concentrations of carbon monoxide ranged from 22.5 to 93 ppm. The average concentrations of elemental carbon for the two days of the roundhouse study were 0.0543 and 0.0552 microg/m(3) for an 8-h workshift. These were considered "worst-case" conditions since the work environment was intolerably irritating to the eyes, nose, and throat. Short-term nitrogen dioxide concentrations ranged from 0.81 to 2.63 ppm during the diesel emission events with the doors closed. One-hour airborne benzene concentrations ranged from 0.001 to 0.015 ppm, with 45% of the measurements below the detection limit of 0.002-0.004 ppm. Results indicated that the 8-h time-weighted average for benzene in the roundhouse was approximately 100-fold less than the current threshold limit value (TLV) of 0.5 ppm. These data are consistent with other studies, which have indicated that benzene concentrations due to diesel emissions, even in a confined environment, are quite low.

  7. Sedimentation and sediment chemistry, Neopit Mill Pond, Menominee Indian Reservation, Wisconsin, 2001

    USGS Publications Warehouse

    Fitzpatrick, Faith A.; Peppler, Marie C.

    2003-01-01

    The volume, texture, and chemistry of sediment deposited in a mill pond on the West Branch of the Wolf River at Neopit, Wis., Menominee Reservation, were studied in 2001-2002. The study was accomplished by examining General Land Office Survey Notes from 1854, establishing 12 transects through the mill pond, conducting soundings of the soft and hard bottom along each transect, and collecting core samples for preliminary screening of potential contaminants. Combined information from the transects, cores, and General Land Office Survey notes was used to reconstruct the pre-dam location of the West Branch of the Wolf River through the mill pond. Neopit Mill Pond contains approximately 253 acre-ft of organic-rich muck, on average about 1.2 ft thick, that was deposited after the dam was built. Elevated concentrations of polycyclic aromatic hydrocarbons (PAHs) associated with creosote and pentachlorophenol were found in post-dam sediment samples collected from Neopit Mill Pond. Trace-element concentrations were at or near background concentrations. Further study and sampling are needed to identify the spatial extent and variability of the PAHs, pentachlorophenol, and other byproducts from wood preservatives.

  8. On optimizing the blocking step of indirect enzyme-linked immunosorbent assay for Epstein-Barr virus serology.

    PubMed

    Lim, Chun Shen; Krishnan, Gopala; Sam, Choon Kook; Ng, Ching Ching

    2013-01-16

    Because the blocking agent occupies most of the binding surface of a solid phase, its ability to prevent nonspecific binding determines the signal-to-noise ratio (SNR) and reliability of an enzyme-linked immunosorbent assay (ELISA). We demonstrate a stepwise approach to seeking a compatible blocking buffer for indirect ELISA, via a case-control study (n=176) of Epstein-Barr virus (EBV)-associated nasopharyngeal carcinoma (NPC). Regardless of case-control status, we found that synthetic polymer blocking agents, mainly Ficoll and poly(vinyl alcohol) (PVA), were able to provide homogeneous backgrounds among samples, as opposed to commonly used blocking agents, notably nonfat dry milk (NFDM). The SNRs for NPC samples blocked with PVA were, on average, approximately 3-fold higher than those blocked with NFDM. Both intra- and inter-assay precisions of PVA-based assays were <14%. A blocking agent of choice should give tolerable sample backgrounds for both cases and controls to ensure the reliability of an immunoassay. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Uranium mobility during interaction of rhyolitic obsidian, perlite and felsite with alkaline carbonate solution: T = 120° C, P = 210 kg/cm2

    USGS Publications Warehouse

    Zielinski, Robert A.

    1979-01-01

    Well-characterized samples of rhyolitic obsidian, perlite and felsite from a single lava flow are leached of U by alkaline oxidizing solutions under open-system conditions. Pressure, temperature, flow rate and solution composition are held constant in order to evaluate the relative importance of differences in surface area and crystallinity. Under the experimental conditions U removal from crushed glassy samples proceeds by a mechanism of glass dissolution in which U and silica are dissolved in approximately equal weight fractions. The rate of U removal from crushed glassy samples increases with decreasing average grain size (increasing surface area). Initial rapid loss of a small component (≈ 2.5%) of the total U from crushed felsite, followed by much slower U loss, reflects variable rates of attack of numerous uranium sites. The fractions of U removed during the experiment ranged from 3.2% (felsite) to 27% (perlite). An empirical method for evaluating the relative rate of U loss from contemporaneous volcanic rocks is presented which incorporates leaching results and rock permeability data.

  10. Compressive sensing of signals generated in plastic scintillators in a novel J-PET instrument

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Moskal, P.; Kowalski, P.; Wiślicki, W.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zieliński, M.; Zoń, N.

    2015-06-01

    The J-PET scanner, which allows for single bed imaging of the whole human body, is currently under development at the Jagiellonian University. The discussed detector offers improvement of the Time of Flight (TOF) resolution due to the use of fast plastic scintillators and dedicated electronics allowing for sampling, in the voltage domain, of signals with durations of a few nanoseconds. In this paper we show that recovery of the whole signal, based on only a few samples, is possible. In order to do that, we incorporate the training signals into the Tikhonov regularization framework and we perform the Principal Component Analysis decomposition, which is well known for its compaction properties. The method yields a simple closed-form analytical solution that does not require iterative processing. Moreover, from Bayes theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This is the key to introducing and proving the formula for calculating the signal recovery error. In this paper we show that the average recovery error is approximately inversely proportional to the number of acquired samples.
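
    A minimal sketch of this recovery idea is given below, using synthetic pulse-like waveforms rather than J-PET signals; the pulse shape, the size of the PCA basis, the sampling indices, and the regularization weight are all assumptions made for illustration.

```python
# Sketch of the recovery idea (synthetic data, not J-PET signals): learn a PCA
# basis B from training waveforms, then reconstruct a full waveform from a few
# samples y via the closed-form Tikhonov solution
#   w = (Bs^T Bs + alpha I)^{-1} Bs^T (y - mean_s),   x_hat = mean + B w
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)                        # dense time grid (arbitrary units)

def pulse(amp, t0, rise=0.5, fall=2.0):
    shaped = amp * (1 - np.exp(-(t - t0) / rise)) * np.exp(-(t - t0) / fall)
    return np.where(t < t0, 0.0, shaped)

train = np.array([pulse(rng.uniform(0.5, 1.5), rng.uniform(1, 3)) for _ in range(500)])
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:8].T                                       # first 8 principal components

keep = np.array([30, 45, 60, 90, 130])             # indices of the few acquired samples
x_true = pulse(1.1, 2.2)
y = x_true[keep] + rng.normal(0, 0.01, keep.size)  # noisy measurements

alpha = 1e-2
Bs = B[keep]                                       # sampling operator applied to the basis
w = np.linalg.solve(Bs.T @ Bs + alpha * np.eye(B.shape[1]), Bs.T @ (y - mean[keep]))
x_hat = mean + B @ w

print("recovery RMS error:", np.sqrt(np.mean((x_hat - x_true) ** 2)))
```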

  11. Predicting cyclohexane/water distribution coefficients for the SAMPL5 challenge using MOSCED and the SMD solvation model.

    PubMed

    Diaz-Rodriguez, Sebastian; Bozada, Samantha M; Phifer, Jeremy R; Paluch, Andrew S

    2016-11-01

    We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of [Formula: see text] log units (ranking 15 out of 62 entries), the correlation coefficient (R) was [Formula: see text] (ranking 35), and [Formula: see text] of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase-equilibrium.
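
    The neutral-species approximation used above reduces, in the solvation-free-energy route, to a difference of two solvation free energies. The sketch below uses hypothetical free-energy values (not SAMPL5 results) to show the conversion.

```python
# Minimal sketch: cyclohexane/water partition coefficient of a neutral solute
# from two solvation free energies (hypothetical numbers, not SAMPL5 results).
#   log10 P = (dG_solv_water - dG_solv_cyclohexane) / (ln(10) * R * T)
import math

R = 1.98720425e-3    # kcal/(mol K)
T = 298.15           # K

dG_water       = -5.2   # kcal/mol, hypothetical SMD-style value
dG_cyclohexane = -3.1   # kcal/mol, hypothetical

log_p = (dG_water - dG_cyclohexane) / (math.log(10) * R * T)
print(f"log10 P(cyclohexane/water) ~ {log_p:.2f}")  # stands in for log D of the neutral species
```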

  12. Exposure to airborne asbestos in thermal power plants in Mongolia

    PubMed Central

    Damiran, Naransukh; Silbergeld, Ellen K; Frank, Arthur L; Lkhasuren, Oyuntogos; Ochir, Chimedsuren; Breysse, Patrick N

    2015-01-01

    Background: Coal-fired thermal power plants (TPPs) in Mongolia use various types of asbestos-containing materials (ACMs) in thermal insulation of piping systems, furnaces, and other products. Objective: To investigate the occupational exposure of insulation workers to airborne asbestos in Mongolian power plants. Methods: Forty-seven air samples were collected from four power plants in Mongolia during the progress of insulation work. The samples were analyzed by phase contrast microscopy (PCM) and transmission electron microscopy (TEM). Results: The average phase contrast microscopy equivalent (PCME) asbestos fiber concentration was 0.93 f/cm3. Sixteen of the 41 personal and one of the area samples exceeded the United States Occupational Safety and Health Administration (US OSHA) short-term exposure limit of 1.0 f/cm3. If it is assumed that the short-term samples collected are representative of full-shift exposure, then the exposures are approximately 10 times higher than the US OSHA 8-hour permissible exposure limit of 0.1 f/cm3. Conclusion: Power plant insulation workers are exposed to airborne asbestos at concentrations that exceed the US OSHA Permissible Exposure Limit. Action to mitigate the risks should be taken in Mongolia. PMID:25730489

  13. Exposure to airborne asbestos in thermal power plants in Mongolia.

    PubMed

    Damiran, Naransukh; Silbergeld, Ellen K; Frank, Arthur L; Lkhasuren, Oyuntogos; Ochir, Chimedsuren; Breysse, Patrick N

    2015-01-01

    Coal-fired thermal power plants (TPPs) in Mongolia use various types of asbestos-containing materials (ACMs) in thermal insulation of piping systems, furnaces, and other products. To investigate the occupational exposure of insulation workers to airborne asbestos in Mongolian power plants. Forty-seven air samples were collected from four power plants in Mongolia during the progress of insulation work. The samples were analyzed by phase contrast microscopy (PCM) and transmission electron microscopy (TEM). The average phase contrast microscopy equivalent (PCME) asbestos fiber concentration was 0.93 f/cm(3). Sixteen of the 41 personal and one of the area samples exceeded the United States Occupational Safety and Health Administration (US OSHA) short-term exposure limit of 1.0 f/cm(3). If it is assumed that the short-term samples collected are representative of full-shift exposure, then the exposures are approximately 10 times higher than the US OSHA 8-hour permissible exposure limit of 0.1 f/cm(3). Power plant insulation workers are exposed to airborne asbestos at concentrations that exceed the US OSHA Permissible Exposure Limit. Action to mitigate the risks should be taken in Mongolia.

  14. Bacterial communities in the gut and reproductive organs of Bactrocera minax (Diptera: Tephritidae) based on 454 pyrosequencing.

    PubMed

    Wang, Ailin; Yao, Zhichao; Zheng, Weiwei; Zhang, Hongyu

    2014-01-01

    The citrus fruit fly Bactrocera minax is associated with diverse bacterial communities. We used 454 pyrosequencing technology to study in depth the microbial communities associated with the gut and reproductive organs of Bactrocera minax. Our dataset consisted of 100,749 reads with an average length of 400 bp. The saturated rarefaction curves and species richness indices indicate that the sampling was comprehensive. We found highly diverse bacterial communities, with each individual sample containing approximately 361 microbial operational taxonomic units (OTUs). A total of 17 bacterial phyla were obtained from the flies. A phylogenetic analysis of 16S rDNA revealed that Proteobacteria was dominant in all samples (75%-95%). Actinobacteria and Firmicutes were also commonly found in the total clones. Klebsiella, Citrobacter, Enterobacter, and Serratia were the major genera. However, bacterial diversity (Chao1, Shannon and Simpson indices) and community structure (PCA analysis) varied across samples. The female ovary harbored the most diverse bacteria, followed by the male testis, and the bacterial diversity of the reproductive organs was richer than that of the gut. The observed variation may be driven by sex and tissue, possibly to meet the host's physiological demands.

  15. Baffin Bay Ice Drift and Export: 2002-2007

    NASA Technical Reports Server (NTRS)

    Kwok, Ron

    2007-01-01

    Multiyear estimates of sea ice drift in Baffin Bay and Davis Strait are derived for the first time from the 89 GHz channel of the AMSR-E instrument. Uncertainties in the drift estimates, assessed with Envisat ice motion, are approximately 2-3 km/day. A persistent atmospheric trough, between the coast of Greenland and Baffin Island, drives the prevailing southward drift pattern with average daily displacements in excess of 18-20 km during winter. Over the 5-year record, the ice export ranges between 360 and 675 x 10(exp 3) km(exp 2), with an average of 530 x 10(exp 3) km(exp 2). Sea ice area inflow from the Nares Strait, Lancaster Sound and Jones Sound potentially contributes up to a third of the net area outflow, while ice production at the North Water Polynya contributes the balance. Rough estimates of annual volume export give approximately 500-800 km(exp 3). Comparatively, these are approximately 70% and approximately 30% of the annual area and volume export at Fram Strait.

  16. A continuous tensor field approximation of discrete DT-MRI data for extracting microstructural and architectural features of tissue.

    PubMed

    Pajevic, Sinisa; Aldroubi, Akram; Basser, Peter J

    2002-01-01

    The effective diffusion tensor of water, D, measured by diffusion tensor MRI (DT-MRI), is inherently a discrete, noisy, voxel-averaged sample of an underlying macroscopic effective diffusion tensor field, D(x). Within fibrous tissues this field is presumed to be continuous and smooth at a gross anatomical length scale. Here a new, general mathematical framework is proposed that uses measured DT-MRI data to produce a continuous approximation to D(x). One essential finding is that the continuous tensor field representation can be constructed by repeatedly performing one-dimensional B-spline transforms of the DT-MRI data. The fidelity and noise-immunity of this approximation are tested using a set of synthetically generated tensor fields to which background noise is added via Monte Carlo methods. Generally, these tensor field templates are reproduced faithfully except at boundaries where diffusion properties change discontinuously or where the tensor field is not microscopically homogeneous. Away from such regions, the tensor field approximation does not introduce bias in useful DT-MRI parameters, such as Trace(D(x)). It also facilitates the calculation of several new parameters, particularly differential quantities obtained from the tensor of spatial gradients of D(x). As an example, we show that they can identify tissue boundaries across which diffusion properties change rapidly using in vivo human brain data. One important application of this methodology is to improve the reliability and robustness of DT-MRI fiber tractography.
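    As a concrete illustration of the "repeated one-dimensional B-spline transform" construction described above, the following sketch builds a continuous interpolant of a synthetic tensor field with SciPy's separable spline routines. It is a hedged, minimal example, not the authors' implementation; the grid size, tensor components, and evaluation points are arbitrary.

```python
# Hypothetical sketch: continuous B-spline representation of a discrete
# symmetric tensor field (6 unique components) on a 3-D voxel grid, built by
# repeated one-dimensional spline transforms.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
field = rng.normal(size=(6, 16, 16, 16))   # synthetic "DT-MRI-like" data

# Step 1: convert samples to cubic B-spline coefficients, one axis at a time.
coeffs = field.copy()
for axis in (1, 2, 3):
    coeffs = ndimage.spline_filter1d(coeffs, order=3, axis=axis)

# Step 2: evaluate the continuous field at arbitrary (non-integer) positions.
points = np.array([[4.25, 7.5, 10.1],
                   [8.0, 8.0, 8.0]]).T      # shape (3, n_points)
interp = np.stack([
    ndimage.map_coordinates(coeffs[c], points, order=3, prefilter=False)
    for c in range(6)
])                                          # shape (6, n_points)
print(interp.shape)
```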

  17. Atmospheric CO2 Records from Sites in the Atmospheric Environment Service Air Sampling Network (1975 and 1994)

    DOE Data Explorer

    Trivett, N. B.A. [Atmospheric Environment Service, Downsview, Ontario, Canada; Hudec, V. C. [Atmospheric Environment Service, Downsview, Ontario, Canada; Wong, C. S. [Marine Carbon Research Centre, Institute of Ocean Sciences, Sidney, British Columbia, Canada

    1997-01-01

    From the mid-1970s through the mid-1990s, air samples were collected for the purposes of monitoring atmospheric CO2 from four sites in the AES air sampling network. Air samples were collected approximately once per week, between 12:00 and 16:00 local time, in a pair of evacuated 2-L thick-wall borosilicate glass flasks. Samples were collected under preferred conditions of wind speed and direction (i.e., upwind of the main station and when winds are strong and steady). The flasks were evacuated to pressures of ~1 × 10^-4 mbar or 0.01 Pa prior to being sent to the stations. The air was not dried during sample collection. The flask data from Alert show an increase in the annual atmospheric CO2 concentration from 341.35 parts per million by volume (ppmv) in 1981 to 357.21 ppmv in 1991. For Cape St. James, Trivett and Higuchi (1989) reported that the mean annual rate of increase, obtained from the slope of a least-squares regression line through the annual averages, was 1.43 ppmv per year. In August 1992, the weather station at Cape St. James was automated; as a result, the flask sampling program was discontinued at this site. Estevan Point, on the West Coast of Vancouver Island, was chosen as a replacement station. Sampling at Estevan Point started in 1992; thus, the monthly and annual CO2 record from Estevan Point is too short to show any long-term trends. The sampling site at Sable Island, off the coast of Nova Scotia, was established in 1975. The flask data from Sable Island show an increase in the annual atmospheric CO2 concentration from 334.49 parts per million by volume (ppmv) in 1977 (the first full year of data) to 356.02 ppmv in 1990. For Sable Island, Trivett and Higuchi (1989) reported that the mean annual rate of increase, obtained from the slope of a least-squares regression line through the annual averages, was 1.48 ppmv per year.
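    The "mean annual rate of increase, obtained from the slope of a least-squares regression line through the annual averages" can be reproduced mechanically as below. The annual series here is a made-up stand-in, not the actual Alert, Cape St. James, or Sable Island record.

```python
# Slope of an ordinary least-squares line through annual average CO2 values.
import numpy as np

years = np.arange(1981, 1992)
annual_avg_ppmv = 341.35 + 1.5 * (years - 1981) \
    + np.random.default_rng(3).normal(0, 0.3, years.size)   # synthetic record

slope, intercept = np.polyfit(years, annual_avg_ppmv, deg=1)
print(f"mean annual rate of increase ~ {slope:.2f} ppmv per year")
```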

  18. First results on zooplankton community composition and contamination by some persistent organic pollutants in the Gulf of Tadjoura (Djibouti).

    PubMed

    Boldrocchi, G; Moussa Omar, Y; Rowat, D; Bettinetti, R

    2018-06-15

    The Gulf of Tadjoura is located in the Horn of Africa and is widely recognized as an important site where zooplanktivorous whale sharks seasonally aggregate from October to February. The surface zooplankton community (0-3 m) was sampled weekly from November 2016 to February 2017 at two sites during the whale shark aggregation period. A total of 12 phyla were identified. Copepoda represented the most abundant and diverse group, with 29 different genera, and contributed an average of 82% of the mean zooplankton density of approximately 6600 ind m-3. During the sampling period, copepods were dominated numerically by Calanoida (3600 ind m-3), followed by Poecilostomatoida (1300 ind m-3). Within the copepods, Paracalanidae, Calanidae, Oncaeidae and Miraciidae were the most common families. The temporal trend in zooplankton biomass at both stations revealed the highest peak in December (41.3±36.4 mg m-3) and the lowest in February (6.6±3.3 mg m-3). Since no information is available on the use and release of legacy contaminants in this area, zooplankton samples were also analyzed for persistent organic pollutants; the analysis revealed the consistent presence of both DDT and PCB residues in the Gulf of Tadjoura. Total PCB ranged from approximately 110 to 637 ng g-1 d.w., while total DDT ranged from 21 to 80 ng g-1 d.w. The proportion of primary DDT in the total residue was higher than that of DDE and DDD, which strongly suggests that the area might still be subject to inputs of the parent compound. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Trace element and major ion composition of wet and dry depositon in Ankara, Turkey

    NASA Astrophysics Data System (ADS)

    Kaya, Güven; Tuncel, Gürdal

    Daily, wet-only precipitation samples collected over a two-year period were analyzed for SO42-, NO3-, Cl-, NH4+, H+, Ca, Mg, K, Na, Al, Cu, Cd, Cr, Zn, V and Ni. Weekly dry-deposition samples collected on petri dishes over the same period were analyzed only for major ions. Concentrations of ions and elements in Ankara precipitation are comparable with concentrations reported in the literature for other urban areas. However, the wet deposition fluxes are the lowest among literature values, owing to the small annual precipitation in the region. Although the annual average pH in precipitation is 4.7, episodic rain events with fairly low pH were observed. Approximately half of the acidity in Ankara precipitation is neutralized in the winter season, while the acidity is completely neutralized by airborne soil particles that are rich in CaCO3 in the summer precipitation. SO42- and NO3- contribute approximately equally to the free acidity in winter. The main forms of SO42- and NO3- in precipitation are CaSO4 and Ca(NO3)2, respectively. Crustal elements and ions had higher concentrations during the summer season, while anthropogenic ions and elements did not show well-defined seasonal cycles. The lack of industrial activity in Ankara has a profound influence on the temporal behavior of elements and ions.

  20. Properties of star clusters - I. Automatic distance and extinction estimates

    NASA Astrophysics Data System (ADS)

    Buckner, Anne S. M.; Froebrich, Dirk

    2013-12-01

    Determining star cluster distances is essential to analyse their properties and distribution in the Galaxy. In particular, it is desirable to have a reliable, purely photometric distance estimation method for large samples of newly discovered cluster candidates e.g. from the Two Micron All Sky Survey, the UK Infrared Deep Sky Survey Galactic Plane Survey and VVV. Here, we establish an automatic method to estimate distances and reddening from near-infrared photometry alone, without the use of isochrone fitting. We employ a decontamination procedure of JHK photometry to determine the density of stars foreground to clusters and a galactic model to estimate distances. We then calibrate the method using clusters with known properties. This allows us to establish distance estimates with better than 40 per cent accuracy. We apply our method to determine the extinction and distance values to 378 known open clusters and 397 cluster candidates from the list of Froebrich, Scholz & Raftery. We find that the sample is biased towards clusters of a distance of approximately 3 kpc, with typical distances between 2 and 6 kpc. Using the cluster distances and extinction values, we investigate how the average extinction per kiloparsec distance changes as a function of the Galactic longitude. We find a systematic dependence that can be approximated by AH(l) [mag kpc-1] = 0.10 + 0.001 × |l - 180°|/° for regions more than 60° from the Galactic Centre.
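    The longitude dependence quoted at the end of the abstract is simple enough to evaluate directly; the sketch below just restates AH(l) = 0.10 + 0.001 × |l - 180°| mag kpc-1 as a function (the abstract notes it applies more than 60° from the Galactic Centre).

```python
# Mean H-band extinction per kiloparsec as a function of Galactic longitude,
# following the approximation quoted in the abstract (valid, per the authors,
# for regions more than 60 degrees from the Galactic Centre).
def extinction_per_kpc(l_deg: float) -> float:
    """A_H(l) in mag/kpc at Galactic longitude l (degrees)."""
    return 0.10 + 0.001 * abs(l_deg - 180.0)

for l in (60.0, 120.0, 180.0, 240.0):
    print(f"l = {l:5.1f} deg  ->  A_H ~ {extinction_per_kpc(l):.2f} mag/kpc")
```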

  1. Psychometric support of the school climate measure in a large, diverse sample of adolescents: a replication and extension.

    PubMed

    Zullig, Keith J; Collins, Rani; Ghani, Nadia; Patton, Jon M; Scott Huebner, E; Ajamie, Jean

    2014-02-01

    The School Climate Measure (SCM) was developed and validated in 2010 in response to a dearth of psychometrically sound school climate instruments. This study sought to further validate the SCM on a large, diverse sample of Arizona public school adolescents (N = 20,953). Four SCM domains (positive student-teacher relationships, academic support, order and discipline, and physical environment) were available for the analysis. Confirmatory factor analysis and structural equation modeling were used to establish construct validity, and criterion-related validity was assessed via selected Youth Risk Behavior Survey (YRBS) school safety items and self-reported grade point average (GPA). Analyses confirmed the 4 SCM school climate domains explained approximately 63% of the variance (factor loading range .45-.92). Structural equation models fit the data well (χ2 = 14,325, df = 293, p < .001; comparative fit index (CFI) = .951; Tucker-Lewis index (TLI) = .952; root mean square error of approximation (RMSEA) = .05). The goodness-of-fit index was .940. Coefficient alphas ranged from .82 to .93. Analyses of variance with post hoc comparisons suggested the SCM domains related in hypothesized directions with the school safety items and GPA. Additional evidence supports the validity and reliability of the SCM. Measures, such as the SCM, can facilitate data-driven decisions and may be incorporated into evidence-based processes designed to improve student outcomes. © 2014, American School Health Association.

  2. Life cycle cost analysis of aging aircraft airframe maintenance

    NASA Astrophysics Data System (ADS)

    Sperry, Kenneth Robert

    Scope and method of study. The purpose of this study was to examine the relationship between an aircraft's age and its annual airframe maintenance costs. Common life cycle costing methodology has previously not recognized the existence of this cost growth potential, and has therefore not determined the magnitude or significance of this cost element. This study analyzed twenty-five years of DOT Form 41 airframe maintenance cost data for the Boeing 727, 737, 747 and McDonnell Douglas DC-9 and DC-10 aircraft. Statistical analysis included regression analysis, Pearson's r, and t-tests to test the null hypothesis. Findings and conclusion. Airframe maintenance cost growth was confirmed to be increasing after an aircraft's age exceeded its designed service objective of approximately twenty years. Annual airframe maintenance cost growth increases were measured ranging from 3.5% annually for a DC-9 to approximately 9% annually for a DC-10 aircraft. The average measured coefficient of determination between age and airframe maintenance cost exceeded .80, confirming a strong relationship between cost and age. The statistical significance of the difference between airframe costs sampled in 1985 and airframe costs sampled in 1998 was confirmed by t-tests performed on each subject aircraft group. Future cost forecasts involving aging aircraft subjects must address cost growth due to aging when attempting to model an aircraft's economic service life.
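    A minimal sketch of the two statistical ingredients named above (regression of cost on age with Pearson's r, and a t-test between two sampling years), run on invented data rather than the DOT Form 41 figures.

```python
# Regression of annual airframe maintenance cost on aircraft age, plus a
# two-sample t-test between costs sampled in two years.  All data are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
age = np.arange(5, 30, dtype=float)                      # aircraft age, years
cost = 400 + 35 * age + rng.normal(0, 60, age.size)      # $K/year (hypothetical)

r, p_r = stats.pearsonr(age, cost)
slope, intercept, *_ = stats.linregress(age, cost)
print(f"Pearson r = {r:.2f}, R^2 = {r**2:.2f}, growth ~ {slope:.1f} $K per year of age")

cost_1985 = rng.normal(600, 80, 30)                      # hypothetical samples
cost_1998 = rng.normal(900, 90, 30)
t, p_t = stats.ttest_ind(cost_1985, cost_1998, equal_var=False)
print(f"t = {t:.2f}, p = {p_t:.4f}")
```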

  3. An assessment of nutrients and sedimentation in the St. Thomas East End Reserves, US Virgin Islands.

    PubMed

    Pait, Anthony S; Galdo, Francis R; Ian Hartwell, S; Apeti, Dennis A; Mason, Andrew L

    2018-04-09

    Nutrients and sedimentation were monitored for approximately 2 years at six sites in the St. Thomas East End Reserves (STEER), St. Thomas, USVI, as part of a NOAA project to develop an integrated environmental assessment. Concentrations of ammonium (NH4+) and dissolved inorganic nitrogen (DIN) were higher in Mangrove Lagoon and Benner Bay in the western portion of STEER than in the other sites further east (i.e., Cowpet Bay, Rotto Cay, St. James, and Little St. James). There was no correlation between rainfall and nutrient concentrations. Using a set of suggested nutrient thresholds that have been developed to indicate the potential for the overgrowth of algae on reefs, approximately 60% of the samples collected in STEER were above the threshold for orthophosphate (HPO42-), while 55% of samples were above the DIN threshold. Benner Bay had the highest sedimentation rate of any site monitored in STEER, including Mangrove Lagoon. There was also an east-to-west and a north-to-south gradient in sedimentation, indicative of higher sedimentation rates in the western, more populated areas surrounding STEER and sites closer to the shore of the main island of St. Thomas. Although none of the sites had a mean sedimentation rate above a suggested sedimentation threshold, the mean sedimentation rate in Benner Bay was just below the threshold.

  4. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bell, John; Chorin, Alexandre J.; Crutchfield, William

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.

  5. Preliminary Evidence for an Emerging Nonmetropolitan Mortality Penalty in the United States

    PubMed Central

    Cosby, Arthur G.; Neaves, Tonya T.; Cossman, Ronald E.; Cossman, Jeralynn S.; James, Wesley L.; Feierabend, Neal; Mirvis, David M.; Jones, Carol A.; Farrigan, Tracey

    2008-01-01

    We discovered an emerging non-metropolitan mortality penalty by contrasting 37 years of age-adjusted mortality rates for metropolitan versus nonmetropolitan US counties. During the 1980s, annual metropolitan–nonmetropolitan differences averaged 6.2 excess deaths per 100,000 nonmetropolitan population, or approximately 3,600 excess deaths; however, by 2000 to 2004, the difference had increased more than 10 times to average 71.7 excess deaths, or approximately 35,000 excess deaths. We recommend that research be undertaken to evaluate and utilize our preliminary findings of an emerging US nonmetropolitan mortality penalty. PMID:18556611
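    The excess-death totals follow from simple rate arithmetic; in the sketch below the nonmetropolitan population figures are back-of-the-envelope assumptions chosen only to reproduce the order of magnitude, not census values.

```python
# excess deaths ~ (rate difference per 100,000) * population / 100,000
def excess_deaths(rate_diff_per_100k, population):
    return rate_diff_per_100k * population / 100_000

print(f"{excess_deaths(6.2, 58e6):,.0f} excess deaths (1980s-style gap, assumed 58M nonmetro pop.)")
print(f"{excess_deaths(71.7, 49e6):,.0f} excess deaths (2000-2004 gap, assumed 49M nonmetro pop.)")
```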

  6. Cost-effective binomial sequential sampling of western bean cutworm, Striacosta albicosta (Lepidoptera: Noctuidae), egg masses in corn.

    PubMed

    Paula-Moraes, S; Burkness, E C; Hunt, T E; Wright, R J; Hein, G L; Hutchison, W D

    2011-12-01

    Striacosta albicosta (Smith) (Lepidoptera: Noctuidae), is a native pest of dry beans (Phaseolus vulgaris L.) and corn (Zea mays L.). As a result of larval feeding damage on corn ears, S. albicosta has a narrow treatment window; thus, early detection of the pest in the field is essential, and egg mass sampling has become a popular monitoring tool. Three action thresholds for field and sweet corn currently are used by crop consultants, including 4% of plants infested with egg masses on sweet corn in the silking-tasseling stage, 8% of plants infested with egg masses on field corn with approximately 95% tasseled, and 20% of plants infested with egg masses on field corn during mid-milk-stage corn. The current monitoring recommendation is to sample 20 plants at each of five locations per field (100 plants total). In an effort to develop a more cost-effective sampling plan for S. albicosta egg masses, several alternative binomial sampling plans were developed using Wald's sequential probability ratio test, and validated using Resampling for Validation of Sampling Plans (RVSP) software. The benefit-cost ratio also was calculated and used to determine the final selection of sampling plans. Based on final sampling plans selected for each action threshold, the average sample number required to reach a treat or no-treat decision ranged from 38 to 41 plants per field. This represents a significant savings in sampling cost over the current recommendation of 100 plants.
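    For readers unfamiliar with Wald's sequential probability ratio test, the sketch below shows the generic binomial form of such a plan: cumulative infested-plant counts are compared against two parallel stop lines until a treat or no-treat decision is reached. The thresholds and error rates here are illustrative assumptions, not the published plan parameters.

```python
# Generic binomial SPRT for the proportion of infested plants.
import math

def sprt_boundaries(p0, p1, alpha=0.1, beta=0.1):
    k = math.log(p1 * (1 - p0) / (p0 * (1 - p1)))
    slope = math.log((1 - p0) / (1 - p1)) / k
    upper = math.log((1 - beta) / alpha) / k   # treat if count >= slope*n + upper
    lower = math.log(beta / (1 - alpha)) / k   # no-treat if count <= slope*n + lower
    return slope, lower, upper

def classify(infested_flags, p0=0.04, p1=0.12, alpha=0.1, beta=0.1):
    slope, lower, upper = sprt_boundaries(p0, p1, alpha, beta)
    d = 0
    for n, infested in enumerate(infested_flags, start=1):
        d += int(infested)
        if d >= slope * n + upper:
            return "treat", n
        if d <= slope * n + lower:
            return "no treat", n
    return "keep sampling", len(infested_flags)

# One infested plant among the first 40 sampled -> a no-treat decision.
print(classify([0, 0, 1] + [0] * 37))
```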

  7. National Economic Development Procedures Manual. Coastal Storm Damage and Erosion

    DTIC Science & Technology

    1991-09-01

    The study area is temperate with warm summers and moderate winters. The annual temperature averages approximately 53 degrees Fahrenheit (°F). On average, January is the coolest month, with a mean temperature of 32°F, and July is the warmest month. The average annual precipitation is about 45 inches.

  8. Wavelength Dependence of Solar Irradiance Enhancement During X-class Flares and Its Influence on the Upper Atmosphere

    NASA Technical Reports Server (NTRS)

    Huang, Yanshi; Richmond, A. D.

    2013-01-01

    The wavelength dependence of solar irradiance enhancement during flare events is one of the important factors in determining how the Thermosphere-Ionosphere (TI) system responds to flares. To investigate the wavelength dependence of flare enhancement, the Flare Irradiance Spectral Model (FISM) was run for 61 X-class flares. The absolute and the percentage increases of solar irradiance at flare peaks, compared to pre-flare conditions, have clear wavelength dependences. The 0-4 nm irradiance increases much more (approximately 680 on average) than that in the 14-25 nm waveband (approximately 65 on average), except at 24 nm (approximately 220). The average percentage increases for the 25-105 nm and 122-190 nm wavebands are approximately 120 and approximately 35, respectively. The influence of 6 different wavebands (0-14 nm, 14-25 nm, 25-105 nm, 105-120 nm, 121.56 nm, and 122-175 nm) on the thermosphere was examined for the October 28th, 2003 flare (X17-class) event by coupling FISM with the National Center for Atmospheric Research (NCAR) Thermosphere-Ionosphere-Electrodynamics General Circulation Model (TIE-GCM) under geomagnetically quiet conditions (Kp=1). While the enhancement in the 0-14 nm waveband caused the largest enhancement of the globally integrated solar heating, the impact of solar irradiance enhancement on the thermosphere at 400 km is largest for the 25-105 nm waveband (EUV), which accounts for about 33 K of the total 45 K temperature enhancement, and approximately 7.4% of the total approximately 11.5% neutral density enhancement. The effect of 122-175 nm flare radiation on the thermosphere is rather small. The study also illustrates that the high-altitude thermospheric response to the flare radiation at 0-175 nm is almost a linear combination of the responses to the individual wavebands. The upper thermospheric temperature and density enhancements peaked 3-5 h after the maximum flare radiation.

  9. Tank Vapor Characterization Project: Tank 241-S-102 fourth temporal study: Headspace gas and vapor characterization results from samples collected on December 19, 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pool, K.H.; Evans, J.C.; Olsen, K.B.

    1997-08-01

    This report presents the results from analyses of samples taken from the headspace of waste storage tank 241-S-102 (Tank S-102) at the Hanford Site in Washington State. Tank headspace samples collected by SGN Eurisys Service Corporation (SESC) were analyzed by Pacific Northwest National Laboratory (PNNL) to determine headspace concentrations of selected non-radioactive analytes. Analyses were performed by the Vapor Analytical Laboratory (VAL) at PNNL. Vapor concentrations from sorbent trap samples are based on measured sample volumes provided by SESC. Ammonia was determined to be above the immediate notification limit of 150 ppm as specified by the sampling and analysis plan (SAP). Hydrogen was the principal flammable constituent of the Tank S-102 headspace, determined to be present at approximately 2.410% of its lower flammability limit (LFL). Total headspace flammability was estimated to be <2.973% of the LFL. Average measured concentrations of targeted gases, inorganic vapors, and selected organic vapors are provided in Table S.1. A summary of experimental methods, including sampling methodology, analytical procedures, and quality assurance and control methods, is presented in Section 2.0. Detailed descriptions of the analytical results are provided in Section 3.0.

  10. Conservative-variable average states for equilibrium gas multi-dimensional fluxes

    NASA Technical Reports Server (NTRS)

    Iannelli, G. S.

    1992-01-01

    Modern split component evaluations of the flux vector Jacobians are thoroughly analyzed for equilibrium-gas average-state determinations. It is shown that all such derivations satisfy a fundamental eigenvalue consistency theorem. A conservative-variable average state is then developed for arbitrary equilibrium-gas equations of state and curvilinear-coordinate fluxes. Original expressions for eigenvalues, sound speed, Mach number, and eigenvectors are then determined for a general average Jacobian, and it is shown that the average eigenvalues, Mach number, and eigenvectors may not coincide with their classical pointwise counterparts. A general equilibrium-gas equation of state is then discussed for conservative-variable computational fluid dynamics (CFD) Euler formulations. The associated derivations lead to unique compatibility relations that constrain the pressure Jacobian derivatives. Thereafter, alternative forms for the pressure variation and average sound speed are developed in terms of two average pressure Jacobian derivatives. Significantly, no additional degree of freedom exists in the determination of these two average partial derivatives of pressure. Therefore, they are simultaneously computed exactly without any auxiliary relation, hence without any geometric solution projection or arbitrary scale factors. Several alternative formulations are then compared and key differences highlighted with emphasis on the determination of the pressure variation and average sound speed. The relevant underlying assumptions are identified, including some subtle approximations that are inherently employed in published average-state procedures. Finally, a representative test case is discussed for which an intrinsically exact average state is determined. This exact state is then compared with the predictions of recent methods, and their inherent approximations are appropriately quantified.
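    For orientation, the classical perfect-gas Roe-type average state is sketched below in LaTeX; it is standard background only, not the paper's equilibrium-gas construction, which replaces the perfect-gas sound-speed relation with constrained pressure-Jacobian derivatives.

```latex
% Classical perfect-gas Roe average between left (L) and right (R) states,
% shown only as background for the equilibrium-gas generalization above.
\[
  \hat{\rho} = \sqrt{\rho_L \rho_R}, \qquad
  \hat{u} = \frac{\sqrt{\rho_L}\,u_L + \sqrt{\rho_R}\,u_R}{\sqrt{\rho_L} + \sqrt{\rho_R}}, \qquad
  \hat{H} = \frac{\sqrt{\rho_L}\,H_L + \sqrt{\rho_R}\,H_R}{\sqrt{\rho_L} + \sqrt{\rho_R}},
\]
\[
  \hat{c}^{\,2} = (\gamma - 1)\left(\hat{H} - \tfrac{1}{2}\hat{u}^{2}\right),
\]
% so that the flux Jacobian evaluated at the hatted state satisfies
% F(U_R) - F(U_L) = A(\hat{U}) (U_R - U_L).
```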

  11. On direct theorems for best polynomial approximation

    NASA Astrophysics Data System (ADS)

    Auad, A. A.; AbdulJabbar, R. S.

    2018-05-01

    This paper obtains analogues of well-known direct theorems for the degree of best approximation of unbounded functions in the weighted space L_{p,α}(A), A = [0,1], by algebraic polynomials, E_n^H(f)_{p,α}, and for the degree of best approximation in the same space on the interval [0,2π] by trigonometric polynomials, E_n^T(f)_{p,α}, in terms of averaged moduli.
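    A hedged reconstruction of the objects involved, for readability: the degrees of best algebraic and trigonometric approximation in the weighted space, and the generic shape of a direct (Jackson-type) estimate in terms of an averaged modulus of smoothness.

```latex
% Degrees of best approximation and a generic direct estimate; the precise
% constants and hypotheses of the paper are not reproduced here.
\[
  E_n^{H}(f)_{p,\alpha} = \inf_{\deg P_n \le n} \lVert f - P_n \rVert_{L_{p,\alpha}[0,1]},
  \qquad
  E_n^{T}(f)_{p,\alpha} = \inf_{T_n \in \mathcal{T}_n} \lVert f - T_n \rVert_{L_{p,\alpha}[0,2\pi]},
\]
\[
  E_n^{T}(f)_{p,\alpha} \;\le\; C \, \tau_k\!\left(f, \tfrac{1}{n}\right)_{p,\alpha},
  \qquad n \ge 1,
\]
% where \tau_k(f,\delta)_{p,\alpha} denotes the averaged (tau) modulus of smoothness.
```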

  12. Reconstructing lake ice cover in subarctic lakes using a diatom-based inference model

    NASA Astrophysics Data System (ADS)

    Weckström, Jan; Hanhijärvi, Sami; Forsström, Laura; Kuusisto, Esko; Korhola, Atte

    2014-03-01

    A new quantitative diatom-based lake ice cover inference model was developed to reconstruct past ice cover histories and applied to four subarctic lakes. The ice cover model used is based on a calculated melting degree-day value of +130 and a freezing degree-day value of -30 for each lake. The reconstructed Holocene ice cover duration histories show similar trends to the independently reconstructed regional air temperature history. The ice cover duration was around 7 days shorter than the average ice cover duration during the warmer early Holocene (approximately 10 to 6.5 calibrated kyr B.P.) and around 3-5 days longer during the cool Little Ice Age (approximately 500 to 100 calibrated yr B.P.). Although the recent climate warming is represented by only 2-3 samples in the sediment series, these show a trend toward ice-free periods prolonged by up to 2 days. Diatom-based ice cover inference models can provide a powerful tool to reconstruct past ice cover histories in remote and sensitive areas where no measured data are available.
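    A minimal sketch of the degree-day rule quoted above, under the assumption that ice-off is declared when cumulative melting degree days reach +130 and freeze-up when cumulative freezing degree days reach -30; the daily temperature series is synthetic.

```python
# Degree-day ice phenology rule on a synthetic subarctic temperature cycle.
import numpy as np

def ice_off_day(daily_mean_c, threshold=130.0):
    cum = np.cumsum(np.clip(daily_mean_c, 0.0, None))   # melting degree days
    hits = np.nonzero(cum >= threshold)[0]
    return int(hits[0]) if hits.size else None

def freeze_up_day(daily_mean_c, threshold=-30.0):
    cum = np.cumsum(np.clip(daily_mean_c, None, 0.0))   # freezing degree days
    hits = np.nonzero(cum <= threshold)[0]
    return int(hits[0]) if hits.size else None

days = np.arange(365)
temps = 12.0 * np.sin(2 * np.pi * (days - 110) / 365)   # crude seasonal cycle, deg C
print("ice-off day of year:", ice_off_day(temps[90:]) + 90)
print("freeze-up day of year:", freeze_up_day(temps[240:]) + 240)
```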

  13. Optimized Reduction of Unsteady Radial Forces in a Singlechannel Pump for Wastewater Treatment

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Hyuk; Cho, Bo-Min; Choi, Young-Seok; Lee, Kyoung-Yong; Peck, Jong-Hyeon; Kim, Seon-Chang

    2016-11-01

    A single-channel pump for wastewater treatment was optimized to reduce unsteady radial force sources caused by impeller-volute interactions. The steady and unsteady Reynolds-averaged Navier-Stokes equations using the shear-stress transport turbulence model were discretized by finite volume approximations and solved on tetrahedral grids to analyze the flow in the single-channel pump. The sweep area of radial force during one revolution and the distance of the sweep-area center of mass from the origin were selected as the objective functions; the two design variables were related to the internal flow cross-sectional area of the volute. These objective functions were integrated into one objective function by applying the weighting factor for optimization. Latin hypercube sampling was employed to generate twelve design points within the design space. A response-surface approximation model was constructed as a surrogate model for the objectives, based on the objective function values at the generated design points. The optimized results showed considerable reduction in the unsteady radial force sources in the optimum design, relative to those of the reference design.
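    The surrogate workflow described above (twelve Latin hypercube design points, a weighted single objective, and a response-surface fit that is then minimized) can be sketched as follows; the cfd_objective function is a cheap stand-in for the unsteady CFD evaluation, and the weights and bounds are assumptions.

```python
# Latin hypercube sampling + quadratic response surface + surrogate minimization.
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

def cfd_objective(x):                       # hypothetical stand-in for the CFD result
    w1, w2 = 0.5, 0.5                       # weighting factors combining two objectives
    sweep_area = (x[0] - 0.3) ** 2 + 0.1 * x[1]
    center_offset = (x[1] - 0.6) ** 2 + 0.05 * x[0]
    return w1 * sweep_area + w2 * center_offset

sampler = qmc.LatinHypercube(d=2, seed=1)
X = sampler.random(n=12)                    # twelve design points, as in the study
y = np.array([cfd_objective(x) for x in X])

def basis(x):                               # full quadratic basis in two variables
    return np.array([1.0, x[0], x[1], x[0] ** 2, x[1] ** 2, x[0] * x[1]])

A = np.vstack([basis(x) for x in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

res = minimize(lambda x: basis(x) @ coef, x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
print("surrogate optimum:", res.x, "predicted objective:", res.fun)
```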

  14. Pharmacokinetics of isotretinoin and its major blood metabolite following a single oral dose to man.

    PubMed

    Colburn, W A; Vane, F M; Shorter, H J

    1983-01-01

    A pharmacokinetic profile of isotretinoin and its major dermatologically active blood metabolite, 4-oxo-isotretinoin, was developed following a single 80 mg oral suspension dose of isotretinoin to 15 normal male subjects. Blood samples were assayed for isotretinoin and 4-oxo-isotretinoin using a newly developed reverse-phase HPLC method. Following rapid absorption from the suspension formulation, isotretinoin is distributed and eliminated with harmonic mean half-lives of 1.3 and 17.4 h, respectively. Maximum concentrations of isotretinoin in blood were observed at 1 to 4 h after dosing. Maximum concentrations of the major blood metabolite of isotretinoin, 4-oxo-isotretinoin, are approximately one-half those of isotretinoin and occur at 6 to 16 h after isotretinoin dosing. The ratio of areas under the curve for metabolite and parent drug following the single dose suggests that average steady-state ratios of metabolite to parent drug during a dosing interval will be approximately 2.5. Both isotretinoin and its metabolite can be adequately described using a single linear pharmacokinetic model.
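    The last sentence rests on a standard pharmacokinetic identity: the average steady-state concentration over a dosing interval equals AUC(single dose)/τ, so the metabolite-to-parent ratio of average steady-state concentrations equals the ratio of their single-dose AUCs. A minimal sketch with invented concentration-time data:

```python
# Trapezoidal AUCs for parent drug and metabolite; their ratio predicts the
# average steady-state metabolite/parent ratio.  Values below are invented.
import numpy as np

t = np.array([0, 1, 2, 4, 6, 8, 12, 16, 24, 36, 48], dtype=float)         # h
parent = np.array([0, 820, 700, 520, 400, 310, 190, 120, 50, 15, 5.0])    # ng/mL
metab = np.array([0, 60, 180, 400, 560, 600, 520, 430, 280, 130, 60.0])   # ng/mL

def auc_trapezoid(conc, time):
    return float(np.sum(0.5 * (conc[1:] + conc[:-1]) * np.diff(time)))

ratio = auc_trapezoid(metab, t) / auc_trapezoid(parent, t)
print(f"AUC ratio (metabolite/parent) ~ {ratio:.2f}")
print(f"predicted average steady-state metabolite/parent ratio ~ {ratio:.2f}")
```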

  15. Variations in the concentration of plutonium, strontium-90 and total alpha-emitters in human teeth collected within the British Isles.

    PubMed

    O'Donnell, R G; Mitchell, P I; Priest, N D; Strange, L; Fox, A; Henshaw, D L; Long, S C

    1997-08-18

    Concentrations of plutonium-239, plutonium-240, strontium-90 and total alpha-emitters have been measured in children's teeth collected throughout Great Britain and Ireland. The concentrations of plutonium and strontium-90 were measured in batched samples, each containing approximately 50 teeth, using low-background radiochemical methods. The concentrations of total alpha-emitters were determined in single teeth using alpha-sensitive plastic track detectors. The results showed that the average concentrations of total alpha-emitters and strontium-90 were approximately one to three orders of magnitude greater than the equivalent concentrations of plutonium-239,240. Regression analyses indicated that the concentrations of plutonium, but not strontium-90 or total alpha-emitters, decreased with increasing distance from the Sellafield nuclear fuel reprocessing plant, suggesting that this plant is a source of plutonium contamination in the wider population of the British Isles. Nevertheless, the measured absolute concentrations of plutonium (mean = 5 +/- 4 mBq kg-1 ash wt.) were so low that they are considered to present an insignificant radiological hazard.

  16. Targeted Analyte Detection by Standard Addition Improves Detection Limits in MALDI Mass Spectrometry

    PubMed Central

    Eshghi, Shadi Toghi; Li, Xingde; Zhang, Hui

    2014-01-01

    Matrix-assisted laser desorption/ionization has proven an effective tool for fast and accurate determination of many molecules. However, the detector sensitivity and chemical noise compromise the detection of many invaluable low-abundance molecules from biological and clinical samples. To challenge this limitation, we developed a targeted analyte detection (TAD) technique. In TAD, the target analyte is selectively elevated by spiking a known amount of that analyte into the sample, thereby raising its concentration above the noise level, where we take advantage of the improved sensitivity to detect the presence of the endogenous analyte in the sample. We assessed TAD on three peptides in simple and complex background solutions with various exogenous analyte concentrations in two MALDI matrices. TAD successfully improved the limit of detection (LOD) of target analytes when the target peptides were added to the sample in a concentration close to optimum concentration. The optimum exogenous concentration was estimated through a quantitative method to be approximately equal to the original LOD for each target. Also, we showed that TAD could achieve LOD improvements on an average of 3-fold in a simple and 2-fold in a complex sample. TAD provides a straightforward assay to improve the LOD of generic target analytes without the need for costly hardware modifications. PMID:22877355

  17. Targeted analyte detection by standard addition improves detection limits in matrix-assisted laser desorption/ionization mass spectrometry.

    PubMed

    Toghi Eshghi, Shadi; Li, Xingde; Zhang, Hui

    2012-09-18

    Matrix-assisted laser desorption/ionization (MALDI) has proven an effective tool for fast and accurate determination of many molecules. However, the detector sensitivity and chemical noise compromise the detection of many invaluable low-abundance molecules from biological and clinical samples. To challenge this limitation, we developed a targeted analyte detection (TAD) technique. In TAD, the target analyte is selectively elevated by spiking a known amount of that analyte into the sample, thereby raising its concentration above the noise level, where we take advantage of the improved sensitivity to detect the presence of the endogenous analyte in the sample. We assessed TAD on three peptides in simple and complex background solutions with various exogenous analyte concentrations in two MALDI matrices. TAD successfully improved the limit of detection (LOD) of target analytes when the target peptides were added to the sample in a concentration close to optimum concentration. The optimum exogenous concentration was estimated through a quantitative method to be approximately equal to the original LOD for each target. Also, we showed that TAD could achieve LOD improvements on an average of 3-fold in a simple and 2-fold in a complex sample. TAD provides a straightforward assay to improve the LOD of generic target analytes without the need for costly hardware modifications.
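    A small simulation in the spirit of TAD (not the authors' MALDI workflow): an endogenous peak below the limit of detection becomes testable once a spike of roughly the LOD is added, because a spiked sample can then be compared against a spiked blank. All numbers are illustrative.

```python
# Toy standard-addition detection: spike sample and blank with ~LOD of analyte,
# then test whether the spiked sample is systematically higher than the blank.
import numpy as np

rng = np.random.default_rng(42)
noise_sd = 1.0
lod = 3 * noise_sd                 # conventional 3-sigma detection limit
endogenous = 0.6 * lod             # true signal, individually undetectable
spike = lod                        # optimum spike ~ original LOD (per the study)

def measure(true_signal, n=50):
    return true_signal + rng.normal(0.0, noise_sd, size=n)

spiked_sample = measure(endogenous + spike)   # sample + spike
spiked_blank = measure(spike)                 # matrix blank + same spike

diff = spiked_sample.mean() - spiked_blank.mean()
se = np.sqrt(spiked_sample.var(ddof=1) / spiked_sample.size
             + spiked_blank.var(ddof=1) / spiked_blank.size)
print(f"difference = {diff:.2f}, z ~ {diff / se:.1f}  (large z implies endogenous analyte present)")
```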

  18. PVP capped silver nanocubes assisted removal of glyphosate from water-A photoluminescence study.

    PubMed

    Sarkar, Sumit; Das, Ratan

    2017-10-05

    Glyphosate [N-phosphono-methylglycine (PMG)] is the most widely used herbicide worldwide, and it has recently been reported that glyphosate is harmful and may contribute to diseases such as Alzheimer's and Parkinson's disease, depression, cancer, and infertility, as well as genotoxic effects. As it is mostly present in standing water bodies and groundwater systems, its detection and removal are very important. Here, we have shown a fluorescence technique for the removal of glyphosate from water using chemically synthesized polyvinylpyrrolidone (PVP)-capped silver nanocrystals. A transmission electron microscopy (TEM) study shows an average silver nanocrystal size of approximately 100 nm with a cubic morphology. Glyphosate does not show absorption in the visible region, but both glyphosate and the silver nanocrystals show strong fluorescence in the visible region. Photoluminescence has therefore been successfully utilized to detect glyphosate in water samples, and on treating the glyphosate-contaminated water sample with the silver nanocrystals, the sample shows no emission peak of glyphosate at 458 nm. Thus, this approach is a promising and very rapid method for the detection and removal of glyphosate from water samples on treatment with silver nanocubes. NMR spectra further confirm that the silver nanocrystal-treated contaminated water samples are glyphosate free. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Electron Microscopic Examination of Irradiated TRISO Coated Particles of Compact 6-3-2 of AGR-1 Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Rooyen, Isabella Johanna; Demkowicz, Paul Andrew; Riesterer, Jessica Lori

    2012-12-01

    The electron microscopic examination of selected irradiated TRISO coated particles from fuel compact 6-3-2 of the AGR-1 experiment is presented in this report. Compact 6-3-2 refers to the compact in Capsule 6 at level 3 of Stack 2. The fuel used in the Capsule 6 compacts is called the "baseline" fuel, as it was fabricated with refined coating process conditions modeled on those used to fabricate historic German fuel, which showed excellent irradiation performance with UO2 kernels. The AGR-1 fuel, however, is made of low-enriched uranium oxycarbide (UCO). Kernel diameters are approximately 350 µm with a U-235 enrichment of approximately 19.7%. Compact 6-3-2 has been irradiated to 11.3% FIMA compact average burn-up, with a time-average, volume-average temperature of 1070.2°C and a compact average fast fluence of 2.38E21 n/cm

  20. Electron Microscopic Examination of Irradiated TRISO Coated Particles of Compact 6-3-2 of AGR-1 Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Rooyen, Isabella Johanna; Demkowicz, Paul Andrew; Riesterer, Jessica Lori

    2012-12-01

    The electron microscopic examination of selected irradiated TRISO coated particles from fuel compact 6-3-2 of the AGR-1 experiment is presented in this report. Compact 6-3-2 refers to the compact in Capsule 6 at level 3 of Stack 2. The fuel used in the Capsule 6 compacts is called the "baseline" fuel, as it was fabricated with refined coating process conditions modeled on those used to fabricate historic German fuel, which showed excellent irradiation performance with UO2 kernels. The AGR-1 fuel, however, is made of low-enriched uranium oxycarbide (UCO). Kernel diameters are approximately 350 µm with a U-235 enrichment of approximately 19.7%. Compact 6-3-2 has been irradiated to 11.3% FIMA compact average burn-up, with a time-average, volume-average temperature of 1070.2°C and a compact average fast fluence of 2.38E21 n/cm

  1. Development of the black soldier fly (Diptera: Stratiomyidae) in relation to temperature.

    PubMed

    Tomberlin, Jeffery K; Adler, Peter H; Myers, Heidi M

    2009-06-01

    The black soldier fly, Hermetia illucens L., was reared on a grain-based diet at 27, 30, and 36 degrees C. Survival of 4- to 6-d-old larvae to adults averaged 74-97% at 27 and 30 degrees C but was only 0.1% at 36 degrees C. Flies required a mean of approximately 4 d (11%) longer to complete larval and pupal development at 27 degrees C than at 30 degrees C. At 27 and 30 degrees C, females weighed an average of 17-19% more than males but required an average of 0.6-0.8 d (3.0-4.3%) longer to complete larval development. At both temperatures, adult females lived an average of approximately 3.5 d less than adult males. The duration of larval development was a significant predictor of adult longevity. Temperature differences of even 3 degrees C produce significant fitness tradeoffs for males and females, influencing life history attributes and having practical applications for forensic entomology.

  2. When can time-dependent currents be reproduced by the Landauer steady-state approximation?

    NASA Astrophysics Data System (ADS)

    Carey, Rachel; Chen, Liping; Gu, Bing; Franco, Ignacio

    2017-05-01

    We establish well-defined limits in which the time-dependent electronic currents across a molecular junction subject to a fluctuating environment can be quantitatively captured via the Landauer steady-state approximation. For this, we calculate the exact time-dependent non-equilibrium Green's function (TD-NEGF) current along a model two-site molecular junction, in which the site energies are subject to correlated noise, and contrast it with that obtained from the Landauer approach. The ability of the steady-state approximation to capture the TD-NEGF behavior at each instant of time is quantified via the same-time correlation function of the currents obtained from the two methods, while their global agreement is quantified by examining differences in the average currents. The Landauer steady-state approach is found to be a useful approximation when (i) the fluctuations do not disrupt the degree of delocalization of the molecular eigenstates responsible for transport and (ii) the characteristic time for charge exchange between the molecule and leads is fast with respect to the molecular correlation time. For resonant transport, when these conditions are satisfied, the Landauer approach is found to accurately describe the current, both on average and at each instant of time. For non-resonant transport, we find that while the steady-state approach fails to capture the time-dependent transport at each instant of time, it still provides a good approximation to the average currents. These criteria can be employed to adopt effective modeling strategies for transport through molecular junctions in interaction with a fluctuating environment, as is necessary to describe experiments.
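    For reference, the Landauer steady-state baseline for a two-site junction in the wide-band limit can be written down in a few lines; this is a generic sketch with arbitrary parameters, not the TD-NEGF calculation or the specific model parameters of the paper.

```python
# Landauer current through a two-site junction in the wide-band limit.
import numpy as np

def transmission(E, eps1=0.0, eps2=0.0, t=0.1, gamma_L=0.05, gamma_R=0.05):
    H = np.array([[eps1, t], [t, eps2]], dtype=complex)
    sigma = np.diag([-0.5j * gamma_L, -0.5j * gamma_R])    # wide-band self-energies
    G = np.linalg.inv(E * np.eye(2) - H - sigma)           # retarded Green's function
    return gamma_L * gamma_R * abs(G[0, 1]) ** 2           # Tr[G_L G G_R G^dag] here

def fermi(E, mu, kT=0.025):
    return 1.0 / (1.0 + np.exp((E - mu) / kT))

def landauer_current(V=0.2, kT=0.025):
    E = np.linspace(-2.0, 2.0, 4001)
    T = np.array([transmission(e) for e in E])
    integrand = T * (fermi(E, +V / 2, kT) - fermi(E, -V / 2, kT))
    dE = E[1] - E[0]
    return float(np.sum(integrand) * dE)    # current in units of 2e/h

print(f"I ~ {landauer_current():.4f}  (units of 2e/h)")
```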

  3. Energy expenditure, heart rate response, and metabolic equivalents (METs) of adults taking part in children's games.

    PubMed

    Fischer, S L; Watts, P B; Jensen, R L; Nelson, J

    2004-12-01

    The need for physical activity can be seen in the low numbers of people participating in regular physical activity as well as the increasing prevalence of certain diseases such as Type II diabetes (especially in children), cardiovascular disease, and some cancers. With the increase in preventable diseases that are caused in part by a sedentary lifestyle, a closer look needs to be taken at the role of family interaction as a means of increasing physical activity for both adults and children. Because of the many benefits of physical activity in relation to health, a family approach to achieving recommended levels of physical activity may be quite applicable. Forty volunteers were recruited from the community (20 adults and 20 children). The volunteers played 2 games: soccer and nerfball. Data were collected over 10 minutes (5 min per game). Expired air analysis was used to calculate energy expenditure and metabolic equivalents (METs). Descriptive statistics were calculated along with a regression analysis to determine differences between the 2 games, and an ANCOVA to determine any significant effects of age, child age, gender, and physical activity level on the results. For both games, average heart rate was approximately 88% of maximum, average METs were approximately 6, and average energy expenditure was approximately 40 kcal. This study showed that adults can achieve recommended physical activity levels through these specific activities if sustained for approximately 20 min.
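    To make the MET figures concrete, a common ACSM-style conversion from METs to energy expenditure is shown below; it is an assumption that the study used this particular formula, and the body mass is illustrative.

```python
# kcal/min ~ METs * 3.5 mL O2/kg/min * mass / 1000 * ~5 kcal per litre of O2
def kcal_per_min(mets: float, mass_kg: float) -> float:
    return mets * 3.5 * mass_kg / 200.0

print(f"{kcal_per_min(6.0, 70.0) * 10:.0f} kcal in 10 min at 6 METs for a 70 kg adult (illustrative)")
```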

  4. What limits photosynthetic energy conversion efficiency in nature? Lessons from the oceans.

    PubMed

    Falkowski, Paul G; Lin, Hanzhi; Gorbunov, Maxim Y

    2017-09-26

    Constraining photosynthetic energy conversion efficiency in nature is challenging. In principle, at least two of the three yields must be measured simultaneously: photochemistry, fluorescence, and/or thermal dissipation. We constructed two different, extremely sensitive and precise active fluorometers: one measures the quantum yield of photochemistry from changes in variable fluorescence, the other measures fluorescence lifetimes in the picosecond time domain. By deploying the pair of instruments on eight transoceanic cruises over six years, we obtained over 200 000 measurements of fluorescence yields and lifetimes from surface waters in five ocean basins. Our results revealed that the average quantum yield of photochemistry was approximately 0.35, while the average quantum yield of fluorescence was approximately 0.07. Thus, closure on the energy budget suggests that, on average, approximately 58% of the photons absorbed by phytoplankton in the world oceans are dissipated as heat. This extraordinary inefficiency is associated with the paucity of nutrients in the upper ocean, especially dissolved inorganic nitrogen and iron. Our results strongly suggest that, in nature, most of the time, most of the phytoplankton community operates at approximately half of its maximal photosynthetic energy conversion efficiency because nutrients limit the synthesis or function of essential components in the photosynthetic apparatus. This article is part of the themed issue 'Enhancing photosynthesis in crop plants: targets for improvement'. © 2017 The Author(s).

  5. Radon daughter plate-out measurements at SNOLAB for polyethylene and copper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Matthew; Bauer, Dan; Bunker, Ray

    We report that polyethylene and copper samples were exposed to the underground air at SNOLAB for approximately three months while several environmental factors were monitored. Predictions of the radon-daughter plate-out rate are compared to the resulting surface activities, obtained from high-sensitivity measurements of alpha emissivity using the XIA UltraLo-1800 spectrometer at Southern Methodist University. From these measurements, we determine an average 210Pb plate-out rate of 249 and 423 atoms/day/cm2 for polyethylene and copper, respectively, when exposed to a radon activity concentration of 135 Bq/m3 at SNOLAB. Finally, a time-dependent model of alpha activity is discussed for these materials placed in similar environmental conditions.

  6. Radon daughter plate-out measurements at SNOLAB for polyethylene and copper

    DOE PAGES

    Stein, Matthew; Bauer, Dan; Bunker, Ray; ...

    2017-11-04

    We report that polyethylene and copper samples were exposed to the underground air at SNOLAB for approximately three months while several environmental factors were monitored. Predictions of the radon-daughter plate-out rate are compared to the resulting surface activities, obtained from high-sensitivity measurements of alpha emissivity using the XIA UltraLo-1800 spectrometer at Southern Methodist University. From these measurements, we determine an average 210Pb plate-out rate of 249 and 423 atoms/day/cm2 for polyethylene and copper, respectively, when exposed to a radon activity concentration of 135 Bq/m3 at SNOLAB. Finally, a time-dependent model of alpha activity is discussed for these materials placed in similar environmental conditions.
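    A hedged sketch of the kind of time-dependent surface-activity model mentioned above: 210Pb atoms plate out at a constant rate R and decay with a ~22.3-year half-life, so the surface activity grows as A(t) = R(1 - exp(-λt)); the 210Po ingrowth that an alpha counter ultimately sees is omitted here.

```python
# Simple 210Pb accumulation model for a constant plate-out rate.
import numpy as np

HALF_LIFE_PB210_DAYS = 22.3 * 365.25
LAM = np.log(2) / HALF_LIFE_PB210_DAYS        # decay constant, 1/day

def pb210_activity(R_atoms_per_day_cm2, t_days):
    """Surface 210Pb activity (decays/day/cm^2) after t_days of exposure."""
    return R_atoms_per_day_cm2 * (1.0 - np.exp(-LAM * t_days))

for R in (249.0, 423.0):                      # plate-out rates reported above
    a = pb210_activity(R, 90.0)               # ~three months of exposure
    print(f"R = {R:5.1f} atoms/day/cm^2 -> 210Pb activity ~ {a:.2f} decays/day/cm^2")
```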

  7. Preparation and antibacterial activity of oligosaccharides derived from dandelion.

    PubMed

    Qian, Li; Zhou, Yan; Teng, Zhaolin; Du, Chun-Ling; Tian, Changrong

    2014-03-01

    In this study, we prepared oligosaccharides from dandelion (Taraxacum officinale) by hydrolysis with hydrogen peroxide (H2O2) and investigated their antibacterial activity. The optimum hydrolysis conditions, as determined using the response surface methodology, were as follows: reaction time, 5.12 h; reaction temperature, 65.53 °C; and H2O2 concentration, 3.16%. Under these conditions, the maximum yield of the oligosaccharides reached 25.43%. The sugar content in the sample was 96.8%, and the average degree of polymerisation was approximately 9. The oligosaccharides showed high antibacterial activity against Escherichia coli, Bacillus subtilis and Staphylococcus aureus, indicating that dandelion-derived oligosaccharides have the potential to be used as antibacterial agents. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Mn valence, magnetic, and electrical properties of LaMnO3+δ nanofibers by electrospinning.

    PubMed

    Zhou, Xianfeng; Xue, Jiang; Zhou, Defeng; Wang, Zhongli; Bai, Yijia; Wu, Xiaojie; Liu, Xiaojuan; Meng, Jian

    2010-10-01

    LaMnO3+δ nanofibers have been prepared by electrospinning. Nearly 70% of the Mn atoms are Mn4+, a much higher fraction than in the corresponding nanoparticles. The average grain size of the fibers is approximately 20 nm, which is the critical size for producing the nanoscale effect. The nanofibers exhibit a very broad magnetic transition with Tc≈255 K, and the Tc onset is around 310 K. The blocking temperature TB is 180 K. The sample shows weak ferromagnetism above TB and below Tc, and superparamagnetism near the Tc onset. The resistivity measurements show a metal-insulator transition near 210 K and an upturn at about 45 K.

  9. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting

    PubMed Central

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2015-01-01

    Summary The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function. PMID:27346982

  10. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting.

    PubMed

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2016-06-01

    The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
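    One member of the calibration family described above, exponential tilting (entropy balancing), can be sketched in a few lines: weights for the treated group are chosen so the weighted treated covariate means match the combined-sample means. The full estimator imposes a three-way balance; this simplified one-constraint version and the simulated data are assumptions for illustration.

```python
# Exponential-tilting calibration weights via the convex dual problem.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n, p = 500, 3
X = rng.normal(size=(n, p))
treated = rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))

target = X.mean(axis=0)                 # combined-sample covariate means
Xt = X[treated]

def dual(lam):                          # minimizing this enforces the moment constraints
    return np.log(np.exp((Xt - target) @ lam).sum())

lam_hat = minimize(dual, x0=np.zeros(p), method="BFGS").x
w = np.exp((Xt - target) @ lam_hat)
w /= w.sum()

print("balanced means:", Xt.T @ w)      # should be close to `target`
print("target means:  ", target)
```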

  11. Emotional intelligence and clinical performance/retention of nursing students

    PubMed Central

    Marvos, Chelsea; Hale, Frankie B.

    2015-01-01

    Objective: This exploratory, quantitative, descriptive study was undertaken to explore the relationship between clinical performance and anticipated retention in nursing students. Methods: After approval by the university's Human Subjects Committee, a sample of 104 nursing students was recruited for this study, which involved testing with a valid and reliable emotional intelligence (EI) instrument and a self-report survey of clinical competencies. Results: Statistical analysis revealed that although the group averages for the total EI score and the 6 score subsets were in the average range, approximately 30% of the individual total EI scores and 30% of two branch scores, identifying emotions correctly and understanding emotions, fell in the less-than-average range. These data, as well as the analysis of correlation with clinical self-report scores, suggest recommendations applicable to educators of clinical nursing students. Conclusions: Registered nurses make up the largest segment of the ever-growing healthcare workforce. Yet, retention of new graduates has historically been a challenge for the profession. Given the projected employment growth in nursing, it is important to identify factors which correlate with high levels of performance and job retention among nurses. There is preliminary evidence that EI, a nontraditional intelligence measure, relates positively not only with retention of clinical staff nurses, but with overall clinical performance as well. PMID:27981096

  12. Neurodevelopment of children under 3 years of age with Smith-Magenis syndrome.

    PubMed

    Wolters, Pamela L; Gropman, Andrea L; Martin, Staci C; Smith, Michaele R; Hildenbrand, Hanna L; Brewer, Carmen C; Smith, Ann C M

    2009-10-01

    Systematic data regarding early neurodevelopmental functioning in Smith-Magenis syndrome are limited. Eleven children with Smith-Magenis syndrome less than 3 years of age (mean, 19 months; range, 5-34 months) received prospective multidisciplinary assessments using standardized measures. The total sample scored in the moderately to severely delayed range in cognitive functioning, expressive language, and motor skills and exhibited generalized hypotonia, oral-motor abnormalities, and middle ear dysfunction. Socialization skills were average, and significantly higher than daily living, communication, and motor abilities, which were below average. Mean behavior ratings were in the nonautistic range. According to exploratory analyses, the toddler subgroup scored significantly lower than the infant subgroup in cognition, expressive language, and adaptive behavior, suggesting that the toddlers were more delayed than the infants relative to their respective peers. Infants aged approximately 1 year or younger exhibited cognitive, language, and motor skills that ranged from average to delayed, but with age-appropriate social skills and minimal maladaptive behaviors. At ages 2 to 3 years, the toddlers consistently exhibited cognitive, expressive language, adaptive behavior, and motor delays and mildly to moderately autistic behaviors. Combining age groups in studies may mask developmental and behavioral differences. Increased knowledge of these early neurodevelopmental characteristics should facilitate diagnosis and appropriate intervention.

  13. Star-formation rate in compact star-forming galaxies

    NASA Astrophysics Data System (ADS)

    Izotova, I. Y.; Izotov, Y. I.

    2018-03-01

    We use the data for the Hβ emission-line, far-ultraviolet (FUV) and mid-infrared 22 μm continuum luminosities to estimate star formation rates <SFR> averaged over the galaxy lifetime for a sample of about 14000 bursting compact star-forming galaxies (CSFGs) selected from the Data Release 12 (DR12) of the Sloan Digital Sky Survey (SDSS). The average coefficient linking <SFR> and the star formation rate SFR0 derived from the Hβ luminosity at zero starburst age is found to be 0.04. We compare <SFR>s with some commonly used SFRs which are derived adopting a continuous star formation during a period of ~100 Myr, and find that the latter ones are 2-3 times higher. It is shown that the relations between SFRs derived using a geometric mean of two star-formation indicators in the UV and IR ranges and reduced to zero starburst age have considerably lower dispersion compared to those with single star-formation indicators. We suggest that our relations for <SFR> determination are more appropriate for CSFGs because they take into account a proper temporal evolution of their luminosities. On the other hand, we show that commonly used SFR relations can be applied for approximate estimation, within a factor of ~2, of the <SFR> averaged over the lifetime of the bursting compact galaxy.

  14. Daily radionuclide ingestion and internal radiation doses in Aomori prefecture, Japan.

    PubMed

    Ohtsuka, Yoshihito; Kakiuchi, Hideki; Akata, Naofumi; Takaku, Yuichi; Hisamatsu, Shun'ichi

    2013-10-01

    To assess internal annual dose in the general public in Aomori Prefecture, Japan, 80 duplicate cooked diet samples, equivalent to the food consumed over a 400-d period by one person, were collected from 100 volunteers in Aomori City and the village of Rokkasho during 2006–2010 and were analyzed for 11 radionuclides. To obtain average rates of ingestion of radionuclides, the volunteers were selected from among office, fisheries, agricultural, and livestock farm workers. Committed effective doses from ingestion of the diet over a 1-y period were calculated from the analytical results and from International Commission on Radiological Protection dose coefficients; for 40K, an internal effective dose rate from the literature was used. Fisheries workers had significantly higher combined internal annual dose than the other workers, possibly because of high rates of ingestion of marine products known to have high 210Po concentrations. The average internal dose rate, weighted by the numbers of households in each worker group in Aomori Prefecture, was estimated at 0.47 mSv y-1. Polonium-210 contributed 49% of this value. The sum of committed effective dose rates for 210Po, 210Pb, 228Ra, and 14C and the effective dose rate of 40K accounted for approximately 99% of the average internal dose rate.
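
    For reference, the committed effective dose from ingestion quoted here is conventionally obtained by summing, over the radionuclides considered, the annual intake multiplied by the corresponding ICRP ingestion dose coefficient (generic notation, not the authors' symbols):

        E = \sum_r I_r \, e_{\mathrm{ing},r},

    where I_r is the annual intake of radionuclide r (Bq) and e_ing,r is its committed effective dose coefficient (Sv/Bq); the 40K contribution is instead taken as a fixed effective dose rate from the literature, as noted above.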

  15. Emotional intelligence and clinical performance/retention of nursing students.

    PubMed

    Marvos, Chelsea; Hale, Frankie B

    2015-01-01

    This exploratory, quantitative, descriptive study was undertaken to explore the relationship between clinical performance and anticipated retention in nursing students. After approval by the university's Human Subjects Committee, a sample of 104 nursing students was recruited for this study, which involved testing with a valid and reliable emotional intelligence (EI) instrument and a self-report survey of clinical competencies. Statistical analysis revealed that although the group averages for total EI score and the six score subsets were in the average range, approximately 30% of the individual total EI scores and 30% of two branch scores, identifying emotions correctly and understanding emotions, fell in the less than average range. These data, as well as the analysis of correlation with clinical self-report scores, suggest recommendations applicable to educators of clinical nursing students. Registered nurses make up the largest segment of the ever-growing healthcare workforce. Yet, retention of new graduates has historically been a challenge for the profession. Given the projected employment growth in nursing, it is important to identify factors that correlate with high levels of performance and job retention among nurses. There is preliminary evidence that EI, a nontraditional intelligence measure, relates positively not only with retention of clinical staff nurses but also with overall clinical performance.

  16. Filter-based measurement of light absorption by brown carbon in PM2.5 in a megacity in South China.

    PubMed

    Li, Sheng; Zhu, Ming; Yang, Weiqiang; Tang, Mingjin; Huang, Xueliang; Yu, Yuegang; Fang, Hua; Yu, Xu; Yu, Qingqing; Fu, Xiaoxin; Song, Wei; Zhang, Yanli; Bi, Xinhui; Wang, Xinming

    2018-08-15

    Carbonaceous aerosols represent an important nexus between air pollution and climate change. Here we collected filter-based PM2.5 samples during summer and autumn in 2015 at one urban and two rural sites in Guangzhou, a megacity in southern China, and resolved the light absorption by black carbon (BC) and brown carbon (BrC) with a DRI Model 2015 multi-wavelength thermal/optical carbon analyzer, in addition to determining the organic carbon (OC) and elemental carbon (EC) contents. On average, BrC contributed 12-15% of the measured absorption at 405 nm (LA405) during summer and 15-19% during autumn, with a significant increase in the LA405 by BrC at the rural sites. Carbonaceous aerosols, identified as total carbon (TC), yielded average mass absorption efficiencies at 405 nm (MAE405) that were approximately 45% higher in autumn than in summer; an 83% increase was noted in the average MAE405 for OC, compared with an increase of only 14% in the average MAE405 for EC. The LA405 by BrC showed a good correlation (p<0.001) with the ratios of secondary OC to PM2.5 in summer. However, this correlation was poor (p>0.1) in autumn, implying greater secondary formation of BrC in summer. The correlations between levoglucosan (a marker of biomass burning) and the LA405 by BrC were significant during autumn but insignificant during summer, suggesting that the observed increase in the LA405 by BrC during autumn in rural areas was largely related to biomass burning. The measurements of light absorption at 550 nm presented in this study indicated that the use of the IMPROVE algorithm with an MAE value of 10 m2/g for EC to approximate light absorption may be appropriate in areas not strongly affected by fossil fuel combustion; however, this practice would underestimate the absorption of light by PM2.5 in areas heavily affected by vehicle exhausts and coal burning. Copyright © 2018 Elsevier B.V. All rights reserved.
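
    For orientation, the mass absorption efficiency referred to here is conventionally the measured absorption coefficient divided by the mass concentration of the corresponding carbon fraction (generic notation, not necessarily the authors' exact symbols):

        \mathrm{MAE}_{405}(X) = \frac{\mathrm{LA}_{405}}{[X]}, \qquad X \in \{\mathrm{TC},\ \mathrm{OC},\ \mathrm{EC}\},

    which yields units of m2/g when the absorption coefficient is expressed in m-1 and the mass concentration in g/m3.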

  17. Comparison of arterial potassium and ventilatory dynamics during sinusoidal work rate variation in man.

    PubMed Central

    Casaburi, R; Stringer, W W; Singer, E

    1995-01-01

    1. The mechanisms underlying the exercise hyperpnoea have been difficult to define. Recently it has been suggested that exercise ventilation (VE) changes in proportion to changes in arterial potassium concentration ([K+]a). Similar VE and [K+]a time courses following work rate changes have been cited as supporting evidence. This study compared [K+]a and VE dynamics during moderate exercise in man. 2. We observed VE and gas exchange responses in five healthy men to sinusoidal work rate variation between 25 and approximately 105 W. Tests of approximately 30 min duration were performed at sinusoidal periods of 9, 6 and 3 min and in the steady state. In each test, during two or three sine periods, arterial blood was sampled (24 per test) and analysed for [K+] and blood gases. Response amplitude and phase (relative to work rate) were determined for each variable. 3. [K+]a fluctuated in response to sinusoidal work rate forcing with mean-to-peak amplitude averaging 0.15 mmol l(-1). However, among tests, VE amplitude and phase were not highly correlated with [K+]a (r = 0.36 and 0.67, respectively). Further, average [K+]a amplitude in the 9 and 6 min sinusoidal studies tended to exceed the steady-state amplitude, while average VE amplitude fell progressively with increasing forcing frequency. The dissimilar dynamics of [K+]a and VE seem inconsistent with a major role for [K+]a as a proportional controller of ventilation during non-steady state moderate exercise in man. 4. Among tests, VE and CO2 output (VCO2) amplitude and phase were closely correlated (r = 0.87 and 0.94, respectively). Further, arterial CO2 pressure (Pa,CO2) and arterial pH (pHa) did not fluctuate significantly in ten of twenty and thirteen of twenty studies, respectively. In tests where sinusoidal fluctuation was detected, amplitude averaged 1.1 mmHg and 0.008 units, respectively. Thus VE demonstrated a close dynamic coupling to CO2 output, with consequent tight regulation of Pa,CO2 and pHa. PMID:7666376
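
    A hedged sketch of the amplitude-and-phase extraction described above: for a known forcing period, the mean level, mean-to-peak amplitude, and phase of a sampled response can be recovered by linear least squares against sine and cosine terms. All numbers below are synthetic, not the study's data.

        import numpy as np

        period_min = 6.0                                   # assumed forcing period
        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 18.0, 24)                     # hypothetical sampling times, min
        y = 4.1 + 0.15 * np.sin(2 * np.pi * t / period_min - 0.8) + 0.02 * rng.normal(size=t.size)

        w = 2.0 * np.pi / period_min
        A = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
        c0, cs, cc = np.linalg.lstsq(A, y, rcond=None)[0]  # mean, sine and cosine coefficients
        amplitude = np.hypot(cs, cc)                       # mean-to-peak amplitude
        phase = np.arctan2(cc, cs)                         # phase relative to the forcing, rad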

  18. Developmental histories of perceived racial discrimination and diurnal cortisol profiles in adulthood: A 20-year prospective study

    PubMed Central

    Adam, Emma K.; Heissel, Jennifer A.; Zeiders, Katharine H.; Richeson, Jennifer A.; Ross, Emily C.; Ehrlich, Katherine B.; Levy, Dorainne J.; Kemeny, Margaret; Brodish, Amanda B.; Malanchuk, Oksana; Peck, Stephen C.; Fuller-Rowell, Thomas E.; Eccles, Jacquelynne S.

    2015-01-01

    Perceived racial discrimination (PRD) has been associated with altered diurnal cortisol rhythms in past cross-sectional research. We investigate whether developmental histories of PRD, assessed prospectively, are associated with adult diurnal cortisol profiles. One hundred and twelve (N = 50 Black, N = 62 White) adults from the Maryland Adolescent Development in Context Study provided saliva samples in adulthood (at approximately age 32 years) at waking, 30 min after waking, and at bedtime for 7 days. Diurnal cortisol measures were calculated, including waking cortisol levels, diurnal cortisol slopes, the cortisol awakening response (CAR), and average daily cortisol (AUC). These cortisol outcomes were predicted from measures of PRD obtained over a 20-year period beginning when individuals were in 7th grade (approximately age 12). Greater average PRD measured across the 20-year period predicted flatter adult diurnal cortisol slopes for both Black and White adults, and a lower CAR. Greater average PRD also predicted lower waking cortisol for Black, but not White adults. PRD experiences in adolescence accounted for many of these effects. When adolescent and young adult PRD were entered together to predict cortisol outcomes, PRD experiences in adolescence (but not young adulthood) significantly predicted flatter diurnal cortisol slopes for both Black and White adults. Adolescent, but not young adult, PRD also significantly predicted lower waking and lower average cortisol for Black adults. Young adult PRD was, however, a stronger predictor of the CAR, predicting a marginally lower CAR for Whites, and a significantly larger CAR for Blacks. Effects were robust to controlling for covariates including health behaviors, depression, income and parent education levels. PRD experiences interacted with parent education and income to predict aspects of the diurnal cortisol rhythm. Although these results suggest PRD influences on cortisol for both Blacks and Whites, the key findings suggest that the effects are more pervasive for Blacks, affecting multiple aspects of the cortisol diurnal rhythm. In addition, adolescence is a more sensitive developmental period than adulthood for the impacts of PRD on adult stress biology. PMID:26352481

  19. Dry deposition of gaseous oxidized mercury in Western Maryland.

    PubMed

    Castro, Mark S; Moore, Chris; Sherwell, John; Brooks, Steve B

    2012-02-15

    The purpose of this study was to directly measure the dry deposition of gaseous oxidized mercury (GOM) in western Maryland. Annual estimates were made using passive ion-exchange surrogate surfaces and a resistance model. Surrogate surfaces were deployed for seventeen weekly sampling periods between September 2009 and October 2010. Dry deposition rates from surrogate surfaces ranged from 80 to 1512 pg m(-2)h(-1). GOM dry deposition rates were strongly correlated (r(2)=0.75) with the weekly average atmospheric GOM concentrations, which ranged from 2.3 to 34.1 pg m(-3). Dry deposition of GOM could be predicted from the ambient air concentrations of GOM using this equation: GOM dry deposition (pg m(-2)h(-1)) = 43.2 × GOM concentration - 80.3. Dry deposition velocities, computed using GOM concentrations and surrogate surface GOM dry deposition rates, ranged from 0.2 to 1.7 cm s(-1). Modeled dry deposition rates were highly correlated (r(2)=0.80) with surrogate surface dry deposition rates. Using the overall weekly average surrogate surface dry deposition rate (369 ± 340 pg m(-2)h(-1)), we estimated an annual GOM dry deposition rate of 3.2 μg m(-2)year(-1). Using the resistance model, we estimated an annual GOM dry deposition rate of 3.5 μg m(-2)year(-1). Our annual GOM dry deposition rates were similar to the dry deposition (3.3 μg m(-2)year(-1)) of gaseous elemental mercury (GEM) at our site. In addition, annual GOM dry deposition was approximately half of the average annual wet deposition of total mercury (7.7 ± 1.9 μg m(-2)year(-1)) at our site. Total annual mercury deposition from dry deposition of GOM and GEM and wet deposition was approximately 14.4 μg m(-2)year(-1), which was similar to the average annual litterfall deposition (15 ± 2.1 μg m(-2)year(-1)) of mercury, also measured at our site. Copyright © 2012 Elsevier B.V. All rights reserved.
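
    A minimal sketch of the arithmetic implied above, using the regression coefficients reported in the abstract; the example concentration is hypothetical, and the unit conversions are the only other ingredients.

        def gom_dry_deposition(conc_pg_m3: float) -> float:
            """GOM dry-deposition rate (pg m^-2 h^-1) from ambient GOM concentration (pg m^-3)."""
            return 43.2 * conc_pg_m3 - 80.3

        conc = 10.0                                   # hypothetical weekly average GOM, pg m^-3
        flux = gom_dry_deposition(conc)               # pg m^-2 h^-1
        v_d_cm_s = (flux / conc) * 100.0 / 3600.0     # deposition velocity: (m h^-1) converted to cm s^-1
        annual_ug_m2_yr = flux * 24 * 365 / 1.0e6     # annual deposition, ug m^-2 yr^-1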

  20. An Appropriate Cutoff Value for Determining the Colonization of Helicobacter pylori by the Pyrosequencing Method: Comparison with Conventional Methods.

    PubMed

    Kim, Jaeyeon; Kim, Nayoung; Jo, Hyun Jin; Park, Ji Hyun; Nam, Ryoung Hee; Seok, Yeong-Jae; Kim, Yeon-Ran; Kim, Joo Sung; Kim, Jung Mogg; Kim, Jung Min; Lee, Dong Ho; Jung, Hyun Chae

    2015-10-01

    Sequencing of the 16S ribosomal RNA (rRNA) gene has improved the characterization of microbial communities. It enabled the detection of low abundance gastric Helicobacter pylori sequences even in subjects that were found to be H. pylori negative with conventional methods. The objective of this study was to obtain a cutoff value for H. pylori colonization in gastric mucosa samples by the pyrosequencing method. Gastric mucosal biopsies were taken from 63 subjects whose H. pylori status was determined by a combination of serology, rapid urease test, culture, and histology. Microbial DNA from mucosal samples was amplified by PCR using universal bacterial primers. 16S rDNA amplicons were pyrosequenced. ROC curve analysis was performed to determine the cutoff value for H. pylori colonization by pyrosequencing. In addition, temporal changes in the stomach microbiota were observed in eight initially H. pylori-positive and eight H. pylori-negative subjects at a single time point 1-8 years later. Of the 63 subjects, the presence of H. pylori sequences was detected in all (28/28) conventionally H. pylori-positive samples and in 60% (21/35) of H. pylori-negative samples. The average percent of H. pylori reads in each sample was 0.67 ± 1.09% in the H. pylori-negative group. The cutoff value for clinically positive H. pylori status was approximately 1.22% based on ROC curve analysis (AUC = 0.957; p < .001). Helicobacter pylori was successfully eradicated in five of seven treated H. pylori-positive subjects (71.4%), and the percentage of H. pylori reads in these five subjects dropped from 1.3-95.18% to 0-0.16% after eradication. These results suggest that the cutoff value of H. pylori sequence percentage for H. pylori colonization by pyrosequencing could be set at approximately 1%. Such a cutoff may be helpful when analyzing the gastric microbiota in relation to H. pylori status. © 2015 John Wiley & Sons Ltd.
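
    A hedged sketch of how such a cutoff can be derived: given each sample's H. pylori read percentage and its conventional positive/negative status, an ROC curve is computed and the threshold maximizing Youden's J is taken. The data below are synthetic placeholders, not the study's measurements.

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(0)
        neg = rng.exponential(0.7, 35)               # synthetic read % in conventionally negative subjects
        pos = rng.uniform(1.5, 90.0, 28)             # synthetic read % in conventionally positive subjects
        pct = np.concatenate([neg, pos])
        label = np.concatenate([np.zeros(35), np.ones(28)])

        fpr, tpr, thr = roc_curve(label, pct)
        cutoff = thr[np.argmax(tpr - fpr)]           # Youden's J = sensitivity + specificity - 1
        print(f"AUC = {roc_auc_score(label, pct):.3f}, cutoff ~ {cutoff:.2f}% reads")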

  1. A non-invasive assessment of skin carotenoid status through reflection spectroscopy is a feasible, reliable and potentially valid measure of fruit and vegetable consumption in a diverse community sample.

    PubMed

    Jilcott Pitts, Stephanie Bell; Jahns, Lisa; Wu, Qiang; Moran, Nancy E; Bell, Ronny A; Truesdale, Kimberly P; Laska, Melissa N

    2018-06-01

    To assess the feasibility, reliability and validity of reflection spectroscopy (RS) to assess skin carotenoids in a racially diverse sample. Study 1 was a cross-sectional study of corner store customers (n = 479) who completed the National Cancer Institute Fruit and Vegetable Screener as well as RS measures. Feasibility was assessed by examining the time it took to complete three RS measures, reliability was assessed by examining the variation between three RS measures, and validity was examined by correlation with self-reported fruit and vegetable consumption. In Study 2, validity was assessed in a smaller sample (n = 30) by examining associations between RS measures and dietary carotenoids, fruits and vegetables as calculated from a validated FFQ and plasma carotenoids. Eastern North Carolina, USA. It took on average 94·0 s to complete three RS readings per person. The average variation between three readings for each participant was 6·8 %. In Study 2, in models adjusted for age, race and sex, there were statistically significant associations between RS measures and (i) FFQ-estimated carotenoid intake (P<0·0001); (ii) FFQ-estimated fruit and vegetable consumption (P<0·010); and (iii) plasma carotenoids (P<0·0001). RS is a potentially improved method to approximate fruit and vegetable consumption among diverse participants. RS is portable and easy to use in field-based public health nutrition settings. More research is needed to investigate validity and sensitivity in diverse populations.

  2. The effects of severe mixed environmental pollution on human chromosomes.

    PubMed Central

    Katsantoni, A; Nakou, S; Antoniadou-Koumatou, I; Côté, G B

    1986-01-01

    Cytogenetic studies were conducted on healthy young mothers, shortly after childbirth, in two residential areas each with an approximate population of 20,000, situated about 25 km from Athens, Greece. One of the areas, Elefsis, is subject to severe mixed industrial pollution, and the other, Koropi, is relatively free of pollution. Chromosomal aberrations were investigated in 16 women from each area in 72 hour lymphocyte cultures treated with gentian violet to enhance any chromosomal instability induced by the pollution. The women were of a comparable socioeconomic level, aged between 20 and 31 years, and with no history of factors associated with mutagenesis. Venous blood samples were taken from the two groups and processed concurrently. The slides were coded and examined independently by two observers, who were unaware of the source of the samples. A total of 100 cells was examined on each sample. The two observers obtained highly comparable results. Women from Elefsis had an average of 0.42 anomalies per cell and those from Koropi had 0.39. The absence of a statistically significant difference between the two groups clearly shows that the severe mixed environmental pollution of Elefsis has no significant visible effect on human chromosomes in most residents. However, two Elefsis women had abnormal results and could be at risk. Their presence is not sufficient to significantly raise their group's average, but the induction by pollution of an increased rate of chromosomal anomalies in only a few people at risk could account for the known association between urban residence and cancer mortality. PMID:3783622

  3. Dry matter losses and quality changes during short rotation coppice willow storage in chip or rod form.

    PubMed

    Whittaker, Carly; Yates, Nicola E; Powers, Stephen J; Misselbrook, Tom; Shield, Ian

    2018-05-01

    This study compares dry matter losses and quality changes during the storage of SRC willow as chips and as rods. A wood chip stack consisting of approximately 74 tonnes of fresh biomass, or 31 tonnes dry matter (DM), was built after harvesting in the spring. Three weeks later, four smaller stacks of rods with an average weight of 0.8 tonnes, or 0.4 tonnes DM, were built. During the course of the experiment, temperature recorders placed in the stacks showed that the wood chip pile reached 60 °C within 10 days of construction, while the piles of rods remained mostly at ambient temperature. Dry matter losses were calculated by using pre-weighed independent samples within the stacks and by weighing the whole stack before and after storage. After 6 months the wood chip stack showed a DM loss of between 19.8 and 22.6%, and mean losses of 23.1% were measured from the 17 independent samples. In comparison, the rod stacks showed an average stack DM loss of between 0 and 9%, and between 1.4% and 10.6% loss from the independent samples. Analysis of the stored material suggests that storing willow in small piles of rods produces a higher quality fuel in terms of lower moisture and ash content; however, it has a higher fine content compared to storage in chip form. Therefore, based on the two storage methods tested here, there may be a compromise between maximising the net dry matter yield from SRC willow and the final fine content of the fuel.
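
    A minimal sketch of the dry-matter balance behind the pre-weighed independent samples (all numbers hypothetical): each sample's dry mass is its fresh mass times one minus its moisture fraction, and the loss is the relative change in dry mass over storage.

        def dry_matter_loss(fresh_in: float, mc_in: float, fresh_out: float, mc_out: float) -> float:
            """Fractional DM loss from fresh masses (kg) and moisture contents (fractions)."""
            dm_in = fresh_in * (1.0 - mc_in)
            dm_out = fresh_out * (1.0 - mc_out)
            return 1.0 - dm_out / dm_in

        # Hypothetical sample: 10 kg in at 50% moisture, 8.2 kg out at 53% moisture.
        loss = dry_matter_loss(10.0, 0.50, 8.2, 0.53)
        print(f"DM loss = {loss:.1%}")                # roughly 23% for these made-up numbers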

  4. Atmospheric wet deposition of trace elements to a suburban environment, Reston, Virginia, USA

    USGS Publications Warehouse

    Conko, Kathryn M.; Rice, Karen C.; Kennedy, Margaret M.

    2004-01-01

    Wet deposition from a suburban area in Reston, Virginia was collected during 1998 and analyzed to assess the anion and trace-element concentrations and depositions. Suburban Reston, approximately 26 km west of Washington, DC, is densely populated and heavily developed. Wet deposition was collected bi-weekly in an automated collector using trace-element clean sampling and analytical techniques. The annual volume-weighted concentrations of As, Cd, and Pb were similar to those previously reported for a remote site on Catoctin Mt., Maryland (70 km northwest), which indicated a regional signal for these elements. The concentrations and depositions of Cu and Zn at the suburban site were nearly double those at remote sites because of the influence of local vehicular traffic. The 1998 average annual wet deposition (μg m−2 yr−1) was calculated for Al (52,000), As (94), Cd (54), Cr (160), Cu (700), Fe (23,000), Mn (2000), Ni (240), Pb (440), V (430), and Zn (4100). The average annual wet deposition (meq m−2 yr−1) was calculated for H+ (74), Cl− (8.5), NO3− (33), and SO42− (70). Analysis of digested total trace-element concentrations in a subset of samples showed that the refractory elements in suburban precipitation comprised a larger portion of the total deposition of trace elements than in remote areas.
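
    A short sketch of the flux arithmetic behind such estimates (hypothetical bi-weekly data): the volume-weighted mean concentration weights each sample by its precipitation amount, and because 1 mm of precipitation over 1 m2 delivers 1 L, deposition in μg m-2 is concentration (μg/L) times depth (mm), summed over the year.

        # Hypothetical bi-weekly values; illustrates the arithmetic only.
        conc_ug_L = [0.8, 1.2, 0.5, 2.0]          # element concentration in each sample, ug/L
        precip_mm = [25.0, 40.0, 10.0, 60.0]      # precipitation depth in each period, mm

        vwm_conc = sum(c * p for c, p in zip(conc_ug_L, precip_mm)) / sum(precip_mm)   # ug/L
        annual_deposition = sum(c * p for c, p in zip(conc_ug_L, precip_mm))           # ug m^-2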

  5. Scanning electron microscopy observations of the interaction between Trichoderma harzianum and perithecia of Gibberella zeae.

    PubMed

    Inch, S; Gilbert, J

    2011-01-01

    Chronological events associated with the interaction between perithecia of G. zeae and a strain of Trichoderma harzianum, T472, with known biological control activity against perithecial production, were studied with scanning electron microscopy to investigate the mechanisms of control. Large clusters of perithecia consisting of 5-15 perithecia formed on the autoclaved, mulched wheat straw inoculated with G. zeae alone (control), with an average of 157 perithecia per plate. Small clusters consisting of 3-6 perithecia, with an average of 15 perithecia per plate, formed on straw that was treated with T. harzianum. The mature perithecia from straw treated with T. harzianum produced less pigment and were lighter in color than those from the control plates. Furthermore, the cells of the outer wall of these perithecia were abnormal in appearance and unevenly distributed across the surface. Immature perithecia were colonized by T. harzianum approximately 15 d after inoculation (dai) with the biocontrol agent and pathogen. Few perithecia were colonized at later stages. The affected perithecia collapsed 21 dai, compared to the perithecia in the control samples, which began to collapse 28 dai. Abundant mycelium of T. harzianum was seen on the perithecia of treated samples. Perithecial structures may be resistant to penetration by the mycelium because direct penetration was not observed. Trichoderma harzianum colonized the substrate quickly and out-competed the pathogen, G. zeae.

  6. Modified Maturity Offset Prediction Equations: Validation in Independent Longitudinal Samples of Boys and Girls.

    PubMed

    Kozieł, Sławomir M; Malina, Robert M

    2018-01-01

    Predicted maturity offset and age at peak height velocity are increasingly used with youth athletes, although validation studies of the equations indicated major limitations. The equations have since been modified and simplified. The objective of this study was to validate the new maturity offset prediction equations in independent longitudinal samples of boys and girls. Two new equations for boys with chronological age and sitting height and chronological age and stature as predictors, and one equation for girls with chronological age and stature as predictors were evaluated in serial data from the Wrocław Growth Study, 193 boys (aged 8-18 years) and 198 girls (aged 8-16 years). Observed age at peak height velocity for each youth was estimated with the Preece-Baines Model 1. The original prediction equations were included for comparison. Predicted age at peak height velocity was the difference between chronological age at prediction and maturity offset. Predicted ages at peak height velocity with the new equations approximated observed ages at peak height velocity in average maturing boys near the time of peak height velocity; a corresponding window for average maturing girls was not apparent. Compared with observed age at peak height velocity, predicted ages at peak height velocity with the new and original equations were consistently later in early maturing youth and earlier in late maturing youth of both sexes. Predicted ages at peak height velocity with the new equations had reduced variation compared with the original equations and especially observed ages at peak height velocity. Intra-individual variation in predicted ages at peak height velocity with all equations was considerable. The new equations are useful for average maturing boys close to the time of peak height velocity; there does not appear to be a clear window for average maturing girls. The new and original equations have major limitations with early and late maturing boys and girls.
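
    As stated above, the predicted age at peak height velocity follows directly from the predicted offset (the offset is negative before peak height velocity; the numbers in the example are hypothetical):

        \mathrm{APHV}_{\mathrm{pred}} = \mathrm{CA} - \mathrm{offset}_{\mathrm{pred}},
        \qquad \text{e.g. } 11.6\ \mathrm{yr} - (-1.2\ \mathrm{yr}) = 12.8\ \mathrm{yr}.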

  7. Soliton evolution and radiation loss for the sine-Gordon equation.

    PubMed

    Smyth, N F; Worthy, A L

    1999-08-01

    An approximate method for describing the evolution of solitonlike initial conditions to solitons for the sine-Gordon equation is developed. This method is based on using a solitonlike pulse with variable parameters in an averaged Lagrangian for the sine-Gordon equation. This averaged Lagrangian is then used to determine ordinary differential equations governing the evolution of the pulse parameters. The pulse evolves to a steady soliton by shedding dispersive radiation. The effect of this radiation is determined by examining the linearized sine-Gordon equation and loss terms are added to the variational equations derived from the averaged Lagrangian by using the momentum and energy conservation equations for the sine-Gordon equation. Solutions of the resulting approximate equations, which include loss, are found to be in good agreement with full numerical solutions of the sine-Gordon equation.
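
    For orientation (standard textbook forms, not necessarily the exact trial function used by the authors), the sine-Gordon equation, its Lagrangian density, and the kink soliton are

        u_{tt} - u_{xx} + \sin u = 0, \qquad
        \mathcal{L} = \tfrac{1}{2}u_t^2 - \tfrac{1}{2}u_x^2 - (1 - \cos u), \qquad
        u_{\mathrm{kink}} = 4\arctan\!\left[\exp\!\left(\frac{x - vt - x_0}{\sqrt{1 - v^2}}\right)\right];

    in the averaged-Lagrangian approach, \mathcal{L} is evaluated on a pulse ansatz with time-dependent parameters and integrated over x, and the Euler-Lagrange equations of the resulting averaged Lagrangian govern the parameter evolution, with radiation-loss terms appended as described above.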

  8. Average-atom treatment of relaxation time in x-ray Thomson scattering from warm dense matter.

    PubMed

    Johnson, W R; Nilsen, J

    2016-03-01

    The influence of finite relaxation times on Thomson scattering from warm dense plasmas is examined within the framework of the average-atom approximation. Presently most calculations use the collision-free Lindhard dielectric function to evaluate the free-electron contribution to the Thomson cross section. In this work, we use the Mermin dielectric function, which includes relaxation time explicitly. The relaxation time is evaluated by treating the average atom as an impurity in a uniform electron gas and depends critically on the transport cross section. The calculated relaxation rates agree well with values inferred from the Ziman formula for the static conductivity and also with rates inferred from a fit to the frequency-dependent conductivity. Transport cross sections determined by the phase-shift analysis in the average-atom potential are compared with those evaluated in the commonly used Born approximation. The Born approximation converges to the exact cross sections at high energies; however, differences that occur at low energies lead to corresponding differences in relaxation rates. The relative importance of including relaxation time when modeling x-ray Thomson scattering spectra is examined by comparing calculations of the free-electron dynamic structure function for Thomson scattering using Lindhard and Mermin dielectric functions. Applications are given to warm dense Be plasmas, with temperatures ranging from 2 to 32 eV and densities ranging from 2 to 64 g/cc.
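
    For context, the Mermin dielectric function referred to here is conventionally written in terms of the Lindhard function \epsilon_L evaluated at the complex frequency \omega + i/\tau (standard form; the average-atom specifics enter only through the relaxation time \tau):

        \epsilon_M(q,\omega) = 1 +
        \frac{\left(1 + \dfrac{i}{\omega\tau}\right)\left[\epsilon_L(q,\omega + i/\tau) - 1\right]}
             {1 + \dfrac{i}{\omega\tau}\,\dfrac{\epsilon_L(q,\omega + i/\tau) - 1}{\epsilon_L(q,0) - 1}},

    which reduces to the collision-free Lindhard result in the limit \tau \to \infty.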

  9. Validity of the site-averaging approximation for modeling the dissociative chemisorption of H2 on Cu(111) surface: A quantum dynamics study on two potential energy surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Tianhui; Fu, Bina, E-mail: bina@dicp.ac.cn; Zhang, Dong H., E-mail: zhangdh@dicp.ac.cn

    A new finding of the site-averaging approximation was recently reported on the dissociative chemisorption of the HCl/DCl+Au(111) surface reaction [T. Liu, B. Fu, and D. H. Zhang, J. Chem. Phys. 139, 184705 (2013); T. Liu, B. Fu, and D. H. Zhang, J. Chem. Phys. 140, 144701 (2014)]. Here, in order to investigate the dependence of the new site-averaging approximation on the initial vibrational state of H2 as well as the PES for the dissociative chemisorption of H2 on the Cu(111) surface at normal incidence, we carried out six-dimensional quantum dynamics calculations using the initial state-selected time-dependent wave packet approach, with H2 initially in its ground vibrational state and the first vibrational excited state. The corresponding four-dimensional site-specific dissociation probabilities are also calculated with H2 fixed at bridge, center, and top sites. These calculations are all performed based on two different potential energy surfaces (PESs). It is found that the site-averaging dissociation probability over 15 fixed sites obtained from four-dimensional quantum dynamics calculations can accurately reproduce the six-dimensional dissociation probability for H2 (v = 0) and (v = 1) on the two PESs.
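
    A hedged sketch of the site-averaging idea discussed here: the full-dimensional dissociation probability is approximated by a (weighted) average of fixed-site, reduced-dimensional probabilities. The site names, probabilities, and weights below are placeholders, not values from the paper.

        # Placeholder site-specific dissociation probabilities at one collision energy.
        site_probs = {"top": 0.02, "bridge": 0.10, "center": 0.05}
        weights = {"top": 1.0, "bridge": 1.0, "center": 1.0}   # e.g. symmetry/area weights

        p_site_avg = sum(weights[s] * site_probs[s] for s in site_probs) / sum(weights.values())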

  10. Average-atom treatment of relaxation time in x-ray Thomson scattering from warm dense matter

    DOE PAGES

    Johnson, W. R.; Nilsen, J.

    2016-03-14

    Here, the influence of finite relaxation times on Thomson scattering from warm dense plasmas is examined within the framework of the average-atom approximation. Presently most calculations use the collision-free Lindhard dielectric function to evaluate the free-electron contribution to the Thomson cross section. In this work, we use the Mermin dielectric function, which includes relaxation time explicitly. The relaxation time is evaluated by treating the average atom as an impurity in a uniform electron gas and depends critically on the transport cross section. The calculated relaxation rates agree well with values inferred from the Ziman formula for the static conductivity and also with rates inferred from a fit to the frequency-dependent conductivity. Transport cross sections determined by the phase-shift analysis in the average-atom potential are compared with those evaluated in the commonly used Born approximation. The Born approximation converges to the exact cross sections at high energies; however, differences that occur at low energies lead to corresponding differences in relaxation rates. The relative importance of including relaxation time when modeling x-ray Thomson scattering spectra is examined by comparing calculations of the free-electron dynamic structure function for Thomson scattering using Lindhard and Mermin dielectric functions. Applications are given to warm dense Be plasmas, with temperatures ranging from 2 to 32 eV and densities ranging from 2 to 64 g/cc.

  11. Water 16-mers and hexamers: assessment of the three-body and electrostatically embedded many-body approximations of the correlation energy or the nonlocal energy as ways to include cooperative effects.

    PubMed

    Qi, Helena W; Leverentz, Hannah R; Truhlar, Donald G

    2013-05-30

    This work presents a new fragment method, the electrostatically embedded many-body expansion of the nonlocal energy (EE-MB-NE), and shows that it, along with the previously proposed electrostatically embedded many-body expansion of the correlation energy (EE-MB-CE), produces accurate results for large systems at the level of CCSD(T) coupled cluster theory. We primarily study water 16-mers, but we also test the EE-MB-CE method on water hexamers. We analyze the distributions of two-body and three-body terms to show why the many-body expansion of the electrostatically embedded correlation energy converges faster than the many-body expansion of the entire electrostatically embedded interaction potential. The average magnitude of the dimer contributions to the pairwise additive (PA) term of the correlation energy (which neglects cooperative effects) is only one-half of that of the average dimer contribution to the PA term of the expansion of the total energy; this explains why the mean unsigned error (MUE) of the EE-PA-CE approximation is only one-half of that of the EE-PA approximation. Similarly, the average magnitude of the trimer contributions to the three-body (3B) term of the EE-3B-CE approximation is only one-fourth of that of the EE-3B approximation, and the MUE of the EE-3B-CE approximation is one-fourth that of the EE-3B approximation. Finally, we test the efficacy of two- and three-body density functional corrections. One such density functional correction method, the new EE-PA-NE method, with the OLYP or the OHLYP density functional (where the OHLYP functional is the OptX exchange functional combined with the LYP correlation functional multiplied by 0.5), has the best performance-to-price ratio of any method whose computational cost scales as the third power of the number of monomers and is competitive in accuracy in the tests presented here with even the electrostatically embedded three-body approximation.
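
    For reference, the pairwise-additive (PA) and three-body (3B) truncations referred to above are successive levels of the standard many-body expansion (generic notation); in the EE variants each fragment calculation is embedded in point charges representing the remaining monomers, and in EE-MB-CE the expansion is applied to the correlation energy only.

        E \approx \sum_i E_i + \sum_{i<j} \Delta E_{ij} + \sum_{i<j<k} \Delta E_{ijk} + \cdots,
        \qquad \Delta E_{ij} = E_{ij} - E_i - E_j,
        \qquad \Delta E_{ijk} = E_{ijk} - \Delta E_{ij} - \Delta E_{ik} - \Delta E_{jk} - E_i - E_j - E_k.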

  12. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
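
    A small simulation sketch of the point about conditional versus marginal behavior of the sample average, under an assumed two-look design with a made-up stopping rule: conditioning on the realized sample size produces a marked apparent bias in each stratum, while the marginal mean stays close to the true value (and approaches it as the sample sizes grow).

        import numpy as np

        rng = np.random.default_rng(1)
        mu, n1, n2, reps = 0.0, 50, 100, 20_000
        means, sizes = [], []
        for _ in range(reps):
            x1 = rng.normal(mu, 1.0, n1)
            if x1.mean() > 0.1:                       # hypothetical early-stopping rule
                means.append(x1.mean()); sizes.append(n1)
            else:
                x2 = rng.normal(mu, 1.0, n2 - n1)     # continue to the full sample size
                means.append(np.concatenate([x1, x2]).mean()); sizes.append(n2)
        means, sizes = np.array(means), np.array(sizes)
        print("marginal mean:", means.mean())                       # close to mu
        print("mean | stopped early:", means[sizes == n1].mean())   # noticeably above mu
        print("mean | continued:", means[sizes == n2].mean())       # somewhat below mu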

  13. DNA sequence variation and selection of tag single-nucleotide polymorphisms at candidate genes for drought-stress response in Pinus taeda L.

    PubMed

    González-Martínez, Santiago C; Ersoz, Elhan; Brown, Garth R; Wheeler, Nicholas C; Neale, David B

    2006-03-01

    Genetic association studies are rapidly becoming the experimental approach of choice to dissect complex traits, including tolerance to drought stress, which is the most common cause of mortality and yield losses in forest trees. Optimization of association mapping requires knowledge of the patterns of nucleotide diversity and linkage disequilibrium and the selection of suitable polymorphisms for genotyping. Moreover, standard neutrality tests applied to DNA sequence variation data can be used to select candidate genes or amino acid sites that are putatively under selection for association mapping. In this article, we study the pattern of polymorphism of 18 candidate genes for drought-stress response in Pinus taeda L., an important tree crop. Data analyses based on a set of 21 putatively neutral nuclear microsatellites did not show population genetic structure or genomewide departures from neutrality. Candidate genes had moderate average nucleotide diversity at silent sites (pi(sil) = 0.00853), varying 100-fold among single genes. The level of within-gene LD was low, with an average pairwise r2 of 0.30, decaying rapidly from approximately 0.50 to approximately 0.20 at 800 bp. No apparent LD among genes was found. A selective sweep may have occurred at the early-response-to-drought-3 (erd3) gene, although population expansion can also explain our results and evidence for selection was not conclusive. One other gene, ccoaomt-1, a methylating enzyme involved in lignification, showed dimorphism (i.e., two highly divergent haplotype lineages at equal frequency), which is commonly associated with the long-term action of balancing selection. Finally, a set of haplotype-tagging SNPs (htSNPs) was selected. Using htSNPs, a reduction of genotyping effort of approximately 30-40%, while sampling most common allelic variants, can be gained in our ongoing association studies for drought tolerance in pine.

  14. Correlation by Rb-Sr geochronology of garnet growth histories from different structural levels within the Tauern Window, Eastern Alps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, John N.; Selverstone, Jane; Rosenfeld, John L.

    1993-06-01

    In order to evaluate rates of tectonometamorphic processes, growth rates of garnets from metamorphic rocks of the Tauern Window, Eastern Alps were measured using Rb-Sr isotopes. The garnet growth rates were determined from Rb-Sr isotopic zonation of single garnet crystals and the Rb-Sr isotopic compositions of their associated rock matrices. Garnets were analyzed from the Upper Schieferhulle (USH) and Lower Schieferhulle (LSH) within the Tauern Window. Two garnets from the USH grew at rates of 0.67 (+0.19/-0.13) mm/million years and 0.88 (+0.34/-0.19) mm/million years, respectively, indicating an average growth duration of 5.4 ± 1.7 million years. The duration of growth coupled with the amount of rotation recorded by inclusion trails in the USH garnets yields an average shear-strain rate during garnet growth of 2.7 (+1.2/-0.7) x 10(-14) s(-1). Garnet growth in the sample from the USH occurred between 35.4 ± 0.6 and 30 ± 0.8 Ma. The garnet from the LSH grew at a rate of 0.23 ± 0.015 mm/million years, between 62 ± 1.5 Ma and 30.2 ± 1.5 Ma. Contemporaneous cessation of garnet growth in both units at approximately 30 Ma is in accord with previous dating of the thermal peak of metamorphism in the Tauern Window. Correlation with previously published pressure-temperature paths for garnets from the USH and LSH yields approximate rates of burial, exhumation and heating during garnet growth. Assuming that these P-T paths are applicable to the garnets in this study, the contemporaneous exhumation rates recorded by garnet in the USH and LSH were approximately 4 (+3/-2) mm/year and 2 ± 1 mm/year, respectively.

  15. Random vs. systematic sampling from administrative databases involving human subjects.

    PubMed

    Hagino, C; Lo, R J

    1998-09-01

    Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was from the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes of n (50, 100, 150, 200, 250, 300, 500, 800). From the profile/characteristics summaries of four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method chi-square tests and unpaired t tests were performed to determine whether any of the differences [descriptively greater than 7% or 7 yr] were also statistically significant. The strengths of the agreements between the provincial distributions were quantified by calculating the percent agreements for each (provincial pairwise-comparison methods). Any percent agreement less than 70% was judged to be unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yielded acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, but it can also be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
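
    A brief sketch of the two schemes being compared, applied to a hypothetical membership list: SRS draws n members uniformly at random, while SS takes every k-th member from a random starting point in the (alphabetically ordered) frame.

        import random

        members = [f"member_{i:05d}" for i in range(25_000)]   # hypothetical sampling frame
        n = 200

        srs = random.sample(members, n)                        # simple random sampling

        k = len(members) // n                                  # systematic sampling interval
        start = random.randrange(k)                            # random start within the first interval
        ss = members[start::k][:n]                             # every k-th member thereafter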

  16. The closure approximation in the hierarchy equations.

    NASA Technical Reports Server (NTRS)

    Adomian, G.

    1971-01-01

    The expectation of the solution process in a stochastic operator equation can be obtained from averaged equations only under very special circumstances. Conditions for validity are given and the significance and validity of the approximation in widely used hierarchy methods and the 'self-consistent field' approximation in nonequilibrium statistical mechanics are clarified. The error at any level of the hierarchy can be given and can be avoided by the use of the iterative method.
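
    Schematically, in generic notation not taken from the paper: writing the stochastic operator as its mean plus a zero-mean random part, L = <L> + R, and averaging the equation Ly = x gives

        \langle \mathcal{L} \rangle \langle y \rangle + \langle R\,y \rangle = \langle x \rangle,
        \qquad \text{closure:}\quad \langle R\,y \rangle \approx \langle R \rangle \langle y \rangle = 0,

    which is exact only when R and y are uncorrelated; those are the "very special circumstances" referred to above, and the error this closure introduces at each level of the hierarchy is what the iterative treatment avoids.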

  17. High-throughput sequencing analysis of the bacteria in the dust storm which passed over Canberra, Australia on 22-23 September 2009

    NASA Astrophysics Data System (ADS)

    Munday, Chris; De Deckker, Patrick; Tapper, Nigel; Allison, Gwen

    2014-05-01

    Following a prolonged drought in Australia in the first decade of the 21st century, several dust storms affected the heavily populated East coast of Australia. The largest such storm occurred on 22-23 September 2009 and had a front of an estimated 3000 km. A 24-hour average PM10 concentration of over 2,000 μg/m3 was recorded in several locations, and an hourly peak of over 15,000 μg/m3 was recorded (Leys et al. 2011). Over two time periods, duplicate aerosol samples were collected on 47 mm diameter cellulose nitrate membranes at a location removed from anthropogenic influences. One set of samples was collected in the afternoon the dust event started and another was collected overnight. Additionally, overnight rainfall was collected in a sterile bottle. DNA was directly extracted from one membrane from each time point for molecular cloning and high-throughput sequencing, while the other was cultivated on Tryptic Soy Agar (TSA). High-throughput sequencing was performed using the 454 Titanium platform. From the three samples, 19,945 curated sequences were obtained representing 942 OTUs, with the three samples contributing approximately equal numbers of sequences. Unclassified Rhizobiales and Stenotrophomonas were the most abundant groups that could be attributed names. A total of 942 OTUs were identified (cutoff = 0.03), and despite the temporal relation of the samples, only eleven were found in all three samples, indicating that the dust storm evolved in composition as it passed over the region. Approximately 800 and 500 CFU/m3 were found in the two cultivated samples, tenfold more than was collected from previous dust events (Lim et al., 2011). Identification of cultivars revealed a dominance of the gram-positive Firmicutes phylum, while the clone library showed a more even distribution of taxa, with Actinobacteria the most common and Firmicutes comprising less than 10% of sequences. Collectively, the analyses indicate that the concentration of cultivable organisms increased dramatically during the dust storm relative to calm conditions. A diverse and variable population of microorganisms was present, reflecting the vast source and dynamic nature of the storm.

  18. A hydroxyapatite coating covalently linked onto a silicone implant material.

    PubMed

    Furuzono, T; Sonoda, K; Tanaka, J

    2001-07-01

    A novel composite consisting of hydroxyapatite (HAp) microparticles covalently coupled onto a silicone sheet was developed. Initially, an acrylic acid (AAc) -grafted silicone sheet with a 16.7 microg/cm(2) surface graft density was prepared by corona-discharge treatment. The surface of sintered, spherical, carbonated HAp particles with an average diameter of 2.0 microm was subsequently modified with amino groups. The amino group surface density of the HAp particles was calculated to be approximately one amino molecule per 1.0 nm(2) of particle surface area. These samples were characterized with Fourier transform infrared spectrometry and X-ray photoelectron spectroscopy. After the formation of ammonium ionic bonds between both samples under aqueous conditions, they were reacted at 180 degrees C for 6 h in vacuo to form covalent bonds through a solid-phase condensation. The HAp particles were coupled to the AAc-grafted silicone surface by a covalent linkage. Further improvements in the adhesive and bioactive properties of the HAp-coated silicone material are expected.

  19. Microplastic pollution in the surface waters of the Laurentian Great Lakes.

    PubMed

    Eriksen, Marcus; Mason, Sherri; Wilson, Stiv; Box, Carolyn; Zellers, Ann; Edwards, William; Farley, Hannah; Amato, Stephen

    2013-12-15

    Neuston samples were collected at 21 stations during an ~700 nautical mile (~1300 km) expedition in July 2012 in the Laurentian Great Lakes of the United States using a 333 μm mesh manta trawl and analyzed for plastic debris. Although the average abundance was approximately 43,000 microplastic particles/km², station 20, downstream from two major cities, contained over 466,000 particles/km², greater than all other stations combined. SEM analysis determined that nearly 20% of particles less than 1 mm, which were initially identified as microplastic by visual observation, were aluminum silicate from coal ash. Many microplastic particles were multi-colored spheres, which were compared to, and are suspected to be, microbeads from consumer products containing microplastic particles of similar size, shape, texture and composition. The microplastics and coal ash in these surface samples, which were most abundant where lake currents converge, are likely from nearby urban effluent and coal-burning power plants.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, David R.; Morrow, Benjamin M.; Trujillo, Carl P.

    Here, we present a series of experiments probing the martensitic α–ω (hexagonal close-packed to simple hexagonal) transition in titanium under shock-loading to peak stresses around 15 GPa. Gas-gun plate impact techniques were used to locate the α–ω transition stress with a laser-based velocimetry diagnostic. A change in the shock-wave profile at 10.1 GPa suggests the transition begins at this stress. A second experiment shock-loaded and then soft-recovered a similar titanium sample. We then analyzed this recovered material with electron-backscatter diffraction methods, revealing on average approximately 65% retained ω phase. Furthermore, based on careful analysis of the microstructure, we propose that the titanium never reached a full ω state, and that there was no observed phase-reversion from ω to α. Texture analysis suggests that any α titanium found in the recovered sample is the original α. The data show that both the α and ω phases are stable and can coexist at these stresses even though the shock wave presents as steady-state.

  1. Farmers' Risk Preferences in Rural China: Measurements and Determinants.

    PubMed

    Jin, Jianjun; He, Rui; Gong, Haozhou; Xu, Xia; He, Chunyang

    2017-06-30

    This study measures farmers' risk attitudes in rural China using a survey instrument and a complementary experiment conducted in the field with the same sample of subjects. Using a question asking people about their willingness to take risks "in general", we found that the average response of our sample indicates slight risk aversion. Farmers' exogenous factors (age, gender, and height) and self-reported happiness have a significant impact on farmers' willingness to take risks. The experiment results show that approximately 44% of farmers in the study area are risk averse. We compare farmers' self-reported measures of risk preferences derived from the survey instrument to preferences elicited through the experimental task. Results show that answers to the general risk attitude question in the survey can predict farmers' behaviors in the experiment to a statistically significant degree. This paper contributes to the empirical literature comparing local farmers' risk attitudes across different risk-preference measurement methods in the developing world.

  2. A Urea Biosensor from Stacked Sol-Gel Films with Immobilized Nile Blue Chromoionophore and Urease Enzyme

    PubMed Central

    Alqasaimeh, Muawia Salameh; Heng, Lee Yook; Ahmad, Musa

    2007-01-01

    An optical urea biosensor was fabricated by stacking several layers of sol-gel films. The stacking of the sol-gel films allowed the immobilization of a Nile Blue chromoionophore (ETH 5294) and urease enzyme separately without the need for any chemical attachment procedure. The absorbance response of the biosensor was monitored at 550 nm, i.e., the deprotonation of the chromoionophore. This multi-layer sol-gel film format enabled higher enzyme loading in the biosensor to be achieved. The urea optical biosensor constructed from three layers of sol-gel films that contained urease demonstrated a much wider linear response range of up to 100 mM urea when compared with biosensors constructed from 1-2 layers of films. Analysis of urea in urine samples with this optical urea biosensor yielded results similar to those determined by a spectrophotometric method using the reagent p-dimethylaminobenzaldehyde (R2 = 0.982, n = 6). The average recovery of urea from urine samples using this urea biosensor is approximately 103%.

  3. Approximation of the exponential integral (well function) using sampling methods

    NASA Astrophysics Data System (ADS)

    Baalousha, Husam Musa

    2015-04-01

    The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods have been used to approximate the function: Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH). Different argument values, covering a wide range, have been used. The results of sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
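
    A minimal sketch of the sampling idea (my own transformation choice, not necessarily the paper's): after the substitution t = x/u, the well function becomes an integral over (0, 1), which can be estimated from stratified, Latin-hypercube-style samples and checked against scipy.special.exp1.

        import numpy as np
        from scipy.special import exp1

        def well_function_lhs(x: float, n: int = 10_000, seed: int = 0) -> float:
            """Estimate E1(x) = integral_0^1 exp(-x/u)/u du with one stratified sample per cell of (0, 1)."""
            rng = np.random.default_rng(seed)
            u = (np.arange(n) + rng.random(n)) / n    # 1-D Latin hypercube / stratified points
            return float(np.mean(np.exp(-x / u) / u))

        x = 0.5
        print(well_function_lhs(x), exp1(x))          # sampling estimate vs. reference value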

  4. Evaluation of a tunable bandpass reaction cell for an inductively coupled plasma mass spectrometer for the determination of chromium and vanadium in serum and urine

    NASA Astrophysics Data System (ADS)

    Nixon, David E.; Neubauer, Kenneth R.; Eckdahl, Steven J.; Butz, John A.; Burritt, Mary F.

    2002-05-01

    A Dynamic Reaction Cell™ inductively coupled argon plasma mass spectrometer (DRC-ICP-MS) was evaluated for the determination of chromium and vanadium in serum and urine. Reaction cell conditions were evaluated for the elimination of ArC+ and ClOH+ interferences on chromium at mass 52 and OCl+ on vanadium at mass 51. A diluent containing only 1% nitric acid and internal standards (Y and Ga) was used to prepare serum and urine for analysis. Instrument response calibration was achieved by using aqueous acidic standards spiked into pooled sera or urine matrices. The slopes of the calibration curves prepared in urine and serum matrices were nearly identical. On average, chromium detection limits are 2.5 times lower using the DRC than with Zeeman graphite furnace atomic absorption spectrometry (ZGFAAS). Vanadium detection limits are approximately 50 times lower. Average detection limits achieved with DRC-ICP-MS are 0.075 μg Cr/l and 0.028 μg V/l. Average results for the analysis of National Institute of Standards and Technology Standard Reference Material (NIST SRM) 1598 Bovine Serum (attained over 22 days) are: 0.14 μg Cr/l and 0.068 μg V/l. The reference concentrations for vanadium and chromium in NIST SRM 1598 are (0.06) μg V/l and 0.14±0.08 μg Cr/l, respectively. Results for chromium and vanadium determinations on ICP-MS survey samples from the Toxicologie du Québec are equivalent to those reported by high resolution inductively coupled plasma mass spectrometry (HR-ICP-MS) for the same survey samples.

  5. Influence of rheumatoid arthritis-related morning stiffness on productivity at work: results from a survey in 11 European countries.

    PubMed

    Mattila, Kalle; Buttgereit, Frank; Tuominen, Risto

    2015-11-01

    The objective of this study was to evaluate the influence of morning stiffness on productivity at work and to estimate the work-related economic consequences of morning stiffness among patients with RA-related morning stiffness in 11 European countries. The original sample comprised 1061 RA patients from 11 European countries (Belgium, Denmark, Finland, France, Germany, Italy, Norway, Poland, Spain, Sweden and UK). They had been diagnosed with RA and experienced morning stiffness three or more times per week. Data were collected by interviews. Women comprised 77.9 % of the sample, the average age was 50.4 years, and 84.3 % had RA diagnosed for more than 2 years. The overall cost of RA-related morning stiffness was calculated to be 27,712€ per patient per year, varying from 4965€ in Spain to 66,706€ in Norway. On average, 96 % of the overall production losses were attributed to early retirement, with a markedly lower level (77 %) in Italy than in other countries (p < 0.0001). The proportion of patients who reported retirement due to morning stiffness and productivity losses due to late work arrivals and working while sick showed considerable variation across the countries represented in the study. Overall, the average annual cost of late arrivals (0.8 % of the total costs) was approximately half of the costs attributed to sick leave (1.7 %) and working while sick (1.5 %). Morning stiffness due to RA causes significant production losses and is a significant cost burden throughout Europe. There seem to be notable differences in the impact of morning stiffness on productivity between European countries.

  6. Determination of Mercury Content in a Shallow Firn Core from Summit, Greenland by Isotope Dilution Inductively Coupled Plasma Mass Spectrometry

    NASA Technical Reports Server (NTRS)

    Mann, Jacqueline L.; Long, Stephen E.; Shuman, Christopher A.; Kelly, W. Robert

    2003-01-01

    The total mercury Hg content was determined in 6 cm sections of a near-surface 7 m firn core and in surrounding surface snow from Summit, Greenland (elevation: 3238 m, 72.58 N, 38.53 W) in May 2001 by isotope dilution cold-vapor inductively coupled plasma mass spectrometry (ID-CV-ICP-MS). The focus of this research was to evaluate the capability of the ID-CV-ICPMS technique for measuring trace levels of Hg typical of polar snow and firn. Highly enriched Hg-201 isotopic spike is added to approximately 10 ml melted core and thoroughly mixed. The Hg(+2) in the sample is reduced on line with tin (II) chloride (SnCl2) and the elemental Hg (Hg(0)) vapor pre-concentrated on to gold gauze using a commercial amalgam system. The Hg is then thermally desorbed and introduced into a quadrupole ICP-MS. The blank corrected Hg concentrations determined for all samples ranged from 0.25 ng/L to 1.74 ng/L (ppt) (average 0.59 ng/L plus or minus 0.28 ng/L) and fall within the range of those previously determined by Boutron et al., 1998 (less than or equal to 0.05 ng/L to 2.0 ng/L) for the Summit site. The average blank value was 0.19 ng/L plus or minus 0.045 ng/L (n=6). The Hg values specifically for the firn core range from 0.25 ng/L to 0.87 ng/L (average 0.51 ng/L plus or minus 0.13 ng/L) and show both values declining with time and larger variability in concentration in the top 1.8 m.
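
    The quantification step in isotope dilution rests on a standard relation between the measured isotope-amount ratio of the spiked sample and the known ratios of spike and sample; a minimal statement of that generic single-IDMS relation is given below. The symbols are illustrative and are not taken from the paper (202Hg is used here as the reference isotope against the 201Hg spike).

```latex
% Generic single isotope-dilution relation, with R = n(202Hg)/n(201Hg):
n_x \;=\; n_{sp}\,
    \frac{x^{201}_{sp}}{x^{201}_{x}}\;
    \frac{R_m - R_{sp}}{R_x - R_m}
```

    Here n_x and n_sp are the amounts of Hg in the sample and in the added spike, x^201 denotes the 201Hg abundance in each, and R_x, R_sp and R_m are the 202/201 ratios of the natural sample, the enriched spike and the measured blend, respectively.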

  7. Vibrational Raman optical activity of 1-phenylethanol and 1-phenylethylamine: revisiting old friends.

    PubMed

    Kapitán, Josef; Johannessen, Christian; Bour, Petr; Hecht, Lutz; Barron, Laurence D

    2009-01-01

    The samples used for the first observations of vibrational Raman optical activity (ROA) in 1972, namely both enantiomers of 1-phenylethanol and 1-phenylethylamine, have been revisited using a modern commercial ROA instrument together with state-of-the-art ab initio calculations. The simulated ROA spectra reveal for the first time the vibrational origins of the first reported ROA signals, which comprised similar couplets in the alcohol and amine in the spectral range approximately 280-400 cm(-1). The results demonstrate how easy and routine ROA measurements have become, and how current ab initio quantum-chemical calculations are capable of simulating experimental ROA spectra quite closely provided sufficient averaging over accessible conformations is included. Assignment of absolute configuration is, inter alia, completely secure from results of this quality. Anharmonic corrections provided small improvements in the simulated Raman and ROA spectra. The importance of conformational averaging emphasized by this and previous related work provides the underlying theoretical background to ROA studies of dynamic aspects of chiral molecular and biomolecular structure and behavior. (c) 2009 Wiley-Liss, Inc.

  8. Airborne pollen survey in Bangkok, Thailand: A 35-year update.

    PubMed

    Songnuan, Wisuwat; Bunnag, Chaweewan; Soontrapa, Kitipong; Pacharn, Punchama; Wangthan, Unchalee; Siriwattanakul, Umaporn; Malainual, Nat

    2015-09-01

    Pollen allergy is a growing global health issue. While airborne pollen counts are reported daily in several countries, such information is lacking in Thailand. This study aimed to survey airborne pollens at five sites in Bangkok, comparing data with the previous study performed 35 years ago in 1980. Sample collection was done using the ROTOROD® sampler by exposing the rods for one hour each day twice a week from May 2012-April 2013. Overall, the pollen count was relatively high throughout the year, averaging 242 grains/m3. The highest peak was found in September (700 grains/m3). Interestingly, we found that the pollen count was noticeably lower in 2012-2013 than in the 1980 study. We also observed that pollen peaks shifted approximately one to two months earlier in the 2012-2013 study. However, the major groups of airborne pollens did not change significantly. Grass, sedge and Amaranthus pollens and fern spores still dominated. The unidentified pollen group was the only group with a higher pollen count when compared to the previous study.

  9. Bullying Perpetration, Victimization, and Demographic Differences in College Students: A Review of the Literature.

    PubMed

    Lund, Emily M; Ross, Scott W

    2016-01-11

    Although bullying has been widely recognized as a serious issue in elementary and secondary school and in the workplace, little is known about the prevalence of bullying in postsecondary education. We conducted a comprehensive search of the peer-reviewed literature and found 14 studies that reported the prevalence of bullying perpetration, victimization, or both in college or university students. Prevalence estimates varied widely between studies, but on average about 20-25% of students reported noncyberbullying victimization during college and 10-15% reported cyberbullying victimization. Similarly, approximately 20% of students on average reported perpetrating noncyberbullying during college, with about 5% reporting cyberbullying perpetration. Men were more likely to report perpetration, but no consistent gender differences in victimization were found. Few studies reported prevalence by sexual orientation or race/ethnicity, and none reported prevalence by disability status. Overall, these results indicate that bullying continues to be prevalent in postsecondary education, but more research needs to be conducted, particularly research that uses multiuniversity samples and examines demographic differences in prevalence rates. © The Author(s) 2016.

  10. Physiotherapy in patients with facial nerve paresis: description of outcomes.

    PubMed

    Beurskens, Carien H G; Heymans, Peter G

    2004-01-01

    The purpose of this study was to describe changes and stabilities of long-term sequelae of facial paresis in outpatients receiving mime therapy, a form of physiotherapy. Archived data of 155 patients with peripheral facial nerve paresis were analyzed. Main outcome measures were (1) impairments: facial symmetry in rest and during movements and synkineses; (2) disabilities: eating, drinking, and speaking; and (3) quality of life. Symmetry at rest improved significantly; the average severity of the asymmetry in all movements decreased. The number of synkineses increased for 3 out of 8 movements; however, the group average severities decreased for 6 movements; substantially fewer patients reported disabilities in eating, drinking, and speaking; and quality of life improved significantly. During a period of approximately 3 months, significant changes in many aspects of facial functioning were observed, the relative position of patients remaining stable over time. Observed changes occurred while the patients participated in a program for facial rehabilitation (mime therapy), replicating the randomized controlled trial-proven benefits of mime therapy in a more varied sample of outpatients.

  11. High-Frequency Measurements of Tree Methane Fluxes Indicate a Primary Source Inside Tree Tissue

    NASA Astrophysics Data System (ADS)

    Brewer, P.; Megonigal, P.

    2017-12-01

    Methane emissions from the boles and shoots of living upland trees are a recent discovery with significant implications for methane budgets. Forest soil methane uptake is the greatest terrestrial methane sink, but studies have shown this may be partially or fully offset by tree methane sources. However, our ability to quantify the tree source has been hampered because the ultimate biological source(s) of the methane is unclear. We measured methane fluxes from the boles of living trees of two species in an Eastern North American deciduous forest over 100 consecutive days. Our two-hour sampling intervals allowed us to characterize diurnal patterns and seasonal dynamics. We observed wide intraspecific differences in average flux rates and diurnal dynamics, even between adjacent individuals. This and other properties of the fluxes indicate that the primary methane source is likely within the tree tissues, not in soil or groundwater. Emissions of methane from trees offset approximately 10% of soil uptake on average, but at times tree fluxes were much higher. Preliminary analyses indicate the highest rates are related to tree life history, tree growth, temperature, ground-water depth, and soil moisture.

  12. Sources of non-fossil-fuel emissions in carbonaceous aerosols during early winter in Chinese cities

    NASA Astrophysics Data System (ADS)

    Liu, Di; Li, Jun; Cheng, Zhineng; Zhong, Guangcai; Zhu, Sanyuan; Ding, Ping; Shen, Chengde; Tian, Chongguo; Chen, Yingjun; Zhi, Guorui; Zhang, Gan

    2017-09-01

    China experiences frequent and severe haze outbreaks from the beginning of winter. Carbonaceous aerosols are regarded as an essential factor controlling the formation and evolution of haze episodes. To elucidate the carbon sources of air pollution, source apportionment was conducted using radiocarbon (14C) and unique molecular organic tracers. Daily 24 h PM2.5 samples were collected continuously from October 2013 to November 2013 in 10 Chinese cities. The 14C results indicated that non-fossil-fuel (NF) emissions were predominant in total carbon (TC; average = 65 ± 7 %). Approximately half of the elemental carbon (EC) was derived primarily from biomass burning (BB) (average = 46 ± 11 %), while over half of the organic carbon (OC) fraction was of NF origin (average = 68 ± 7 %). On average, the largest contributor to TC was NF-derived secondary OC (SOCnf), which accounted for 46 ± 7 % of TC, followed by SOC derived from fossil fuels (FF) (SOCf; 16 ± 3 %), BB-derived primary OC (POCbb; 13 ± 5 %), POC derived from FF (POCf; 12 ± 3 %), EC derived from FF (ECf; 7 ± 2 %) and EC derived from BB (ECbb; 6 ± 2 %). The regional background carbonaceous aerosol composition was characterized by NF sources; POC played a major role in northern China, while SOC contributed more in other regions. However, during haze episodes there were no dramatic changes in the carbon sources or composition in the cities under study, but the contribution of POC from both FF and NF increased significantly.
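
    The 14C-based split between non-fossil and fossil carbon is conventionally obtained from the measured fraction modern of each carbon fraction; a minimal, hedged statement of that standard relation is shown below. The reference value for contemporary non-fossil carbon is an assumption of the illustration, not a number taken from the paper.

```latex
f_{\mathrm{NF}} \;=\; \frac{F^{14}C_{\mathrm{sample}}}{F^{14}C_{\mathrm{NF}}},
\qquad
f_{\mathrm{FF}} \;=\; 1 - f_{\mathrm{NF}}
```

    Here F14C_sample is the fraction modern measured for a given carbon fraction (OC, EC or TC) and F14C_NF is a reference value for purely non-fossil carbon, commonly taken slightly above 1 to account for the residual bomb-14C excess in contemporary biomass.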

  13. A rapid method for the sampling of atmospheric water vapour for isotopic analysis.

    PubMed

    Peters, Leon I; Yakir, Dan

    2010-01-01

    Analysis of the stable isotopic composition of atmospheric moisture is widely applied in the environmental sciences. Traditional methods for obtaining isotopic compositional data from ambient moisture have required complicated sampling procedures, expensive and sophisticated distillation lines, hazardous consumables, and lengthy treatments prior to analysis. Newer laser-based techniques are expensive and usually not suitable for large-scale field campaigns, especially in cases where access to mains power is not feasible or high spatial coverage is required. Here we outline the construction and usage of a novel vapour-sampling system based on a battery-operated Stirling cycle cooler, which is simple to operate, does not require any consumables, or post-collection distillation, and is light-weight and highly portable. We demonstrate the ability of this system to reproduce delta(18)O isotopic compositions of ambient water vapour, with samples taken simultaneously by a traditional cryogenic collection technique. Samples were collected over 1 h directly into autosampler vials and were analysed by mass spectrometry after pyrolysis of 1 microL aliquots to CO. This yielded an average error of < +/-0.5 per thousand, approximately equal to the signal-to-noise ratio of traditional approaches. This new system provides a rapid and reliable alternative to conventional cryogenic techniques, particularly in cases requiring high sample throughput or where access to distillation lines, slurry maintenance or mains power is not feasible. Copyright 2009 John Wiley & Sons, Ltd.

  14. Clinical and Economic Burden of Peristomal Skin Complications in Patients With Recent Ostomies

    PubMed Central

    Taneja, Charu; Netsch, Debra; Rolstad, Bonnie Sue; Inglese, Gary; Lamerato, Lois

    2017-01-01

    PURPOSE: The purpose of this study was to estimate the risk and economic burden of peristomal skin complications (PSCs) in a large integrated healthcare system in the Midwestern United States. DESIGN: Retrospective cohort study. SUBJECTS AND SETTING: The sample comprised 128 patients; 40% (n = 51) underwent colostomy, 50% (n = 64) underwent ileostomy, and 10% (n = 13) underwent urostomy. Their average age was 60.6 ± 15.6 years at the time of ostomy surgery. METHODS: Using administrative data, we retrospectively identified all patients who underwent colostomy, ileostomy, or urostomy between January 1, 2008, and November 30, 2012. Trained medical abstractors then reviewed the clinical records of these persons to identify those with evidence of PSC within 90 days of ostomy surgery. We then examined levels of healthcare utilization and costs over a 120-day period, beginning with date of surgery, for patients with and without PSC, respectively. Our analyses were principally descriptive in nature. RESULTS: The study cohort comprised 128 patients who underwent ostomy surgery (colostomy, n = 51 [40%]; ileostomy, n = 64 [50%]; urostomy, n = 13 [10%]). Approximately one-third (36.7%) had evidence of a PSC in the 90-day period following surgery (urinary diversion, 7.7%; colostomy, 35.3%; ileostomy, 43.8%). The average time from surgery to PSC was 23.7 ± 20.5 days (mean ± SD). Patients with PSC had index admissions that averaged 21.5 days versus 13.9 days for those without these complications. Corresponding rates of hospital readmission within the 120-day period following surgery were 47% versus 33%, respectively. Total healthcare costs over 120 days were almost $80,000 higher for patients with PSCs. CONCLUSIONS: Approximately one-third of ostomy patients over a 5-year study period had evidence of PSCs within 90 days of surgery. Costs of care were substantially higher for patients with these complications. PMID:28574928

  15. Improved consolidation of silicon carbide

    NASA Technical Reports Server (NTRS)

    Freedman, M. R.; Millard, M. L.

    1986-01-01

    Alpha silicon carbide powder was consolidated by both dry and wet methods. Dry pressing in a double-acting steel die yielded sintered test bars with an average flexural strength of 235.6 MPa and a critical flaw size of approximately 100 µm. An aqueous slurry pressing technique produced sintered test bars with an average flexural strength of 440.8 MPa and a critical flaw size of approximately 25 µm. Image analysis revealed a reduction in both pore area and pore size distribution in the slurry-pressed sintered test bars. The improvements in the slurry-pressed material properties are discussed in terms of reduced agglomeration and improved particle packing during consolidation.
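
    The pairing of higher strength with smaller critical flaw size is consistent with the standard linear-elastic fracture relation, sketched below as a rough cross-check; the fracture toughness K_Ic and geometry factor Y are not reported in the abstract and are left symbolic.

```latex
\sigma_f \;=\; \frac{K_{Ic}}{Y\sqrt{\pi a_c}}
\qquad\Longrightarrow\qquad
\frac{a_{c,\mathrm{dry}}}{a_{c,\mathrm{slurry}}}
  \;=\; \left(\frac{\sigma_{f,\mathrm{slurry}}}{\sigma_{f,\mathrm{dry}}}\right)^{2}
  \;\approx\; \left(\frac{440.8}{235.6}\right)^{2} \;\approx\; 3.5
```

    The predicted factor of roughly 3.5 between the critical flaw sizes agrees, to within the rounding of the reported values, with the observed drop from approximately 100 µm to approximately 25 µm.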

  16. Thin, porous metal sheets and methods for making the same

    DOEpatents

    Liu, Wei; Li, Xiaohong Shari; Canfield, Nathan L.

    2015-07-14

    Thin, porous metal sheets and methods for forming them are presented to enable a variety of applications and devices. The thin, porous metal sheets are less than or equal to approximately 200 µm thick, have a porosity between 25% and 75% by volume, and have pores with an average diameter less than or equal to approximately 2 µm. The thin, porous metal sheets can be fabricated by preparing a slurry having between 10 and 50 wt % solvent and between 20 and 80 wt % powder of a metal precursor. The average particle size in the metal precursor powder should be between 100 nm and 5 µm.

  17. Brownian systems with spatially inhomogeneous activity

    NASA Astrophysics Data System (ADS)

    Sharma, A.; Brader, J. M.

    2017-09-01

    We generalize the Green-Kubo approach, previously applied to bulk systems of spherically symmetric active particles [J. Chem. Phys. 145, 161101 (2016), 10.1063/1.4966153], to include spatially inhomogeneous activity. The method is applied to predict the spatial dependence of the average orientation per particle and the density. The average orientation is given by an integral over the self part of the Van Hove function and a simple Gaussian approximation to this quantity yields an accurate analytical expression. Taking this analytical result as input to a dynamic density functional theory approximates the spatial dependence of the density in good agreement with simulation data. All theoretical predictions are validated using Brownian dynamics simulations.
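
    The "simple Gaussian approximation" to the self part of the Van Hove function invoked here has, in its usual form, a diffusive Gaussian shape; the expression below is that standard form, stated as an assumption rather than as the exact formula used in the paper.

```latex
G_s(\mathbf{r},t) \;\approx\; (4\pi D t)^{-d/2}
  \exp\!\left(-\frac{|\mathbf{r}|^{2}}{4 D t}\right)
```

    Here D is an effective diffusion coefficient and d the spatial dimension; it is an approximation of this kind, inserted into the integral over the self Van Hove function, that yields the accurate analytical expression for the average orientation mentioned in the abstract.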

  18. CHClF2 (F-22) in the earth's atmosphere

    NASA Technical Reports Server (NTRS)

    Rasmussen, R. A.; Khalil, M. A. K.; Penkett, S. A.; Prosser, N. J. D.

    1980-01-01

    Recent global measurements of CHClF2 (F-22) are reported. Originally, GC/MS techniques were used to obtain these data. Since then, significant advances using an O2-doped electron capture detector have been made in the analytical techniques, so that F-22 can be measured by EC/GC methods at ambient concentrations. The atmospheric burden of F-22 calculated from these measurements (average mixing ratio, mid-1979, approximately 45 pptv) is considerably greater than that expected from the estimates of direct industrial emissions (average mixing ratio, mid-1979, approximately 30 pptv). This difference is probably due to underestimates of F-22 emissions.

  19. Partition resampling and extrapolation averaging: approximation methods for quantifying gene expression in large numbers of short oligonucleotide arrays.

    PubMed

    Goldstein, Darlene R

    2006-10-01

    Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.
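
    A minimal sketch of the subset-based idea as described in the abstract is given below: partition the chip set, summarize expression within each partition using any multi-chip measure, and average the partition-level estimates. The data layout and the placeholder summarizer are assumptions for illustration and do not reproduce the authors' Partition Resampling or Extrapolation Averaging implementations.

```python
import numpy as np

def summarize_expression(intensities):
    """Placeholder multi-chip summary (median across chips for each probe set).
    Stands in for a real multi-chip expression measure."""
    return np.median(intensities, axis=1)

def partition_estimate(intensities, n_partitions=4, seed=0):
    """Approximate full-set expression by averaging estimates obtained on
    random, disjoint chip partitions. intensities: probe sets x chips."""
    rng = np.random.default_rng(seed)
    chips = rng.permutation(intensities.shape[1])
    partitions = np.array_split(chips, n_partitions)
    estimates = [summarize_expression(intensities[:, p]) for p in partitions]
    return np.mean(estimates, axis=0)

# Illustrative use: 1000 probe sets measured on 60 chips
expr = partition_estimate(np.random.lognormal(size=(1000, 60)))
```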

  20. The effects of nucleation and solidification mechanisms on the microstructure and thermomechanical response of tin silver copper solder joints

    NASA Astrophysics Data System (ADS)

    Arfaei, Babak

    This work examines the nucleation mechanism of Sn in SnAgCu alloys and its effect on the microstructure of those solder joints. The nucleation rate of Sn in a SAC alloy was obtained by simultaneous calorimetric examination of the isothermal solidification of 88 flip-chip Sn-Ag-Cu solder joints. Qualitative agreement with classical nucleation theory was observed, although it was concluded that the spherical cap model cannot be applied to explain the structure of the nucleus. It was shown that the solidification temperature significantly affects the microstructure; samples that undercooled less than approximately 40 °C revealed one or three large Sn grains, while interlaced twinning was observed in the samples that solidified at lower temperatures. In order to better understand the effect of microstructure on the thermomechanical properties of solder joints, a study of the dependence of room-temperature shear fatigue lifetime on Sn grain number and orientation was conducted. This study examined the correlations of variations in fatigue life of solder balls with the microstructure of Sn-Ag-Cu solder. The mean fatigue lifetime was found to be significantly longer for samples with multiple Sn grains than for samples with single Sn grains. For single-grain samples, correlations between Sn grain orientation (with respect to the loading direction) and lifetime were observed, providing insight into early failures in SnAgCu solder joints. Correlations between the lifetimes of single-Sn-grain SAC205 solder joints and differences in Ag3Sn and Cu6Sn5 precipitate microstructures were investigated. It was found that Ag3Sn precipitates were highly segregated from Cu6Sn5 precipitates on a length scale of approximately twenty microns. Furthermore, large (factor of two) variations of the Sn dendrite arm size were observed within given samples. Such variations in dendrite arm size within a single sample were much larger than the observed variations of this parameter between individual samples. Few significant differences were observed in the average size of precipitates in different samples. While the earliest and latest lifetimes of single-Sn-grain samples were correlated with Sn grain orientation, effects of precipitate microstructure on lifetime were not clearly delineated.

  1. Tracing ground-water movement by using the stable isotopes of oxygen and hydrogen, upper Penitencia Creek alluvial fan, Santa Clara Valley, California

    USGS Publications Warehouse

    Muir, K.S.; Coplen, Tyler B.

    1981-01-01

    Starting in 1965 the Santa Clara Valley Water District began importing about 100,000 acre-feet per year of northern California water. About one-half of this water was used to artificially recharge the Upper Penitencia Creek alluvial fan in Santa Clara Valley. In order to determine the relative amounts of local ground water and recharged imported water being pumped from the wells, stable isotopes of oxygen and hydrogen were used to trace the movement of the imported water in the alluvial fan. To trace the movement of imported water in the Upper Penitencia Creek alluvial fan, well samples were selected to give areal and depth coverage for the whole fan. The stable isotope ratios of oxygen (18O/16O) and hydrogen (D/H) were measured in samples of imported water and of water from wells and streams in the Santa Clara Valley. The δ18O and δD compositions of the local runoff were about -6.00‰ (parts per thousand) and -40‰, respectively; the average compositions for the local native ground-water samples were about -6.1‰ and -41‰, respectively; and the average compositions of the imported water samples were -10.2‰ and -74‰, respectively. (The oxygen isotopic composition of water samples is reported relative to Standard Mean Ocean Water, in parts per thousand.) The difference between local ground water and recharged imported water was about 4.1‰ in δ18O and 33‰ in δD. The isotopic data indicate dilution of northern California water with local ground water in a downgradient direction. Two wells contain approximately 74 percent northern California water, and six wells more than 50 percent. The data indicate that there may be a correlation between the percentage of northern California water and the depth or length of perforated intervals in wells.
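
    The quoted percentages of northern California water follow from a standard two-end-member mixing calculation; the relation below restates that arithmetic with the end-member values given in the abstract. The example well value of -9.1‰ is chosen here only to illustrate the calculation and is not a reported measurement.

```latex
f_{\mathrm{imported}}
  \;=\; \frac{\delta_{\mathrm{sample}} - \delta_{\mathrm{local}}}
             {\delta_{\mathrm{imported}} - \delta_{\mathrm{local}}}
```

    For instance, a well with δ18O of about -9.1‰, lying between the local (about -6.1‰) and imported (about -10.2‰) end members, gives f = (-9.1 + 6.1)/(-10.2 + 6.1) ≈ 0.73, i.e. roughly three-quarters imported water.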

  2. Multifractal magnetic susceptibility distribution models of hydrothermally altered rocks in the Needle Creek Igneous Center of the Absaroka Mountains, Wyoming

    USGS Publications Warehouse

    Gettings, M.E.

    2005-01-01

    Magnetic susceptibility was measured for 700 samples of drill core from thirteen drill holes in the porphyry copper-molybdenum deposit of the Stinkingwater mining district in the Absaroka Mountains, Wyoming. The magnetic susceptibility measurements, chemical analyses, and alteration class provided a database for study of magnetic susceptibility in these altered rocks. The distribution of the magnetic susceptibilities for all samples is multi-modal, with overlapping peaked distributions for samples in the propylitic and phyllic alteration classes, a tail of higher susceptibilities for potassic alteration, and an approximately uniform distribution over a narrow range at the highest susceptibilities for unaltered rocks. Samples from all alteration and mineralization classes show susceptibilities across a wide range of values. Samples with secondary (supergene) alteration due to oxidation or enrichment show lower susceptibilities than primary (hypogene) alteration rock. Observed magnetic susceptibility variations and the monolithological character of the host rock suggest that the variations are due to varying degrees of alteration of blocks of rock between fractures that conducted hydrothermal fluids. Alteration of rock from the fractures inward progressively reduces the bulk magnetic susceptibility of the rock. The model introduced in this paper consists of a simulation of the fracture pattern and a simulation of the alteration of the rock between fractures. A multifractal model generated from multiplicative cascades with unequal ratios produces distributions statistically similar to the observed distributions. The reduction in susceptibility in the altered rocks was modelled as a diffusion process operating on the fracture distribution support. The average magnetic susceptibility was then computed for each block. For the purpose of comparing the model results with observation, the simulated magnetic susceptibilities were then averaged over the same interval as the measured data. Comparisons of the model and data from drill holes show good but not perfect agreement. © 2005 Author(s). This work is licensed under a Creative Commons License.
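
    A minimal sketch of a one-dimensional multiplicative cascade with unequal ratios, of the general kind invoked in the abstract, is given below; the split ratios, number of levels and random orientation rule are illustrative assumptions, not the parameters fitted in the paper.

```python
import numpy as np

def multiplicative_cascade(levels=12, ratios=(0.7, 0.3), seed=0):
    """Build a 1-D multifractal measure by a multiplicative cascade.
    At each level every cell splits in two, and its mass is shared between
    the halves in the proportions `ratios`, assigned in a random order."""
    rng = np.random.default_rng(seed)
    measure = np.array([1.0])
    for _ in range(levels):
        flip = rng.integers(0, 2, size=measure.size)      # random orientation per cell
        left = np.where(flip == 0, ratios[0], ratios[1])
        right = 1.0 - left
        measure = np.column_stack((measure * left, measure * right)).ravel()
    return measure

# 2**12 cells; the histogram of log10(m) is broad and multi-peaked,
# qualitatively like the overlapping susceptibility distributions described above.
m = multiplicative_cascade()
```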

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jing; Camardese, John; Shunmugasundaram, Ramesh

    Lithium-rich layered Ni–Mn–Co oxide materials have been intensely studied in the past decade. Mn-rich materials have serious voltage fade issues, and the Ni-rich materials have poor thermal stability and readily oxidize the organic carbonate electrolyte. Core–shell (CS) strategies that use Ni-rich material as the core and Mn-rich materials as the shell can balance the pros and cons of these materials in a hybrid system. The lithium-rich CS materials introduced here show much improved overall electrochemical performance compared to the core-only and shell-only samples. Energy dispersive spectroscopy results show that there was diffusion of transition metals between the core and shell phases after sintering at 900 °C compared to the prepared hydroxide precursors. A Mn-rich shell was still maintained, whereas the Co, which was present only in the shell of the precursor, was approximately homogeneous throughout the particles. The CS samples with optimal lithium content showed low irreversible capacity (IRC), as well as high capacity and excellent capacity retention. Sample CS2-3 (the third sample in the 0.67Li1+x(Ni₀.₆₇Mn₀.₃₃)1–xO₂·0.33Li1+y(Ni₀.₄Mn₀.₅Co₀.₁)1–yO₂ CS2 series) had a reversible capacity of ~218 mAh/g with 12.3% (~30 mAh/g) IRC and 98% capacity retention after 40 cycles to 4.6 V at 30 °C at a rate of ~C/20. Differential capacity versus potential (dQ/dV versus V) analysis confirmed that cells of the CS samples had stable impedance as well as a very stable average voltage. Apparently, the Mn-rich shell can effectively protect the Ni-rich core from reactions with the electrolyte while the Ni-rich core renders a high and stable average voltage.

  4. Molecular formulae of marine and terrigenous dissolved organic matter detected by electrospray ionization Fourier transform ion cyclotron resonance mass spectrometry

    NASA Astrophysics Data System (ADS)

    Koch, Boris P.; Witt, Matthias; Engbrodt, Ralph; Dittmar, Thorsten; Kattner, Gerhard

    2005-07-01

    The chemical structure of refractory marine dissolved organic matter (DOM) is still largely unknown. Electrospray ionization Fourier transform ion cyclotron resonance mass spectrometry (ESI FT-ICR-MS) was used to resolve the complex mixtures of DOM and provide valuable information on elemental compositions on a molecular scale. We characterized and compared DOM from two sharply contrasting aquatic environments, algal-derived DOM from the Weddell Sea (Antarctica) and terrigenous DOM from pore water of a tropical mangrove area in northern Brazil. Several thousand molecular formulas in the mass range of 300-600 Da were identified and reproduced in element ratio plots. On the basis of molecular elemental composition and double-bond equivalents (DBE) we calculated an average composition for marine DOM. O/C ratios in the marine samples were lower (0.36 ± 0.01) than in the mangrove pore-water sample (0.42). A small proportion of chemical formulas with higher molecular mass in the marine samples were characterized by very low O/C and H/C ratios probably reflecting amphiphilic properties. The average number of unsaturations in the marine samples was surprisingly high (DBE = 9.9; mangrove pore water: DBE = 9.4) most likely due to a significant contribution of carbonyl carbon. There was no significant difference in elemental composition between surface and deep-water DOM in the Weddell Sea. Although there were some molecules with unique marine elemental composition, there was a conspicuous degree of similarity between the terrigenous and algal-derived end members. Approximately one third of the molecular formulas were present in all marine as well as in the mangrove samples. We infer that different forms of microbial degradation ultimately lead to similar structural features that are intrinsically refractory, independent of the source of the organic matter and the environmental conditions where degradation took place.
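
    The double-bond equivalents used here follow, for an assigned elemental composition CcHhNnOoSs, from the standard ring-plus-double-bond formula; it is stated generically below, since the abstract does not spell out the authors' exact convention.

```latex
\mathrm{DBE} \;=\; 1 + c - \frac{h}{2} + \frac{n}{2}
```

    Oxygen and sulfur do not enter the count; for example, an assigned formula C20H26O10 gives DBE = 1 + 20 - 13 = 8.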

  5. Concurrent Supermassive Black Hole and Galaxy Growth: Linking Environment and Nuclear Activity in z = 2.23 H-alpha Emitters

    NASA Technical Reports Server (NTRS)

    Lehmer, B. D.; Lucy, A. B.; Alexander, D. M.; Best, P. N.; Geach, J. E.; Harrison, C. M.; Hornschemeier, A. E.; Matsuda, Y.; Mullaney, J. R.; Smail, Ian; et al.

    2013-01-01

    We present results from an approximately equal 100 ks Chandra observation of the 2QZ Cluster 1004+00 structure at z = 2.23 (hereafter 2QZ Clus). 2QZ Clus was originally identified as an overdensity of four optically-selected QSOs at z = 2.23 within a 15 × 15 arcmin square region. Narrow-band imaging in the near-IR (within the K band) revealed that the structure contains an additional overdensity of 22 z = 2.23 H alpha-emitting galaxies (HAEs), resulting in 23 unique z = 2.23 HAEs/QSOs (22 within the Chandra field of view). Our Chandra observations reveal that three HAEs in addition to the four QSOs harbor powerfully accreting supermassive black holes (SMBHs), with 2-10 keV luminosities of approximately equal (8-60) × 10(exp 43) erg s(exp-1) and X-ray spectral slopes consistent with unobscured active galactic nucleus (AGN). Using a large comparison sample of 210 z = 2.23 HAEs in the Chandra-COSMOS field (C-COSMOS), we find suggestive evidence that the AGN fraction increases with local HAE galaxy density. The 2QZ Clus HAEs reside in a moderately overdense environment (a factor of approximately equal 2 times over the field), and after excluding optically-selected QSOs, we find that the AGN fraction is a factor of approximately equal 3.5(+3.8/ -2.2) times higher than C-COSMOS HAEs in similar environments. Using stacking analyses of the Chandra data and Herschel SPIRE observations at 250micrometers, we respectively estimate mean SMBH accretion rates ( M(BH)) and star formation rates (SFRs) for the 2QZ Clus and C-COSMOS samples. We find that the mean 2QZ Clus HAE stacked X-ray luminosity is QSO-like (L(2-10 keV) approximately equal [6-10] × 10(exp 43) erg s(exp -1)), and the implied M(BH)/SFR approximately equal (1.6-3.2) × 10(exp -3) is broadly consistent with the local M(BH)/Stellar Mass relation and z approximately equal 2 X-ray selected AGN. In contrast, the C-COSMOS HAEs are on average an order of magnitude less X-ray luminous and have M(BH)/SFR approximately equal (0.2-0.4) × 10(exp -3), somewhat lower than the local MBH/M relation, but comparable to that found for z approximately equal 1-2 star-forming galaxies with similar mean X-ray luminosities. We estimate that a periodic QSO phase with duty cycle approximately 2%-8% would be sufficient to bring star-forming galaxies onto the local M(BH)/Stellar Mass relation. This duty cycle is broadly consistent with the observed C-COSMOS HAE AGN fraction (Approximately equal 0.4%-2.3%) for powerful AGN with LX approximately greater than 10(exp 44) erg s(exp -1). Future observations of 2QZ Clus will be needed to identify key factors responsible for driving the mutual growth of the SMBHs and galaxies.

  6. Average variograms to guide soil sampling

    NASA Astrophysics Data System (ADS)

    Kerry, R.; Oliver, M. A.

    2004-10-01

    To manage land in a site-specific way for agriculture requires detailed maps of the variation in the soil properties of interest. To predict accurately for mapping, the interval at which the soil is sampled should relate to the scale of spatial variation. A variogram can be used to guide sampling in two ways. A sampling interval of less than half the range of spatial dependence can be used, or the variogram can be used with the kriging equations to determine an optimal sampling interval to achieve a given tolerable error. A variogram might not be available for the site, but if the variograms of several soil properties were available on a similar parent material and or particular topographic positions an average variogram could be calculated from these. Averages of the variogram ranges and standardized average variograms from four different parent materials in southern England were used to suggest suitable sampling intervals for future surveys in similar pedological settings based on half the variogram range. The standardized average variograms were also used to determine optimal sampling intervals using the kriging equations. Similar sampling intervals were suggested by each method and the maps of predictions based on data at different grid spacings were evaluated for the different parent materials. Variograms of loss on ignition (LOI) taken from the literature for other sites in southern England with similar parent materials had ranges close to the average for a given parent material showing the possible wider application of such averages to guide sampling.
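
    The first of the two sampling rules described above is simple enough to state directly; the sketch below computes a suggested grid spacing as half the average variogram range. The range values are illustrative, and the kriging-based optimal-interval calculation mentioned in the abstract is not reproduced here.

```python
import numpy as np

def suggested_sampling_interval(variogram_ranges_m):
    """Suggest a sampling grid spacing as half the average of the variogram
    ranges of spatial dependence fitted for several soil properties (metres)."""
    return 0.5 * float(np.mean(variogram_ranges_m))

# Illustrative ranges for several properties on one parent material
ranges_m = [120.0, 150.0, 90.0, 140.0]
print(f"Suggested spacing: about {suggested_sampling_interval(ranges_m):.0f} m")
```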

  7. Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions

    PubMed Central

    Onufriev, Alexey V.

    2013-01-01

    We propose an approach for approximating electrostatic charge distributions with a small number of point charges to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed-form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential relative to that produced by the original charge distribution, at a distance on the order of the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas-phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order. PMID:23861790
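
    To keep the same far-field asymptotics as the original distribution, any 2-charge approximation of this kind must at least reproduce the monopole and dipole moments; the constraints below state that requirement in generic form. This is offered as an illustration of the construction, not as the authors' OPCA or PPCA equations.

```latex
q_1 + q_2 \;=\; Q,
\qquad
q_1\mathbf{r}_1 + q_2\mathbf{r}_2 \;=\; \mathbf{p}
```

    Here Q and p are the total charge and dipole moment of the original distribution; the remaining freedom (for example, the charge separation) is what an optimal placement can then tune to minimize the mid-field potential error.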

  8. A rapid radiative transfer model for reflection of solar radiation

    NASA Technical Reports Server (NTRS)

    Xiang, X.; Smith, E. A.; Justus, C. G.

    1994-01-01

    A rapid analytical radiative transfer model for reflection of solar radiation in plane-parallel atmospheres is developed based on the Sobolev approach and the delta function transformation technique. A distinct advantage of this model over alternative two-stream solutions is that in addition to yielding the irradiance components, which turn out to be mathematically equivalent to the delta-Eddington approximation, the radiance field can also be expanded in a mathematically consistent fashion. Tests with the model against a more precise multistream discrete ordinate model over a wide range of input parameters demonstrate that the new approximate method typically produces average radiance differences of less than 5%, with worst average differences of approximately 10%-15%. By the same token, the computational speed of the new model is some tens to thousands times faster than that of the more precise model when its stream resolution is set to generate precise calculations.
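
    The delta-function transformation mentioned here is, in its standard delta-Eddington form, a rescaling of the optical properties that moves the forward-scattering peak into the direct beam; the scaling below is that standard form and is stated as background, not as the paper's exact formulation.

```latex
f = g^{2},\qquad
\tau' = (1-\omega f)\,\tau,\qquad
\omega' = \frac{(1-f)\,\omega}{1-\omega f},\qquad
g' = \frac{g-f}{1-f} = \frac{g}{1+g}
```

    Here τ, ω and g are the optical depth, single-scattering albedo and asymmetry parameter, and the primed quantities are the scaled values used in the two-stream (Eddington-type) solution.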

  9. Typical performance of approximation algorithms for NP-hard problems

    NASA Astrophysics Data System (ADS)

    Takabe, Satoshi; Hukushima, Koji

    2016-11-01

    Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicitly some examples for the difference in thresholds.
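
    Of the three algorithms, leaf removal is the simplest to state; the sketch below is its usual greedy form for minimum vertex cover (repeatedly put the neighbour of a degree-one vertex into the cover and delete both). The graph representation is an illustrative assumption; an empty leftover core means the cover found is optimal, while a large core is where the hard instances live.

```python
from collections import defaultdict

def leaf_removal_cover(edges):
    """Greedy leaf removal for minimum vertex cover.
    Returns (cover, core_edges); an empty core means the cover is optimal."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cover = set()
    leaves = [v for v in adj if len(adj[v]) == 1]
    while leaves:
        leaf = leaves.pop()
        if len(adj[leaf]) != 1:          # stale entry: degree has changed, skip
            continue
        (nbr,) = adj[leaf]
        cover.add(nbr)                   # cover the leaf's unique neighbour
        for w in list(adj[nbr]):         # delete the neighbour and its edges
            adj[w].discard(nbr)
            if len(adj[w]) == 1:
                leaves.append(w)
        adj[nbr].clear()
        adj[leaf].clear()
    core = [(u, v) for u in adj for v in adj[u] if u < v]
    return cover, core

cover, core = leaf_removal_cover([(1, 2), (2, 3), (3, 4), (4, 5), (4, 6)])
# cover == {2, 4}, core == []  (optimal for this small tree)
```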

  10. Evaluating the accuracy of sampling to estimate central line-days: simplification of the National Healthcare Safety Network surveillance methods.

    PubMed

    Thompson, Nicola D; Edwards, Jonathan R; Bamberg, Wendy; Beldavs, Zintars G; Dumyati, Ghinwa; Godine, Deborah; Maloney, Meghan; Kainer, Marion; Ray, Susan; Thompson, Deborah; Wilson, Lucy; Magill, Shelley S

    2013-03-01

    To evaluate the accuracy of weekly sampling of central line-associated bloodstream infection (CLABSI) denominator data to estimate central line-days (CLDs). Obtained CLABSI denominator logs showing daily counts of patient-days and CLD for 6-12 consecutive months from participants and CLABSI numerators and facility and location characteristics from the National Healthcare Safety Network (NHSN). Convenience sample of 119 inpatient locations in 63 acute care facilities within 9 states participating in the Emerging Infections Program. Actual CLD and estimated CLD obtained from sampling denominator data on all single-day and 2-day (day-pair) samples were compared by assessing the distributions of the CLD percentage error. Facility and location characteristics associated with increased precision of estimated CLD were assessed. The impact of using estimated CLD to calculate CLABSI rates was evaluated by measuring the change in CLABSI decile ranking. The distribution of CLD percentage error varied by the day and number of days sampled. On average, day-pair samples provided more accurate estimates than did single-day samples. For several day-pair samples, approximately 90% of locations had CLD percentage error of less than or equal to ±5%. A lower number of CLD per month was most significantly associated with poor precision in estimated CLD. Most locations experienced no change in CLABSI decile ranking, and no location's CLABSI ranking changed by more than 2 deciles. Sampling to obtain estimated CLD is a valid alternative to daily data collection for a large proportion of locations. Development of a sampling guideline for NHSN users is underway.
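
    The estimator being evaluated is straightforward: scale the mean of the central-line counts on the sampled days up to the full month, then compare with the actual total. The sketch below illustrates that arithmetic; the counts and sampled days are invented for the example, and the function names are not from the NHSN protocol.

```python
def estimate_cld(daily_cl_counts, sampled_days):
    """Estimate monthly central line-days from counts on the sampled days.
    daily_cl_counts: central-line count for each day of the month;
    sampled_days: 0-based indices of the sampled day(s) in each week."""
    sampled = [daily_cl_counts[d] for d in sampled_days]
    return sum(sampled) / len(sampled) * len(daily_cl_counts)

def percentage_error(estimated, actual):
    return 100.0 * (estimated - actual) / actual

# Illustrative 30-day month sampled on a Tuesday/Wednesday day pair each week
daily = [14, 15, 15, 16, 15, 14, 14] * 4 + [15, 15]
est = estimate_cld(daily, sampled_days=[1, 2, 8, 9, 15, 16, 22, 23])
print(round(est), round(percentage_error(est, sum(daily)), 1))   # 450, 1.8
```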

  11. Energy diffusion controlled reaction rate of reacting particle driven by broad-band noise

    NASA Astrophysics Data System (ADS)

    Deng, M. L.; Zhu, W. Q.

    2007-10-01

    The energy-diffusion-controlled reaction rate of a reacting particle with linear weak damping and broad-band noise excitation is studied by using the stochastic averaging method. First, the stochastic averaging method for strongly nonlinear oscillators under broad-band noise excitation using generalized harmonic functions is briefly introduced. Then, the reaction rate of the classical Kramers reaction model with linear weak damping and broad-band noise excitation is investigated by using the stochastic averaging method. The averaged Itô stochastic differential equation describing the energy diffusion and the Pontryagin equation governing the mean first-passage time (MFPT) are established. The energy-diffusion-controlled reaction rate is obtained as the inverse of the MFPT by solving the Pontryagin equation. The results for two special cases of broad-band noise, i.e. harmonic noise and exponentially correlated noise, are discussed in detail. It is demonstrated that the general expression for the reaction rate derived by the authors can be reduced to the classical ones via the linear approximation and the high-potential-barrier approximation. The good agreement with the results of Monte Carlo simulation verifies that the reaction rate can be well predicted using the stochastic averaging method.
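
    For reference, the Pontryagin equation for the mean first-passage time of a one-dimensional averaged energy process has the generic form below; the drift and diffusion coefficients are left symbolic, since the paper's explicit expressions are not given in the abstract.

```latex
\tfrac{1}{2}\,\sigma^{2}(E)\,\frac{d^{2}T}{dE^{2}}
  + m(E)\,\frac{dT}{dE} \;=\; -1,
\qquad
k \;\simeq\; \frac{1}{T(E_{0}\!\to\!E_{c})}
```

    Here m(E) and σ²(E) are the drift and diffusion coefficients of the averaged energy equation, T(E) is the mean first-passage time from energy E to the barrier energy E_c, and k is the energy-diffusion-controlled rate obtained as its inverse.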

  12. A new approximate sum rule for bulk alloy properties

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Ferrante, John

    1991-01-01

    A new, approximate sum rule is introduced for determining the bulk properties of multicomponent systems in terms of the properties of the pure components. The expression is applied to the study of the lattice parameters, cohesive energies, and bulk moduli of binary alloys. The correct experimental trends (i.e., departures from average values) are predicted in all cases.

  13. Mode instability in one-dimensional anharmonic lattices: Variational equation approach

    NASA Astrophysics Data System (ADS)

    Yoshimura, K.

    1999-03-01

    The stability of normal mode oscillations has been studied in detail under the single-mode excitation condition for the Fermi-Pasta-Ulam-β lattice. Numerical experiments indicate that the mode stability depends strongly on k/N, where k is the wave number of the initially excited mode and N is the number of degrees of freedom in the system. It has been found that this feature does not change when N increases. We propose an average variational equation, an approximate version of the variational equation, as a theoretical tool to facilitate a linear stability analysis. It is shown that this strong k/N dependence of the mode stability can be explained from the viewpoint of the linear stability of the relevant orbits. We introduce a low-dimensional approximation of the average variational equation, which approximately describes the time evolution of variations in four normal mode amplitudes. The linear stability analysis based on this four-mode approximation demonstrates that the parametric instability mechanism plays a crucial role in the strong k/N dependence of the mode stability.
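
    For reference, the FPU-β lattice studied here is defined by the standard Hamiltonian below (written generically; the boundary conditions and normal-mode conventions follow the paper and are not reproduced).

```latex
H \;=\; \sum_{n}\left[\frac{p_n^{2}}{2}
    + \frac{(q_{n+1}-q_n)^{2}}{2}
    + \frac{\beta}{4}\,(q_{n+1}-q_n)^{4}\right]
```

    The quartic coupling β is what couples the normal modes of the harmonic part and drives the parametric instability discussed above.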

  14. A stochastic flow-capturing model to optimize the location of fast-charging stations with uncertain electric vehicle flows

    DOE PAGES

    Wu, Fei; Sioshansi, Ramteen

    2017-05-04

    Here, we develop a model to optimize the location of public fast-charging stations for electric vehicles (EVs). A difficulty in planning the placement of charging stations is uncertainty in where EV charging demands appear. For this reason, we use a stochastic flow-capturing location model (SFCLM). A sample-average approximation method and an averaged two-replication procedure are used to solve the problem and estimate the solution quality. We demonstrate the use of the SFCLM using a Central Ohio-based case study. We find that most of the stations built are concentrated around the urban core of the region. As the number of stations built increases, some appear on the outskirts of the region to provide an extended charging network. We find that the sets of optimal charging station locations as a function of the number of stations built are approximately nested. We demonstrate the benefits of the charging-station network in terms of how many EVs are able to complete their daily trips by charging midday: six public charging stations allow at least 60% of the EVs that could not otherwise complete their daily tours to do so. We finally compare the SFCLM to a deterministic model, in which EV flows are set equal to their expected values. We show that if a limited number of charging stations are to be built, the SFCLM outperforms the deterministic model. As the number of stations to be built increases, the SFCLM and deterministic model select very similar station locations.
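
    The sample-average approximation step referred to here replaces the expectation over uncertain EV flows with an average over N sampled flow scenarios, turning the stochastic program into a deterministic one that standard solvers can handle; the generic form is shown below (the actual SFCLM objective and constraints are in the paper and are not reproduced).

```latex
\max_{x \in X}\; \mathbb{E}_{\xi}\!\left[f(x,\xi)\right]
\;\;\approx\;\;
\max_{x \in X}\; \frac{1}{N}\sum_{i=1}^{N} f(x,\xi_i),
\qquad \xi_1,\dots,\xi_N \ \text{i.i.d. flow scenarios}
```

    Solving independent replications of the sampled problem and comparing their objective values is, roughly, how the averaged two-replication procedure estimates the optimality gap of the resulting solution.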

  15. Mass fluctuation kinetics: Capturing stochastic effects in systems of chemical reactions through coupled mean-variance computations

    NASA Astrophysics Data System (ADS)

    Gómez-Uribe, Carlos A.; Verghese, George C.

    2007-01-01

    The intrinsic stochastic effects in chemical reactions, and particularly in biochemical networks, may result in behaviors significantly different from those predicted by deterministic mass action kinetics (MAK). Analyzing stochastic effects, however, is often computationally taxing and complex. The authors describe here the derivation and application of what they term the mass fluctuation kinetics (MFK), a set of deterministic equations to track the means, variances, and covariances of the concentrations of the chemical species in the system. These equations are obtained by approximating the dynamics of the first and second moments of the chemical master equation. Apart from needing knowledge of the system volume, the MFK description requires only the same information used to specify the MAK model, and is not significantly harder to write down or apply. When the effects of fluctuations are negligible, the MFK description typically reduces to MAK. The MFK equations are capable of describing the average behavior of the network substantially better than MAK, because they incorporate the effects of fluctuations on the evolution of the means. They also account for the effects of the means on the evolution of the variances and covariances, to produce quite accurate uncertainty bands around the average behavior. The MFK computations, although approximate, are significantly faster than Monte Carlo methods for computing first and second moments in systems of chemical reactions. They may therefore be used, perhaps along with a few Monte Carlo simulations of sample state trajectories, to efficiently provide a detailed picture of the behavior of a chemical system.
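
    As a minimal illustration of the mean-variance coupling described above, consider a single species with one reaction channel of stoichiometry s and propensity a(x); expanding the propensity to second order about the mean gives coupled equations of the following form (a hedged one-species sketch, not the paper's general multi-species MFK equations).

```latex
\frac{d\mu}{dt} \;\approx\; s\!\left[a(\mu) + \tfrac{1}{2}a''(\mu)\,\sigma^{2}\right],
\qquad
\frac{d\sigma^{2}}{dt} \;\approx\; 2\,s\,a'(\mu)\,\sigma^{2}
  + s^{2}\!\left[a(\mu) + \tfrac{1}{2}a''(\mu)\,\sigma^{2}\right]
```

    Setting σ² = 0 in the first equation recovers mass action kinetics; the σ² term is precisely how fluctuations feed back into the evolution of the mean.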

  16. First Year Sedimentological Characteristics and Morphological Evolution of an Artificial Berm at Fort Myers Beach, Florida

    DTIC Science & Technology

    2011-06-17

    In both the control and berm areas, surface sediment samples were taken at approximately the toe of the dune (where present), backbeach, high tide line, mean sea level, low tide line, and 2 ft water depth.

  17. Evaluation of Planning for Fish and Wildlife at Corps of Engineers Reservoirs, Allegheny Reservoir Project, Pennsylvania.

    DTIC Science & Technology

    1982-09-01

    hunters presently reside within known drawing distance of the project area. To this number may be added approximately 64,000 unlicensed children and...approximately 770,000 licensed fishermen and about 260,000 unlicensed children and retired adults who fish. Depending upon the quality of the project...Allegheny National Forest, USFS, Pers. Comm., 1981). Average annual warmwater angling man-day use on Allegheny Lake was estimated at approximately 166,700

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogen, K.T.; Conrado, C.L.; Robison, W.L.

    A detailed analysis of uncertainty and interindividual variability in estimated doses was conducted for a rehabilitation scenario for Bikini Island at Bikini Atoll, in which the top 40 cm of soil would be removed in the housing and village area, and the rest of the island is treated with potassium fertilizer, prior to an assumed resettlement date of 1999. Predicted doses were considered for the following fallout-related exposure pathways: ingested Cesium-137 and Strontium-90, external gamma exposure, and inhalation and ingestion of Americium-241 + Plutonium-239+240. Two dietary scenarios were considered: (1) imported foods are available (IA), and (2) imported foods are unavailable (only local foods are consumed) (IUA). Corresponding calculations of uncertainty in estimated population-average dose showed that after approximately 5 y of residence on Bikini, the upper and lower 95% confidence limits with respect to uncertainty in this dose are estimated to be approximately 2-fold higher and lower than its population-average value, respectively (under both IA and IUA assumptions). Corresponding calculations of interindividual variability in the expected value of dose with respect to uncertainty showed that after approximately 5 y of residence on Bikini, the upper and lower 95% confidence limits with respect to interindividual variability in this dose are estimated to be approximately 2-fold higher and lower than its expected value, respectively (under both IA and IUA assumptions). For reference, the expected values of population-average dose at age 70 were estimated to be 1.6 and 5.2 cSv under the IA and IUA dietary assumptions, respectively. Assuming that 200 Bikini resettlers would be exposed to local foods (under both IA and IUA assumptions), the maximum 1-y dose received by any Bikini resident is most likely to be approximately 2 and 8 mSv under the IA and IUA assumptions, respectively.

  19. Canaveral National Seashore Water Quality and Aquatic Resource Inventory

    NASA Technical Reports Server (NTRS)

    Hall, C. R.; Provancha, J. A.; Oddy, D. M.; Lowers, R. L.; Drese, J. D.

    2001-01-01

    Mosquito Lagoon is a shallow, bar-built estuary located on the east central Florida coast, primarily within the KSC boundary. The lagoon and watershed cover approximately 327 sq km (79,422 acres). The lagoon itself occupies 159 sq km (37,853 acres). Water depths average approximately 1 m. The lagoon volume is approximately 1.6 x 10(exp 8) cu m. Water quality in Mosquito Lagoon is good. Salinity data typically range between 20 ppt and 35 ppt. The lowest value recorded was 4.5 ppt and the highest value was 37 ppt. Water temperatures fluctuate 2 - 3 C over a 24 h period. Cold front passage can rapidly alter water temperatures by 5 - 10 C or more in a short period of time. The highest temperature was 33.4 C and the lowest temperature was 8.8 C after a winter storm. Dissolved oxygen concentrations ranged from a low of 0.4 mg/l to a high of 15.3 mg/l. Extended periods of measurements below the Florida Department of Environmental Protection criterion of 4.0 mg/l were observed in fall and spring months, suggesting high system respiration and oxygen demand. Metals such as antimony, arsenic, molybdenum and mercury were reported as below detection limits for all samples. Cadmium, copper, chromium, silver, and zinc were found to be periodically above the Florida Department of Environmental Protection criteria for Class II and Class III surface waters.

  20. A quenchable superhard carbon phase synthesized by cold compression of carbon nanotubes.

    PubMed

    Wang, Zhongwu; Zhao, Yusheng; Tait, Kimberly; Liao, Xiaozhou; Schiferl, David; Zha, Changsheng; Downs, Robert T; Qian, Jiang; Zhu, Yuntian; Shen, Tongde

    2004-09-21

    A quenchable superhard high-pressure carbon phase was synthesized by cold compression of carbon nanotubes. Carbon nanotubes were placed in a diamond anvil cell, and x-ray diffraction measurements were conducted to pressures of approximately 100 GPa. A hexagonal carbon phase was formed at approximately 75 GPa and preserved at room conditions. X-ray and transmission electron microscopy electron diffraction, as well as Raman spectroscopy at ambient conditions, explicitly indicate that this phase is an sp3-rich hexagonal carbon polymorph, rather than hexagonal diamond. The cell parameters were refined to a0 = 2.496(4) Å, c0 = 4.123(8) Å, and V0 = 22.24(7) Å³. There is a significant density of defects in this nonhomogeneous sample, which contains regions with different stacking faults. In addition to possibly existing amorphous carbon, an average density was estimated to be 3.6 ± 0.2 g/cm³, which is at least comparable to that of diamond (3.52 g/cm³). The bulk modulus was determined to be 447 GPa at fixed K′ ≡ 4, slightly greater than the reported value for diamond of approximately 440-442 GPa. An indented mark, along with radial cracks on the diamond anvils, demonstrates that this hexagonal carbon is a superhard material, at least comparable in hardness to cubic diamond.
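
    Fixing K′ ≡ 4 corresponds to fitting the second-order Birch-Murnaghan equation of state; its standard form is reproduced below as background (the paper's fitting details are not given in the abstract).

```latex
P(V) \;=\; \frac{3}{2}\,K_{0}\!\left[\left(\frac{V_{0}}{V}\right)^{7/3}
      - \left(\frac{V_{0}}{V}\right)^{5/3}\right]
```

    Here K0 is the zero-pressure bulk modulus (447 GPa as reported above) and V0 the zero-pressure unit-cell volume; setting the pressure derivative K′ to 4 is exactly what truncates the third-order Birch-Murnaghan expression to this form.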
