Sample records for typical values measured

  1. Alternative method of quantum state tomography toward a typical target via a weak-value measurement

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Dai, Hong-Yi; Yang, Le; Zhang, Ming

    2018-03-01

Weak-value measurement is usually constrained by the weak-interaction limit. This constraint dominates the performance of quantum state tomography toward a typical target in a finite, high-dimensional complex-valued superposition of its basis states, especially when compressive sensing is also employed. Here we propose an alternative method of quantum state tomography, presented as a general model, toward such a typical target via weak-value measurement to overcome this limitation. In this model the pointer for the weak-value measurement is a qubit, the target-pointer coupling interaction no longer needs to satisfy the weak-interaction limit, and under compressive sensing this interaction can be described by a Taylor series of the unitary evolution operator. The postselection state of the target is the equal superposition of all basis states, and the pointer readouts are gathered under multiple Pauli-operator measurements. The reconstructed quantum state is generated by an optimization algorithm, the total-variation augmented Lagrangian alternating direction algorithm. Furthermore, we demonstrate an example of this general model for quantum state tomography of a planar laser-energy distribution and discuss the relations among parameters in both our general model and the original first-order approximate model for this tomography.

  2. USE OF METHOD DETECTION LIMITS IN ENVIRONMENTAL MEASUREMENTS

    EPA Science Inventory

    Environmental measurements often produce values below the method detection limit (MDL). Because low or zero values may be used in determining compliance with regulatory limits, in determining emission factors (typical concentrations emitted by a given type of source), or in model...

  3. Asymptotic Equivalence of Probability Measures and Stochastic Processes

    NASA Astrophysics Data System (ADS)

    Touchette, Hugo

    2018-03-01

    Let P_n and Q_n be two probability measures representing two different probabilistic models of some system (e.g., an n-particle equilibrium system, a set of random graphs with n vertices, or a stochastic process evolving over a time n) and let M_n be a random variable representing a "macrostate" or "global observable" of that system. We provide sufficient conditions, based on the Radon-Nikodym derivative of P_n and Q_n, for the set of typical values of M_n obtained relative to P_n to be the same as the set of typical values obtained relative to Q_n in the limit n→ ∞. This extends to general probability measures and stochastic processes the well-known thermodynamic-limit equivalence of the microcanonical and canonical ensembles, related mathematically to the asymptotic equivalence of conditional and exponentially-tilted measures. In this more general sense, two probability measures that are asymptotically equivalent predict the same typical or macroscopic properties of the system they are meant to model.
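
    A schematic rendering of the condition described (the notation below is illustrative, not quoted from the paper):

```latex
% Schematic form of the equivalence condition: if the log-likelihood
% ratio of P_n and Q_n grows subexponentially in n,
\lim_{n\to\infty} \frac{1}{n} \ln \frac{\mathrm{d}P_n}{\mathrm{d}Q_n} = 0
\quad\text{(in probability),}
% then the typical values of the macrostate M_n agree in the limit:
\{\text{typical values of } M_n \text{ under } P_n\}
= \{\text{typical values of } M_n \text{ under } Q_n\},
\qquad n \to \infty.
```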

  4. The Physician Values in Practice Scale: Construction and Initial Validation

    ERIC Educational Resources Information Center

    Hartung, Paul J.; Taber, Brian J.; Richard, George V.

    2005-01-01

    Measures of values typically appraise the construct globally, across life domains or relative to a broad life domain such as work. We conducted two studies to construct and initially validate an occupation- and context-specific values measure. Study 1, based on a sample of 192 medical students, describes the initial construction and item analysis…

  5. The dimension of chaotic attractors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farmer, J.D.; Ott, E.; Yorke, J.A.

    Dimension is perhaps the most basic property of an attractor. In this paper we discuss a variety of different definitions of dimension, compute their values for a typical example, and review previous work on the dimension of chaotic attractors. The relevant definitions of dimension are of two general types, those that depend only on metric properties, and those that depend on probabilistic properties (that is, they depend on the frequency with which a typical trajectory visits different regions of the attractor). Both our example and the previous work that we review support the conclusion that all of the probabilistic dimensions take on the same value, which we call the dimension of the natural measure, and all of the metric dimensions take on a common value, which we call the fractal dimension. Furthermore, the dimension of the natural measure is typically equal to the Lyapunov dimension, which is defined in terms of Lyapunov numbers, and thus is usually far easier to calculate than any other definition. Because it is computable and more physically relevant, we feel that the dimension of the natural measure is more important than the fractal dimension.
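
    The Lyapunov dimension mentioned above is defined from the Lyapunov exponents (the logarithms of the Lyapunov numbers); a minimal sketch of the Kaplan-Yorke construction, with illustrative exponent values not taken from the paper:

```python
# Sketch: Lyapunov (Kaplan-Yorke) dimension from a spectrum of Lyapunov
# exponents. The exponent values used below are illustrative.

def lyapunov_dimension(exponents):
    """D_L = k + (sum of the k largest exponents) / |lambda_{k+1}|,
    where k is the largest index keeping the partial sum non-negative."""
    lam = sorted(exponents, reverse=True)
    partial = 0.0
    for k, l in enumerate(lam):
        if partial + l < 0:
            # partial holds the sum of the first k exponents
            return k + partial / abs(l)
        partial += l
    return float(len(lam))  # sum never goes negative: full phase-space dim

# Illustrative 3-D chaotic flow with exponents (+, 0, -):
print(lyapunov_dimension([0.9, 0.0, -14.6]))  # ≈ 2.06
```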

  6. Examining the reinforcing value of stimuli within social and non-social contexts in children with and without high-functioning autism.

    PubMed

    Goldberg, Melissa C; Allman, Melissa J; Hagopian, Louis P; Triggs, Mandy M; Frank-Crawford, Michelle A; Mostofsky, Stewart H; Denckla, Martha B; DeLeon, Iser G

    2017-10-01

    One of the key diagnostic criteria for autism spectrum disorder includes impairments in social interactions. This study compared the extent to which boys with high-functioning autism and typically developing boys "value" engaging in activities with a parent or alone. Two different assessments that can empirically determine the relative reinforcing value of social and non-social stimuli were employed: paired-choice preference assessments and progressive-ratio schedules. There were no significant differences between boys with high-functioning autism and typically developing boys on either measure. Moreover, there was a strong correspondence in performance across these two measures for participants in each group. These results suggest that the relative reinforcing value of engaging in activities with a primary caregiver is not diminished for children with autism spectrum disorder.

  7. Examining the reinforcing value of stimuli within social and non-social contexts in children with and without high-functioning autism

    PubMed Central

    Goldberg, Melissa C; Allman, Melissa J; Hagopian, Louis P; Triggs, Mandy M; Frank-Crawford, Michelle A; Mostofsky, Stewart H; Denckla, Martha B; DeLeon, Iser G

    2018-01-01

    One of the key diagnostic criteria for autism spectrum disorder includes impairments in social interactions. This study compared the extent to which boys with high-functioning autism and typically developing boys “value” engaging in activities with a parent or alone. Two different assessments that can empirically determine the relative reinforcing value of social and non-social stimuli were employed: paired-choice preference assessments and progressive-ratio schedules. There were no significant differences between boys with high-functioning autism and typically developing boys on either measure. Moreover, there was a strong correspondence in performance across these two measures for participants in each group. These results suggest that the relative reinforcing value of engaging in activities with a primary caregiver is not diminished for children with autism spectrum disorder. PMID:27368350

  8. Solar UV radiation exposure of seamen - Measurements, calibration and model calculations of erythemal irradiance along ship routes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feister, Uwe; Meyer, Gabriele; Kirst, Ulrich

    2013-05-10

    Seamen working on vessels that travel tropical and subtropical routes are at risk of receiving high doses of solar erythemal radiation. Due to small solar zenith angles and low ozone values, the UV index and erythemal dose are much higher than at mid- and high latitudes. UV index values over tropical and subtropical oceans can exceed UVI = 20, more than double typical mid-latitude values, and the daily erythemal dose can exceed 30 times typical mid-latitude winter values. Measurements of erythemal exposure of different body parts of seamen have been performed along 4 routes of merchant vessels. The database has been extended by two years of continuous solar irradiance measurements taken at the mast top of RV METEOR. Radiative transfer model calculations for clear sky along the ship routes, using satellite-based input for ozone and aerosols, have been performed to provide maximum erythemal irradiance and dose. The whole database is intended to be used to derive the individual erythemal exposure of seamen during work time.
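
    The UV index figures above relate to erythemal irradiance via the standard WHO/WMO definition, UVI = 40 × erythemally weighted irradiance in W/m²; a sketch (the exposure duration and comparison values are illustrative, not from the study):

```python
# Sketch: UV index <-> erythemal irradiance, using the standard definition
# UVI = 40 * erythemal irradiance (W/m^2). Exposure values illustrative.

def uvi_to_erythemal_irradiance(uvi):
    """Erythemally weighted irradiance in W/m^2 for a given UV index."""
    return uvi / 40.0

def erythemal_dose(irradiance_w_m2, seconds):
    """Erythemal dose in J/m^2 for constant irradiance over a duration."""
    return irradiance_w_m2 * seconds

# Tropical-ocean UVI of 20 (as cited above) vs. a mid-latitude UVI of 8:
tropical = uvi_to_erythemal_irradiance(20)   # 0.5 W/m^2
midlat = uvi_to_erythemal_irradiance(8)      # 0.2 W/m^2
# One hour at UVI 20, in standard erythemal doses (1 SED = 100 J/m^2):
print(erythemal_dose(tropical, 3600) / 100)  # 18.0 SED
```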

  9. Teacher Evaluations: Use or Misuse?

    ERIC Educational Resources Information Center

    Warring, Douglas F.

    2015-01-01

    This manuscript examines value added measures used in teacher evaluations. The evaluations are often based on limited observations and use student growth as measured by standardized tests. These measures typically do not use multiple measures or consider other factors in the teaching and learning process. This manuscript identifies some of the…

  10. Characterization of SWIR cameras by MRC measurements

    NASA Astrophysics Data System (ADS)

    Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.

    2014-05-01

    Cameras for the SWIR wavelength range are becoming more and more important because of their better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera, or to qualify a camera for a given application, characterization by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera, the achievable observation range can be calculated for every combination of target size, illumination level and weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines for MRC measurements of a visual camera. Typically, measurements are performed with a set of resolution targets (e.g. the USAF 1951 target) manufactured with contrast values from 50% down to less than 1%. For a given illumination level, the achievable spatial resolution is measured for each target. The resulting curve shows the minimum contrast necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first, the irradiation parameters have to be given in radiometric rather than photometric units, which are limited in their use to the visible range; to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source with appropriate emission in the SWIR range (e.g. an incandescent lamp) is necessary, and the irradiance has to be measured in W/m2 instead of lux (lumen/m2). Third, the contrast values of the targets have to be recalibrated for the SWIR range, because they typically differ from the values determined for the visible range. Measured MRC values of three cameras are compared to the specified performance data of the devices, and the results of a multi-band, in-house designed Vis-SWIR camera system are discussed.
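
    The MRC usage described above can be sketched numerically: given measured (spatial frequency, minimum resolvable contrast) pairs, the largest resolvable frequency for a target of known contrast is read off the curve. The curve values and the linear interpolation scheme below are illustrative assumptions, not data from the paper.

```python
# Sketch: reading a maximum resolvable spatial frequency off an MRC curve.
# The sample curve is illustrative, not measured data.

def max_resolvable_frequency(mrc_curve, target_contrast):
    """Linearly interpolate the frequency at which the MRC equals the
    target's contrast; mrc_curve is [(frequency, contrast), ...] with
    contrast increasing with frequency."""
    for (f0, c0), (f1, c1) in zip(mrc_curve, mrc_curve[1:]):
        if c0 <= target_contrast <= c1:
            return f0 + (f1 - f0) * (target_contrast - c0) / (c1 - c0)
    return None  # target contrast outside the measured range

# Illustrative curve: cycles/mrad vs. minimum contrast (1% .. 50%).
curve = [(1.0, 0.01), (3.0, 0.05), (5.0, 0.20), (6.0, 0.50)]
print(max_resolvable_frequency(curve, 0.10))  # ≈ 3.67 cycles/mrad
```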

  11. Diamagnetic Corrections and Pascal's Constants

    ERIC Educational Resources Information Center

    Bain, Gordon A.; Berry, John F.

    2008-01-01

    Measured magnetic susceptibilities of paramagnetic substances must typically be corrected for their underlying diamagnetism. This correction is often accomplished by using tabulated values for the diamagnetism of atoms, ions, or whole molecules. These tabulated values can be problematic since many sources contain incomplete and conflicting data.…

  12. 40 CFR 1065.307 - Linearity verification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... measurement (such as a scale, balance, or mass comparator) at the inlet to the fuel-measurement system. Use a... nitrogen. Select gas divisions that you typically use. Use a selected gas division as the measured value.... (9) Mass. For linearity verification for gravimetric PM balances, use external calibration weights...
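
    The linearity verification referenced in this regulation compares measured values against reference values by least-squares regression; §1065.307 evaluates the intercept a0, slope a1, standard error of the estimate (SEE), and r² against per-quantity tolerances (not reproduced here). A sketch of those statistics, with illustrative data:

```python
# Sketch: regression statistics of the kind used in a 1065.307-style
# linearity verification. Data and pass/fail limits are illustrative.
import math

def linearity_stats(ref, meas):
    """Return (a0, a1, SEE, r2) for measured vs. reference values."""
    n = len(ref)
    mx, my = sum(ref) / n, sum(meas) / n
    sxx = sum((x - mx) ** 2 for x in ref)
    sxy = sum((x - mx) * (y - my) for x, y in zip(ref, meas))
    a1 = sxy / sxx                      # slope
    a0 = my - a1 * mx                   # intercept
    resid = [y - (a0 + a1 * x) for x, y in zip(ref, meas)]
    see = math.sqrt(sum(r * r for r in resid) / (n - 2))
    syy = sum((y - my) ** 2 for y in meas)
    r2 = 1 - sum(r * r for r in resid) / syy
    return a0, a1, see, r2

# Illustrative 5-point check of a flow meter against a reference:
a0, a1, see, r2 = linearity_stats([0, 25, 50, 75, 100],
                                  [0.1, 25.2, 49.9, 75.3, 100.0])
```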

  13. Potassium dichromate method for the study of typical organic compounds in coal gasification wastewater

    NASA Astrophysics Data System (ADS)

    Quan, Jiankang; Qu, Guangfei; Dong, Zhanneng; Lu, Pei; Cai, Yingying; Wang, Shibo

    2017-05-01

    The national standard method, potassium dichromate digestion spectrophotometry, was adopted for determination of chemical oxygen demand (CODCr), with coal gasification wastewater measured after ultrasonic treatment. Using the control-variable method, the change in the CODCr value of the wastewater under ultrasound was measured at different solution pH values, ultrasonic frequencies, ultrasonic powers, and initial solution concentrations. The experimental results showed fluctuations in the measured data. To explain this phenomenon, the organic composition of the wastewater before and after ultrasonic treatment was qualitatively analyzed by combined high-performance liquid chromatography and mass spectrometry. Raw water samples were analyzed by chromatography-mass spectrometry (GC/MS) and, from the substances represented by each peak in the spectra, typical organic substances in coal gasification wastewater were selected and individually digested to measure their equivalent CODCr values. Because coal gasification wastewater contains high concentrations of organic matter, the national standard method cannot fully oxidize the organic material, so the measured CODCr value is lower than the actual CODCr value. The experiments indicate that the effect of ultrasound [9-13] is to promote the rupture of complex organic chains, which also explains the fluctuations observed in the measured data.

  14. Collisional Shift and Broadening of Iodine Spectral Lines in Air Near 543 nm

    NASA Technical Reports Server (NTRS)

    Fletcher, D. G.; McDaniel, J. C.

    1995-01-01

    The collisional processes that influence the absorption of monochromatic light by iodine in air have been investigated. Measurements were made in both a static cell and an underexpanded jet flow over the range of properties encountered in typical compressible-flow aerodynamic applications. Experimentally measured values of the collisional shift and broadening coefficients were 0.058 +/- 0.004 and 0.53 +/- 0.010 GHz K(exp 0.7)/torr, respectively. The measured shift value showed reasonable agreement with theoretical calculations based on Lindholm-Foley collisional theory for a simple dispersive potential. The measured collisional broadening showed less favorable agreement with the calculated value.
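
    The quoted coefficient units (GHz K^0.7/torr) imply a p/T^0.7 scaling for the shift and width; a sketch evaluating both at a sample condition (the scaling convention is inferred from the units, and the cell condition is illustrative):

```python
# Sketch: collisional shift and broadening from the quoted coefficients
# (0.058 and 0.53 GHz K^0.7/torr). The p/T^0.7 scaling is inferred from
# the units, and the sample condition is an assumption.

SHIFT_COEFF = 0.058   # GHz K^0.7 / torr
BROADEN_COEFF = 0.53  # GHz K^0.7 / torr

def collisional_ghz(coeff, pressure_torr, temperature_k):
    """Shift or broadening in GHz implied by the quoted coefficient units."""
    return coeff * pressure_torr / temperature_k ** 0.7

# Illustrative room-temperature static cell at 100 torr of air:
shift = collisional_ghz(SHIFT_COEFF, 100.0, 295.0)
width = collisional_ghz(BROADEN_COEFF, 100.0, 295.0)
```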

  15. [Verified maximum admissible intensity (MAI) values for the ultrasonic noise in work environment].

    PubMed

    Pawlaczyk-Łuszcyńska, M; Koton, J; Augustyńska, D; Sliwińska-Kowalska, M; Kameduła, M

    2001-01-01

    The measurement methods and occupational exposure limits for ultrasonic noise (airborne ultrasound) are described. Typical sources of ultrasonic noise and sound pressure levels measured at workplaces are discussed. The verified Polish regulations on maximum admissible intensity (MAI) values for ultrasonic noise in the work environment and proposals of exposure limits for workers at particular risk (i.e. pregnant women and juveniles) are presented.

  16. Initial study of deep inelastic scattering with ZEUS at HERA

    NASA Astrophysics Data System (ADS)

    Derrick, M.; Krakauer, D.; Magill, S.; Musgrave, B.; Repond, J.; Repond, S.; Stanek, R.; Talaga, R. L.; Thron, J.; Arzarello, F.; Ayad, R.; Barbagli, G.; Bari, G.; Basile, M.; Bellagamba, L.; Boscherini, D.; Bruni, A.; Bruni, G.; Bruni, P.; Cara Romeo, G.; Castellini, G.; Chiarini, M.; Cifarelli, L.; Cindolo, F.; Ciralli, F.; Contin, A.; D'Auria, S.; del Papa, C.; Frasconi, F.; Giusti, P.; Iacobucci, G.; Laurenti, G.; Levi, G.; Lin, Q.; Lisowski, B.; Maccarrone, G.; Margotti, A.; Massam, T.; Nania, R.; Nemoz, C.; Palmonari, F.; Sartorelli, G.; Timellini, R.; Zamora Garcia, Y.; Zichichi, A.; Bargende, A.; Crittenden, J.; Dabbous, H.; Desch, K.; Diekmann, B.; Doeker, T.; Geerts, M.; Geitz, G.; Gutjahr, B.; Hartmann, H.; Hartmann, J.; Haun, D.; Heinloth, K.; Hilger, E.; Jakob, H.-P.; Kramarczyk, S.; Kückes, M.; Mass, A.; Mengel, S.; Mollen, J.; Monaldi, D.; Müsch, H.; Paul, E.; Schattevoy, R.; Schneider, J.-L.; Wedemeyer, R.; Cassidy, A.; Cussans, D. G.; Dyce, N.; Fawcett, H. F.; Foster, B.; Gilmore, R.; Heath, G. P.; Lancaster, M.; Llewellyn, T. J.; Malos, J.; Morgado, C. J. S.; Tapper, R. J.; Wilson, S. S.; Rau, R. R.; Barillari, T.; Schioppa, M.; Susinno, G.; Bernstein, A.; Caldwell, A.; Gialas, I.; Parsons, J. A.; Ritz, S.; Sciulli, F.; Straub, P. B.; Wai, L.; Yang, S.; Burkot, W.; Eskreys, A.; Piotrzkowski, K.; Zachara, M.; Zawiejski, L.; Borzemski, P.; Jeleń, K.; Kisielewska, D.; Kowalski, T.; Rulikowska-Zerȩbska, E.; Suszycki, L.; Zajc, J.; Kȩdzierski, T.; Kotański, A.; Przybycień, M.; Bauerdick, L. A. T.; Behrens, U.; Bienlein, J. K.; Coldewey, C.; Dannemann, A.; Dierks, K.; Dorth, W.; Drews, G.; Erhard, P.; Flasiński, M.; Fleck, I.; Fürtjes, A.; Gläser, R.; Göttlicher, P.; Hass, T.; Hagge, L.; Hain, W.; Hasell, D.; Hultschig, H.; Jahnen, G.; Joos, P.; Kasemann, M.; Klanner, R.; Koch, W.; Kötz, U.; Kowalski, H.; Labs, J.; Ladage, A.; Löhr, B.; Lüke, D.; Mainusch, J.; Manczak, O.; Momayezi, M.; Ng, J. S. T.; Nicel, S.; Notz, D.; Park, I. 
H.; Pösnecker, K.-U.; Rohde, M.; Ros, E.; Schneekloth, S.; Schroeder, J.; Schulz, W.; Selonke, F.; Stiliaris, E.; Tscheslog, E.; Tsurugai, T.; Turkot, F.; Vogel, W.; Woeniger, T.; Wolf, G.; Youngman, C.; Grabosch, H. J.; Leich, A.; Meyer, A.; Rethfeldt, C.; Schlensthdt, S.; Casalbuoni, R.; de Curtis, S.; Dominici, D.; Francescato, A.; Nuti, M.; Pelfer, P.; Anzivino, G.; Casaccia, R.; de Pasquale, S.; Qian, S.; Votano, L.; Bamberger, A.; Freidhof, A.; Poser, T.; Söldner-Rembold, S.; Theisen, G.; Trefzger, T.; Brook, N. H.; Bussey, P. J.; Doyle, A. T.; Forbes, J. R.; Jamieson, V. A.; Raine, C.; Saxon, D. H.; Brückmann, H.; Gloth, G.; Holm, U.; Kammerdocher, H.; Krebs, B.; Neumann, T.; Wick, K.; Hofmann, A.; Kröger, W.; Krüger, J.; Lohrmann, E.; Milewski, J.; Nakahata, M.; Pavel, N.; Poelz, G.; Salomon, R.; Seidman, A.; Schott, W.; Wiik, B. H.; Zetsche, F.; Bacon, T. C.; Butterworth, I.; Markou, C.; McQuillan, D.; Miller, D. B.; Mobayyen, M. M.; Prinias, A.; Vorvolakos, A.; Bienz, T.; Kreutzmann, H.; Mallik, U.; McCliment, E.; Roco, M.; Wang, M. Z.; Cloth, P.; Filges, D.; Chen, L.; Imlay, R.; Kartik, S.; Kim, H.-J.; McNeil, R. R.; Metcalf, W.; Barreiro, F.; Cases, G.; Hervás, L.; Labarga, L.; del Peso, J.; Roldán, J.; Terrón, J.; de Trocóniz, J. F.; Ikraiam, F.; Mayer, J. K.; Smith, G. R.; Corriveau, F.; Gilkinson, D. J.; Hanna, D. S.; Hung, L. W.; Mitchell, J. W.; Patel, P. M.; Sinclair, L. E.; Stairs, D. G.; Ullmann, R.; Bashindzhagyan, G. L.; Ermolov, P. F.; Golubkov, Y. A.; Kuzmin, V. A.; Kuznetsov, E. N.; Savin, A. A.; Voronin, A. G.; Zotov, N. P.; Bentvelsen, S.; Dake, A.; Engelen, J.; de Jong, P.; de Jong, S.; de Kamps, M.; Kooijman, P.; Kruse, A.; van der Lugt, H.; O'dell, V.; Straver, J.; Tenner, A.; Tiecke, H.; Uijterwaal, H.; Vermeulen, J.; Wiggers, L.; de Wolf, E.; van Woudenberg, R.; Yoshida, R.; Bylsma, B.; Durkin, L. S.; Li, C.; Ling, T. Y.; McLean, K. W.; Murray, W. N.; Park, S. K.; Romanowski, T. A.; Seidlein, R.; Blair, G. A.; Butterworth, J. 
M.; Byrne, A.; Cashmore, R. J.; Cooper-Sarkar, A. M.; Devenish, R. C. E.; Gingrich, D. M.; Hallam-Baker, P. M.; Harnew, N.; Khatri, T.; Long, K. R.; Luffman, P.; McArthur, I.; Morawitz, P.; Nash, J.; Smith, S. J. P.; Roocroft, N. C.; Wilson, F. F.; Abbiendi, G.; Brugnera, R.; Carlin, R.; dal Corso, F.; de Giorgi, M.; Dosselli, U.; Gasparini, F.; Limentani, S.; Morandin, M.; Posocco, M.; Stanco, L.; Stroili, R.; Voci, C.; Field, G.; Lim, J. N.; Oh, B. Y.; Whitmore, J.; Contino, U.; D'Agostini, G.; Guida, M.; Iori, M.; Mari, S. M.; Marini, G.; Mattioli, M.; Nigro, A.; Hart, J. C.; McCubbin, N. A.; Shah, T. P.; Short, T. L.; Barberis, E.; Cartiglia, N.; Heusch, C.; Hubbard, B.; Leslie, J.; O'Shaughnessy, K.; Sadrozinski, H. F.; Seiden, A.; Badura, E.; Biltzinger, J.; Chaves, H.; Rost, M.; Seifert, R. J.; Walenta, A. H.; Weihs, W.; Zech, G.; Dagan, S.; Levy, A.; Zer-Zion, D.; Hasegawa, T.; Hazumi, M.; Ishii, T.; Kasai, S.; Kuze, M.; Nagasawa, Y.; Nakao, M.; Okuno, H.; Tokushuku, K.; Watanabe, T.; Yamada, S.; Chiba, M.; Hamatsu, R.; Hirose, T.; Kitamura, S.; Nagayama, S.; Nakamitsu, Y.; Arneodo, M.; Costa, M.; Ferrero, M. I.; Lamberti, L.; Maselli, S.; Peroni, C.; Solano, A.; Staiano, A.; Dardo, M.; Bailey, D. C.; Bandyopadhyay, D.; Benard, F.; Bhadra, S.; Brkic, M.; Burow, B. D.; Chlebana, F. S.; Crombie, M. B.; Hartner, G. F.; Levman, G. M.; Martin, J. F.; Orr, R. S.; Prentice, J. D.; Sampson, C. R.; Stairs, G. G.; Teuscher, R. J.; Yoon, T.-S.; Bullock, F. W.; Catterall, C. D.; Giddings, J. C.; Jones, T. W.; Khan, A. M.; Lane, J. B.; Makkar, P. L.; Shaw, D.; Shulman, J.; Blankenship, K.; Gibaut, D. B.; Kochocki, J.; Lu, B.; Mo, L. W.; Charchula, K.; Ciborowski, J.; Gajewski, J.; Grzelak, G.; Kasprzak, M.; Krzyżanowski, M.; Muchorowski, K.; Nowak, R. J.; Pawlak, J. M.; Stojda, K.; Stopczyński, A.; Szwed, R.; Tymieniecka, T.; Walczak, R.; Wróblewski, A. K.; Zakrzewski, J. A.; Zarnecki, A. 
F.; Adamus, M.; Abramowicz, H.; Eisenberg, Y.; Glasman, C.; Karshon, U.; Montag, A.; Revel, D.; Shapira, A.; Ali, I.; Behrens, B.; Camerini, U.; Dasu, S.; Fordham, C.; Foudas, C.; Goussiou, A.; Lomperski, M.; Loveless, R. J.; Nylander, P.; Ptacek, M.; Reeder, D. D.; Smith, W. H.; Silverstein, S.; Frisken, W. R.; Furutani, K. M.; Iga, Y.

    1993-04-01

    Results are presented on neutral current, deep inelastic scattering measured in collisions of 26.7 GeV electrons and 820 GeV protons. The events typically populate a range in Q² from 10 to 100 GeV². The values of x extend down to x ≈ 10⁻⁴, which is two orders of magnitude lower than previously measured at such Q² values in fixed target experiments. The measured cross sections are in accord with the extrapolations of current parametrisations of parton distributions.

  17. Variability in δ13C values between individual Daphnia ephippia: Implications for palaeo-studies

    NASA Astrophysics Data System (ADS)

    Schilder, Jos; van Roij, Linda; Reichart, Gert-Jan; Sluijs, Appy; Heiri, Oliver

    2018-06-01

    The stable carbon isotope ratio (δ13C value) of Daphnia spp. resting egg shells (ephippia) provides information on past changes in Daphnia diet. Measurements are typically performed on samples of ≥20 ephippia, which obscures the range of values associated with individual ephippia. Using a recently developed laser ablation-based technique, we perform multiple δ13C analyses on individual ephippia, which show a high degree of reproducibility (standard deviations 0.1-0.5‰). We further measured δ13C values of 13 ephippia from surface sediments of three Swiss lakes. In the well-oxygenated lake with low methane concentrations, δ13C values are close to values typical for algae (-31.4‰) and the range in values is relatively small (5.8‰). This variability is likely driven by seasonal (or inter-annual) variability in algae δ13C values. In two seasonally anoxic lakes with higher methane concentrations, average values were lower (-41.4 and -43.9‰, respectively) and the ranges much larger (10.7 and 20.0‰). We attribute this variability to seasonal variation in incorporation of methane-derived carbon. In one lake we identify two statistically distinct isotopic populations, which may reflect separate production peaks. The potentially large within-sample variability should be considered when interpreting small-amplitude, short-lived isotope excursions based on samples consisting of few ephippia. We show that measurements on single ephippia can be performed using laser ablation, which allows for refined assessments of past Daphnia diet and carbon cycling in lake food webs. Furthermore, our study provides a basis for similar measurements on other chitinous remains (e.g. from chironomids, bryozoans).

  18. Prediction and typicality in multiverse cosmology

    NASA Astrophysics Data System (ADS)

    Azhar, Feraz

    2014-02-01

    In the absence of a fundamental theory that precisely predicts values for observable parameters, anthropic reasoning attempts to constrain probability distributions over those parameters in order to facilitate the extraction of testable predictions. The utility of this approach has been vigorously debated of late, particularly in light of theories that claim we live in a multiverse, where parameters may take differing values in regions lying outside our observable horizon. Within this cosmological framework, we investigate the efficacy of top-down anthropic reasoning based on the weak anthropic principle. We argue contrary to recent claims that it is not clear one can either dispense with notions of typicality altogether or presume typicality, in comparing resulting probability distributions with observations. We show in a concrete, top-down setting related to dark matter, that assumptions about typicality can dramatically affect predictions, thereby providing a guide to how errors in reasoning regarding typicality translate to errors in the assessment of predictive power. We conjecture that this dependence on typicality is an integral feature of anthropic reasoning in broader cosmological contexts, and argue in favour of the explicit inclusion of measures of typicality in schemes invoking anthropic reasoning, with a view to extracting predictions from multiverse scenarios.

  19. Pressure and Thrust Measurements of a High-Frequency Pulsed-Detonation Actuator

    NASA Technical Reports Server (NTRS)

    Nguyen, Namtran C.; Cutler, Andrew D.

    2008-01-01

    This paper describes the development of a small-scale, high-frequency pulsed detonation actuator. The device utilized a fuel mixture of H2 and air, which was injected into the device at frequencies of up to 1200 Hz. Pulsed detonations were demonstrated in an 8-inch long combustion volume, at approx. 600 Hz, for the lambda/4 mode. The primary objective of this experiment was to measure the generated thrust. A mean value of thrust was measured up to 6.0 lb, corresponding to a specific impulse of 2611 s. This value is comparable to other H2-fueled pulsed detonation engine (PDE) experiments. The injection and detonation frequency for this new experimental case was approx. 600 Hz, much higher than in typical PDEs, where frequencies are usually less than 100 Hz. The compact size of the model and high frequency of detonation yield a thrust-per-unit-volume of approximately 2.0 lb/cu in, which compares favorably with other experiments, which typically have thrust-per-unit-volume values of approximately 0.01 lb/cu in.
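
    The figures of merit quoted above can be checked against the standard definitions Isp = F/(ṁ·g0) and thrust-per-unit-volume = F/V. In the sketch below, the 3.0 cu in device volume is back-calculated from the quoted 6.0 lb thrust and ~2.0 lb/cu in, and is an assumption rather than a number from the paper:

```python
# Sketch: checking the quoted figures of merit. Isp = F / (mdot * g0);
# the 3.0 cu in volume is back-calculated, not taken from the paper.

G0 = 9.80665                 # standard gravity, m/s^2
LBF_TO_N = 4.4482216152605   # pound-force to newtons

def mdot_from_isp(thrust_n, isp_s):
    """Propellant mass flow (kg/s) implied by thrust and specific impulse."""
    return thrust_n / (isp_s * G0)

thrust_n = 6.0 * LBF_TO_N               # 6.0 lbf mean thrust (abstract)
mdot = mdot_from_isp(thrust_n, 2611.0)  # flow implied by Isp = 2611 s
thrust_per_volume = 6.0 / 3.0           # lb/cu in, assuming a 3.0 cu in volume
```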

  20. Impact of El Niño and La Niña on SeaWiFS, MODIS-A and VIIRS Chlorophyll-a Measurements Along the Equator During 1997 to 2016

    NASA Astrophysics Data System (ADS)

    Halpern, D.; Franz, B. A.; Kuring, N. A.

    2016-12-01

    The Ocean Biology Processing Group at NASA's GSFC recently reprocessed satellite ocean color measurements (SeaWiFS, MODIS-A and VIIRS) to improve accuracy and enhance time-series interoperability and consistency between multi-mission datasets. We chose the 1°S-1°N region along the equator to examine the behavior of Chl-a in El Niño and La Niña events because this latitudinal width represented the scale of Ekman upwelling, which is hypothesized to be a primary mechanism of Chl-a variations along the equator. An El Niño (La Niña) event has five consecutive 3-month-average sea surface temperature anomalies (SSTAs) greater (less) than 0.5°C in the 5°S-5°N, 170°W-120°W region and a super El Niño event occurs when SSTA is greater than 2.0°C. The September 1997 (onset of SeaWiFS data) to July 2016 period contained two super El Niño events, four typical El Niño events and four La Niña events. In the equatorial Pacific Ocean from 135°E (longitude of the westernmost data) to 150°E, the average typical El Niño and La Niña values were approximately the same (0.13 mg m-3). From 150°E to 165°W, the approximate bowl-shaped longitudinal pattern of Chl-a data in the average typical El Niño reached minimum (0.08 mg m-3) at 170°E and then increased to a relatively uniform value of 0.20 mg m-3 from 160°W to the Galapagos, where Chl-a reached 0.45 mg m-3. Eastward from 150°E, Chl-a values in the average typical La Niña increased approximately linearly to 0.21 mg m-3 at 170°E, where Chl-a was 175% larger than that in the average typical El Niño. Chl-a values in the average typical La Niña were approximately 0.22 mg m-3 until the Galapagos, where values reached 0.55 mg m-3. Average Chl-a values in the super El Niño event in 2015-2016 were similar to those associated with the average typical El Niño, but the bottom of the bowl-shaped pattern was shallower and wider. 
However, the longitudinal pattern of Chl-a in the super El Niño of 1997-1998 differed significantly from the patterns of the average typical El Niño and super El Niño of 2015-2016. Also, Chl-a distributions in the Atlantic and Indian oceans will be described. Correlations between satellite surface wind vector measurements and Chl-a in El Niño and La Niña were not always consistent with the hypothesis of the important contribution of Ekman upwelling and will be discussed.
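
    The event definition used above (five consecutive 3-month-average SSTAs beyond ±0.5 °C, with super events exceeding 2.0 °C) can be sketched as a simple classifier; the anomaly series below are illustrative, not observed data:

```python
# Sketch: the El Nino / La Nina event definition quoted above, applied to
# a series of 3-month-average SST anomalies (deg C). Data illustrative.

def classify(sst_anomalies, run=5):
    """Return 'super el nino', 'el nino', 'la nina', or None."""
    for i in range(len(sst_anomalies) - run + 1):
        window = sst_anomalies[i:i + run]
        if all(a > 0.5 for a in window):
            # super event: the anomaly exceeds 2.0 deg C during the run
            return "super el nino" if max(window) > 2.0 else "el nino"
        if all(a < -0.5 for a in window):
            return "la nina"
    return None

print(classify([0.6, 0.9, 1.4, 2.3, 2.1, 1.2]))  # super el nino
print(classify([-0.6, -0.7, -0.9, -0.8, -0.6]))  # la nina
```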

  1. Measuring Effect Sizes: The Effect of Measurement Error. Working Paper 19

    ERIC Educational Resources Information Center

    Boyd, Donald; Grossman, Pamela; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2008-01-01

    Value-added models in education research allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. Researchers typically quantify the impacts of such interventions in terms of "effect sizes", i.e., the estimated effect of a one standard deviation change in the…
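
    The interaction of effect sizes with measurement error can be illustrated with the classical psychometric attenuation relation: when outcomes are measured with reliability ρ, an effect of d true-score standard deviations appears as d·√ρ in observed-score standard deviations. A sketch of that textbook relation (not necessarily the authors' model):

```python
# Sketch: classical attenuation of a standardized effect size by test
# unreliability. Numbers are illustrative.
import math

def observed_effect(d_true, reliability):
    """Effect in observed-score SD units, given true-score effect d_true
    and outcome reliability in [0, 1]."""
    return d_true * math.sqrt(reliability)

print(round(observed_effect(0.25, 0.8), 3))  # 0.224
```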

  2. 40 CFR 1065.307 - Linearity verification.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

... meter at different flow rates. Use a gravimetric reference measurement (such as a scale, balance, or... nitrogen. Select gas divisions that you typically use. Use a selected gas division as the measured value.... For linearity verification for gravimetric PM balances, use external calibration weights that...

  3. 40 CFR 1065.307 - Linearity verification.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

... meter at different flow rates. Use a gravimetric reference measurement (such as a scale, balance, or... nitrogen. Select gas divisions that you typically use. Use a selected gas division as the measured value.... For linearity verification for gravimetric PM balances, use external calibration weights that...

  4. Lithostratigraphy and shear-wave velocity in the crystallized Topopah Spring Tuff, Yucca Mountain, Nevada

    USGS Publications Warehouse

    Buesch, D.C.; Stokoe, K.H.; Won, K.C.; Seong, Y.J.; Jung, J.L.; Schuhen, M.D.

    2006-01-01

    Evaluation of the potential future response to seismic events of the proposed spent nuclear fuel and high-level radioactive waste repository at Yucca Mountain, Nevada, is in part based on the seismic properties of the host rock, the 12.8-million-year-old Topopah Spring Tuff. Because of the processes that formed the tuff, the densely welded and crystallized part has three lithophysal and three nonlithophysal zones, and each zone has characteristic variations in lithostratigraphic features and structures of the rocks. Lithostratigraphic features include lithophysal cavities; rims on lithophysae and some fractures; spots (which are similar to rims but without an associated cavity or aperture); amounts of porosity resulting from welding, crystallization, and vapor-phase corrosion and mineralization; and fractures. Seismic properties, including shear-wave velocity (Vs), have been measured on 38 pieces of core, and there is a good "first order" correlation with the lithostratigraphic zones; for example, samples from nonlithophysal zones have larger Vs values compared to samples from lithophysal zones. Some samples have Vs values that are outside the typical range for the lithostratigraphic zone; however, these samples typically have one or more fractures, "large" lithophysal cavities, or "missing pieces" relative to the sample size. Shear-wave velocity data measured in the tunnels have similar relations to lithophysal and nonlithophysal rocks; however, tunnel-based values are typically smaller than those measured in core resulting from increased lithophysae and fracturing effects. Variations in seismic properties such as Vs data from small-scale samples (typical and "flawed" core) to larger scale transects in the tunnels provide a basis for merging our understanding of the distributions of lithostratigraphic features (and zones) with a method to scale seismic properties.

  5. Pressure and Thrust Measurements of a High-Frequency Pulsed Detonation Tube

    NASA Technical Reports Server (NTRS)

    Nguyen, N.; Cutler, A. D.

    2008-01-01

    This paper describes measurements of a small-scale, high-frequency pulsed detonation tube. The device utilized a mixture of H2 fuel and air, which was injected into the device at frequencies of up to 1200 Hz. Pulsed detonations were demonstrated in an 8-inch long combustion volume, at about 600 Hz, for the quarter wave mode of resonance. The primary objective of this experiment was to measure the generated thrust. A mean value of thrust was measured up to 6.0 lb, corresponding to H2 flow based specific impulse of 2970 s. This value is comparable to measurements in H2-fueled pulsed detonation engines (PDEs). The injection and detonation frequency for this new experimental case was much higher than typical PDEs, where frequencies are usually less than 100 Hz. The compact size of the device and high frequency of detonation yields a thrust-per-unit-volume of approximately 2.0 pounds per cubic inch, and compares favorably with other experiments, which typically have thrust-per-unit-volume of order 0.01 pound per cubic inch. This much higher volumetric efficiency results in a potentially much more practical device than the typical PDE, for a wide range of potential applications, including high-speed boundary layer separation control, for example in hypersonic engine inlets, and propulsion for small aircraft and missiles.
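The quoted performance figures can be cross-checked with the standard US-unit specific-impulse relation Isp [s] = F [lbf] / ṁ [lb/s]. A minimal sketch, assuming that definition (the mass flow and combustion volume below are inferred from the abstract's numbers, not stated in the paper):

```python
# Back-of-envelope consistency check of the figures quoted above.
# Assumes Isp [s] = F [lbf] / mdot [lb/s]; flow rate and volume are
# inferred from the abstract, not taken directly from the paper.

thrust_lbf = 6.0          # measured mean thrust
isp_s = 2970.0            # H2-flow-based specific impulse

# Implied H2 mass flow: about 0.002 lb/s
mdot_lb_s = thrust_lbf / isp_s
print(round(mdot_lb_s, 5))            # 0.00202

# Quoted thrust-per-unit-volume of ~2.0 lbf/in^3 implies a combustion
# volume of about 3 cubic inches, consistent with a small 8-inch tube.
volume_in3 = thrust_lbf / 2.0
print(volume_in3)                     # 3.0
```

The two-orders-of-magnitude advantage over a conventional PDE (2.0 versus ~0.01 lbf/in^3) follows directly from running the same thrust level at ~600 Hz in a much smaller volume.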

  6. Improving Recreational Water Quality Assessments Through Novel Approaches to Quantifying Measurement Uncertainty

    EPA Science Inventory

    Bacteriological water quality in the Great Lakes is typically measured by the concentration of fecal indicator bacteria (FIB), and is reported via most probable number (MPN) or colony forming unit (CFU) values derived from algorithms relating "raw data" in a FIB analysis procedu...

  7. Cerebral spinal fluid (CSF) collection

    MedlinePlus

    ... establish the diagnosis of normal pressure hydrocephalus. Normal Results Normal values typically range as follows: Pressure: 70 ... measurements or may test different specimens. What Abnormal Results Mean If the CSF looks cloudy, it could ...

  8. Using Alternative Student Growth Measures for Evaluating Teacher Performance: What the Literature Says. REL 2013-002

    ERIC Educational Resources Information Center

    Gill, Brian; Bruch, Julie; Booker, Kevin

    2013-01-01

    States are increasingly interested in including measures of student achievement growth, or "value-added," in evaluating teachers. Annual state assessments, however, which are the typical measure of student growth, usually cover only reading and math teachers and only in grades 4-8. These state assessments thus cannot …

  9. Measurement of multiple scattering of 13 and 20 MeV electrons by thin foils

    PubMed Central

    Ross, C. K.; McEwen, M. R.; McDonald, A. F.; Cojocaru, C. D.; Faddegon, B. A.

    2008-01-01

    To model the transport of electrons through material requires knowledge of how the electrons lose energy and scatter. Theoretical models are used to describe electron energy loss and scatter and these models are supported by a limited amount of measured data. The purpose of this work was to obtain additional data that can be used to test models of electron scattering. Measurements were carried out using 13 and 20 MeV pencil beams of electrons produced by the National Research Council of Canada research accelerator. The electron fluence was measured at several angular positions from 0° to 9° for scattering foils of different thicknesses and with atomic numbers ranging from 4 to 79. The angle, θ1∕e, at which the fluence has decreased to 1∕e of its value on the central axis was used to characterize the distributions. Measured values of θ1∕e ranged from 1.5° to 8° with a typical uncertainty of about 1%. Distributions calculated using the EGSnrc Monte Carlo code were compared to the measured distributions. In general, the calculated distributions are narrower than the measured ones. Typically, the difference between the measured and calculated values of θ1∕e is about 1.5%, with the maximum difference being 4%. The measured and calculated distributions are related through a simple scaling of the angle, indicating that they have the same shape. No significant trends with atomic number were observed. PMID:18841865
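The characteristic angle θ1/e used above can be extracted from a measured angular fluence profile by interpolating where the fluence crosses 1/e of its on-axis value. A minimal sketch on a synthetic Gaussian profile (the interpolation scheme is illustrative; the paper's actual fitting procedure is not reproduced here):

```python
import math

def theta_1_over_e(angles_deg, fluence):
    """Linearly interpolate the angle at which fluence falls to 1/e of
    its on-axis (first-sample) value. Assumes angles sorted ascending
    from 0 and a profile that decreases through the crossing."""
    target = fluence[0] / math.e
    for i in range(1, len(angles_deg)):
        if fluence[i] <= target:
            f0, f1 = fluence[i - 1], fluence[i]
            a0, a1 = angles_deg[i - 1], angles_deg[i]
            # interpolate between the bracketing samples
            return a0 + (a1 - a0) * (f0 - target) / (f0 - f1)
    raise ValueError("fluence never falls below 1/e of its axial value")

# Synthetic Gaussian profile with theta_1/e = 4 degrees (illustrative only;
# the measured values in the paper ranged from 1.5 to 8 degrees).
angles = [0.5 * i for i in range(19)]               # 0..9 degrees
flu = [math.exp(-(a / 4.0) ** 2) for a in angles]
print(round(theta_1_over_e(angles, flu), 2))        # 4.0
```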

  10. First Equals Most Important? Order Effects in Vignette-Based Measurement

    ERIC Educational Resources Information Center

    Auspurg, Katrin; Jäckle, Annette

    2017-01-01

    To measure what determines people's attitudes, definitions, or decisions, surveys increasingly ask respondents to judge vignettes. A vignette typically describes a hypothetical situation or object as having various attributes (dimensions). In factorial surveys, the values (levels) of dimensions are experimentally varied, so that their impact on…

  11. Physical Processes Controlling the Spatial Distributions of Relative Humidity in the Tropical Tropopause Layer Over the Pacific

    NASA Technical Reports Server (NTRS)

    Jensen, Eric J.; Thornberry, Troy D.; Rollins, Andrew W.; Ueyama, Rei; Pfister, Leonhard; Bui, Thaopaul; Diskin, Glenn S.; Digangi, Joshua P.; Hintsa, Eric; Gao, Ru-Shan

    2017-01-01

    The spatial distribution of relative humidity with respect to ice (RHI) in the boreal wintertime tropical tropopause layer (TTL, ≈14-18 km) over the Pacific is examined with the measurements provided by the NASA Airborne Tropical TRopopause EXperiment. We also compare the measured RHI distributions with results from a transport and microphysical model driven by meteorological analysis fields. Notable features in the distribution of RHI versus temperature and longitude include (1) the common occurrence of RHI values near ice saturation over the western Pacific in the lower to middle TTL; (2) low RHI values in the lower TTL over the central and eastern Pacific; (3) common occurrence of RHI values following a constant mixing ratio in the middle to upper TTL (temperatures between 190 and 200 K); (4) RHI values typically near ice saturation in the coldest airmasses sampled; and (5) RHI values typically near 100% across the TTL temperature range in air parcels with ozone mixing ratios less than 50 ppbv. We suggest that the typically saturated air in the lower TTL over the western Pacific is likely driven by a combination of the frequent occurrence of deep convection and the predominance of rising motion in this region. The nearly constant water vapor mixing ratios in the middle to upper TTL likely result from the combination of slow ascent (resulting in long residence times) and wave-driven temperature variability. The numerical simulations generally reproduce the observed RHI distribution features, and sensitivity tests further emphasize the strong influence of convective input and vertical motions on TTL relative humidity.

  12. Accuracy and Resolution Analysis of a Direct Resistive Sensor Array to FPGA Interface

    PubMed Central

    Oballe-Peinado, Óscar; Vidal-Verdú, Fernando; Sánchez-Durán, José A.; Castellanos-Ramos, Julián; Hidalgo-López, José A.

    2016-01-01

    Resistive sensor arrays are formed by a large number of individual sensors which are distributed in different ways. This paper proposes a direct connection between an FPGA and a resistive array distributed in M rows and N columns, without the need of analog-to-digital converters to obtain resistance values in the sensor and where the conditioning circuit is reduced to the use of a capacitor in each of the columns of the matrix. The circuit allows parallel measurements of the N resistors which form each of the rows of the array, eliminating the resistive crosstalk which is typical of these circuits. This is achieved by an addressing technique which does not require external elements to the FPGA. Although the typical resistive crosstalk between resistors which are measured simultaneously is eliminated, other elements that have an impact on the measurement of discharge times appear in the proposed architecture and, therefore, affect the uncertainty in resistance value measurements; these elements need to be studied. Finally, the performance of different calibration techniques is assessed experimentally on a discrete resistor array, obtaining for a new model of calibration, a maximum relative error of 0.066% in a range of resistor values which correspond to a tactile sensor. PMID:26840321
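The discharge-timing principle behind this kind of direct resistor-to-FPGA interface can be sketched with the textbook RC relation: the column capacitor is charged to V0 and discharged through the sensed resistor, and the FPGA times how long the pin takes to cross its logic threshold Vth, giving t = R·C·ln(V0/Vth). All component values below are assumptions for illustration, not values from the paper:

```python
import math

# Illustrative component values (assumed, not from the paper).
V0, VTH, C = 3.3, 1.65, 100e-9   # charge voltage (V), threshold (V), farads

def discharge_time(r_ohm):
    """RC discharge to the logic threshold: t = R*C*ln(V0/Vth)."""
    return r_ohm * C * math.log(V0 / VTH)

def resistance_from_time(t_s):
    """Invert the timing measurement back to a resistance estimate."""
    return t_s / (C * math.log(V0 / VTH))

# A 10 kOhm sensor element is recovered from its measured discharge time.
t = discharge_time(10_000.0)
print(round(resistance_from_time(t)))    # 10000
```

In the real interface, uncertainty in t (clock quantization, pin thresholds, parasitics) propagates linearly into the resistance estimate, which is why the paper's calibration step matters.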

  13. Accuracy and Resolution Analysis of a Direct Resistive Sensor Array to FPGA Interface.

    PubMed

    Oballe-Peinado, Óscar; Vidal-Verdú, Fernando; Sánchez-Durán, José A; Castellanos-Ramos, Julián; Hidalgo-López, José A

    2016-02-01

    Resistive sensor arrays are formed by a large number of individual sensors which are distributed in different ways. This paper proposes a direct connection between an FPGA and a resistive array distributed in M rows and N columns, without the need of analog-to-digital converters to obtain resistance values in the sensor and where the conditioning circuit is reduced to the use of a capacitor in each of the columns of the matrix. The circuit allows parallel measurements of the N resistors which form each of the rows of the array, eliminating the resistive crosstalk which is typical of these circuits. This is achieved by an addressing technique which does not require external elements to the FPGA. Although the typical resistive crosstalk between resistors which are measured simultaneously is eliminated, other elements that have an impact on the measurement of discharge times appear in the proposed architecture and, therefore, affect the uncertainty in resistance value measurements; these elements need to be studied. Finally, the performance of different calibration techniques is assessed experimentally on a discrete resistor array, obtaining for a new model of calibration, a maximum relative error of 0.066% in a range of resistor values which correspond to a tactile sensor.

  14. SAM 2 measurements of the polar stratospheric aerosol. Volume 9: October 1982 - April 1983

    NASA Technical Reports Server (NTRS)

    Mcmaster, L. R.; Powell, K. A.

    1991-01-01

    The Stratospheric Aerosol Measurement (SAM) II sensor aboard Nimbus 7 is providing 1.0 micron extinction measurements of Antarctic and Arctic stratospheric aerosols with a vertical resolution of 1 km. Representative examples and weekly averages including corresponding temperature profiles provided by NOAA for the time and place of each SAM II measurement are presented. Contours of aerosol extinction as a function of altitude and longitude or time are plotted, and aerosol optical depths are calculated for each week. Typical values of aerosol extinction and stratospheric optical depth in the Arctic are unusually large due to the presence of material from the El Chichon volcano eruption in the Spring of 1982. For example, the optical depth peaked at 0.068, more than 50 times background values. Typical values of aerosol extinction and stratospheric optical depth in the Antarctic varied considerably during this period due to the transport and arrival of the material from the El Chichon eruption. For example, the stratospheric optical depth varied from 0.002 in October 1982, to 0.021 in January 1983. Polar stratospheric clouds were observed during the Arctic winter, as expected. A representative sample is provided of the ninth 6-month period of data to be used in atmospheric and climatic studies.

  15. On the Dielectric Constant for Acetanilide: Experimental Measurements and Effect on Energy Transport

    NASA Astrophysics Data System (ADS)

    Careri, G.; Compatangelo, E.; Christiansen, P. L.; Halding, J.; Skovgaard, O.

    1987-01-01

    Experimental measurements of the dielectric constant for crystalline acetanilide powder for temperatures ranging from - 140°C to 20°C and for different hydration levels are presented. A Davydov-soliton computer model predicts dramatic changes in the energy transport and storage for typically increased values of the dielectric constant.

  16. Operational experience with VAWT blades. [structural performance

    NASA Technical Reports Server (NTRS)

    Sullivan, W. N.

    1979-01-01

    The structural performance of 17 meter diameter wind turbine rotors is discussed. Test results for typical steady and vibratory stress measurements are summarized along with predicted values of stress based on a quasi-static finite element model.

  17. Evidence for the Need to More Closely Examine School Effects in Value-Added Modeling and Related Accountability Policies

    ERIC Educational Resources Information Center

    Franco, M. Suzanne; Seidel, Kent

    2014-01-01

    Value-added approaches for attributing student growth to teachers often use weighted estimates of building-level factors based on "typical" schools to represent a range of community, school, and other variables related to teacher and student work that are not easily measured directly. This study examines whether such estimates are likely…

  18. Who I Am: The Meaning of Early Adolescents' Most Valued Activities and Relationships, and Implications for Self-Concept Research

    ERIC Educational Resources Information Center

    Tatlow-Golden, Mimi; Guerin, Suzanne

    2017-01-01

    Self-concept research in early adolescence typically measures young people's self-perceptions of competence in specific, adult-defined domains. However, studies have rarely explored young people's own views of valued self-concept factors and their meanings. For two major self domains, the active and the social self, this mixed-methods study…

  19. Precise Measurements of the Masses of Cs, Rb and Na A New Route to the Fine Structure Constant

    NASA Astrophysics Data System (ADS)

    Rainville, Simon; Bradley, Michael P.; Porto, James V.; Thompson, James K.; Pritchard, David E.

    2001-01-01

    We report new values for the atomic masses of the alkali atoms 133Cs, 87Rb, 85Rb, and 23Na with uncertainties ≤ 0.2 ppb. These results, obtained using Penning trap single ion mass spectrometry, are typically two orders of magnitude more accurate than previously measured values. Combined with values of h/m atom from atom interferometry measurements and accurate wavelength measurements for different atoms, these values will lead to new ppb-level determinations of the molar Planck constant N A h and the fine structure constant α. This route to α is based on simple physics. It can potentially achieve the several ppb level of accuracy needed to test the QED determination of α extracted from measurements of the electron g factor. We also demonstrate an electronic cooling technique that cools our detector and ion below the 4 K ambient temperature. This technique improves by about a factor of three our ability to measure the ion's axial motion.

  20. Inequality in societies, academic institutions and science journals: Gini and k-indices

    NASA Astrophysics Data System (ADS)

    Ghosh, Asim; Chattopadhyay, Nachiketa; Chakrabarti, Bikas K.

    2014-09-01

    Social inequality is traditionally measured by the Gini index (g), which takes values from 0 to 1, where g=0 represents complete equality and g=1 complete inequality. Most estimates of income or wealth data indicate that the g value is widely dispersed across the countries of the world: g values typically range from 0.30 to 0.65 at a particular time (year). We estimated similarly the Gini index for the citations earned by the yearly publications of various academic institutions and science journals. The ISI Web of Science data suggest remarkably strong inequality and universality (g=0.70±0.07) across all the universities and institutions of the world, while for the journals we find g=0.65±0.15 for any typical year. We define a new inequality measure, the k-index, such that the cumulative income or citations of the (1-k) fraction of people or papers exceed those earned by the fraction (k) of the people or publications, respectively. We find that, while the k-index value for income distributions across the world ranges from 0.60 to 0.75, it is around 0.75±0.05 for different universities and institutions and around 0.77±0.10 for the science journals. Apart from the above indices, we also analyze the same institution and journal citation data by measuring the Pietra index and the median index.
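Both indices can be computed directly from a sorted sample. A minimal sketch using the standard sorted-sample Gini formula and the k-index defined as the crossing point L(k) = 1 - k of the Lorenz curve (the sample data are illustrative):

```python
def gini(values):
    """Gini index in [0, 1] from the standard sorted-sample formula:
    g = sum_i (2i - n - 1) x_i / (n * sum_i x_i), x sorted ascending."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    weighted = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
    return weighted / (n * total)

def k_index(values):
    """Population fraction k (poorest first) at which the Lorenz curve
    satisfies L(k) = 1 - k, i.e. the richest (1-k) fraction holds a k
    share; found by linear scan with interpolation."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    cum, prev_p, prev_f = 0.0, 0.0, 0.0
    for i, x in enumerate(xs, start=1):
        cum += x
        p, f = i / n, cum / total          # population and income shares
        if f + p >= 1.0:                    # crossing of L(k) = 1 - k
            g0, g1 = prev_f + prev_p - 1.0, f + p - 1.0
            return prev_p + (p - prev_p) * (-g0) / (g1 - g0)
        prev_p, prev_f = p, f
    return 1.0

incomes = [1, 2, 3, 4, 10]                  # illustrative sample
print(round(gini(incomes), 2))              # 0.4
print(round(k_index(incomes), 2))           # 0.65
```

For a perfectly equal sample, gini returns 0 and k_index returns 0.5, matching the limiting values described in the abstract.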

  1. Objective Evaluation of Muscle Strength in Infants with Hypotonia and Muscle Weakness

    ERIC Educational Resources Information Center

    Reus, Linda; van Vlimmeren, Leo A.; Staal, J. Bart; Janssen, Anjo J. W. M.; Otten, Barto J.; Pelzer, Ben J.; Nijhuis-van der Sanden, Maria W. G.

    2013-01-01

    The clinical evaluation of an infant with motor delay, muscle weakness, and/or hypotonia would improve considerably if muscle strength could be measured objectively and normal reference values were available. The authors developed a method to measure muscle strength in infants and tested 81 typically developing infants, 6-36 months of age, and 17…

  2. Measurement of 13C chemical shift tensor principal values with a magic-angle turning experiment.

    PubMed

    Hu, J Z; Orendt, A M; Alderman, D W; Pugmire, R J; Ye, C; Grant, D M

    1994-08-01

    The magic-angle turning (MAT) experiment introduced by Gan is developed into a powerful and routine method for measuring the principal values of 13C chemical shift tensors in powdered solids. A large-volume MAT probe with stable rotation frequencies down to 22 Hz is described. A triple-echo MAT pulse sequence is introduced to improve the quality of the two-dimensional baseplane. It is shown that measurements of the principal values of chemical shift tensors in complex compounds can be enhanced by using either short contact times or dipolar dephasing pulse sequences to isolate the powder patterns from protonated or non-protonated carbons, respectively. A model compound, 1,2,3-trimethoxybenzene, is used to demonstrate these techniques, and the 13C principal values in 2,3-dimethylnaphthalene and Pocahontas coal are reported as typical examples.

  3. Kalman Filtering for Genetic Regulatory Networks with Missing Values

    PubMed Central

    Liu, Qiuhua; Lai, Tianyue; Wang, Wu

    2017-01-01

    The filtering problem with missing values for genetic regulatory networks (GRNs) is addressed, in which noise exists in both the state dynamics and the measurement equations; furthermore, the correlation between process noise and measurement noise is also taken into consideration. To deal with the filtering problem, a class of discrete-time GRNs with missing values, noise correlation, and time delays is established. A new observation model is then proposed to reduce the adverse effect of the missing values and to decouple the correlation between process and measurement noise in theory. Finally, a Kalman filter is used to estimate the states of the GRNs. A typical example verifies the effectiveness of the proposed method, showing that the concentrations of mRNA and protein can be estimated accurately. PMID:28814967
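The missing-value idea can be sketched with a minimal scalar Kalman filter: when a measurement is absent, only the prediction step runs. This is a generic illustration of the principle, not the paper's GRN model (which additionally handles time delays and correlated noises); all parameter values are assumed:

```python
def kalman_1d(zs, a=1.0, h=1.0, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_{k+1} = a x_k + w, z_k = h x_k + v.
    Entries of zs may be None (missing measurement): then only the
    prediction step is applied. Parameters are illustrative."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        # predict
        x, p = a * x, a * a * p + q
        # update only when a measurement is available
        if z is not None:
            k = p * h / (h * h * p + r)      # Kalman gain
            x = x + k * (z - h * x)
            p = (1 - k * h) * p
        estimates.append(x)
    return estimates

# Third and sixth samples present, second and fifth missing.
est = kalman_1d([1.0, None, 1.1, 0.9, None, 1.0])
print([round(v, 3) for v in est])
```

With a = 1, a missing step simply carries the previous estimate forward while the variance p grows by q, so confidence degrades gracefully across gaps.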

  4. Indirect Measurement Of Nitrogen In A Multi-Component Natural Gas By Heating The Gas

    DOEpatents

    Morrow, Thomas B.; Behring, II, Kendricks A.

    2004-06-22

    Methods of indirectly measuring the nitrogen concentration in a natural gas by heating the gas. In two embodiments, the heating energy is correlated to the speed of sound in the gas, the diluent concentrations in the gas, and constant values, resulting in a model equation. Regression analysis is used to calculate the constant values, which can then be substituted into the model equation. If the diluent concentrations other than nitrogen (typically carbon dioxide) are known, the model equation can be solved for the nitrogen concentration.

  5. Australian aerosol backscatter survey

    NASA Technical Reports Server (NTRS)

    Gras, John L.; Jones, William D.

    1989-01-01

    This paper describes measurements of the atmospheric backscatter coefficient in and around Australia during May and June 1986. One set of backscatter measurements was made with a CO2 lidar operating at 10.6 microns; the other set was obtained from calculations using measured aerosol parameters. Despite the quite different data collection techniques, the two methods agree well. Backscatter values range from near 1 × 10^-8 /m per sr near the surface to 4-5 × 10^-11 /m per sr in the free troposphere at 5-7-km altitude. The values in the free troposphere are somewhat lower than those typically measured at the same height in the Northern Hemisphere.

  6. Indirect Measurement Of Nitrogen In A Multi-Component Gas By Measuring The Speed Of Sound At Two States Of The Gas.

    DOEpatents

    Morrow, Thomas B.; Behring, II, Kendricks A.

    2004-10-12

    A method of indirectly measuring the nitrogen concentration in a gas mixture. The molecular weight of the gas is modeled as a function of the speed of sound in the gas, the diluent concentrations in the gas, and constant values, resulting in a model equation. Regression analysis is used to calculate the constant values, which can then be substituted into the model equation. If the speed of sound in the gas is measured at two states and diluent concentrations other than nitrogen (typically carbon dioxide) are known, two equations for molecular weight can be equated and solved for the nitrogen concentration in the gas mixture.
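The calibration step in both patents (fit the constant values by regression, then substitute into the model equation) can be sketched with a least-squares fit. The linear form below and all gas data are assumptions for illustration; the patents' actual model equations are not reproduced here:

```python
def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with
    partial pivoting."""
    m = [list(row) + [rhs] for row, rhs in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_constants(rows, y):
    """Least-squares fit of the constant values via the normal
    equations X^T X b = X^T y for a 3-parameter linear model."""
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    return solve3(xtx, xty)

# Synthetic "calibration gases" generated from assumed constants
# b = (16.0, 200.0, 27.0); regressors are [1, (1000/c)^2, x_CO2], with c
# the measured speed of sound (m/s) and x_CO2 the CO2 mole fraction.
true_b = (16.0, 200.0, 27.0)
data, ys = [], []
for c, xco2 in [(430.0, 0.01), (445.0, 0.02), (420.0, 0.005), (450.0, 0.03)]:
    row = [1.0, (1000.0 / c) ** 2, xco2]
    data.append(row)
    ys.append(sum(t * x for t, x in zip(true_b, row)))

b = fit_constants(data, ys)
print([round(v, 6) for v in b])   # recovers (16.0, 200.0, 27.0)
```

Once the constants are fixed, the calibrated model can be inverted for the remaining unknown (here, the nitrogen concentration), as both patents describe.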

  7. The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children.

    PubMed

    Djalal, Farah Mutiasari; Ameel, Eef; Storms, Gert

    2016-01-01

    An alternative method for deriving typicality judgments, applicable to young children who are not yet familiar with numerical values, is introduced, allowing researchers to study gradedness at younger ages in concept development. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method that is based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality, and that requires a minimum of linguistic knowledge. The validity of the method is investigated and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children's category knowledge and to evaluate how this knowledge evolves over time. Contrary to earlier held assumptions in studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided, but that both variables are often significantly correlated in older children and even in adults.

  8. The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children

    PubMed Central

    Ameel, Eef; Storms, Gert

    2016-01-01

    An alternative method for deriving typicality judgments, applicable to young children who are not yet familiar with numerical values, is introduced, allowing researchers to study gradedness at younger ages in concept development. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method that is based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality, and that requires a minimum of linguistic knowledge. The validity of the method is investigated and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children’s category knowledge and to evaluate how this knowledge evolves over time. Contrary to earlier held assumptions in studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided, but that both variables are often significantly correlated in older children and even in adults. PMID:27322371

  9. Weak-value amplification and optimal parameter estimation in the presence of correlated noise

    NASA Astrophysics Data System (ADS)

    Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.

    2017-11-01

    We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.

  10. Digital High-Current Monitor

    NASA Technical Reports Server (NTRS)

    Cash, B.

    1985-01-01

    Simple technique developed for monitoring direct currents up to several hundred amperes and digitally displaying values directly in current units. Used to monitor current magnitudes beyond range of standard laboratory ammeters, which typically measure 10 to 20 amperes maximum. Technique applicable to any current-monitoring situation.

  11. [Hypothyreodism. From the latent functional disorder up to coma].

    PubMed

    Hintze, G; Derwahl, M

    2010-05-01

    Autoimmune thyroiditis, in which an autoimmune process destroys functioning thyroid follicles, is the main cause of hypothyroidism, defined as a lack of thyroid hormone. While subclinical or latent hypothyroidism is defined purely by laboratory values (an elevated TSH with normal peripheral hormone levels), overt hypothyroidism is accompanied by the typical signs and symptoms. Antibodies against thyroid peroxidase can be measured in about 80% of cases, but antibodies against thyroglobulin are detectable in only about 40-50% of cases. Once hypothyroidism has been diagnosed, substitution with levothyroxine should be initiated, with the therapeutic goal of lowering the TSH level to the lower normal range. In cases of subclinical hypothyroidism, levothyroxine should be started in patients with a high TSH value, positive antibodies, and/or the typical ultrasound appearance of autoimmune thyroiditis. However, substituting levothyroxine for any elevated TSH value should be avoided.

  12. Ozone formation in pulsed SDBD in a wide pressure range

    NASA Astrophysics Data System (ADS)

    Starikovskiy, Andrey; Nudnova, Maryia; mipt Team

    2011-10-01

    Ozone concentration in surface anode-directed DBD was measured experimentally over a wide pressure range (150-1300 torr), and the effects of voltage and pressure were investigated. The reduced electric field was measured for anode-directed and cathode-directed SDBD; E/n values in cathode-directed SDBD are about 50 percent higher than in anode-directed SDBD at atmospheric pressure. An increase in E/n leads to a decrease in the rate of oxygen dissociation and ozone formation at lower pressures. The thickness of the radiating region of the sliding discharge was measured; the typical thickness of the radiating zone is 0.4-1.0 mm within the pressure range 220-740 torr. It was shown that the high-voltage pulsed nanosecond discharge, owing to its high E/n value, produces less ozone compared with other discharges. A kinetic model was proposed to describe ozone formation in the pulsed nanosecond SDBD.

  13. An R2 statistic for fixed effects in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R(2) statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R(2) statistic for the linear mixed model by using only a single model. The proposed R(2) statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R(2) statistic arises as a 1-1 function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R(2) statistic leads immediately to a natural definition of a partial R(2) statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R(2) , a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
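The 1-1 mapping between the model R² and the F statistic described above can be illustrated with the generic form R² = qF / (qF + ν), where q is the number of fixed effects tested and ν the denominator degrees of freedom; the paper's exact mixed-model denominator df is not reproduced here, so treat this as a sketch of the functional relationship:

```python
def r2_from_f(f_stat, q, nu):
    """Generic 1-1 mapping R^2 = qF / (qF + nu); the paper defines the
    appropriate mixed-model denominator df nu."""
    return (q * f_stat) / (q * f_stat + nu)

def f_from_r2(r2, q, nu):
    """Inverse mapping: F = (nu/q) * R^2 / (1 - R^2)."""
    return (nu / q) * r2 / (1.0 - r2)

# The paper's point in miniature: with many degrees of freedom, a large
# F (hence a tiny p-value) can correspond to a very small R^2.
print(round(r2_from_f(10.0, 1, 500), 4))   # 0.0196
```

The monotone inverse f_from_r2 confirms the mapping is 1-1, which is what lets the statistic inherit the F test's null distribution.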

  14. Limits on amplification by Aharonov-Albert-Vaidman weak measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koike, Tatsuhiko; Tanaka, Saki

    2011-12-15

    We analyze the amplification by the Aharonov-Albert-Vaidman weak quantum measurement on a Sagnac interferometer [Dixon et al., Phys. Rev. Lett. 102, 173601 (2009)] up to all orders of the coupling strength between the measured system and the measuring device. The amplifier transforms a small tilt of a mirror into a large transverse displacement of the laser beam. The conventional analysis has shown that the measured value is proportional to the weak value, so that the amplification can be made arbitrarily large at the cost of decreasing output laser intensity. It is shown that the measured displacement and the amplification factor are in fact not proportional to the weak value and rather vanish in the limit of infinitesimal output intensity. We derive the optimal overlap of the pre- and postselected states with which the amplification becomes maximum. We also show that nonlinear effects begin to arise in the performed experiments, so that any improvements in the experiment, typically with an amplification greater than 100, should require the nonlinear theory in translating the observed value to the original displacement.

  15. The zeta potential of extended dielectrics and conductors in terms of streaming potential and streaming current measurements.

    PubMed

    Gallardo-Moreno, Amparo M; Vadillo-Rodríguez, Virginia; Perera-Núñez, Julia; Bruque, José M; González-Martín, M Luisa

    2012-07-21

    The electrical characterization of surfaces in terms of the zeta potential (ζ), i.e., the electric potential contributing to the interaction potential energy, is of major importance in a wide variety of industrial, environmental and biomedical applications in which the integration of any material with the surrounding media is initially mediated by the physico-chemical properties of its outer surface layer. Among the different existing electrokinetic techniques for obtaining ζ, streaming potential (V(str)) and streaming current (I(str)) are important when dealing with flat-extended samples. Mostly dielectric materials have been subjected to this type of analysis, and only a few papers can be found in the literature regarding the electrokinetic characterization of conducting materials. Nevertheless, a standardized procedure is typically followed to calculate ζ from the measured data and, importantly, it is shown in this paper that such a procedure leads to incorrect zeta potential values when conductors are investigated. In any case, assessment of a reliable numerical value of ζ requires careful consideration of the origin of the input data and the characteristics of the experimental setup. In particular, it is shown that using the cell resistance (R) typically obtained through a.c. signals (R(a.c.)) in the calculation of ζ always underestimates the zeta potential values obtained from streaming potential measurements. The consideration of R(EK), derived from the V(str)/I(str) ratio, leads to reliable values of ζ when dielectrics are investigated. For metals, the contribution of the conductivity of the sample to the cell resistance provokes an underestimation of R(EK), which leads to unrealistic values of ζ. For the electrical characterization of conducting samples, I(str) measurements constitute a better choice.
In general, the findings gathered in this manuscript establish a measurement protocol for obtaining reliable zeta potentials of dielectrics and conductors based on the intrinsic electrokinetic behavior of both types of samples.
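As a hedged illustration of how ζ is typically extracted from streaming current data (via the Helmholtz-Smoluchowski relation, with the channel geometry entering as the length-to-cross-section ratio L/A), the sketch below uses assumed water-like fluid properties and an assumed channel geometry; it is not the specific measurement protocol of the paper.

```python
def zeta_from_streaming_current(dI_dp, eta, eps_r, length, area,
                                eps0=8.854e-12):
    """Helmholtz-Smoluchowski estimate of zeta from streaming current.

    zeta = (dI/dp) * (eta / (eps0*eps_r)) * (L / A)

    dI_dp  : slope of streaming current vs. pressure difference [A/Pa]
    eta    : electrolyte viscosity [Pa s]
    eps_r  : relative permittivity of the electrolyte
    length : streaming-channel length L [m]
    area   : channel cross-section A [m^2]
    Returns zeta in volts.
    """
    return dI_dp * eta / (eps0 * eps_r) * length / area

# Assumed example: dilute aqueous electrolyte in a 20 mm slit channel
zeta = zeta_from_streaming_current(dI_dp=-2.0e-12, eta=0.89e-3,
                                   eps_r=78.5, length=0.02, area=1.0e-6)
# zeta ≈ -0.051 V, i.e. about -51 mV
```

Because this route uses the measured current slope directly, it avoids the cell-resistance term that the abstract identifies as the source of error for conducting samples.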

  16. I-States-as-Objects-Analysis (ISOA): Extensions of an Approach to Studying Short-Term Developmental Processes by Analyzing Typical Patterns

    ERIC Educational Resources Information Center

    Bergman, Lars R.; Nurmi, Jari-Erik; von Eye, Alexander A.

    2012-01-01

    I-states-as-objects-analysis (ISOA) is a person-oriented methodology for studying short-term developmental stability and change in patterns of variable values. ISOA is based on longitudinal data with the same set of variables measured at all measurement occasions. A key concept is the "i-state," defined as a person's pattern of variable…

  17. Generalized Procedure for Improved Accuracy of Thermal Contact Resistance Measurements for Materials With Arbitrary Temperature-Dependent Thermal Conductivity

    DOE PAGES

    Sayer, Robert A.

    2014-06-26

    Thermal contact resistance (TCR) is most commonly measured using one-dimensional steady-state calorimetric techniques. In these techniques, a temperature gradient is applied across two contacting beams, and the temperature drop at the interface is inferred from the temperature profiles of the rods, measured at discrete points. During data analysis, the thermal conductivity of the beams is typically taken to be an average value over the temperature range imposed during the experiment. A generalized theory is presented that accounts for temperature-dependent changes in thermal conductivity. The procedure enables accurate measurement of TCR for contacting materials whose thermal conductivity is any arbitrary function of temperature. For example, it is shown that the standard technique yields TCR values that are about 15% below the actual value for two specific examples of copper and silicon contacts. Conversely, the generalized technique predicts TCR values that are within 1% of the actual value. The method is exact when thermal conductivity is known exactly and no other errors are introduced to the system.
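The generalized idea, that 1-D steady conduction becomes linear in the Kirchhoff transform θ(T) = ∫ k(T′) dT′ even when k depends on temperature, can be sketched as follows. The function names and the trapezoid-rule integration are illustrative assumptions, not the authors' implementation.

```python
def kirchhoff(T, k, T_ref=0.0, n=1000):
    """Numerically integrate k(T') from T_ref to T (trapezoid rule).

    For 1-D steady conduction, theta(T(x)) is linear in x even when
    k varies with T, so theta can be extrapolated to the interface.
    """
    h = (T - T_ref) / n
    s = 0.5 * (k(T_ref) + k(T))
    for i in range(1, n):
        s += k(T_ref + i * h)
    return s * h

def interface_value(x, theta, x_iface):
    """Least-squares line through (x, theta), evaluated at the interface.

    x       : sensor positions along one beam [m]
    theta   : Kirchhoff-transformed temperatures at those positions
    x_iface : position of the contact interface [m]
    """
    n = len(x)
    mx = sum(x) / n
    mt = sum(theta) / n
    b = (sum((xi - mx) * (ti - mt) for xi, ti in zip(x, theta))
         / sum((xi - mx) ** 2 for xi in x))
    a = mt - b * mx
    return a + b * x_iface
```

Extrapolating θ (rather than raw temperature) on each side of the contact, then converting back, is one way to avoid the bias that an average-conductivity analysis introduces.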

  18. Validation of Spacecraft Active Cavity Radiometer Total Solar Irradiance (TSI) Long Term Measurement Trends Using Proxy TSI Least Squares Analyses

    NASA Technical Reports Server (NTRS)

    Lee, Robert Benjamin, III; Wilson, Robert S.

    2003-01-01

    Long-term, incoming total solar irradiance (TSI) measurement trends were validated using proxy TSI values derived from indices of solar magnetic activity. Spacecraft active cavity radiometers (ACR) are being used to measure long-term TSI variability, which may trigger global climate changes. The TSI, typically referred to as the solar constant, was normalized to the mean earth-sun distance. Studies of spacecraft TSI data sets confirmed the existence of a 0.1%, long-term TSI variability component within a 10-year period. The 0.1% TSI variability component is clearly present in the spacecraft data sets from the 1984-2004 time frame. Typically, three overlapping spacecraft data sets were used to validate long-term TSI variability trends. However, during the years of 1978-1984, 1989-1991, and 1993-1996, three overlapping spacecraft data sets were not available to validate TSI trends. The TSI was found to vary with indices of solar magnetic activity associated with recent 10-year sunspot cycles. Proxy TSI values were derived from least squares analyses of the measured TSI variability with the solar indices of 10.7-cm solar fluxes and limb-darkened sunspot fluxes. The resulting proxy TSI values were compared to the spacecraft ACR measurements of TSI variability to detect ACR instrument degradation, which may be misinterpreted as TSI variability. Analyses of ACR measurements and TSI proxies are presented primarily for the 1984-2004 Earth Radiation Budget Experiment (ERBE) ACR solar monitor data set. Differences in proxy and spacecraft measurement data sets suggest the existence of another TSI variability component with an amplitude greater than or equal to 0.5 W/m² (0.04%) and with a cycle of 20 years or more.

  19. Inertial Navigation System Standardized Software Development. Volume 1. Introduction and Summary

    DTIC Science & Technology

    1976-06-01

    the Loran receiver, the Tacan receiver, the Omega receiver, the satellite-based instrumentation, the multimode radar, the star tracker and the visual...accelerometer scale factor, and the barometric altimeter bias. The accuracy (1σ values) of typical navigation-aid measurements (other than satellite derived

  20. Measurements of propeller noise in a light turboprop airplane

    NASA Technical Reports Server (NTRS)

    Wilby, J. F.; Wilby, E. G.

    1987-01-01

    In-flight acoustic measurements have been made on the exterior and interior of a twin-engined turboprop airplane under controlled conditions to study data repeatability. It is found that the variability of the harmonic sound pressure levels in the cabin is greater than that for the exterior sound pressure levels, typical values for the standard deviation being +2.0 dB and -4.2 dB for the interior, versus +1.4 dB and -2.3 dB for the exterior. When insertion losses are determined for acoustic treatments in the cabin, the standard deviations of the data are typically ±6.5 dB. It is concluded that additional factors, such as accurate and repeatable selection of the relative phase between propellers, controlled cabin air temperatures, installation of baseline acoustic absorption, and measurement of aircraft attitude, should be considered in order to reduce uncertainty in the measured data.

  1. The Hubble Constant.

    PubMed

    Jackson, Neal

    2015-01-01

    I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of the all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in combination with other cosmological parameters. Many, but not all, object-based measurements give H₀ values of around 72-74 km s⁻¹ Mpc⁻¹, with typical errors of 2-3 km s⁻¹ Mpc⁻¹. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67-68 km s⁻¹ Mpc⁻¹ with typical errors of 1-2 km s⁻¹ Mpc⁻¹. The size of the remaining systematics indicates that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods.

  2. Incorporating geographical factors with artificial neural networks to predict reference values of erythrocyte sedimentation rate

    PubMed Central

    2013-01-01

    Background: The measurement of the Erythrocyte Sedimentation Rate (ESR) value is a standard procedure performed during a typical blood test. In order to formulate a unified standard of establishing reference ESR values, this paper presents a novel prediction model in which local normal ESR values and corresponding geographical factors are used to predict reference ESR values using multi-layer feed-forward artificial neural networks (ANN). Methods and findings: Local normal ESR values were obtained from hospital data, while geographical factors that include altitude, sunshine hours, relative humidity, temperature and precipitation were obtained from the National Geographical Data Information Centre in China. The results show that predicted values are statistically in agreement with measured values. Model results exhibit significant agreement between training data and test data. Consequently, the model is used to predict the unseen local reference ESR values. Conclusions: Reference ESR values can be established with geographical factors by using artificial intelligence techniques. ANN is an effective method for simulating and predicting reference ESR values because of its ability to model nonlinear and complex relationships. PMID:23497145

  3. Incorporating geographical factors with artificial neural networks to predict reference values of erythrocyte sedimentation rate.

    PubMed

    Yang, Qingsheng; Mwenda, Kevin M; Ge, Miao

    2013-03-12

    The measurement of the Erythrocyte Sedimentation Rate (ESR) value is a standard procedure performed during a typical blood test. In order to formulate a unified standard of establishing reference ESR values, this paper presents a novel prediction model in which local normal ESR values and corresponding geographical factors are used to predict reference ESR values using multi-layer feed-forward artificial neural networks (ANN). Local normal ESR values were obtained from hospital data, while geographical factors that include altitude, sunshine hours, relative humidity, temperature and precipitation were obtained from the National Geographical Data Information Centre in China. The results show that predicted values are statistically in agreement with measured values. Model results exhibit significant agreement between training data and test data. Consequently, the model is used to predict the unseen local reference ESR values. Reference ESR values can be established with geographical factors by using artificial intelligence techniques. ANN is an effective method for simulating and predicting reference ESR values because of its ability to model nonlinear and complex relationships.
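A minimal sketch of the forward pass of such a multi-layer feed-forward network is given below; the tanh activation, layer sizes and weights are placeholders for illustration, not the fitted model from the study.

```python
import math

def forward(x, W1, b1, W2, b2):
    """One-hidden-layer feed-forward pass with tanh activation.

    x  : list of (normalized) geographical factors, e.g.
         [altitude, sunshine_hours, relative_humidity,
          temperature, precipitation]
    W1 : hidden-layer weight rows; b1 : hidden biases
    W2 : output weights;           b2 : output bias
    Returns the predicted reference ESR value.
    """
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(W2, h)) + b2
```

In practice the weights would be fitted by backpropagation on the hospital data; the sketch only shows the model form that maps the five geographical inputs to one ESR output.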

  4. Designing and Validating Assessments of Complex Thinking in Science

    ERIC Educational Resources Information Center

    Ryoo, Kihyun; Linn, Marcia C.

    2015-01-01

    Typical assessment systems often measure isolated ideas rather than the coherent understanding valued in current science classrooms. Such assessments may motivate students to memorize, rather than to use new ideas to solve complex problems. To meet the requirements of the Next Generation Science Standards, instruction needs to emphasize sustained…

  5. 43 CFR 10005.12 - Policy regarding the scope of measures to be included in the plan.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the site of the impact typically involves restoration or replacement. Off-site mitigation might involve protection, restoration, or enhancement of a similar resource value at a different location... responsibilities, the Commission sees an obligation to give priority to protection and restoration activities that...

  6. Effects of lint cleaning on lint trash particle size distribution

    USDA-ARS?s Scientific Manuscript database

    Cotton quality trash measurements used today typically yield a single value for trash parameters for a lint sample (i.e. High Volume Instrument – percent area; Advanced Fiber Information System – total count, trash size, dust count, trash count, and visible foreign matter). A Cotton Trash Identifica...

  7. Influence of atypical retardation pattern on the peripapillary retinal nerve fibre distribution assessed by scanning laser polarimetry and optical coherence tomography.

    PubMed

    Schrems, W A; Laemmer, R; Hoesl, L M; Horn, F K; Mardin, C Y; Kruse, F E; Tornow, R P

    2011-10-01

    To investigate the influence of atypical retardation pattern (ARP) on the distribution of peripapillary retinal nerve fibre layer (RNFL) thickness measured with scanning laser polarimetry in healthy individuals and to compare these results with RNFL thickness from spectral domain optical coherence tomography (OCT) in the same subjects. 120 healthy subjects were investigated in this study. All volunteers received detailed ophthalmological examination, GDx variable corneal compensation (VCC) and Spectralis-OCT. The subjects were divided into four subgroups according to their typical scan score (TSS): very typical with TSS=100, typical with 99 ≥ TSS ≥ 91, less typical with 90 ≥ TSS ≥ 81 and atypical with TSS ≤ 80. Deviations from very typical normal values were calculated for 32 sectors for each group. There was a systematic variation of the RNFL thickness deviation around the optic nerve head in the atypical group for the GDxVCC results. The highest percentage deviation of about 96% appeared temporal with decreasing deviation towards the superior and inferior sectors, and nasal sectors exhibited a deviation of 30%. Percentage deviations from very typical RNFL values decreased with increasing TSS. No systematic variation could be found if the RNFL thickness deviation between different TSS-groups was compared with the OCT results. The ARP has a major impact on the peripapillary RNFL distribution assessed by GDx VCC; thus, the TSS should be included in the standard printout.

  8. On the precision of experimentally determined protein folding rates and φ-values

    PubMed Central

    De Los Rios, Miguel A.; Muralidhara, B.K.; Wildes, David; Sosnick, Tobin R.; Marqusee, Susan; Wittung-Stafshede, Pernilla; Plaxco, Kevin W.; Ruczinski, Ingo

    2006-01-01

    φ-Values, a relatively direct probe of transition-state structure, are an important benchmark in both experimental and theoretical studies of protein folding. Recently, however, significant controversy has emerged regarding the reliability with which φ-values can be determined experimentally: Because φ is a ratio of differences between experimental observables it is extremely sensitive to errors in those observations when the differences are small. Here we address this issue directly by performing blind, replicate measurements in three laboratories. By monitoring within- and between-laboratory variability, we have determined the precision with which folding rates and φ-values are measured using generally accepted laboratory practices and under conditions typical of our laboratories. We find that, unless the change in free energy associated with the probing mutation is quite large, the precision of φ-values is relatively poor when determined using rates extrapolated to the absence of denaturant. In contrast, when we employ rates estimated at nonzero denaturant concentrations or assume that the slopes of the chevron arms (mf and mu) are invariant upon mutation, the precision of our estimates of φ is significantly improved. Nevertheless, the reproducibility we thus obtain still compares poorly with the confidence intervals typically reported in the literature. This discrepancy appears to arise due to differences in how precision is calculated, the dependence of precision on the number of data points employed in defining a chevron, and interlaboratory sources of variability that may have been largely ignored in the prior literature. PMID:16501226
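The φ-value itself is computed from wild-type and mutant rate constants; RT cancels, so φ reduces to a ratio of logarithmic rate changes. The sketch below is the standard textbook definition, not the authors' analysis code, and it makes the abstract's sensitivity point easy to see: when the denominator (the equilibrium free-energy change) is small, tiny errors in the rates move φ a lot.

```python
import math

def phi_value(kf_wt, kf_mut, ku_wt, ku_mut):
    """phi = ddG(TS-U) / ddG(N-U) from folding (kf) and unfolding (ku)
    rate constants of wild type (wt) and mutant (mut).

    ddG(TS-U) = -RT ln(kf_mut / kf_wt)
    ddG(N-U)  = -RT ln[(kf_mut / ku_mut) / (kf_wt / ku_wt)]
    RT cancels in the ratio.
    """
    num = math.log(kf_mut / kf_wt)
    den = math.log(kf_mut / kf_wt) - math.log(ku_mut / ku_wt)
    return num / den

# Mutation slows folding 10x and speeds unfolding 10x -> phi = 0.5
print(phi_value(100.0, 10.0, 1.0, 10.0))
```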

  9. Physical characteristics and resistance parameters of typical urban cyclists.

    PubMed

    Tengattini, Simone; Bigazzi, Alexander York

    2018-03-30

    This study investigates the rolling and drag resistance parameters and bicycle and cargo masses of typical urban cyclists. These factors are important for modelling of cyclist speed, power and energy expenditure, with applications including exercise performance, health and safety assessments and transportation network analysis. However, representative values for diverse urban travellers have not been established. Resistance parameters were measured utilizing a field coast-down test for 557 intercepted cyclists in Vancouver, Canada. Masses were also measured, along with other bicycle attributes such as tire pressure and size. The average (standard deviation) of coefficient of rolling resistance, effective frontal area, bicycle plus cargo mass, and bicycle-only mass were 0.0077 (0.0036), 0.559 (0.170) m², 18.3 (4.1) kg, and 13.7 (3.3) kg, respectively. The range of measured values is wider and higher than suggested in existing literature, which focusses on sport cyclists. Significant correlations are identified between resistance parameters and rider and bicycle attributes, indicating higher resistance parameters for less sport-oriented cyclists. The findings of this study are important for appropriately characterising the full range of urban cyclists, including commuters and casual riders.
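A coast-down test of this kind can be reduced to a linear regression: on a flat road with no wind, deceleration is linear in v², with intercept g·C_rr and slope ρ·C_dA/(2m). The sketch below assumes that simplified model and illustrative variable names; it is not the authors' field procedure.

```python
def fit_resistance(v, a, mass, rho=1.225, g=9.81):
    """Estimate C_rr and CdA from coast-down data.

    Assumed model (flat road, no wind):
        a = g*C_rr + (rho*CdA / (2*mass)) * v^2
    i.e. deceleration is a straight line in v^2.

    v    : speeds during the coast-down [m/s]
    a    : corresponding decelerations [m/s^2]
    mass : rider + bicycle + cargo mass [kg]
    Returns (C_rr, CdA).
    """
    x = [vi ** 2 for vi in v]
    n = len(x)
    mx = sum(x) / n
    ma = sum(a) / n
    slope = (sum((xi - mx) * (ai - ma) for xi, ai in zip(x, a))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = ma - slope * mx
    c_rr = intercept / g          # rolling resistance coefficient
    cda = slope * 2 * mass / rho  # effective frontal area [m^2]
    return c_rr, cda
```

With the study's average values (C_rr ≈ 0.0077, CdA ≈ 0.559 m²) the regression recovers both parameters exactly from noise-free synthetic data, which is a useful sanity check before fitting real coast-down records.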

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, L.R.

    Much research has been devoted to measurement of total blood volume (TBV) and cardiac output (CO) in humans, but not enough effort has been devoted to the collection and reduction of results for the purpose of deriving typical or "reference" values. Identification of normal values for TBV and CO is needed not only for clinical evaluations but also for the development of biokinetic models for ultra-short-lived radionuclides used in nuclear medicine (Leggett and Williams 1989). The purpose of this report is to offer reference values for TBV and CO, along with estimates of the associated uncertainties that arise from intra- and inter-subject variation, errors in measurement techniques, and other sources. Reference values are derived for basal supine CO and TBV in reference adult humans, and differences associated with age, sex, body size, body position, exercise, and other circumstances are discussed.

  11. Syllabic patterns in typical and atypical phonological development: ultrasonographic analysis.

    PubMed

    Vassoler, Aline Mara de Oliveira; Berti, Larissa Cristina

    2018-01-01

    Objective: The present study aims to compare the production of syllabic patterns of the CCV and CV types performed by Brazilian children with typical and atypical phonological development through ultrasonography of the tongue. Methods: Ten children (five with typical and five with atypical phonological development) recorded nine pairs of words from the syllables CCV and CV. The images and audio were captured simultaneously by the Articulate Assistant Advanced software. The data were submitted to perceptive analysis and ultrasonographic articulatory analysis (the area between the tip and the blade of the tongue). The area measurements were submitted to one-way repeated measures ANOVA. Results: ANOVA demonstrated a significant effect of clinical condition (typical vs. atypical) on the area measurements (F(1,8) = 172.48, p < 0.001). In both syllabic patterns (CCV and CV), the atypical children showed greater values of the area between the tip and the blade of the tongue. Regarding the syllabic patterns analyzed, the statistical test showed no significant effect (F(1,8) = 0.19, p = 0.658). Conclusion: The use of a greater area of the tongue by children with atypical phonological development suggests the non-differentiation of the tip and the anterior body gestures of the tongue in the production of CV and CCV.

  12. Individual Fit Testing of Hearing Protection Devices Based on Microphone in Real Ear.

    PubMed

    Biabani, Azam; Aliabadi, Mohsen; Golmohammadi, Rostam; Farhadian, Maryam

    2017-12-01

    Labeled noise reduction (NR) data presented by manufacturers are considered one of the main challenges for occupational experts in employing hearing protection devices (HPDs). This study aimed to determine the actual NR data of typical HPDs using the objective fit testing method with a microphone in real ear (MIRE). Five commercially available earmuff protectors were investigated in 30 workers exposed to a reference noise source according to the standard method, ISO 11904-1. The personal attenuation rating (PAR) of the earmuffs was measured based on the MIRE method using a noise dosimeter (SVANTEK, model SV 102). The results showed that the mean PARs of the earmuffs are from 49% to 86% of the nominal NR rating. The PAR values of the earmuffs when typical eyewear was worn differed statistically (p < 0.05): typical safety eyewear reduced the mean PAR value by approximately 2.5 dB. The results also showed that measurements based on the MIRE method resulted in low variability. The variability in NR values between individuals, within individuals, and within earmuffs was not statistically significant (p > 0.05). This study could provide local individual fit data. Ergonomic aspects of the earmuffs and different levels of user experience and awareness can be considered the main factors affecting individual fit compared with the laboratory conditions used for acquiring the labeled NR data. Based on the obtained fit testing results, the field application of MIRE can be employed for complementary studies in real workstations while workers perform their regular work duties.

  13. Measuring Values in Environmental Research: A Test of an Environmental Portrait Value Questionnaire

    PubMed Central

    Bouman, Thijs; Steg, Linda; Kiers, Henk A. L.

    2018-01-01

    Four human values are considered to underlie individuals’ environmental beliefs and behaviors: biospheric (i.e., concern for the environment), altruistic (i.e., concern for others), egoistic (i.e., concern for personal resources) and hedonic values (i.e., concern for pleasure and comfort). These values are typically measured with an adapted and shortened version of the Schwartz Value Survey (SVS), to which we refer as the Environmental-SVS (E-SVS). Despite being well-validated, recent research has indicated some concerns about the SVS methodology (e.g., comprehensibility, self-presentation biases) and suggested an alternative method of measuring human values: the Portrait Value Questionnaire (PVQ). However, the PVQ has not yet been adapted and applied to measure the values most relevant to understanding environmental beliefs and behaviors. Therefore, we tested the Environmental-PVQ (E-PVQ), a PVQ variant of the E-SVS, and compared it with the E-SVS in two studies. Our findings provide strong support for the validity and reliability of both the E-SVS and E-PVQ. In addition, we find that respondents slightly preferred the E-PVQ over the E-SVS (Study 1). In general, both scales correlate similarly with environmental self-identity (Study 1), energy behaviors (Studies 1 and 2), pro-environmental personal norms, climate change beliefs and policy support (Study 2). Accordingly, both methodologies show highly similar results and seem well-suited for measuring human values underlying environmental behaviors and beliefs. PMID:29743874

  14. Use of Low-Value Pediatric Services Among the Commercially Insured

    PubMed Central

    Schwartz, Aaron L.; Volerman, Anna; Conti, Rena M.; Huang, Elbert S.

    2016-01-01

    BACKGROUND: Claims-based measures of “low-value” pediatric services could facilitate the implementation of interventions to reduce the provision of potentially harmful services to children. However, few such measures have been developed. METHODS: We developed claims-based measures of 20 services that typically do not improve child health according to evidence-based guidelines (eg, cough and cold medicines). Using these measures and claims from 4.4 million commercially insured US children in the 2014 Truven MarketScan Commercial Claims and Encounters database, we calculated the proportion of children who received at least 1 low-value pediatric service during the year, as well as total and out-of-pocket spending on these services. We report estimates based on "narrow" measures designed to only capture instances of service use that were low-value. To assess the sensitivity of results to measure specification, we also reported estimates based on "broad measures" designed to capture most instances of service use that were low-value. RESULTS: According to the narrow measures, 9.6% of children in our sample received at least 1 of the 20 low-value services during the year, resulting in $27.0 million in spending, of which $9.2 million was paid out-of-pocket (33.9%). According to the broad measures, 14.0% of children in our sample received at least 1 of the 20 low-value services during the year. CONCLUSIONS: According to a novel set of claims-based measures, at least 1 in 10 children in our sample received low-value pediatric services during 2014. Estimates of low-value pediatric service use may vary substantially with measure specification. PMID:27940698

  15. Reference-free error estimation for multiple measurement methods.

    PubMed

    Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga

    2018-01-01

    We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in true values of measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte-Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in a good agreement with the corresponding least squares regression estimates against a reference.

  16. The noise environment of a school classroom due to the operation of utility helicopters. [acoustic measurements of helicopter noise during flight over building

    NASA Technical Reports Server (NTRS)

    Hilton, D. A.; Pegg, R. J.

    1974-01-01

    Noise measurements under controlled conditions have been made inside and outside of a school building during flyover operations of four different helicopters. The helicopters were operated at a condition considered typical for a police patrol mission. Flyovers were made at an altitude of 500 ft and an airspeed of 45 miles per hour. During these operations acoustic measurements were made inside and outside of the school building with the windows closed and then open. The outside noise measurements during helicopter flyovers indicate that the outside dB(A) levels were approximately the same for all test helicopters. For the windows-closed case, significant reductions in the inside measured dB(A) values were noted for all overflights. These reductions were approximately 20 dB(A); similar reductions were noted in other subjective measuring units. The measured internal dB(A) levels with the windows open exceeded published classroom noise criteria values; however, for the windows-closed case they are in general agreement with the criteria values.

  17. [Characteristics of foliar delta13C values of common shrub species in various microhabitats with different karst rocky desertification degrees].

    PubMed

    Du, Xue-Lian; Wang, Shi-Jie; Rong, Li

    2011-12-01

    By measuring the foliar delta13C values of 5 common shrub species (Rhamnus davurica, Pyracantha fortuneana, Rubus biflorus, Zanthoxylum planispinum, and Viburnum utile) growing in various microhabitats in Wangjiazhai catchment, a typical karst desertification area in Guizhou Province, this paper studied the spatial heterogeneity of plant water use at the niche scale and the response of this heterogeneity to different degrees of karst rocky desertification. The foliar delta13C values of the shrub species in the microhabitats followed the order of stony surface > stony gully > stony crevice > soil surface, and those of the majority of the species were more negative in the soil surface microhabitat than in the others. The foliar delta13C values decreased in the sequence of V. utile > R. biflorus > Z. planispinum > P. fortuneana > R. davurica. The mean foliar delta13C value of the shrubs and that of typical species in various microhabitats both increased with increasing degree of karst rocky desertification, and differed significantly among microhabitats. It was suggested that with the increasing degree of karst rocky desertification, the structure and functions of karst habitats were impaired, microhabitats differentiated gradually, and drought degree increased.

  18. Quantitative Magnetic Resonance Diffusion-Weighted Imaging Evaluation of the Supratentorial Brain Regions in Patients Diagnosed with Brainstem Variant of Posterior Reversible Encephalopathy Syndrome: A Preliminary Study.

    PubMed

    Chen, Tai-Yuan; Wu, Te-Chang; Ko, Ching-Chung; Feng, I-Jung; Tsui, Yu-Kun; Lin, Chien-Jen; Chen, Jeon-Hor; Lin, Ching-Po

    2017-07-01

    Posterior reversible encephalopathy syndrome (PRES) is a clinicoradiologic entity with several causes, characterized by rapid onset of symptoms and typical neuroimaging features, which usually resolve if promptly recognized and treated. Brainstem variant of PRES presents with vasogenic edema in brainstem regions on magnetic resonance (MR) images and there is sparing of the supratentorial regions. Because PRES is usually caused by a hypertensive crisis, which would likely have a systemic effect and global manifestations on the brain tissue, we thus proposed that some microscopic abnormalities of the supratentorial regions could be detected with diffusion-weighted imaging (DWI) using apparent diffusion coefficient (ADC) analysis in brainstem variant of PRES and hypothesized that "normal-looking" supratentorial regions will increase water diffusion. We retrospectively identified patients with PRES who underwent brain magnetic resonance imaging studies. We identified 11 brainstem variants of PRES patients, who formed the study cohort, and 11 typical PRES patients and 20 normal control subjects as the comparison cohorts for this study. Nineteen regions of interest were drawn and systematically placed. The mean ADC values were measured and compared among these 3 groups. ADC values of the typical PRES group were consistently elevated compared with those in normal control subjects. ADC values of the brainstem variant group were consistently elevated compared with those in normal control subjects. ADC values of the typical PRES group and brainstem variant group did not differ significantly, except for the pons area. Quantitative MR DWI may aid in the evaluation of supratentorial microscopic abnormalities in brainstem variant of PRES patients.

  19. Classification of typical and atypical antipsychotic drugs on the basis of dopamine D-1, D-2 and serotonin2 pKi values.

    PubMed

    Meltzer, H Y; Matsubara, S; Lee, J C

    1989-10-01

The pKi values of 13 reference typical and 7 reference atypical antipsychotic drugs (APDs) for rat striatal dopamine D-1 and D-2 receptor binding sites and cortical serotonin (5-HT2) receptor binding sites were determined. The atypical antipsychotics had significantly lower pKi values for the D-2 but not the 5-HT2 binding sites. There was a trend toward a lower pKi value for the D-1 binding site for the atypical APDs. The 5-HT2 and D-1 pKi values were correlated for the typical APDs, whereas the 5-HT2 and D-2 pKi values were correlated for the atypical APDs. A stepwise discriminant function analysis to determine the independent contribution of each pKi value for a given binding site to the classification as a typical or atypical APD entered the D-2 pKi value first, followed by the 5-HT2 pKi value. The D-1 pKi value was not entered. A discriminant function analysis correctly classified 19 of 20 of these compounds plus 14 of 17 additional test compounds as typical or atypical APDs, for an overall correct classification rate of 89.2%. The major contributors to the discriminant function were the D-2 and 5-HT2 pKi values. A cluster analysis based only on the 5-HT2/D-2 ratio grouped 15 of 17 atypical + one typical APD in one cluster and 19 of 20 typical + two atypical APDs in a second cluster, for an overall correct classification rate of 91.9%. When the stepwise discriminant function was repeated for all 37 compounds, only the D-2 and 5-HT2 pKi values were entered into the discriminant function. (ABSTRACT TRUNCATED AT 250 WORDS)

  20. A correlation study of eye lens dose and personal dose equivalent for interventional cardiologists.

    PubMed

    Farah, J; Struelens, L; Dabin, J; Koukorava, C; Donadille, L; Jacob, S; Schnelzer, M; Auvinen, A; Vanhavere, F; Clairand, I

    2013-12-01

This paper presents the dosimetry part of the European ELDO project, funded by the DoReMi Network of Excellence, in which a method was developed to estimate cumulative eye lens doses for past practices based on personal dose equivalent values, H(p)(10), measured above the lead apron at several positions at the collar, chest and waist levels. Measurement campaigns on anthropomorphic phantoms were carried out in typical interventional settings considering different tube projections and configurations, beam energies and filtration, operator positions and access routes, using both mono-tube and biplane X-ray systems. Measurements showed that eye lens dose correlates best with H(p)(10) measured on the left side of the phantom at collar level, although this correlation involves considerable spread (41%). Nonetheless, for retrospective dose assessment, H(p)(10) records are often the only option for eye dose estimates, and the typically available left-chest whole-body dose measurement remains useful.

  1. Extending the range of turbidity measurement using polarimetry

    DOEpatents

    Baba, Justin S.

    2017-11-21

Turbidity measurements are obtained by directing a polarized optical beam to a scattering sample. Scattered portions of the beam are measured in orthogonal polarization states to determine a scattering minimum and a scattering maximum. These values are used to determine a degree of polarization of the scattered portions of the beam, and concentrations of scattering materials or turbidity can be estimated using the degree of polarization. Typically, linear polarizations are used, and scattering is measured along an axis that is orthogonal to the direction of propagation of the polarized optical beam.
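The mapping from the orthogonal-state scattering extrema to a degree of polarization can be sketched as follows. This is a minimal illustration of the standard definition; the names are hypothetical and the patent's exact estimator and calibration may differ.

```python
def degree_of_polarization(i_max, i_min):
    """Degree of polarization from the scattering maximum and minimum
    measured in orthogonal polarization states."""
    return (i_max - i_min) / (i_max + i_min)

# A weakly scattering (clear) sample depolarizes little of the beam,
# so the degree of polarization stays near 1; increasing turbidity
# depolarizes the scattered light and drives the value toward 0.
dop_clear = degree_of_polarization(0.98, 0.02)   # ~0.96
dop_turbid = degree_of_polarization(0.60, 0.40)  # ~0.20
```

A monotone calibration curve (degree of polarization versus known scatterer concentration) would then convert this value into a turbidity estimate.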

  2. Environmental degradation and remediation: is economics part of the problem?

    PubMed

    Dore, Mohammed H I; Burton, Ian

    2003-01-01

It is argued that standard environmental economics and 'ecological economics' have the same fundamentals of valuation in terms of money, based on a demand curve derived from utility maximization. But this approach leads to three different measures of value. An invariant measure of value exists only if the consumer has 'homothetic preferences'. In order to obtain a numerical estimate of value, specific functional forms are necessary, but typically these estimates do not converge, because the underlying economic model is not structurally stable. According to neoclassical economics, any environmental remediation can be justified only in terms of increases in consumer satisfaction, balancing marginal gains against marginal costs. It is not surprising that the optimal policy obtained from this approach suggests only small reductions in greenhouse gases. We show that a unidimensional metric of consumer's utility measured in dollar terms can only trivialize the problem of global climate change.

  3. Electron Affinity of Phenyl-C61-Butyric Acid Methyl Ester (PCBM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Bryon W.; Whitaker, James B.; Wang, Xue B.

    2013-07-25

The gas-phase electron affinity (EA) of phenyl-C61-butyric acid methyl ester (PCBM), one of the best-performing electron acceptors in organic photovoltaic devices, is measured by low-temperature photoelectron spectroscopy for the first time. The obtained value of 2.63(1) eV is only ca. 0.05 eV lower than that of C60 (2.68(1) eV), compared to a 0.09 V difference in their E1/2 values measured in this work by cyclic voltammetry. Literature E(LUMO) values for PCBM that are typically estimated from cyclic voltammetry, and commonly used as a quantitative measure of acceptor properties, are dispersed over a wide range between -4.3 and -3.62 eV; the reasons for such a huge discrepancy are analyzed here, and a protocol for reliable and consistent estimation of relative fullerene-based acceptor strength in solution is proposed.
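The spread in literature E(LUMO) values largely reflects the conversion from electrochemical potentials, which can be sketched as below. This is a commonly used empirical relation, not the protocol proposed in the record; the numbers are hypothetical, and the offset assigned to the ferrocene couple (values near 4.8 or 5.1 eV below vacuum both appear in the literature) is itself a major source of the discrepancy.

```python
def e_lumo_from_e_half(e_half_vs_fc, fc_offset_ev=4.8):
    """Estimate E(LUMO) in eV from a first-reduction half-wave potential
    (V, referenced to the Fc+/Fc couple) using the empirical relation
    E(LUMO) = -(E1/2 + offset). The offset is the absolute energy
    assigned to Fc+/Fc below vacuum."""
    return -(e_half_vs_fc + fc_offset_ev)

# The same measured potential yields E(LUMO) values 0.3 eV apart
# depending only on the chosen ferrocene offset:
lumo_a = e_lumo_from_e_half(-1.1, fc_offset_ev=4.8)  # about -3.7 eV
lumo_b = e_lumo_from_e_half(-1.1, fc_offset_ev=5.1)  # about -4.0 eV
```

Differences of this size, compounded with differing solvents and reference electrodes, plausibly account for much of the -4.3 to -3.62 eV range noted above.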

  4. Sensitivity of solar-cell performance to atmospheric variables. 1: Single cell

    NASA Technical Reports Server (NTRS)

    Klucher, T. M.

    1976-01-01

The short-circuit current of a typical silicon solar cell under direct solar radiation was measured over a range of turbidity, water vapor content, and air mass to determine the relation of the solar cell calibration value (current-to-intensity ratio) to those atmospheric variables. A previously developed regression equation was modified to describe the relation between calibration value, turbidity, water vapor content, and air mass. Based on the constants obtained by a least-squares fit of the data to the equation, it was found that increasing turbidity lowers the calibration value, while increasing water vapor raises it. Cell calibration values exhibited a change of about 6% over the range of atmospheric conditions experienced.

  5. Relative Fundamental Frequency Distinguishes between Phonotraumatic and Non-Phonotraumatic Vocal Hyperfunction

    ERIC Educational Resources Information Center

    Murray, Elizabeth S. Heller; Lien, Yu-An S.; Van Stan, Jarrad H.; Mehta, Daryush D.; Hillman, Robert E.; Noordzij, J. Pieter; Stepp, Cara E.

    2017-01-01

    Purpose: The purpose of this article is to examine the ability of an acoustic measure, relative fundamental frequency (RFF), to distinguish between two subtypes of vocal hyperfunction (VH): phonotraumatic (PVH) and non-phonotraumatic (NPVH). Method: RFF values were compared among control individuals with typical voices (N = 49), individuals with…

  6. 77 FR 3559 - Energy Conservation Program for Consumer Products: Test Procedures for Refrigerators...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

    ..., which is typical of an approach enabled by more sophisticated electronic controls. Id. The interim final... and long- time automatic defrost or variable defrost control and adjust the default values of maximum... accurate measurement of the energy use of products with variable defrost control. DATES: The amendments are...

  7. Equipment Health Monitoring with Non-Parametric Statistics for Online Early Detection and Scoring of Degradation

    DTIC Science & Technology

    2014-10-02

defined by Eqs. (3)–(4) (Greenwell & Finch, 2004) (Kar & Mohanty, 2006). The p value provides the metric for novelty scoring: p = Q_KS(z) = 2 Σ_{j=1}^∞ (−1)^{j−1} e^{−2j²z²} … provides early detection of degradation and ability to score its significance in order to inform maintenance planning and consequently reduce disruption … actionable information; signals are typically processed from raw measurements into a reduced-dimension novelty summary value that may be more easily

  8. Dynamics of water confined in lyotropic liquid crystals: Molecular dynamics simulations of the dynamic structure factor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mantha, Sriteja; Yethiraj, Arun

    2016-02-24

The properties of water under confinement are of practical and fundamental interest. In this work we study the properties of water in the self-assembled lyotropic phases of gemini surfactants, with a focus on testing the standard analysis of quasi-elastic neutron scattering (QENS) experiments. In QENS experiments the dynamic structure factor is measured and fit to models to extract the translational diffusion constant, D T, and rotational relaxation time, τ R. We test this procedure by using simulation results for the dynamic structure factor, extracting the dynamic parameters from the fit as is typically done in experiments, and comparing the values to those directly measured in the simulations. We find that the decoupling approximation, where the intermediate scattering function is assumed to be a product of translational and rotational contributions, is quite accurate. The jump-diffusion and isotropic rotation models, however, are not accurate when the degree of confinement is high. In particular, the exponential approximations for the intermediate scattering function fail for highly confined water, and the values of D T and τ R can differ from the measured values by as much as a factor of two. Other models have more fit parameters, however, and with the range of energies and wave-vectors accessible to QENS, the typical analysis appears to be the best choice. In the most confined lamellar phase, the dynamics are sufficiently slow that QENS does not access a large enough time scale, and neutron spin echo measurements would be a valuable complement to QENS.

  9. Perceived cultural importance and actual self-importance of values in cultural identification.

    PubMed

    Wan, Ching; Chiu, Chi-yue; Tam, Kim-pong; Lee, Sau-lai; Lau, Ivy Yee-man; Peng, Siqing

    2007-02-01

Cross-cultural psychologists assume that core cultural values define to a large extent what a culture is. Typically, core values are identified through an actual self-importance approach, in which core values are those that members of the culture as a group strongly endorse. In this article, the authors propose a perceived cultural importance approach to identifying core values, in which core values are values that members of the culture as a group generally believe to be important in the culture. In 5 studies, the authors examine the utility of the perceived cultural importance approach. Results consistently showed that, compared with values of high actual self-importance, values of high perceived cultural importance play a more important role in cultural identification. These findings have important implications for conceptualizing and measuring cultures. (© 2007 APA, all rights reserved).

  10. Development of a computer program to generate typical measurement values for various systems on a space station

    NASA Technical Reports Server (NTRS)

    Deacetis, Louis A.

    1987-01-01

    The elements of a simulation program written in Ada were developed. The program will eventually serve as a data generator of typical readings from various space station equipment involved with Communications and Tracking, and will simulate various scenarios that may arise due to equipment malfunction or failure, power failure, etc. In addition, an evaluation of the Ada language was made from the viewpoint of a FORTRAN programmer learning Ada for the first time. Various strengths and difficulties associated with the learning and use of Ada are considered.

  11. [Determination of radioactivity by smartphones].

    PubMed

    Hartmann, H; Freudenberg, R; Andreeff, M; Kotzerke, J

    2013-01-01

Interest in the detection of radioactive materials increased strongly after the accident at the Fukushima nuclear power plant, leading to a shortage of suitable measuring instruments. Smartphones equipped with a commercially available software tool can be used for dose rate measurements following a calibration specific to the camera module. We examined whether such measurements provide reliable data for typical activities and radionuclides in nuclear medicine. For the nuclides 99mTc (10 - 1000 MBq), 131I (3.7 - 1800 MBq, therapy capsule) and 68Ga (50 - 600 MBq), radioactivity with defined geometry was measured at different distances. The smartphones Milestone Droid 1 (Motorola) and HTC Desire (HTC Corporation) were compared with the standard instruments AD6 (automess) and DoseGUARD (AEA Technology). Measurements with the smartphones and the other devices show good agreement: linear signal increase with rising activity and dose rate. A long-duration measurement (131I, 729 MBq, 0.5 m, 60 min) showed considerably higher variation (by 20%) in the smartphone data compared with the AD6. For low dose rates (< 1 µGy/h), the sensitivity decreases, so that measurements of e.g. natural radiation exposure do not yield valid results. The calibration of the camera responsivity has a strong influence on the results, owing to the small detector surface of the camera semiconductor. With commercial software, the camera module of a smartphone can be used for the measurement of radioactivity. Dose rates resulting from typical nuclear medicine procedures can be measured reliably (e.g., dismissal dose after radioiodine therapy). The signal shows a high correlation with the values from conventional dose measurement devices.

  12. The influence of environmental factors on the deposition velocity of thoron progeny.

    PubMed

    Li, H; Zhang, L; Guo, Q

    2012-11-01

Passive measuring devices are widely employed in thoron progeny surveys, and the deposition velocity of thoron progeny is the most critical parameter, varying across environments. In this study, to analyse the influence of environmental factors on thoron progeny deposition velocity, an improved model was proposed on the basis of Lai's aerosol deposition model and Jacobi's model, and a series of measurements were carried out to verify the model. According to the calculations, deposition velocity decreases with increasing aerosol diameter and aerosol concentration, while it increases with increasing ventilation rate. For typical indoor environments, a typical value of 1.26 × 10⁻⁵ m s⁻¹ is recommended, with a range between 7.6 × 10⁻⁷ and 3.2 × 10⁻⁴ m s⁻¹.

  13. Preamplifiers for non-contact capacitive biopotential measurements.

    PubMed

    Peng, GuoChen; Ignjatovic, Zeljko; Bocko, Mark F

    2013-01-01

    Non-contact biopotential sensing is an attractive measurement strategy for a number of health monitoring applications, primarily the ECG and the EEG. In all such applications a key technical challenge is the design of a low-noise trans-impedance preamplifier for the typically low-capacitance, high source impedance sensing electrodes. In this paper, we compare voltage and charge amplifier designs in terms of their common mode rejection ratio, noise performance, and frequency response. Both amplifier types employ the same operational-transconductance amplifier (OTA), which was fabricated in a 0.35 um CMOS process. The results show that a charge amplifier configuration has advantages for small electrode-to-subject coupling capacitance values (less than 10 pF--typical of noncontact electrodes) and that the voltage amplifier configuration has advantages for electrode capacitances above 10 pF.

  14. Dancing the aerobics ''hearing loss'' choreography

    NASA Astrophysics Data System (ADS)

    Pinto, Beatriz M.; Carvalho, Antonio P. O.; Gallagher, Sergio

    2002-11-01

    This paper presents an overview of gymnasiums' acoustic problems when used for aerobics exercises classes (and similar) with loud noise levels of amplified music. This type of gymnasium is usually a highly reverberant space, which is a consequence of a large volume surrounded by hard surfaces. A sample of five schools in Portugal was chosen for this survey. Noise levels in each room were measured using a precision sound level meter, and analyzed to calculate the standardized daily personal noise exposure levels (LEP,d). LEP,d values from 79 to 91 dB(A) were found to be typical values in this type of room, inducing a health risk for its occupants. The reverberation time (RT) values were also measured and compared with some European legal requirements (Portugal, France, and Belgium) for nearly similar situations. RT values (1 kHz) from 0.9 s to 2.8 s were found. These reverberation time values clearly differentiate between good and acoustically inadequate rooms. Some noise level and RT limits for this type of environment are given and suggestions for the improvement of the acoustical environment are shown. Significant reductions in reverberation time values and noise levels can be obtained by simple measures.

  15. Reliability and Validity of a New Test of Agility and Skill for Female Amateur Soccer Players

    PubMed Central

    Kutlu, Mehmet; Yapici, Hakan; Yilmaz, Abdullah

    2017-01-01

Abstract The aim of this study was to evaluate the Agility and Skill Test, which had been recently developed to assess agility and skill in female athletes. Following a 10 min warm-up, two trials to test the reliability and validity of the test were conducted one week apart. Measurements were collected to compare soccer players’ physical performance in a 20 m sprint, a T-Drill test, the Illinois Agility Run Test, change-of-direction and acceleration, as well as agility and skill. All tests were completed in the same order. Thirty-four amateur female soccer players were recruited (age = 20.8 ± 1.9 years; body height = 166 ± 6.9 cm; body mass = 55.5 ± 5.8 kg). To determine the reliability and usefulness of these tests, paired sample t-tests, intra-class correlation coefficients, typical error, coefficient of variation, and differences between the typical error and smallest worthwhile change statistics were computed. Test results showed no significant differences between the two sessions (p > 0.01). There were high intra-class correlations between test and retest values (r = 0.94–0.99) for all tests. Typical error values were below the smallest worthwhile change, indicating ‘good’ usefulness for these tests. A near-perfect Pearson test-retest correlation for the Agility and Skill Test (r = 0.98) was found, and there were moderate-to-large correlations between the Agility and Skill Test and the other measures (r = 0.37 to r = 0.56). The results of this study suggest that the Agility and Skill Test is a reliable and valid test for female soccer players and has significant value for assessing the integrative agility and skill capability of soccer players. PMID:28469760
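The typical error and smallest worthwhile change statistics used above can be sketched as follows. This is a minimal illustration with invented test-retest data, following the common conventions (typical error as the SD of test-retest differences divided by √2; smallest worthwhile change as 0.2 × the between-subject SD); the study's exact computations may differ.

```python
import statistics

def typical_error(test1, test2):
    """Within-subject typical error: SD of test-retest
    differences divided by sqrt(2)."""
    diffs = [b - a for a, b in zip(test1, test2)]
    return statistics.stdev(diffs) / 2 ** 0.5

def smallest_worthwhile_change(scores, factor=0.2):
    """Common convention: 0.2 x the between-subject SD."""
    return factor * statistics.stdev(scores)

# Hypothetical test and retest times (s) for five athletes:
t1 = [10.2, 11.0, 9.8, 10.5, 11.3]
t2 = [10.1, 11.2, 9.7, 10.6, 11.2]
te = typical_error(t1, t2)
swc = smallest_worthwhile_change(t1)
# The test's usefulness is rated 'good' when TE < SWC,
# i.e. measurement noise is smaller than a meaningful change.
good_usefulness = te < swc
```

With these invented numbers the typical error (0.1 s) falls below the smallest worthwhile change (about 0.12 s), the same criterion the study used to rate its tests as 'good'.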

  16. [Evaluation of the quality of three-dimensional data acquired by using two kinds of structure light intra-oral scanner to scan the crown preparation model].

    PubMed

    Zhang, X Y; Li, H; Zhao, Y J; Wang, Y; Sun, Y C

    2016-07-01

To quantitatively evaluate the quality and accuracy of three-dimensional (3D) data acquired by using two kinds of structured light intra-oral scanner to scan typical tooth crown preparations. Eight typical tooth crown preparation models were each scanned 3 times with two kinds of structured light intra-oral scanner (A, B), forming the test groups. A high-precision model scanner was used to scan the models as the true value group. The data above the cervical margin were extracted. Quality indexes including non-manifold edges, self-intersections, highly-creased edges, spikes, small components, small tunnels, small holes and the number of triangles were measured with the Mesh Doctor tool in Geomagic Studio 2012. The scanned data of the test groups were aligned to the data of the true value group. 3D deviations of the test groups relative to the true value group were measured for each scanned point, each preparation and each group. The independent-samples Mann-Whitney U test was applied to the 3D deviations for each scanned point of groups A and B. Correlation analysis was applied to the index values and 3D deviation values. The total number of spikes in group A was 96, versus 5 in group B and 0 in the true value group. Trueness: group A 8.0 (8.3) μm, group B 9.5 (11.5) μm (P>0.05). The correlation of the number of spikes with data precision in group A was r=0.46. In this study, the quality of scanner B was better than that of scanner A; the difference in accuracy was not statistically significant. There is a correlation between the quality and the data precision of the data scanned with scanner A.

  17. Adjusting Estimates of the Expected Value of Information for Implementation: Theoretical Framework and Practical Application.

    PubMed

    Andronis, Lazaros; Barton, Pelham M

    2016-04-01

    Value of information (VoI) calculations give the expected benefits of decision making under perfect information (EVPI) or sample information (EVSI), typically on the premise that any treatment recommendations made in light of this information will be implemented instantly and fully. This assumption is unlikely to hold in health care; evidence shows that obtaining further information typically leads to "improved" rather than "perfect" implementation. To present a method of calculating the expected value of further research that accounts for the reality of improved implementation. This work extends an existing conceptual framework by introducing additional states of the world regarding information (sample information, in addition to current and perfect information) and implementation (improved implementation, in addition to current and optimal implementation). The extension allows calculating the "implementation-adjusted" EVSI (IA-EVSI), a measure that accounts for different degrees of implementation. Calculations of implementation-adjusted estimates are illustrated under different scenarios through a stylized case study in non-small cell lung cancer. In the particular case study, the population values for EVSI and IA-EVSI were £ 25 million and £ 8 million, respectively; thus, a decision assuming perfect implementation would have overestimated the expected value of research by about £ 17 million. IA-EVSI was driven by the assumed time horizon and, importantly, the specified rate of change in implementation: the higher the rate, the greater the IA-EVSI and the lower the difference between IA-EVSI and EVSI. Traditionally calculated measures of population VoI rely on unrealistic assumptions about implementation. This article provides a simple framework that accounts for improved, rather than perfect, implementation and offers more realistic estimates of the expected value of research. © The Author(s) 2015.
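The gap between EVSI and IA-EVSI can be illustrated with a stylized discounted-population calculation. This is a hedged sketch, not the article's model: the function, uptake trajectory, and all numbers are hypothetical, and the article's framework involves additional states of the world.

```python
def population_value(annual_value, years, discount=0.035, uptake=None):
    """Discounted population value of research information.
    uptake[t] is the fraction of the eligible population whose care
    actually reflects the new information in year t; uptake = 1 in
    every year recovers the usual 'perfect implementation' figure."""
    if uptake is None:
        uptake = [1.0] * years
    return sum(annual_value * uptake[t] / (1 + discount) ** t
               for t in range(years))

years = 10
evsi = population_value(3.0, years)  # perfect, instant implementation
# Implementation instead improving gradually from 20% toward full uptake:
ia_evsi = population_value(3.0, years,
                           uptake=[0.2 + 0.08 * t for t in range(years)])
```

Because uptake never reaches 100% within the horizon, the implementation-adjusted value is strictly smaller, mirroring the £25m vs £8m contrast in the case study (with entirely different inputs).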

  18. A High-Resolution Measurement of Ball IR Black Paint's Low-Temperature Emissivity

    NASA Technical Reports Server (NTRS)

    Tuttle, Jim; Canavan, Ed; DiPirro, Mike; Li, Xiaoyi; Franck, Randy; Green, Dan

    2011-01-01

High-emissivity paints are commonly used on thermal control system components. The total hemispheric emissivity values of such paints are typically high (nearly 1) at temperatures above about 100 Kelvin, but they drop off steeply at lower temperatures. A precise knowledge of this temperature dependence is critical to designing passively cooled components with low operating temperatures. Notable examples are the coatings on thermal radiators used to cool space-flight instruments to temperatures below 40 Kelvin. Past measurements of low-temperature paint emissivity have been challenging, often requiring large thermal chambers and typically producing data with high uncertainties below about 100 Kelvin. We describe a relatively inexpensive method of performing high-resolution emissivity measurements in a small cryostat. We present the results of such a measurement on Ball InfraRed Black™ (BIRB™), a proprietary surface coating produced by Ball Aerospace and Technologies Corp (BATC), which is used in spaceflight applications. We also describe a thermal model used in the error analysis.

  19. Paramagnetic ionic liquids for measurements of density using magnetic levitation.

    PubMed

    Bwambok, David K; Thuo, Martin M; Atkinson, Manza B J; Mirica, Katherine A; Shapiro, Nathan D; Whitesides, George M

    2013-09-03

    Paramagnetic ionic liquids (PILs) provide new capabilities to measurements of density using magnetic levitation (MagLev). In a typical measurement, a diamagnetic object of unknown density is placed in a container containing a PIL. The container is placed between two magnets (typically NdFeB, oriented with like poles facing). The density of the diamagnetic object can be determined by measuring its position in the magnetic field along the vertical axis (levitation height, h), either as an absolute value or relative to internal standards of known density. For density measurements by MagLev, PILs have three advantages over solutions of paramagnetic salts in aqueous or organic solutions: (i) negligible vapor pressures; (ii) low melting points; (iii) high thermal stabilities. In addition, the densities, magnetic susceptibilities, glass transition temperatures, thermal decomposition temperatures, viscosities, and hydrophobicities of PILs can be tuned over broad ranges by choosing the cation-anion pair. The low melting points and high thermal stabilities of PILs provide large liquidus windows for density measurements. This paper demonstrates applications and advantages of PILs in density-based analyses using MagLev.
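Converting a measured levitation height into a density "relative to internal standards of known density" can be sketched as a linear calibration. In the standard MagLev configuration (two like-facing magnets, linear field gradient along the vertical axis) the height-density relation is linear, so two standards suffice; the numbers below are hypothetical.

```python
def density_from_height(h_mm, standards):
    """Interpolate density from levitation height using two internal
    density standards, exploiting the linear height-density relation
    of the standard MagLev configuration.
    standards: two (height_mm, density_g_cm3) pairs."""
    (h1, rho1), (h2, rho2) = standards
    slope = (rho2 - rho1) / (h2 - h1)
    return rho1 + slope * (h_mm - h1)

# Hypothetical beads of known density levitating in a PIL;
# denser objects sit lower between the magnets:
standards = [(5.0, 1.20), (25.0, 0.80)]  # (height mm, g/cm^3)
rho_unknown = density_from_height(15.0, standards)  # midway -> 1.00 g/cm^3
```

An absolute (standard-free) variant would instead use the magnet geometry, field strength, and the PIL's magnetic susceptibility, which is where the tunable PIL properties described above enter.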

  20. Ecosystem services - from assessments of estimations to quantitative, validated, high-resolution, continental-scale mapping via airborne LIDAR

    NASA Astrophysics Data System (ADS)

    Zlinszky, András; Pfeifer, Norbert

    2016-04-01

    "Ecosystem services" defined vaguely as "nature's benefits to people" are a trending concept in ecology and conservation. Quantifying and mapping these services is a longtime demand of both ecosystems science and environmental policy. The current state of the art is to use existing maps of land cover, and assign certain average ecosystem service values to their unit areas. This approach has some major weaknesses: the concept of "ecosystem services", the input land cover maps and the value indicators. Such assessments often aim at valueing services in terms of human currency as a basis for decision-making, although this approach remains contested. Land cover maps used for ecosystem service assessments (typically the CORINE land cover product) are generated from continental-scale satellite imagery, with resolution in the range of hundreds of meters. In some rare cases, airborne sensors are used, with higher resolution but less covered area. Typically, general land cover classes are used instead of categories defined specifically for the purpose of ecosystem service assessment. The value indicators are developed for and tested on small study sites, but widely applied and adapted to other sites far away (a process called benefit transfer) where local information may not be available. Upscaling is always problematic since such measurements investigate areas much smaller than the output map unit. Nevertheless, remote sensing is still expected to play a major role in conceptualization and assessment of ecosystem services. We propose that an improvement of several orders of magnitude in resolution and accuracy is possible through the application of airborne LIDAR, a measurement technique now routinely used for collection of countrywide three-dimensional datasets with typically sub-meter resolution. 
However, this requires a clear definition of the concept of ecosystem services and the variables in focus: remote sensing can measure variables closely related to "ecosystem service potential" which is the ability of the local ecosystem to deliver various functions (water retention, carbon storage etc.), but can't quantify how much of these are actually used by humans or what the estimated monetary value is. Due to its ability to measure both terrain relief and vegetation structure in high resolution, airborne LIDAR supports direct quantification of the properties of an ecosystem that lead to it delivering a given service (such as biomass, water retention, micro-climate regulation or habitat diversity). In addition, its high resolution allows direct calibration with field measurements: routine harvesting-based ecological measurements, local biodiversity indicator surveys or microclimate recordings all take place at the human scale and can be directly linked to the local value of LIDAR-based indicators at meter resolution. Therefore, if some field measurements with standard ecological methods are performed on site, the accuracy of LIDAR-based ecosystem service indicators can be rigorously validated. With this conceptual and technical approach high resolution ecosystem service assessments can be made with well established credibility. These would consolidate the concept of ecosystem services and support both scientific research and evidence-based environmental policy at local and - as data coverage is continually increasing - continental scale.

  1. Atmospheric components of the surface energy budget over young sea ice: Results from the N-ICE2015 campaign

    NASA Astrophysics Data System (ADS)

    Walden, Von P.; Hudson, Stephen R.; Cohen, Lana; Murphy, Sarah Y.; Granskog, Mats A.

    2017-08-01

The Norwegian young sea ice campaign obtained the first measurements of the surface energy budget over young, thin Arctic sea ice through the seasonal transition from winter to summer. This campaign was the first of its kind in the North Atlantic sector of the Arctic. This study describes the atmospheric and surface conditions and the radiative and turbulent heat fluxes over young, thin sea ice. The shortwave albedo of the snow surface ranged from about 0.85 in winter to 0.72-0.80 in early summer. The near-surface atmosphere was typically stable in winter, unstable in spring, and near neutral in summer once the surface skin temperature reached 0°C. The daily average radiative and turbulent heat fluxes typically sum to negative values (-40 to 0 W m⁻²) in winter but then transition toward positive values of up to nearly +60 W m⁻² as solar radiation contributes significantly to the surface energy budget. The sensible heat flux typically ranges from +20 to +30 W m⁻² in winter (into the surface) to negative values between 0 and -20 W m⁻² in spring and summer. A winter case study highlights the significant effect of synoptic storms and demonstrates the complex interplay of wind, clouds, and heat and moisture advection on the surface energy components over sea ice in winter. A spring case study contrasts a rare period of 24 h of clear-sky conditions with typical overcast conditions and highlights the impact of clouds on the surface radiation and energy budgets over young, thin sea ice.

  2. Valuing urban open space using the travel-cost method and the implications of measurement error.

    PubMed

    Hanauer, Merlin M; Reid, John

    2017-08-01

    Urbanization has placed pressure on open space within and adjacent to cities. In recent decades, a greater awareness has developed to the fact that individuals derive multiple benefits from urban open space. Given the location, there is often a high opportunity cost to preserving urban open space, thus it is important for both public and private stakeholders to justify such investments. The goals of this study are twofold. First, we use detailed surveys and precise, accessible, mapping methods to demonstrate how travel-cost methods can be applied to the valuation of urban open space. Second, we assess the degree to which typical methods of estimating travel times, and thus travel costs, introduce bias to the estimates of welfare. The site we study is Taylor Mountain Regional Park, a 1100-acre space located immediately adjacent to Santa Rosa, California, which is the largest city (∼170,000 population) in Sonoma County and lies 50 miles north of San Francisco. We estimate that the average per trip access value (consumer surplus) is $13.70. We also demonstrate that typical methods of measuring travel costs significantly understate these welfare measures. Our study provides policy-relevant results and highlights the sensitivity of urban open space travel-cost studies to bias stemming from travel-cost measurement error. Copyright © 2017 Elsevier Ltd. All rights reserved.
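    The per-trip access value quoted above can be illustrated with a common count-data formulation of the travel-cost method. This is a sketch of that general logic, not necessarily the specification Hanauer and Reid estimate, and the coefficient below is made up to land near their $13.70 figure.

```python
# Minimal sketch of count-data travel-cost logic (illustrative, not the
# authors' model): with a semi-log demand curve E[trips] = exp(a + b * cost),
# b < 0, the per-trip consumer surplus is -1/b. The coefficient is
# hypothetical, chosen to reproduce a value near the paper's estimate.

b = -0.073  # hypothetical travel-cost coefficient (per dollar of trip cost)
cs_per_trip = -1.0 / b
print(round(cs_per_trip, 2))  # close to the $13.70 reported above
```

    Measurement error in travel costs biases the estimate of b, which is why the abstract stresses that mis-measured travel times propagate directly into these welfare figures.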

  3. The effect of surface anisotropy and viewing geometry on the estimation of NDVI from AVHRR

    USGS Publications Warehouse

    Meyer, David; Verstraete, M.; Pinty, B.

    1995-01-01

    Since terrestrial surfaces are anisotropic, all spectral reflectance measurements obtained with a small instantaneous field of view instrument are specific to these angular conditions, and the value of the corresponding NDVI, computed from these bidirectional reflectances, is relative to the particular geometry of illumination and viewing at the time of the measurement. This paper documents the importance of these geometric effects through simulations of the AVHRR data acquisition process, and investigates the systematic biases that result from the combination of ecosystem-specific anisotropies with instrument-specific sampling capabilities. Typical errors in the value of NDVI are estimated, and strategies to reduce these effects are explored. -from Authors
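    The quantity at stake is the normalized difference vegetation index computed from AVHRR channel 1 (red) and channel 2 (near-infrared) bidirectional reflectances. The sketch below shows how a geometry-induced change in the two reflectances (hypothetical numbers) shifts the resulting NDVI.

```python
# NDVI from red and near-infrared reflectances (AVHRR channels 1 and 2).
def ndvi(red, nir):
    return (nir - red) / (nir + red)

# Hypothetical bidirectional reflectances of the same vegetated target seen
# under two viewing geometries: surface anisotropy changes both channels,
# and the NDVI computed from them shifts accordingly.
print(round(ndvi(0.05, 0.40), 3))  # 0.778
print(round(ndvi(0.07, 0.38), 3))  # 0.689
```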

  4. ARAS: an automated radioactivity aliquoting system for dispensing solutions containing positron-emitting radioisotopes

    DOE PAGES

    Dooraghi, Alex A.; Carroll, Lewis; Collins, Jeffrey; ...

    2016-03-09

Automated protocols for measuring and dispensing solutions containing radioisotopes are essential not only for providing a safe environment for radiation workers but also to ensure accuracy of dispensed radioactivity and an efficient workflow. For this purpose, we have designed ARAS, an automated radioactivity aliquoting system for dispensing solutions containing positron-emitting radioisotopes with particular focus on fluorine-18 (18F). The key to the system is the combination of a radiation detector measuring radioactivity concentration, in line with a peristaltic pump dispensing known volumes. Results show the combined system demonstrates volume variation to be within 5% for dispensing volumes of 20 μL or greater. When considering volumes of 20 μL or greater, the delivered radioactivity is in agreement with the requested amount as measured independently with a dose calibrator to within 2% on average. In conclusion, the integration of the detector and pump in an in-line system leads to a flexible and compact approach that can accurately dispense solutions containing radioactivity concentrations ranging from the high values typical of [18F]fluoride directly produced from a cyclotron (~0.1-1 mCi μL-1) to the low values typical of batches of [18F]fluoride-labeled radiotracers intended for preclinical mouse scans (~1-10 μCi μL-1).
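    The accuracy check described above amounts to multiplying measured concentration by dispensed volume and comparing against an independent dose-calibrator reading. This sketch uses hypothetical numbers, not ARAS data.

```python
# Sketch of the consistency check implied above (hypothetical numbers, not
# ARAS firmware): delivered activity = concentration x dispensed volume,
# compared against an independent dose-calibrator reading.

def delivered_activity(conc_mci_per_ul, volume_ul):
    return conc_mci_per_ul * volume_ul

requested = delivered_activity(conc_mci_per_ul=0.5, volume_ul=20.0)  # 10.0 mCi
measured = 10.15  # hypothetical dose-calibrator reading, mCi

error_pct = abs(measured - requested) / requested * 100.0
print(round(error_pct, 2))  # 1.5, of the order of the ~2 % agreement reported
```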

  5. Teacher Evaluation and School Improvement: An Analysis of the Evidence

    ERIC Educational Resources Information Center

    Hallinger, Philip; Heck, Ronald H.; Murphy, Joseph

    2014-01-01

    In recent years, substantial investments have been made in reengineering systems of teacher evaluation. The new generation models of teacher evaluation typically adopt a standards-based view of teaching quality and include a value-added measure of growth in student learning. With more than a decade of experience and research, it is timely to…

  6. Magnetic Field "Flyby" Measurement Using a Smartphone's Magnetometer and Accelerometer Simultaneously

    ERIC Educational Resources Information Center

    Monteiro, Martin; Stari, Cecilia; Cabeza, Cecilia; Marti, Arturo C.

    2017-01-01

    The spatial dependence of magnetic fields in simple configurations is a common topic in introductory electromagnetism lessons, both in high school and in university courses. In typical experiments, magnetic fields and distances are obtained taking point-by-point values using a Hall sensor and a ruler, respectively. Here, we show how to take…

  7. 60-Hz electric and magnetic fields generated by a distribution network.

    PubMed

    Héroux, P

    1987-01-01

    From a mobile unit, 60-Hz electric and magnetic fields generated by Hydro-Québec's distribution network were measured. Nine runs, representative of various human environments, were investigated. Typical values were 32 V/m and 0.16 microT. The electrical distribution networks investigated were major contributors to the electric and magnetic environments.

  8. Comparing Two Approaches to the Rate of Return to Investment in Education

    ERIC Educational Resources Information Center

    Kara, Orhan

    2010-01-01

    The economic value of investment in education has typically been measured by its rate of return, frequently estimated by the internal rate of return or the earning function approach. Given the importance of the rate of return estimates for individuals and countries, especially developing countries, in making decision on educational investment, we…

  9. Ecological risk assessment to support fuels treatment project decisions

    Treesearch

    Jay O' Laughlin

    2010-01-01

    Risk is a combined statement of the probability that something of value will be damaged and some measure of the damage’s adverse effect. Wildfires burning in the uncharacteristic fuel conditions now typical throughout the Western United States can damage ecosystems and adversely affect environmental conditions. Wildfire behavior can be modified by prefire fuel...

  10. Patients’ perceived value of pharmacy quality measures: a mixed-methods study

    PubMed Central

    Shiyanbola, Olayinka O; Mort, Jane R

    2015-01-01

    Objective To describe patients’ perceived value and use of quality measures in evaluating and choosing community pharmacies. Design Focus group methodology was combined with a survey tool. During the focus groups, participants assessed the value of the Pharmacy Quality Alliance's quality measures in evaluating and choosing a pharmacy. Also, participants completed questionnaires rating their perceived value of quality measures in evaluating a pharmacy (1 being low value and 5 being high) or choosing a pharmacy (yes/no). Thematic analysis and descriptive statistics were used to analyse the focus groups and surveys, respectively. Setting Semistructured focus groups were conducted in a private meeting space of an urban and a rural area of a Mid-western State in the USA. Participants Thirty-four adults who filled prescription medications in community pharmacies for a chronic illness were recruited in community pharmacies, senior centres and public libraries. Results While comments indicated that all measures were important, medication safety measures (eg, drug-drug interactions) were valued more highly than others. Rating of quality measure utility in evaluating a pharmacy ranged from a mean of 4.88 (‘drug-drug interactions’) to a mean of 4.0 (‘absence of controller therapy for patients with asthma’). Patients were hesitant to use quality information in choosing a pharmacy (depending on the participant's location) but might consider if moving to a new area or having had a negative pharmacy experience. Use of select quality measures to choose a pharmacy ranged from 97.1% of participants using ‘drug-drug interactions’ (medication safety measure) to 55.9% using ‘absence of controller therapy for patients with asthma’. Conclusions The study participants valued quality measures in evaluating and selecting a community pharmacy, with medication safety measures valued highest. 
The participants reported that the quality measures would not typically cause a switch in pharmacy but might influence their selection in certain situations. PMID:25600253

  11. How Knowledge Organisations Work: The Case of Software Firms

    ERIC Educational Resources Information Center

    Gottschalk, Petter

    2007-01-01

    Knowledge workers in software firms solve client problems in sequential and cyclical work processes. Sequential and cyclical work takes place in the value configuration of a value shop. While typical examples of value chains are manufacturing industries such as paper and car production, typical examples of value shops are law firms and medical…

  12. Ground-based determination of atmospheric radiance for correction of ERTS-1 data

    NASA Technical Reports Server (NTRS)

    Peacock, K.

    1974-01-01

    A technique is described for estimating the atmospheric radiance observed by a downward sensor (ERTS) using ground-based measurements. A formula is obtained for the sky radiance at the time of the ERTS overpass from the radiometric measurement of the sky radiance made at a particular solar zenith angle and air mass. A graph illustrates ground-based sky radiance measurements as a function of the scattering angle for a range of solar air masses. Typical values for sky radiance at a solar zenith angle of 48 degrees are given.
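    The air mass attached to a given solar zenith angle can be estimated with the standard plane-parallel approximation, which is adequate away from the horizon. This is a back-of-envelope sketch, not the paper's radiative model.

```python
import math

# Plane-parallel approximation: solar air mass m = 1 / cos(z) for zenith
# angle z, reasonable for z well below 90 degrees. Not the paper's model.

def air_mass(zenith_deg):
    return 1.0 / math.cos(math.radians(zenith_deg))

print(round(air_mass(48.0), 3))  # ~1.49 at the 48-degree zenith angle quoted above
```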

  13. Absolute method of measuring magnetic susceptibility

    USGS Publications Warehouse

    Thorpe, A.; Senftle, F.E.

    1959-01-01

An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
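    The numerical step the method implies is an area-under-curve integration, which the trapezoidal rule handles for discretely sampled data. The displacement curve below is illustrative, and the rig-dependent proportionality constant relating area to susceptibility is omitted.

```python
# Sketch of the area-under-curve step (illustrative data, not the authors'
# apparatus): susceptibility is proportional to the area under the curve of
# sample displacement versus magnet distance, estimated here with the
# trapezoidal rule.

def trapezoid_area(xs, ys):
    """Area under y(x) sampled at points (xs, ys), xs ascending."""
    return sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2.0
               for i in range(len(xs) - 1))

# Hypothetical displacement (mm) recorded at magnet distances (mm):
distance = [0.0, 2.0, 4.0, 6.0, 8.0]
displacement = [0.00, 0.12, 0.20, 0.10, 0.02]

area = trapezoid_area(distance, displacement)
print(round(area, 3))  # 0.86, proportional to susceptibility up to a rig constant
```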

  14. Thermal Infrared Spectrometer for Earth Science Remote Sensing Applications—Instrument Modifications and Measurement Procedures

    PubMed Central

    Hecker, Christoph; Hook, Simon; van der Meijde, Mark; Bakker, Wim; van der Werff, Harald; Wilbrink, Henk; van Ruitenbeek, Frank; de Smeth, Boudewijn; van der Meer, Freek

    2011-01-01

    In this article we describe a new instrumental setup at the University of Twente Faculty ITC with an optimized processing chain to measure absolute directional-hemispherical reflectance values of typical earth science samples in the 2.5 to 16 μm range. A Bruker Vertex 70 FTIR spectrometer was chosen as the base instrument. It was modified with an external integrating sphere with a 30 mm sampling port to allow measuring large, inhomogeneous samples and quantitatively compare the laboratory results to airborne and spaceborne remote sensing data. During the processing to directional-hemispherical reflectance values, a background radiation subtraction is performed, removing the effect of radiance not reflected from the sample itself on the detector. This provides more accurate reflectance values for low-reflecting samples. Repeat measurements taken over a 20 month period on a quartz sand standard show that the repeatability of the system is very high, with a standard deviation ranging between 0.001 and 0.006 reflectance units depending on wavelength. This high level of repeatability is achieved even after replacing optical components, re-aligning mirrors and placement of sample port reducers. Absolute reflectance values of measurements taken by the instrument here presented compare very favorably to measurements of other leading laboratories taken on identical sample standards. PMID:22346683

  15. Thermal infrared spectrometer for Earth science remote sensing applications-instrument modifications and measurement procedures.

    PubMed

    Hecker, Christoph; Hook, Simon; van der Meijde, Mark; Bakker, Wim; van der Werff, Harald; Wilbrink, Henk; van Ruitenbeek, Frank; de Smeth, Boudewijn; van der Meer, Freek

    2011-01-01

    In this article we describe a new instrumental setup at the University of Twente Faculty ITC with an optimized processing chain to measure absolute directional-hemispherical reflectance values of typical earth science samples in the 2.5 to 16 μm range. A Bruker Vertex 70 FTIR spectrometer was chosen as the base instrument. It was modified with an external integrating sphere with a 30 mm sampling port to allow measuring large, inhomogeneous samples and quantitatively compare the laboratory results to airborne and spaceborne remote sensing data. During the processing to directional-hemispherical reflectance values, a background radiation subtraction is performed, removing the effect of radiance not reflected from the sample itself on the detector. This provides more accurate reflectance values for low-reflecting samples. Repeat measurements taken over a 20 month period on a quartz sand standard show that the repeatability of the system is very high, with a standard deviation ranging between 0.001 and 0.006 reflectance units depending on wavelength. This high level of repeatability is achieved even after replacing optical components, re-aligning mirrors and placement of sample port reducers. Absolute reflectance values of measurements taken by the instrument here presented compare very favorably to measurements of other leading laboratories taken on identical sample standards.
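    The background-subtraction step described in this record can be sketched as a ratio of background-corrected single-beam measurements. The detector counts below are hypothetical, not from the Twente processing chain.

```python
# Minimal sketch of background subtraction in reflectance processing
# (hypothetical detector counts, not the Twente chain): radiance reaching
# the detector without reflecting off the sample is removed from both the
# sample and reference measurements before ratioing.

def dhr(sample, reference, background, ref_reflectance=1.0):
    """Directional-hemispherical reflectance with background removal."""
    return ref_reflectance * (sample - background) / (reference - background)

# For a low-reflecting sample the correction matters most:
uncorrected = 900.0 / 10500.0
corrected = dhr(sample=900.0, reference=10500.0, background=500.0)
print(round(uncorrected, 3))  # 0.086
print(round(corrected, 3))    # 0.040
```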

  16. Ten Years of Black Carbon Measurements in the North Atlantic at the Pico Mountain Observatory, Azores (2225m asl)

    NASA Astrophysics Data System (ADS)

    Kumar, S.; Fialho, P. J.; Mazzoleni, L. R.; Olsen, S. C.; Owen, R. C.; Helmig, D.; Hueber, J.; Dziobak, M.; Kramer, L. J.; Mazzoleni, C.

    2012-12-01

    The Pico Mountain Observatory is located in the summit caldera of the Pico mountain, an inactive volcano on the Pico Island in the Azores, Portugal (38.47°N, 28.40°W, Altitude 2225m asl). The Azores are often impacted by polluted outflows from the North American continent and local sources have been shown to have a negligible influence at the observatory. The value of the station stems from the fact that this is the only permanent mountaintop monitoring station in the North Atlantic that is typically located above the marine boundary layer (average MBL heights are below 1200 m and rarely exceed 1300 m) and often receives air characteristic of the lower free troposphere. Measurements of black carbon (BC) mass have been carried out at the station since 2001, mostly in the summer seasons. Here we discuss the BC decadal dataset (2001-2011) collected at the site by using a seven-wavelength AE31 Magee Aethalometer. Measured BC mass and computed Angstrom exponent (AE) values were analysed to study seasonal and diurnal variations. There was a large day-to-day variability in the BC values due to varied meteorological conditions that resulted in different diurnal patterns for different months. The daily mean BC at this location ranged between 0 and ~430 ngm-3, with the most frequently occurring value in the range 0-100 ngm-3. The overall mean for the 10 year period is ~24 ngm-3, with a coefficient of variation of 150%. The BC values exhibited a consistent annual trend being low in winter months and high in summer months, barring year to year variations. To differentiate between BC and other absorbing particles, we analyzed the wavelength dependence of aerosol absorption coefficient and determined a best-fit exponent i.e., the Ångström exponent, for the whole dataset. Visible Ångström exponent (AE: 470-520-590-660 nm) values ranged between 0 and 3.5, with most frequently occurring values in the range 0.85 to 1.25. 
By making use of the aethalometer light attenuation measurements at different wavelengths and HYSPLIT back trajectories, we divided the data into two categories: one for periods characterized by AE values close to 1, typically correlated with back trajectories originating from Canada, North America or northern Europe, indicating the dominance of BC on the light attenuation; another characterized by AE values substantially different from 1, correlated with back trajectories originating from dust-prone regions (e.g., the Sahara desert). The above measurements, with the aid of ancillary satellite and ground-based measurements, will be employed in estimating the radiative effects of BC in the North Atlantic.
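    The best-fit Ångström exponent mentioned above is the negative slope of log absorption versus log wavelength. The sketch below fits it by least squares over the four visible aethalometer channels using synthetic absorption values, not Pico data.

```python
import math

# Sketch of the Angstrom exponent computation (synthetic absorption values,
# not Pico data): AE is the negative slope of ln(absorption) versus
# ln(wavelength), fitted here by ordinary least squares.

def angstrom_exponent(wavelengths_nm, absorption):
    xs = [math.log(w) for w in wavelengths_nm]
    ys = [math.log(b) for b in absorption]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

wl = [470.0, 520.0, 590.0, 660.0]  # the visible channels quoted above
# Absorption made to follow lambda^-1 exactly, the BC-like case (AE ~ 1):
b_abs = [1.0 / w for w in wl]
print(round(angstrom_exponent(wl, b_abs), 3))  # 1.0
```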

  17. Ordovician Jeleniów Claystone Formation of the Holy Cross Mountains, Poland - Reconstruction of Redox Conditions Using Pyrite Framboid Study

    NASA Astrophysics Data System (ADS)

    Smolarek, Justyna; Marynowski, Leszek; Trela, Wiesław

    2014-09-01

The aim of this research is to reconstruct palaeoredox conditions during sedimentation of the Jeleniów Claystone Formation deposits, using framboid pyrite diameter measurements. Analysis of pyrite framboid diameter distribution is an effective method in palaeoenvironmental interpretation which allows a more detailed insight into the redox conditions, and thus the distinction between euxinic, dysoxic and anoxic conditions. Most of the samples are characterized by framboid indicators typical of anoxic/euxinic conditions in the water column, with average (mean) values ranging from 5.29 to 6.02 μm and quite low standard deviation (SD) values ranging from 1.49 to 3.0. The remaining samples show slightly higher framboid diameter values, typical of upper dysoxic conditions, with average values (6.37 to 7.20 μm) and low standard deviation (SD) values (1.88 to 2.88). From the depth of 75.5 m to the shallowest part of the Jeleniów Claystone Formation, two samples were examined and no framboids were detected. Because secondary weathering can be excluded, the lack of framboids possibly indicates oxic conditions in the water column. Oxic conditions continue within the Wólka Formation, based on the lack of framboids in the ZB 51.6 sample.
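    The statistics driving the interpretation are simply the mean and standard deviation of framboid diameters per sample. The sketch below uses hypothetical diameters and illustrative cut-offs loosely based on the ranges quoted above; they are not the authors' formal classification criteria.

```python
import statistics

# Illustrative classification sketch (hypothetical diameters, cut-offs loosely
# based on the ranges in the abstract, not the authors' exact criteria):
# small mean framboid diameter with low scatter suggests anoxic/euxinic
# deposition; larger, more variable framboids suggest upper dysoxic conditions.

def classify(diameters_um):
    mean = statistics.mean(diameters_um)
    sd = statistics.stdev(diameters_um)
    label = "anoxic/euxinic" if mean < 6.2 and sd < 3.0 else "upper dysoxic"
    return round(mean, 2), round(sd, 2), label

sample_a = [4.1, 5.0, 5.5, 5.8, 6.0, 6.3, 6.5]  # tight, small diameters
sample_b = [4.5, 5.5, 6.5, 7.5, 8.5, 9.0, 7.0]  # larger, more spread

print(classify(sample_a))
print(classify(sample_b))
```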

  18. Ordovician Jeleniów Claystone Formation of the Holy Cross Mountains, Poland - Reconstruction of redox conditions using pyrite framboid study

    NASA Astrophysics Data System (ADS)

    Smolarek, Justyna; Marynowski, Leszek; Trela, Wiesław

    2014-09-01

The aim of this research is to reconstruct palaeoredox conditions during sedimentation of the Jeleniów Claystone Formation deposits, using framboid pyrite diameter measurements. Analysis of pyrite framboid diameter distribution is an effective method in palaeoenvironmental interpretation which allows a more detailed insight into the redox conditions, and thus the distinction between euxinic, dysoxic and anoxic conditions. Most of the samples are characterized by framboid indicators typical of anoxic/euxinic conditions in the water column, with average (mean) values ranging from 5.29 to 6.02 μm and quite low standard deviation (SD) values ranging from 1.49 to 3.0. The remaining samples show slightly higher framboid diameter values, typical of upper dysoxic conditions, with average values (6.37 to 7.20 μm) and low standard deviation (SD) values (1.88 to 2.88). From the depth of 75.5 m to the shallowest part of the Jeleniów Claystone Formation, two samples were examined and no framboids were detected. Because secondary weathering can be excluded, the lack of framboids possibly indicates oxic conditions in the water column. Oxic conditions continue within the Wólka Formation, based on the lack of framboids in the ZB 51.6 sample.

  19. Concentrations and characteristics of organic carbon in surface water in Arizona: Influence of urbanization

    USGS Publications Warehouse

    Westerhoff, P.; Anning, D.

    2000-01-01

Dissolved (DOC) and total (TOC) organic carbon concentrations and compositions were studied for several river systems in Arizona, USA. DOC composition was characterized by ultraviolet and visible absorption and fluorescence emission (excitation wavelength of 370 nm) spectral characteristics. Ephemeral sites had the highest DOC concentrations, and unregulated perennial sites had lower concentrations than unregulated intermittent sites, regulated sites, and sites downstream from wastewater-treatment plants (p < 0.05). Reservoir outflows and wastewater-treatment plant effluent were higher in DOC concentration (p < 0.05) and exhibited less variability in concentration than inflows to the reservoirs. Specific ultraviolet absorbance values at 254 nm were typically less than 2 m-1 (milligram DOC per liter)-1 and lower than values found in most temperate-region rivers, but they increased during runoff events. Fluorescence measurements indicated that DOC in desert streams typically exhibits characteristics of autochthonous sources; however, DOC in unregulated upland rivers and desert streams shifted suddenly from autochthonous to allochthonous sources during runoff events. The urban water system (reservoir systems and wastewater-treatment plants) was found to affect temporal variability in DOC concentration and composition. (C) 2000 Elsevier Science B.V. 
The influence of urbanization, increasingly common in arid regions, on DOC concentrations in surface-water resources was studied: DOC concentration and composition, seasonal watershed runoff events, streamflow variations, water management practices, and urban infrastructure were monitored in several Arizona watersheds. UV absorbance values, fluorescence measurements, and other indicators suggest that urban water systems (reservoirs and wastewater-treatment plants) affect temporal variability in DOC concentration and composition.
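    The specific ultraviolet absorbance metric used above is the UV absorbance at 254 nm, converted to m-1, divided by DOC concentration. The sketch below uses hypothetical readings from a 1 cm cuvette.

```python
# Sketch of the SUVA254 computation (hypothetical readings): UV absorbance
# at 254 nm from a 1 cm cell is converted to m-1 (x 100) and normalized by
# the DOC concentration in mg/L.

def suva254(absorbance_per_cm, doc_mg_per_l):
    """SUVA254 in m-1 (mg DOC/L)-1, from a 1 cm cuvette reading."""
    return (absorbance_per_cm * 100.0) / doc_mg_per_l

# A desert-stream-like sample: low aromaticity, SUVA below the ~2 value
# mentioned in the abstract.
print(round(suva254(absorbance_per_cm=0.060, doc_mg_per_l=4.0), 2))  # 1.5
```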

  20. Evaluation of a surface/vegetation parameterization using satellite measurements of surface temperature

    NASA Technical Reports Server (NTRS)

    Taconet, O.; Carlson, T.; Bernard, R.; Vidal-Madjar, D.

    1986-01-01

    Ground measurements of surface-sensible heat flux and soil moisture for a wheat-growing area of Beauce in France were compared with the values derived by inverting two boundary layer models with a surface/vegetation formulation using surface temperature measurements made from NOAA-AVHRR. The results indicated that the trends in the surface heat fluxes and soil moisture observed during the 5 days of the field experiment were effectively captured by the inversion method using the remotely measured radiative temperatures and either of the two boundary layer methods, both of which contain nearly identical vegetation parameterizations described by Taconet et al. (1986). The sensitivity of the results to errors in the initial sounding values or measured surface temperature was tested by varying the initial sounding temperature, dewpoint, and wind speed and the measured surface temperature by amounts corresponding to typical measurement error. In general, the vegetation component was more sensitive to error than the bare soil model.

  1. Developing the Federal Aviation Administration’s Requirements for Color Use in Air Traffic Control Displays

    DTIC Science & Technology

    2007-05-01

    colorimeter . The relationship between rgb and xyL values can be specified with a nonlinear transformation and a linear matrix transformation, as described by...situation where users 1) know the rgb values of the colors, 2) have a colorimeter or luminance meter to measure luminance, 3) are capable of generating...the center of the screen while keeping the rest of the screen black (specified by r=g=b=0). Step 1A.3: Hold the colorimeter at users’ typical view

  2. Preamplifiers for non-contact capacitive biopotential measurements*

    PubMed Central

    Peng, GuoChen; Ignjatovic, Zeljko; Bocko, Mark F.

    2014-01-01

Non-contact biopotential sensing is an attractive measurement strategy for a number of health monitoring applications, primarily the ECG and the EEG. In all such applications a key technical challenge is the design of a low-noise trans-impedance preamplifier for the typically low-capacitance, high source impedance sensing electrodes. In this paper, we compare voltage and charge amplifier designs in terms of their common mode rejection ratio, noise performance, and frequency response. Both amplifier types employ the same operational-transconductance amplifier (OTA), which was fabricated in a 0.35 μm CMOS process. The results show that a charge amplifier configuration has advantages for small electrode-to-subject coupling capacitance values (less than 10 pF, typical of non-contact electrodes) and that the voltage amplifier configuration has advantages for electrode capacitances above 10 pF. PMID:24109979
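    A first-order picture of why small coupling capacitance penalizes the voltage configuration can be sketched with idealized capacitances (not the fabricated OTA): the voltage preamp sees the source attenuated by the divider formed with its input capacitance, while an ideal charge amplifier's transfer is set by its feedback capacitor and is independent of input capacitance.

```python
# Idealized sketch (assumed capacitances, not the paper's OTA): a voltage
# preamplifier passes Cs / (Cs + Cin) of the source signal, where Cs is the
# electrode coupling capacitance and Cin the amplifier input capacitance;
# an ideal charge amplifier transfers Cs / Cf, set by the feedback capacitor.

def voltage_amp_gain(cs_pf, cin_pf):
    """Signal transfer of the voltage-preamp input divider (dimensionless)."""
    return cs_pf / (cs_pf + cin_pf)

def charge_amp_gain(cs_pf, cf_pf):
    """Ideal charge-amplifier signal transfer, Cs/Cf (dimensionless)."""
    return cs_pf / cf_pf

cin_pf, cf_pf = 10.0, 10.0  # hypothetical input and feedback capacitances, pF

for cs_pf in (2.0, 10.0, 50.0):  # electrode coupling capacitance, pF
    print(cs_pf,
          round(voltage_amp_gain(cs_pf, cin_pf), 3),
          round(charge_amp_gain(cs_pf, cf_pf), 3))
```

    Gain alone favors the charge configuration at small Cs; the reported advantage of the voltage configuration above 10 pF comes from noise and CMRR considerations that this sketch does not model.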

  3. Characterization of Window Functions for Regularization of Electrical Capacitance Tomography Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Jiang, Peng; Peng, Lihui; Xiao, Deyun

    2007-06-01

This paper presents a regularization method using different window functions as regularization for electrical capacitance tomography (ECT) image reconstruction. Image reconstruction for ECT is a typical ill-posed inverse problem. Because of the small singular values of the sensitivity matrix, the solution is sensitive to the measurement noise. The proposed method uses the spectral filtering properties of different window functions to make the solution stable by suppressing the noise in measurements. The window functions, such as the Hanning window, the cosine window and so on, are modified for ECT image reconstruction. Simulations with respect to five typical permittivity distributions are carried out. The reconstructions are better, and some of the contours clearer, than the results from Tikhonov regularization. Numerical results show the feasibility of the image reconstruction algorithm using different window functions as regularization.
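    The spectral-filtering idea can be sketched on a toy SVD-based solver: each solution component is weighted by a filter factor, and a Hanning-type window rolls the small singular values off smoothly instead of truncating them abruptly. This is an illustration of the general technique, not the paper's ECT solver.

```python
import math

# Toy sketch of window-function regularization (illustrative, not the
# paper's ECT solver): in an SVD solution each component (u_i . b) / s_i is
# weighted by a filter factor f_i; a Hanning-type window suppresses the
# noise that the small singular values would otherwise amplify via 1/s_i.

def hanning_filter(i, n):
    """Filter factor for singular-value index i of n (i = 0 is the largest)."""
    return 0.5 * (1.0 + math.cos(math.pi * i / (n - 1))) if n > 1 else 1.0

def filtered_solution(U, s, Vt, b):
    """x = sum_i f_i * (u_i . b / s_i) * v_i for a small dense system."""
    x = [0.0] * len(Vt[0])
    for i in range(len(s)):
        coef = (hanning_filter(i, len(s))
                * sum(U[r][i] * b[r] for r in range(len(b))) / s[i])
        for j in range(len(x)):
            x[j] += coef * Vt[i][j]
    return x

# Diagonal 2x2 toy system with one tiny singular value; the noisy second
# component of b would be amplified 100x without filtering.
U = [[1.0, 0.0], [0.0, 1.0]]
s = [1.0, 0.01]
Vt = [[1.0, 0.0], [0.0, 1.0]]
print(filtered_solution(U, s, Vt, [1.0, 0.02]))  # [1.0, 0.0]
```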

  4. Optimal Threshold Determination for Interpreting Semantic Similarity and Particularity: Application to the Comparison of Gene Sets and Metabolic Pathways Using GO and ChEBI

    PubMed Central

    Bettembourg, Charles; Diot, Christian; Dameron, Olivier

    2015-01-01

    Background The analysis of gene annotations referencing back to Gene Ontology plays an important role in the interpretation of high-throughput experiments results. This analysis typically involves semantic similarity and particularity measures that quantify the importance of the Gene Ontology annotations. However, there is currently no sound method supporting the interpretation of the similarity and particularity values in order to determine whether two genes are similar or whether one gene has some significant particular function. Interpretation is frequently based either on an implicit threshold, or an arbitrary one (typically 0.5). Here we investigate a method for determining thresholds supporting the interpretation of the results of a semantic comparison. Results We propose a method for determining the optimal similarity threshold by minimizing the proportions of false-positive and false-negative similarity matches. We compared the distributions of the similarity values of pairs of similar genes and pairs of non-similar genes. These comparisons were performed separately for all three branches of the Gene Ontology. In all situations, we found overlap between the similar and the non-similar distributions, indicating that some similar genes had a similarity value lower than the similarity value of some non-similar genes. We then extend this method to the semantic particularity measure and to a similarity measure applied to the ChEBI ontology. Thresholds were evaluated over the whole HomoloGene database. For each group of homologous genes, we computed all the similarity and particularity values between pairs of genes. Finally, we focused on the PPAR multigene family to show that the similarity and particularity patterns obtained with our thresholds were better at discriminating orthologs and paralogs than those obtained using default thresholds. Conclusion We developed a method for determining optimal semantic similarity and particularity thresholds. 
We applied this method on the GO and ChEBI ontologies. Qualitative analysis using the thresholds on the PPAR multigene family yielded biologically-relevant patterns. PMID:26230274
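    The threshold-selection idea above can be sketched as a scan over candidate thresholds, keeping the one that minimizes the summed proportions of false negatives (similar pairs scoring below the threshold) and false positives (non-similar pairs scoring at or above it). The scores below are synthetic, not GO/ChEBI data.

```python
# Sketch of optimal-threshold selection (synthetic similarity scores, not
# the GO/ChEBI data): minimize the summed false-negative and false-positive
# proportions over a grid of candidate thresholds.

def optimal_threshold(similar, non_similar, step=0.01):
    best_t, best_err = 0.0, float("inf")
    t = 0.0
    while t <= 1.0:
        fn = sum(1 for v in similar if v < t) / len(similar)
        fp = sum(1 for v in non_similar if v >= t) / len(non_similar)
        if fn + fp < best_err:
            best_t, best_err = t, fn + fp
        t = round(t + step, 10)
    return best_t

# Overlapping score distributions, as observed in the abstract:
similar = [0.55, 0.62, 0.70, 0.78, 0.81, 0.90]
non_similar = [0.10, 0.20, 0.33, 0.41, 0.47, 0.58]

print(optimal_threshold(similar, non_similar))  # 0.48
```

    Because the distributions overlap, no threshold separates them perfectly; the scan simply finds the least-bad compromise, which is the point the abstract makes against fixed defaults such as 0.5.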

  5. Comparison of modeled and typical meteorological year. Diffuse, direct, and tilted solar radiation values with measured data in a cloudy climate: Seattle-Tacoma data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straub, D.; Baylon, D.; Smith, O.

    1980-01-01

    Four commonly used solar radiation models that determine the diffuse and direct components of the solar radiation on a horizontal surface are compared against measured data to determine their predictive and modeling applicability. The John Hay model is determined to underpredict the diffuse and the Pereira/Rabl model to overpredict the diffuse radiation. The daily Liu and Jordan correlation and the hourly Boes correlation are shown to be better predictors.
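    Models of this family typically estimate the diffuse fraction of global horizontal radiation from the clearness index. The piecewise correlation below is an Erbs-type form, shown only to illustrate the genre; it is not one of the four models compared in this record.

```python
# Illustrative Erbs-type diffuse-fraction correlation (shown for flavor,
# NOT one of the four models evaluated above): the diffuse fraction kd of
# global horizontal radiation is estimated from the clearness index kt.

def diffuse_fraction(kt):
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt ** 2
                - 16.638 * kt ** 3 + 12.336 * kt ** 4)
    return 0.165

# Cloudy Seattle-like hour (low kt) versus a clear hour (high kt):
print(round(diffuse_fraction(0.15), 3))  # close to 1: almost all diffuse
print(round(diffuse_fraction(0.75), 3))  # mostly beam radiation
```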

  6. Measurement of Transmission Loss Using an Inexpensive Mobile Source on the Upper Slope of the South China Sea

    DTIC Science & Technology

    2015-09-01

    reduction of SPL in dB as sound travels from a source to a receiver ( Urick 1983). The basic equation to obtain TL from measurements in a tonal transmission...attributed to the sum of losses due to spreading, multipath effects, scattering, and attenuation ( Urick 1983). Typical values for TL in different areas...executive.com/article/us-toughens- south-china-sea-stance.] Urick , R. J., 1983: Principles of Underwater Sound. 3rd ed. Peninsula Publishing, 423 pp. 32
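    The basic relation the snippet cites from Urick can be sketched as geometric spreading plus range-proportional attenuation. The attenuation coefficient below is an illustrative order of magnitude, not a value from the report.

```python
import math

# Textbook transmission-loss form (Urick-style, illustrative coefficients):
# TL = N * log10(r) + alpha * r/1000, with N = 20 for spherical spreading
# and alpha the attenuation coefficient in dB/km.

def transmission_loss_db(range_m, alpha_db_per_km, spreading_coeff=20.0):
    return (spreading_coeff * math.log10(range_m)
            + alpha_db_per_km * range_m / 1000.0)

# 10 km range, ~0.5 dB/km attenuation (order of magnitude at a few kHz):
print(round(transmission_loss_db(10_000.0, 0.5), 1))  # 85.0
```

    Multipath and scattering contributions mentioned in the snippet add to this baseline and are not modeled here.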

  7. Towards metering tap water by Lorentz force velocimetry

    NASA Astrophysics Data System (ADS)

    Vasilyan, Suren; Ebert, Reschad; Weidner, Markus; Rivero, Michel; Halbedel, Bernd; Resagk, Christian; Fröhlich, Thomas

    2015-11-01

    In this paper, we present enhanced flow rate measurement by applying the contactless Lorentz Force Velocimetry (LFV) technique. Particularly, we show that the LFV is a feasible technique for metering the flow rate of salt water in a rectangular channel. The measurements of the Lorentz forces as a function of the flow rate are presented for different electrical conductivities of the salt water. The smallest value of conductivity is achieved at 0.06 S·m-1, which corresponds to the typical value of tap water. In comparison with previous results, the performance of LFV is improved by approximately 2 orders of magnitude by means of a high-precision differential force measurement setup. Furthermore, the sensitivity curve and the calibration factor of the flowmeter are provided based on extensive measurements for the flow velocities ranging from 0.2 to 2.5 m·s-1 and conductivities ranging from 0.06 to 10 S·m-1.

  8. Implications from the Upper Limit of Radio Afterglow Emission of FRB 131104/Swift J0644.5-5111

    NASA Astrophysics Data System (ADS)

    Gao, He; Zhang, Bing

    2017-02-01

    A γ-ray transient, Swift J0644.5-5111, has been claimed to be associated with FRB 131104. However, a long-term radio imaging follow-up observation only placed an upper limit on the radio afterglow flux of Swift J0644.5-5111. Applying the external shock model, we perform a detailed constraint on the afterglow parameters for the FRB 131104/Swift J0644.5-5111 system. We find that for the commonly used microphysics shock parameters (e.g., ε_e = 0.1, ε_B = 0.01, and p = 2.3), if the fast radio burst (FRB) is indeed cosmological as inferred from its measured dispersion measure (DM), the ambient medium number density should be ≤ 10⁻³ cm⁻³, which is the typical value for a compact binary merger environment but disfavors a massive star origin. Assuming a typical ISM density, one would require that the redshift of the FRB be much smaller than the value inferred from DM (z ≪ 0.1), implying a non-cosmological origin of DM. The constraints are much looser if one adopts smaller ε_B and ε_e values, as observed in some gamma-ray burst afterglows. The FRB 131104/Swift J0644.5-5111 association remains plausible. We critically discuss possible progenitor models for the system.

  9. Lower limb muscle volume estimation from maximum cross-sectional area and muscle length in cerebral palsy and typically developing individuals.

    PubMed

    Vanmechelen, Inti M; Shortland, Adam P; Noble, Jonathan J

    2018-01-01

    Deficits in muscle volume may be a significant contributor to physical disability in young people with cerebral palsy. However, 3D measurements of muscle volume using MRI or 3D ultrasound may be difficult to make routinely in the clinic. We wished to establish whether accurate estimates of muscle volume could be made from a combination of anatomical cross-sectional area and length measurements in samples of typically developing young people and young people with bilateral cerebral palsy. MRI scans were obtained from the lower limbs of 21 individuals with cerebral palsy (14.7 ± 3 years, 17 male) and 23 typically developing individuals (16.8 ± 3.3 years, 16 male). The volume, length and anatomical cross-sectional area were estimated for six muscles of the left lower limb. Analysis of covariance demonstrated that the relationship between length × cross-sectional area and volume did not differ significantly between the subject groups. Linear regression analysis demonstrated that the product of anatomical cross-sectional area and length bore a strong and significant relationship to the measured muscle volume (R² values between 0.955 and 0.988) with low standard errors of the estimates of 4.8 to 8.9%. This study demonstrates that muscle volume may be estimated accurately in typically developing individuals and individuals with cerebral palsy from a combination of anatomical cross-sectional area and muscle length. 2D ultrasound may be a convenient method of making these measurements routinely in the clinic. Copyright © 2017 Elsevier Ltd. All rights reserved.
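    The regression underlying the volume estimate above can be sketched as an ordinary least-squares fit of measured volume against the single predictor ACSA × length. The function and the numbers in the test are illustrative, not study data:

```python
def fit_volume_model(acsa_cm2, length_cm, volume_cm3):
    """Ordinary least-squares fit of muscle volume against the single
    predictor x = ACSA * length; returns (slope, intercept, R^2)."""
    x = [a * l for a, l in zip(acsa_cm2, length_cm)]
    y = volume_cm3
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # R^2: fraction of volume variance explained by the predictor
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot
```

    The reported R² values of 0.955 to 0.988 correspond to this kind of fit performed per muscle across subjects.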

  10. Development of the Mini-Assisting Hand Assessment: evidence for content and internal scale validity.

    PubMed

    Greaves, Susan; Imms, Christine; Dodd, Karen; Krumlinde-Sundholm, Lena

    2013-11-01

    To describe the development of the Mini-Assisting Hand Assessment (Mini-AHA) for children with signs of unilateral cerebral palsy (CP) aged 8 to 18 months, and evaluate aspects of content and internal scale validity. The ability of the video-recorded Mini-AHA play session to provoke bimanual performance in children with unilateral CP and typical development was evaluated. Original AHA test items were examined for their suitability for younger children and possible new items were generated. Data from 108 assessments of children with unilateral CP (86 children, 53 males, 33 females; mean age 13 mo, SD 3 mo, range 8-18 mo) were entered into a Rasch measurement model analysis to evaluate internal scale validity. A Spearman's correlation analysis explored the relationship between age and ability measures for children with unilateral CP. The frequency of maximum scores in 40 children with typical development (22 males, 18 females; mean age 12 mo, SD 3 mo) was examined. The Mini-AHA play session provoked bimanual responses in typically developing children 99% of the time. Person and item fit criteria established 20 items for the scale. The resultant unidimensional scale also demonstrated excellent discriminative features through high separation reliability. The item calibration values covered the range of person ability measures well. Age was not related to the ability measures for children with unilateral CP (rs =0.178). All children with typical development achieved maximum scores. Accumulated evidence shows that the Mini-AHA validly measures use of the affected hand during bimanual performance for children with unilateral CP aged 8 to 18 months. The Mini-AHA has the potential to be a useful assessment to evaluate functional hand use and the effects of intervention in an age group when potential for change is high. © 2013 Mac Keith Press.

  11. Image Texture Predicts Avian Density and Species Richness

    PubMed Central

    Wood, Eric M.; Pidgeon, Anna M.; Radeloff, Volker C.; Keuler, Nicholas S.

    2013-01-01

    For decades, ecologists have measured habitat attributes in the field to understand and predict patterns of animal distribution and abundance. However, the scale of inference possible from field-measured data is typically limited because large-scale data collection is rarely feasible. This is problematic given that conservation and management typically require data that are fine grained yet broad in extent. Recent advances in remote sensing methodology offer alternative tools for efficiently characterizing wildlife habitat across broad areas. We explored the use of remotely sensed image texture, a surrogate for vegetation structure, calculated from both an air photo and a Landsat TM satellite image, compared with field-measured vegetation structure, characterized by foliage-height diversity and horizontal vegetation structure, to predict avian density and species richness within grassland, savanna, and woodland habitats at Fort McCoy Military Installation, Wisconsin, USA. Image texture calculated from the air photo best predicted density of a grassland-associated species, grasshopper sparrow (Ammodramus savannarum), within grassland habitat (R² = 0.52, p-value <0.001), and avian species richness among habitats (R² = 0.54, p-value <0.001). Density of field sparrow (Spizella pusilla), a savanna-associated species, was not particularly well captured by either field-measured or remotely sensed vegetation structure variables, but was best predicted by air photo image texture (R² = 0.13, p-value = 0.002). Density of ovenbird (Seiurus aurocapillus), a woodland-associated species, was best predicted by pixel-level satellite data (mean NDVI, R² = 0.54, p-value <0.001).
Surprisingly, remotely sensed vegetation structure measures (i.e., image texture) were often better predictors of avian density and species richness than field-measured vegetation structure, and thus show promise as a valuable tool for mapping habitat quality and characterizing biodiversity across broad areas. PMID:23675463

  12. Systematic errors in the determination of the spectroscopic g-factor in broadband ferromagnetic resonance spectroscopy: A proposed solution

    NASA Astrophysics Data System (ADS)

    Gonzalez-Fuentes, C.; Dumas, R. K.; García, C.

    2018-01-01

    A theoretical and experimental study of the influence of small offsets of the magnetic field (δH) on the measurement accuracy of the spectroscopic g-factor (g) and saturation magnetization (Ms) obtained by broadband ferromagnetic resonance (FMR) measurements is presented. The random nature of δH generates systematic and opposite-sign deviations of the values of g and Ms with respect to their true values. A δH on the order of a few Oe leads to a ~10% error in g and Ms for a typical range of frequencies employed in broadband FMR experiments. We propose a simple experimental methodology to significantly minimize the effect of δH on the fitted values of g and Ms, eliminating their apparent dependence on the range of frequencies employed. Our method was successfully tested using broadband FMR measurements on a 5 nm thick Ni80Fe20 film for frequencies ranging between 3 and 17 GHz.

  13. Evaluation of spacecraft technology programs (effects on communication satellite business ventures), volume 1

    NASA Technical Reports Server (NTRS)

    Greenburg, J. S.; Gaelick, C.; Kaplan, M.; Fishman, J.; Hopkins, C.

    1985-01-01

    Commercial organizations as well as government agencies invest in spacecraft (S/C) technology programs that are aimed at increasing the performance of communications satellites. The value of these programs must be measured in terms of their impacts on the financial performance of the business ventures that may ultimately utilize the communications satellites. An economic evaluation and planning capability was developed and used to assess the impact of NASA on-orbit propulsion and space power programs on typical fixed satellite service (FSS) and direct broadcast service (DBS) communications satellite business ventures. Typical FSS and DBS spin and three-axis stabilized spacecraft were configured in the absence of NASA technology programs. These spacecraft were reconfigured taking into account the anticipated results of NASA specified on-orbit propulsion and space power programs. In general, the NASA technology programs resulted in spacecraft with increased capability. The developed methodology for assessing the value of spacecraft technology programs in terms of their impact on the financial performance of communication satellite business ventures is described. Results of the assessment of NASA specified on-orbit propulsion and space power technology programs are presented for typical FSS and DBS business ventures.

  15. Year rather than farming system influences protein utilization and energy value of vegetables when measured in a rat model.

    PubMed

    Jørgensen, Henry; Brandt, Kirsten; Lauridsen, Charlotte

    2008-12-01

    The aim of the study was to measure the protein utilization and energy value of dried apple, carrot, kale, pea, and potato prepared for human consumption and grown in 2 consecutive years with 3 different farming systems: (1) low input of fertilizer without pesticides (LI-P), (2) low input of fertilizers and high input of pesticides (LI+P), and (3) high input of fertilizers and high input of pesticides (HI+P). In addition, the study aimed to verify the nutritional values, taking the physiologic state into consideration. In experiment 1, the nutritive values, including the protein digestibility-corrected amino acid score, were determined for single ingredients in trials with young rats (3-4 weeks), as recommended by the Food and Agriculture Organization of the United Nations/World Health Organization for all age groups. A second experiment was carried out with adult rats to assess the usefulness of digestibility values for predicting the digestibility and nutritive value of mixed diets, and to study the age aspect. Each plant material was included in the diet with protein-free basal mixtures or casein to contain 10% dietary protein. The results showed that variations in protein utilization and energy value determined on single ingredients between cultivation strategies were inconsistent and smaller than between harvest years. Overall, dietary crude fiber was negatively correlated with energy digestibility. The energy value of apple, kale, and pea was lower than expected from literature values. A mixture of plant ingredients fed to adult rats showed lower protein digestibility and higher energy digestibility than predicted. The protein digestibility data obtained using young rats in the calculation of the protein digestibility-corrected amino acid score overestimate protein digestibility and quality and underestimate energy value for mature rats.
The present study provides new data on protein utilization and energy digestibility of some typical plant foods that may contribute new information for databases on food quality. Growing year but not cultivation system influenced the protein quality and energy value of the vegetables and fruit typical for human consumption.

  16. Development of Porosity Measurement Method in Shale Gas Reservoir Rock

    NASA Astrophysics Data System (ADS)

    Siswandani, Alita; Nurhandoko, BagusEndar B.

    2016-08-01

    The pore scale has an impact on transport mechanisms in shale gas reservoirs. In this research, a digital helium porosity meter is used for porosity measurement under realistic conditions. Accordingly, it is necessary to obtain a good approximation for gas-filled porosity. Shale has a typical effective porosity that changes as a function of time. Effective porosity values for three different shale rocks are analyzed by the proposed measurement. We develop a new measurement method for characterizing porosity phenomena in shale gas as a function of time by measuring porosity over a range of minutes using a digital helium porosity meter. The porosities measured in this experiment are free-gas and adsorbed-gas porosity. The pressure change over time shows that the porosity of shale comprises at least two types: macro-scale (fracture) porosity and fine-scale (nano-scale) porosity. We present the estimation of effective porosity values using the Boyle-Gay-Lussac approximation and the Van der Waals approximation.
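    As an illustration of the Boyle's-law principle behind helium porosimetry, the sketch below assumes isothermal ideal-gas expansion from a reference volume into an evacuated sample cell; the variable names and numbers are hypothetical, and the paper's actual apparatus equations (including the Van der Waals correction) are not reproduced here:

```python
def grain_volume(p1, p2, v_ref, v_cell):
    """Boyle's-law expansion: gas at pressure p1 in reference volume
    v_ref expands into an evacuated sample cell of volume v_cell
    containing the plug; the equilibrium pressure p2 yields the grain
    (solid) volume via p1 * v_ref = p2 * (v_ref + v_cell - v_grain)."""
    return v_ref + v_cell - p1 * v_ref / p2

def porosity(p1, p2, v_ref, v_cell, v_bulk):
    """Effective porosity as the pore fraction of the bulk sample volume."""
    return (v_bulk - grain_volume(p1, p2, v_ref, v_cell)) / v_bulk
```

    Repeating the pressure reading over minutes, as the method above does, would show the apparent porosity drift as helium slowly invades the nano-scale pores.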

  17. Sampling Utterances and Grammatical Analysis Revised (SUGAR): New Normative Values for Language Sample Analysis Measures

    ERIC Educational Resources Information Center

    Pavelko, Stacey L.; Owens, Robert E., Jr.

    2017-01-01

    Purpose: The purpose of this study was to document whether mean length of utterance (MLU[subscript S]), total number of words (TNW), clauses per sentence (CPS), and/or words per sentence (WPS) demonstrated age-related changes in children with typical language and to document the average time to collect, transcribe, and analyze conversational…

  18. Advances in threat assessment and their application to forest and rangeland management—Volume 2

    Treesearch

    H. Michael Rauscher; Yasmeen Sands; Danny C. Lee; Jerome S. Beatty

    2010-01-01

    Risk is a combined statement of the probability that something of value will be damaged and some measure of the damage’s adverse effect. Wildfires burning in the uncharacteristic fuel conditions now typical throughout the Western United States can damage ecosystems and adversely affect environmental conditions. Wildfire behavior can be modified by prefire fuel...

  19. Balloon borne Antarctic frost point measurements and their impact on polar stratospheric cloud theories

    NASA Technical Reports Server (NTRS)

    Rosen, James M.; Hofmann, D. J.; Carpenter, J. R.; Harder, J. W.; Oltmans, S. J.

    1988-01-01

    The first balloon-borne frost point measurements over Antarctica were made during September and October 1987 as part of the NOZE 2 effort at McMurdo. The results indicate water vapor mixing ratios on the order of 2 ppmv in the 15 to 20 km region, which is significantly smaller than the typical values currently being used in polar stratospheric cloud (PSC) theories. The observed water vapor mixing ratio would correspond to saturated conditions for what is thought to be the lowest stratospheric temperatures encountered over the Antarctic. Available lidar observations provide significant evidence that some PSCs form at temperatures higher than the local frost point (with respect to water) in the 10 to 20 km region, thus supporting the nitric acid theory of PSC composition. Clouds near 15 km and below appear to form in regions saturated with respect to water and thus are probably mostly ice water clouds, although they could contain relatively small amounts of other constituents. Photographic evidence suggests that the clouds forming above the frost point probably have an appearance quite different from the lower-altitude iridescent, colored nacreous clouds.

  20. Generation and evaluation of typical meteorological year datasets for greenhouse and external conditions on the Mediterranean coast.

    PubMed

    Fernández, M D; López, J C; Baeza, E; Céspedes, A; Meca, D E; Bailey, B

    2015-08-01

    A typical meteorological year (TMY) represents the typical meteorological conditions over many years but still contains the short term fluctuations which are absent from long-term averaged data. Meteorological data were measured at the Experimental Station of Cajamar 'Las Palmerillas' (Cajamar Foundation) in Almeria, Spain, over 19 years at the meteorological station and in a reference greenhouse which is typical of those used in the region. The two sets of measurements were subjected to quality control analysis and then used to create TMY datasets using three different methodologies proposed in the literature. Three TMY datasets were generated for the external conditions and two for the greenhouse. They were assessed by using each as input to seven horticultural models and comparing the model results with those obtained by experiment in practical trials. In addition, the models were used with the meteorological data recorded during the trials. A scoring system was used to identify the best performing TMY in each application and then rank them in overall performance. The best methodology was that of Argiriou for both greenhouse and external conditions. The average relative errors between the seasonal values estimated using the 19-year dataset and those using the Argiriou greenhouse TMY were 2.2 % (reference evapotranspiration), -0.45 % (pepper crop transpiration), 3.4 % (pepper crop nitrogen uptake) and 0.8 % (green bean yield). The values obtained using the Argiriou external TMY were 1.8 % (greenhouse reference evapotranspiration), 0.6 % (external reference evapotranspiration), 4.7 % (greenhouse heat requirement) and 0.9 % (loquat harvest date). Using the models with the 19 individual years in the historical dataset showed that the year to year weather variability gave results which differed from the average values by ± 15 %. 
By comparison with results from other greenhouses it was shown that the greenhouse TMY is applicable to greenhouses which have a solar radiation transmission of approximately 65 % and rely on manual control of ventilation which constitute the majority in the south-east of Spain and in most Mediterranean greenhouse areas.

  2. Black-white differences in the economic value of improving health.

    PubMed

    Murphy, Kevin M; Topel, Robert H

    2005-01-01

    This article examines how differences in longevity over time and across groups add to the typical measures of economic progress and intergroup differentials. We focus on gains for and differences between groups defined both by race (black and white) and by gender, relying on willingness to pay as our measure of the economic value of gains in longevity. Measured at birth, the gains for white males between 1968 and 1998 were about 245,000 dollars per person, while the gains for black males were far larger, about 390,000 dollars per person. The gains for women were somewhat smaller, with white females gaining about 150,000 dollars per person and black females gaining about 305,000 dollars per person. Our estimates suggest that differences in income explain about 1/3 to 1/2 of the current black-white gap in longevity.

  3. Value Based Care and Patient-Centered Care: Divergent or Complementary?

    PubMed

    Tseng, Eric K; Hicks, Lisa K

    2016-08-01

    Two distinct but overlapping care philosophies have emerged in cancer care: patient-centered care (PCC) and value-based care (VBC). Value in healthcare has been defined as the quality of care (measured typically by healthcare outcomes) modified by cost. In this conception of value, patient-centeredness is one important but not necessarily dominant quality measure. In contrast, PCC includes multiple domains of patient-centeredness and places the patient and family central to all decisions and evaluations of quality. The alignment of PCC and VBC is complicated by several tensions, including a relative lack of patient experience and preference measures, and conceptions of cost that are payer-focused instead of patient-focused. Several strategies may help to align these two philosophies, including the use of patient-reported outcomes in clinical trials and value determinations, and the purposeful integration of patient preference in clinical decisions and guidelines. Innovative models of care, including accountable care organizations and oncology patient-centered medical homes, may also facilitate alignment through improved care coordination and quality-based payment incentives. Ultimately, VBC and PCC will only be aligned if patient-centered outcomes, perspectives, and preferences are explicitly incorporated into the definitions and metrics of quality, cost, and value that will increasingly influence the delivery of cancer care.

  4. Fault Identification Based on Nlpca in Complex Electrical Engineering

    NASA Astrophysics Data System (ADS)

    Zhang, Yagang; Wang, Zengping; Zhang, Jinfang

    2012-07-01

    Faults are inevitable in any complex systems engineering. The electric power system is essentially a nonlinear system and one of the most complex artificial systems in the world. In our research, based on real-time measurements from phasor measurement units, under the influence of white Gaussian noise (standard deviation 0.01, zero mean), we used nonlinear principal component analysis (NLPCA) to solve the fault identification problem in complex electrical engineering. The simulation results show that a fault in complex electrical engineering usually corresponds to the variable with the maximum absolute coefficient in the first principal component. This research has significant theoretical value and practical engineering significance.
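    As a simplified stand-in for the NLPCA actually used in the study, the sketch below applies ordinary linear PCA (power iteration on the sample covariance matrix) and flags the variable with the largest absolute loading in the first principal component, mirroring the identification rule described in the abstract:

```python
def first_pc_loadings(samples, iters=200):
    """First principal component of row-wise samples via power
    iteration on the sample covariance matrix (pure-Python linear PCA)."""
    n, d = len(samples), len(samples[0])
    means = [sum(row[j] for row in samples) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in samples]
    cov = [[sum(centered[i][a] * centered[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def fault_variable(samples):
    """Index of the variable with the largest absolute loading in the
    first principal component -- the identification rule of the abstract."""
    v = first_pc_loadings(samples)
    return max(range(len(v)), key=lambda j: abs(v[j]))
```

    NLPCA generalizes this by passing the data through a nonlinear (autoencoder-style) mapping before extracting components; the loading-inspection step is analogous.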

  5. Small Aircraft RF Interference Path Loss Measurements

    NASA Technical Reports Server (NTRS)

    Nguyen, Truong X.; Koppen, Sandra V.; Ely, Jay J.; Szatkowski, George N.; Mielnik, John J.; Salud, Maria Theresa P.

    2007-01-01

    Interference to aircraft radio receivers is an increasing concern as more portable electronic devices are allowed onboard. Interference signals are attenuated as they propagate from inside the cabin to aircraft radio antennas mounted on the outside of the aircraft. The attenuation level is referred to as the interference path loss (IPL) value. Significant published IPL data exists for transport and regional category airplanes. This report fills a void by providing data for small business/corporate and general aviation aircraft. In this effort, IPL measurements are performed on ten small aircraft of different designs and manufacturers. Multiple radio systems are addressed. Along with the typical worst-case coupling values, statistical distributions are also reported that could lead to more meaningful interference risk assessment.

  6. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates a general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two) are available. A new perspective is that the bias error distribution can be found as the solution of an intrinsic functional equation in a domain. Based on this theory, scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  7. Phonological and acoustic bases for earliest grammatical category assignment: a cross-linguistic perspective.

    PubMed

    Shi, R; Morgan, J L; Allopenna, P

    1998-02-01

    Maternal infant-directed speech in Mandarin Chinese and Turkish (two mother-child dyads each; ages of children between 0;11 and 1;8) was examined to see if cues exist in input that might assist infants' assignment of words to lexical and functional item categories. Distributional, phonological, and acoustic measures were analysed. In each language, lexical and functional items (i.e. syllabic morphemes) differed significantly on numerous measures. Despite differences in mean values between categories, distributions of values typically displayed substantial overlap. However, simulations with self-organizing neural networks supported the conclusion that although individual dimensions had low cue validity, in each language multidimensional constellations of presyntactic cues are sufficient to guide assignment of words to rudimentary grammatical categories.

  8. Nitrogen oxides and ozone in the tropopause region of the Northern Hemisphere: Measurements from commercial aircraft in 1995/1996 and 1997

    NASA Astrophysics Data System (ADS)

    Brunner, Dominik; Staehelin, Johannes; Jeker, Dominique; Wernli, Heini; Schumann, Ulrich

    2001-11-01

    Measurements of nitrogen oxides (NO and NO2) and ozone (O3) were performed from a Swissair B-747 passenger aircraft in two extended time periods (May 1995 to May 1996, August to November 1997) in the framework of the Swiss NOXAR and the European POLINAT 2 project. The measurements were obtained on a total of 623 flights between Europe and destinations in the United States and the Far East. NO2 measurements were obtained only after December 1995 and were less precise than the NO measurements. Therefore daytime NO2 values were derived from measured NO and O3 concentrations assuming photostationary equilibrium. The completed NOx data set (measured NO, measured NO2 during night, and calculated NO2 during day) includes a complete annual cycle and is the most extensive and representative data set currently available for the upper troposphere (UT) and the lower stratosphere (LS) covering a significant proportion of the northern hemisphere between 15°N and 65°N. NOx concentrations in midlatitudes (30°-60°N) showed a marked seasonal variation both in the UT and the LS with a maximum in summer (median/mean values of 159/264 pptv in UT, 199/237 pptv in LS) and a minimum in winter (51/99 pptv in UT, 67/91 pptv in LS). Mean NOx concentrations were generally much higher than the respective median values, in particular in the UT, which reflects the important contribution from comparatively few very high concentrations observed in large-scale convection/lightning and small-scale aircraft plumes. Seasonal mean NOx concentrations in the UT were up to 3-4 times higher over continental regions than over the North Atlantic during summer. Lightning production of NO and convective vertical transport from the polluted boundary layer thus appear to have dominated the upper tropospheric NOx budget over these continental regions, particularly during summer. 
Ozone concentrations at aircraft cruising levels typically varied by an order of magnitude due to the strong vertical gradient in the LS. Seasonal mean values were dominated by large-scale dynamical processes controlling the altitude of the tropopause and the O3 abundance in the LS. O3 in the UT in midlatitudes showed a broad maximum between June and August, typical of observations in the free troposphere.
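    The photostationary-equilibrium derivation of daytime NO2 used in this record can be sketched as follows. This is a generic illustration, not the authors' code; the Arrhenius expression for k(NO + O3) and the sample J(NO2) value below are illustrative assumptions.

    ```python
    import math

    def no2_from_photostationary(no_pptv, o3_ppbv, temp_k, j_no2, pressure_pa=25000.0):
        """Daytime NO2 from measured NO and O3 assuming the photostationary
        state: J(NO2) * [NO2] = k * [NO] * [O3].
        The rate-constant expression and default pressure are illustrative
        assumptions, not values from the study."""
        k = 1.4e-12 * math.exp(-1310.0 / temp_k)      # cm^3 molecule^-1 s^-1 (assumed)
        kb = 1.380649e-23                             # Boltzmann constant, J K^-1
        air_nd = pressure_pa / (kb * temp_k) * 1e-6   # air number density, molecules cm^-3
        o3_nd = o3_ppbv * 1e-9 * air_nd               # O3 number density, molecules cm^-3
        return no_pptv * k * o3_nd / j_no2            # NO2 mixing ratio in pptv
    ```

    For upper-tropospheric conditions (e.g. 100 pptv NO, 100 ppbv O3, 220 K, J(NO2) ≈ 8e-3 s^-1) this yields an NO2/NO ratio well below one, consistent with daytime photochemistry.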

  9. Targeting overall equipment efficiency for small medium enterprises with irregular production system

    NASA Astrophysics Data System (ADS)

    Prasetyawan, Y.; Suef, M.; Claudia, L.; Handayani, F. D.

    2018-04-01

    Overall Equipment Effectiveness (OEE) is widely used to measure the maturity of a production system. A company is considered World Class Manufacturing if its OEE exceeds 85%, which requires near-perfect values for the availability, performance and quality factors. This assessment is usually applied to industries with regular production times organized in shifts; a typical 8-hour shift system is used in OEE measurement and performance monitoring. Few Small to Medium Enterprises (SMEs) run regular production times with shift systems; most use irregular production systems, driven by demand fluctuations. This paper presents a quantitative analysis, as part of a manufacturing system design, to achieve a specific OEE value for SMEs with irregular production systems, for individual businesses as well as collective business systems (some companies use the same production facilities for several processes). The results of experiments on several companies are presented as a basis for determining the technical strategy for achieving target OEE values.
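    As a minimal sketch of the metric discussed above (the standard OEE definition, not code from the paper), OEE is the product of the three factors, with the 85% world-class threshold applied on top:

    ```python
    def oee(availability, performance, quality):
        """Overall Equipment Effectiveness: the product of availability,
        performance and quality, each expressed as a fraction in [0, 1]."""
        return availability * performance * quality

    def is_world_class(oee_value, threshold=0.85):
        """World Class Manufacturing conventionally requires OEE above ~85%."""
        return oee_value > threshold
    ```

    Note that even three seemingly strong factors of 0.90, 0.95 and 0.99 multiply out to an OEE of about 0.846, just below the world-class threshold, which is why near-perfect values of all three factors are needed.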

  10. Bio-optical characteristics of a red tide induced by Mesodinium rubrum in the Cariaco Basin, Venezuela

    NASA Astrophysics Data System (ADS)

    Guzmán, Laurencia; Varela, Ramón; Muller-Karger, Frank; Lorenzoni, Laura

    2016-08-01

    The bio-optical changes of the water induced by red tides depend on the type of organism present, and the spectral characterization of such changes can provide useful information on the organism, its abundance and distribution. Here we present results from the bio-optical characterization of a non-toxic red tide induced by the autotrophic ciliate Mesodinium rubrum. Particle absorption was high [ap(440) = 1.78 m-1] compared to previous measurements in the same region [ap(440) = 0.09 ± 0.06 m-1], with detrital components contributing roughly 11% [ad(440) = 0.19 m-1]. The remainder was attributed to absorption by phytoplankton pigments [aph(440) = 1.60 m-1]. These aph values were ~15 times higher than typical values for these waters. High chlorophyll a concentrations were also measured (52.73 μg L-1), together with alloxanthin (9.52 μg L-1) and chlorophyll c (6.25 μg L-1). This suite of pigments is typical of the algal class Cryptophyceae, from which Mesodinium obtains its chloroplasts. Remote sensing reflectance showed relatively low values [Rrs(440) = 0.0007 sr-1] compared to other Rrs values for the region under high bloom conditions [Rrs(440) = 0.0028 sr-1], with maxima at 388, 484, 520, 596 and 688 nm. Based on the low reflection in the green-yellow compared to other red tides, we propose a new band ratio [Rrs(688)/Rrs(564)] to identify blooms of this particular group of organisms.
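    The absorption partitioning reported in this record can be checked with simple arithmetic (numbers taken from the abstract):

    ```python
    # Partition particle absorption at 440 nm into detrital and
    # phytoplankton components (values from the abstract, units m^-1).
    ap_440 = 1.78                          # total particle absorption
    ad_440 = 0.19                          # detrital absorption
    aph_440 = ap_440 - ad_440              # phytoplankton absorption, ~1.6
    detrital_fraction = ad_440 / ap_440    # ~0.11, i.e. "roughly 11%"
    ```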

  11. Anatomical background noise power spectrum in differential phase contrast breast images

    NASA Astrophysics Data System (ADS)

    Garrett, John; Ge, Yongshuai; Li, Ke; Chen, Guang-Hong

    2015-03-01

    In x-ray breast imaging, the anatomical noise background of the breast has a significant impact on the detection of lesions and other features of interest. This anatomical noise is typically characterized by a parameter, β, which describes a power-law dependence of anatomical noise on spatial frequency (the shape of the anatomical noise power spectrum). Large values of β have been shown to reduce human detection performance, and in conventional mammography typical values of β are around 3.2. Recently, x-ray differential phase contrast (DPC) and the associated dark field imaging methods have received considerable attention as possible supplements to absorption imaging for breast cancer diagnosis. However, the impact of these additional contrast mechanisms on lesion detection is not yet well understood. In order to better understand the utility of these new methods, we measured the β indices for absorption, DPC, and dark field images in 15 cadaver breast specimens using a benchtop DPC imaging system. We found that the measured β value for absorption was consistent with the literature for mammographic acquisitions (β = 3.61±0.49), but that both DPC and dark field images had much lower values of β (β = 2.54±0.75 for DPC and β = 1.44±0.49 for dark field). In addition, visual inspection showed greatly reduced anatomical background in both DPC and dark field images. These promising results suggest that DPC and dark field imaging may help provide improved lesion detection in breast imaging, particularly for those patients with dense breasts, in whom anatomical noise is a major limiting factor in identifying malignancies.
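    The exponent β of a power-law noise power spectrum NPS(f) ∝ 1/f^β is conventionally estimated by a linear fit in log-log coordinates. A minimal sketch of that standard fit (not the authors' analysis pipeline):

    ```python
    import numpy as np

    def estimate_beta(freqs, nps):
        """Fit NPS(f) = A / f**beta by linear regression in log-log
        space and return the power-law exponent beta (the negated slope)."""
        slope, _ = np.polyfit(np.log(freqs), np.log(nps), 1)
        return -slope
    ```

    On a synthetic spectrum generated with a known exponent, the fit recovers it exactly, which is a useful sanity check before applying it to measured spectra.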

  12. Volcanic Seismicity - The Power of the b-value

    NASA Astrophysics Data System (ADS)

    Main, I. G.; Roberts, N.; Bell, A. F.

    2016-12-01

    The Gutenberg-Richter `b-value' is commonly used in volcanic eruption forecasting to infer material or mechanical properties from earthquake distributions. It is `well known' that the b-value tends to be high or very high for volcanic earthquake populations relative to b = 1 for tectonic earthquakes, and that b varies significantly with time during periods of unrest. Subject to suitable calibration, the b-value also allows us to quantify and characterise earthquake distributions of both ancient and currently-active populations, as a measure of the frequency-size distribution of source rupture area or length. Using a new iterative sampling method (Roberts et al. 2016), we examine data from the El Hierro seismic catalogue during a period of unrest in 2011-2013, and quantify the resulting uncertainties. The results demonstrate that commonly-applied methods of assessing uncertainty in the b-value significantly underestimate the total uncertainty, particularly when b is high. They also show clear multi-modal behaviour in the evolution of the b-value. Individual modes are relatively stable in time, but the most probable b-value intermittently switches between modes, one of which is similar to that of tectonic seismicity, while some are genuinely higher within the total error. A key benefit of this approach is that it is able to resolve different b-values associated with contemporaneous processes, even when some processes generate high rates of events for short durations and others low rates for longer durations; these characteristics are typical of many volcanic processes. Secondly, we use a range of field observations from the exhumed extinct magma chamber on the Isle of Rum, NW Scotland, to infer an equivalent b-value for the `frozen' fracture system that would have been active at the time of volcanism 65 Ma ago. 
Using measurements from millimetre-scale fractures to lineations over 100 m in length on satellite imagery, we estimate b = 1.8, significantly greater than the typical tectonic value and in line with present-day observations at El Hierro and other volcanic systems.
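    For reference, the b-value comes from the Gutenberg-Richter relation log10 N(≥M) = a − bM, and a common point estimate is the Aki/Utsu maximum-likelihood formula. A minimal sketch of that standard estimator (the paper's iterative sampling method is more involved):

    ```python
    import math

    def b_value_mle(mags, m_c, dm=0.1):
        """Aki/Utsu maximum-likelihood b-value for magnitudes >= m_c,
        with the standard dm/2 correction for magnitude binning:
            b = log10(e) / (mean(M) - (m_c - dm/2))
        """
        m = [x for x in mags if x >= m_c]
        mean_m = sum(m) / len(m)
        return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))
    ```

    Uncertainty in this estimate grows with b and shrinks only slowly with sample size, which is one reason the commonly-applied error formulas can understate the total uncertainty for volcanic catalogues.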

  13. Measuring thermal conductivity of polystyrene nanowires using the dual-cantilever technique.

    PubMed

    Canetta, Carlo; Guo, Samuel; Narayanaswamy, Arvind

    2014-10-01

    Thermal conductance measurements are performed on individual polystyrene nanowires using a novel measurement technique in which the wires are suspended between two bi-material microcantilever sensors. The nanowires are fabricated via an electrospinning process. The thermal conductivity of the nanowire samples is found to be between 6.6 and 14.4 W m(-1) K(-1), depending on the sample, a significant increase above typical bulk conductivity values for polystyrene. The high strain rates characteristic of electrospinning are believed to align the molecular polymer chains along the axis of the nanowire, and hence to increase the thermal conductivity in that direction.

  14. Data Mining for Double Stars in Astrometric Catalogs

    NASA Astrophysics Data System (ADS)

    Wycoff, Gary L.; Mason, Brian D.; Urban, Sean E.

    2006-07-01

    The US Naval Observatory has mined over 140 astrometric catalogs, including the Astrographic Catalogue and the Two Micron All Sky Survey, for measures of double stars. This resulted in 114,218 new measures of 47,007 different systems spanning 110 years; these are now included in the Washington Double Star catalog (WDS). This is the single largest data set ever added to the WDS. The measures are typically of wider pairs, most between 4" and 30"; thus, their value in aiding orbit determination is limited. However, they have proven invaluable in the verification of systems and the determination of rectilinear motions of systems.

  15. Q branches of the nu7 fundamental of ethane (C2H6): Integrated intensity measurements for atmospheric measurement applications

    NASA Technical Reports Server (NTRS)

    Rinsland, C. P.; Harvey, G. A.; Levine, J. S.; Smith, M. A. H.; Malathy Devi, V.; Thakur, K. B.

    1986-01-01

    Laboratory spectra covering the nu7 band of ethane (C2H6) have been recorded, and measurements of integrated intensities of selected Q branches from these spectra are reported. The method by which the spectra were obtained is described, and a typical spectrum covering the PQ3 branch at 2976.8 cm-1 is shown along with a plot of equivalent width vs. optical density for this branch. The values of the integrated intensities reported for each branch are the means of five different optical densities.

  16. The oxygen-18 isotope approach for measuring aquatic metabolism in high-productivity waters

    USGS Publications Warehouse

    Tobias, C.R.; Böhlke, J.K.; Harvey, J.W.

    2007-01-01

    We examined the utility of δ18O2 measurements in estimating gross primary production (P), community respiration (R), and net metabolism (P:R) through diel cycles in a productive agricultural stream located in the midwestern U.S.A. Large diel swings in O2 (∼200 μmol L-1) were accompanied by large diel variation in δ18O2 (∼10‰). Simultaneous gas transfer measurements and laboratory-derived isotopic fractionation factors for O2 during respiration (αr) were used in conjunction with the diel monitoring of O2 and δ18O2 to calculate P, R, and P:R using three independent isotope-based methods. These estimates were compared to each other and against the traditional "open-channel diel O2-change" technique that lacked δ18O2. A principal advantage of the δ18O2 measurements was quantification of diel variation in R, which increased by up to 30% during the day; the diel pattern in R was variable and not necessarily predictable from assumed temperature effects on R. The P, R, and P:R estimates calculated using the isotope-based approaches showed high sensitivity to the assumed system fractionation factor (αr). The optimum modeled αr values (0.986-0.989) were roughly consistent with the laboratory-derived values, but larger (i.e., less fractionation) than αr values typically reported for enzyme-limited respiration in open water environments. Because of large diel variation in O2, P:R could not be estimated by directly applying the typical steady-state solution to the O2 and 18O-O2 mass balance equations in the absence of gas transfer data. Instead, our results indicate that a modified steady-state solution (the daily mean value approach) could be used with time-averaged O2 and δ18O2 measurements to calculate P:R independent of gas transfer. This approach was applicable under specifically defined, net heterotrophic conditions. 
The diel cycle of increasing daytime R and decreasing nighttime R was only partially explained by temperature variation, but could be consistent with the diel production/consumption of labile dissolved organic carbon from photosynthesis. © 2007, by the American Society of Limnology and Oceanography, Inc.

  17. The Effect of Fuel Quality on Carbon Dioxide and Nitrogen Oxide Emissions, While Burning Biomass and RDF

    NASA Astrophysics Data System (ADS)

    Kalnacs, J.; Bendere, R.; Murasovs, A.; Arina, D.; Antipovs, A.; Kalnacs, A.; Sprince, L.

    2018-02-01

    The article analyses the variations in the carbon dioxide emission factor depending on the parameters characterising biomass and RDF (refuse-derived fuel). The influence of moisture, ash content, heat of combustion, and carbon and nitrogen content on the emission factors is reviewed. Options for improving the fuel to reduce emissions of carbon dioxide and nitrogen oxide are analysed. Systematic measurements of biomass parameters have been performed, determining their average values, the seasonal limits of variation in these parameters, and their mutual relations. Typical average values of RDF parameters and their limits of variation have also been determined.

  18. Spectra of conditionalization and typicality in the multiverse

    NASA Astrophysics Data System (ADS)

    Azhar, Feraz

    2016-02-01

    An approach to testing theories describing a multiverse, which has gained interest of late, involves comparing theory-generated probability distributions over observables with their experimentally measured values. It is likely that such distributions, were we indeed able to calculate them unambiguously, will assign low probabilities to any such experimental measurements. An alternative to thereby rejecting these theories is to conditionalize the distributions involved by restricting attention to domains of the multiverse in which we might arise. In order to elicit a crisp prediction, however, one needs to make a further assumption about how typical we are of the chosen domains. In this paper, we investigate interactions between the spectra of available assumptions regarding both conditionalization and typicality, and draw out the effects of these interactions in a concrete setting; namely, on predictions of the total number of species that contribute significantly to dark matter. In particular, for each conditionalization scheme studied, we analyze how correlations between densities of different dark matter species affect the prediction, and explicate the effects of assumptions regarding typicality. We find that the effects of correlations can depend on the conditionalization scheme, and that in each case atypicality can significantly change the prediction. In doing so, we demonstrate the existence of overlaps in the predictions of different "frameworks" consisting of conjunctions of theory, conditionalization scheme and typicality assumption. This conclusion highlights the acute challenges involved in using such tests to identify a preferred framework that aims to describe our observational situation in a multiverse.

  19. Three wave mixing test of hyperelasticity in highly nonlinear solids: sedimentary rocks.

    PubMed

    D'Angelo, R M; Winkler, K W; Johnson, D L

    2008-02-01

    Measurements of three-wave mixing amplitudes are reported on solids whose third-order elastic constants have also been measured by means of the elasto-acoustic effect. Because attenuation and diffraction are important aspects of the measurement technique, results are analyzed using a frequency domain version of the KZK equation, modified to accommodate an arbitrary frequency dependence of the attenuation. It is found that the value of beta so deduced for poly(methylmethacrylate) (PMMA) agrees quite well with that predicted from the stress-dependent sound speed measurements, establishing that PMMA may be considered a hyperelastic solid in this context. The beta values of sedimentary rocks, though typically two orders of magnitude larger than, e.g., PMMA's, are still a factor of 3-10 less than those predicted from the elasto-acoustic effect. Moreover, these samples exhibit significant heterogeneity on a centimeter scale, a heterogeneity that is not apparent from a measurement of the position-dependent sound speed.

  20. The rise and fall of redundancy in decoherence and quantum Darwinism

    NASA Astrophysics Data System (ADS)

    Jess Riedel, C.; Zurek, Wojciech H.; Zwolak, Michael

    2012-08-01

    A state selected at random from the Hilbert space of a many-body system is overwhelmingly likely to exhibit highly non-classical correlations. For these typical states, half of the environment must be measured by an observer to determine the state of a given subsystem. The objectivity of classical reality—the fact that multiple observers can agree on the state of a subsystem after measuring just a small fraction of its environment—implies that the correlations found in nature between macroscopic systems and their environments are exceptional. Building on previous studies of quantum Darwinism showing that highly redundant branching states are produced ubiquitously during pure decoherence, we examine the conditions needed for the creation of branching states and study their demise through many-body interactions. We show that even constrained dynamics can suppress redundancy to the values typical of random states on relaxation timescales, and prove that these results hold exactly in the thermodynamic limit.

  1. SU-G-IeP3-01: Better Kerma-Area-Product (KAP) Estimation Using the System Parameters in Radiography and Fluoroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, D; MacDougall, R

    2016-06-15

    Purpose: Accurate values for Kerma-Area-Product (KAP) are needed for patient dosimetry and quality control for exams utilizing radiographic and/or fluoroscopic imaging. The KAP measured using a typical direct KAP meter built with a parallel-plate transmission ionization chamber is not precise and depends on the energy spectrum of the diagnostic x-rays. This study compared the accuracy and reproducibility of KAP derived from system parameters with values measured with a direct KAP meter. Methods: The IEC tolerance for displayed KAP is specified up to ±35% above 2.5 Gy·cm², and manufacturers' specifications are typically ±25%. KAP values from the direct KAP meter drift with time, leading to replacement or re-calibration. More precise and consistent KAP is achievable utilizing a database of known radiation output for various system parameters. The integrated KAP meter was removed from a radiography system. A total of 48 measurements of air kerma were acquired at x-ray tube potentials from 40 to 150 kVp in 10 kVp increments using an ion-chamber-type external dosimeter in a free-in-air geometry for four different filter combinations, following the manufacturer's service procedure. These data were used to create updated correction factors that determine air kerma computationally for given system parameters. Results of calculated KAP were evaluated against results using a calibrated ion-chamber-based dosimeter and a computed radiography imaging plate to measure x-ray field size. Results: The accuracy of the calculated KAP from the system parameters was within 4% deviation at all diagnostic x-ray tube potentials tested from 50 to 140 kVp. In contrast, deviations of up to 25% were measured for the KAP displayed by the direct KAP meter. Conclusion: The "calculated KAP" approach provides the advantage of improved accuracy and precision of displayed KAP as well as reduced cost of calibrating or replacing integrated KAP meters.
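    The calculated-KAP approach described above amounts to interpolating tabulated tube output over kVp, scaling by mAs, and multiplying by the measured field area. A hedged sketch under those assumptions (the table values and function name below are hypothetical, not from the study):

    ```python
    import numpy as np

    def calculated_kap(kvp, mas, field_area_cm2, kvp_grid, output_ugy_per_mas):
        """KAP (Gy*cm^2) from system parameters: interpolate the tabulated
        tube output (air kerma per mAs, free-in-air) at the requested kVp,
        scale by mAs, and multiply by the measured x-ray field area."""
        k_per_mas = np.interp(kvp, kvp_grid, output_ugy_per_mas)  # uGy/mAs
        air_kerma_gy = k_per_mas * mas * 1e-6                     # Gy
        return air_kerma_gy * field_area_cm2                      # Gy*cm^2
    ```

    In practice one table per filter combination would be needed, matching the four filter combinations measured in the study.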

  2. Specific Impulses Losses in Solid Propellant Rockets

    DTIC Science & Technology

    1974-12-17

    binder -- polyvinyl, polyurethane, or polybutadiene) markedly increases performance. Aluminum is the most widely used metal since its energy properties...temperature is also used. -5- The specific impulse values calculated for a typical propellant with 16.4% aluminum are as follows: (p0 70 atm. p - 1 atm...Direct Measurement of Combuction Efficiency of Aluminum Analysis of the condensed phase enables the proportion of unburnt aluminum to be determined

  3. Outcome-Focused Market Intelligence: Extracting Better Value and Effectiveness from Strategic Sourcing

    DTIC Science & Technology

    2013-04-01

    disseminating information are not systematically taught or developed in the government’s acquisition workforce. However, a study of 30 large firms ...to keep themselves abreast of changes in the marketplace, such as technological advances, process improvements, and available sources of supply. The...and performance measurement (Monczka & Petersen, 2008). Firms that develop supply management strategic plans typically set three-to-five year

  4. High value of ecological information for river connectivity restoration

    USGS Publications Warehouse

    Sethi, Suresh; O'Hanley, Jesse R.; Gerken, Jonathon; Ashline, Joshua; Bradley, Catherine

    2017-01-01

    Context: Efficient restoration of longitudinal river connectivity relies on barrier mitigation prioritization tools that incorporate stream network spatial structure to maximize ecological benefits given limited resources. Typically, ecological benefits of barrier mitigation are measured using proxies such as the amount of accessible riverine habitat. Objectives: We developed an optimization approach for barrier mitigation planning which directly incorporates the ecology of managed taxa, and applied it to an urbanizing salmon-bearing watershed in Alaska. Methods: A novel river connectivity metric that exploits information on the distribution and movement of managed taxa was embedded into a barrier prioritization framework to identify optimal mitigation actions given limited restoration budgets. The value of ecological information on managed taxa was estimated by comparing costs to achieve restoration targets across alternative barrier prioritization approaches. Results: Barrier mitigation solutions informed by life history information outperformed those using only river connectivity proxies, demonstrating the high value of ecological information for watershed restoration. In our study area, information on salmon ecology was typically valued at 0.8-1.2 M USD in cost savings to achieve a given benefit level relative to solutions derived only from stream network information, equating to 16-28% of the restoration budget. Conclusions: Investing in ecological studies may achieve win-win outcomes of improved understanding of aquatic ecology and greater watershed restoration efficiency.

  5. Experimental Constraints on Transparency of the Martian Atmosphere Out of Dust Storm

    NASA Astrophysics Data System (ADS)

    Korablev, O.; Moroz, V. I.; Rodin, A. V.

    In the absence of a dust storm, a so-called permanent dust haze with τ ≈ 0.2 in the atmosphere of Mars determines its thermal structure, as shown by Gierasch and Goody [1972 JAS 29, 400] and confirmed by modern Mars GCMs that include a dust cycle. Dust loading varies substantially with season and geographic location, and only the data of mapping instruments are adequate to characterize it. Presently, these are the data of thermal IR instruments, which benefit from being insensitive to condensational clouds: TES/MGS and IRTM/Viking. In calm atmospheric conditions (aphelion season) a typical value of the 9-µm optical depth τ9 of 0.05-0.15 is observed by these instruments [Smith et al. 2000, 2001 JGR 105, 9539; JGR 106, 23929; Martin and Richardson 1993 JGR 98, 10941]. In order to quantify the typical optical depth of the permanent dust haze, we discuss, among others, the following two questions: 1) How to reconcile the above values with reliable measurements from the surface (VL, Pathfinder), which give a typical optical depth (out of dust storms) of τ ≈ 0.5 on one side, and some ground-based observations (in the UV-visible range) that frequently reveal τ < 0.02 on the other side. 2) What is the relationship between τ9 and the visible optical depth? Comparison of IRTM and VL measurements (the only simultaneous observations available so far) suggests τvis/τ9 = 2.5, which contradicts the τvis/τ9 = 0.9 that follows from the IRIS/Mariner 9 mineralogy model, confirmed by a recent re-analysis of IRIS data.

  6. In-situ measurements of nitric oxide in the high latitude upper stratosphere

    NASA Technical Reports Server (NTRS)

    Horvath, J. J.; Frederick, J. E.

    1985-01-01

    The vertical profiles of nitric oxide were measured over Poker Flat, Alaska, in August 1984 and January and February 1985 using a rocket-launched, parachute-deployed chemiluminescence sensor. Results for the altitude range 35-45 km indicate a large seasonal variation, with wintertime mixing ratios a factor of two above summer values. The winter profiles contain sharp positive vertical gradients persisting through the highest altitudes observed. Above the stratopause, the mixing ratio observed in February increases rapidly and between 52 and 53 km reaches 148.9 ppbv, an order of magnitude greater than typical mid-latitude values measured with this instrument. Such behavior is consistent with the idea that nitric oxide produced at greater altitudes reaches the high-latitude upper stratosphere or lower mesosphere in winter. The results support the existence of a vertical coupling between diverse regions of the atmosphere in the high-latitude winter.

  7. Performance of a Brayton power system with a space type radiator

    NASA Technical Reports Server (NTRS)

    Nussle, R. C.; Prok, G. M.; Fenn, D. B.

    1974-01-01

    Test results of an experimental investigation to measure Brayton engine performance while operating at the sink temperatures of a typical low earth orbit are presented. The results indicate that the radiator area was slightly oversized. The steady state and transient responses of the power system to the sink temperatures in orbit were measured. During orbital operation, the engine did not reach the steady state operation of either the sun or shade condition. The alternator power variation during orbit was ±4 percent about its mean value of 9.3 kilowatts.

  8. Near-field entrainment in black smoker plumes

    NASA Astrophysics Data System (ADS)

    Smith, J. E.; Germanovich, L. N.; Lowell, R. P.

    2013-12-01

    In this work, we study the entrainment rate of ambient fluid into a plume under the extreme conditions of hydrothermal venting at ocean floor depths, conditions that are typical of such venting but difficult to reproduce in the laboratory. Specifically, we investigate the flow regime in the lower parts of three black smoker plumes in the Main Endeavour Field on the Juan de Fuca Ridge discharging at temperatures of 249°C, 333°C, and 336°C and a pressure of 21 MPa. The centerline temperature was measured at several heights in the plume above the orifice. Using a previously developed turbine flow meter, we also measured the mean flow velocity at the orifice. Measurements were conducted during dives 4452 and 4518 of the submersible Alvin. Using these measurements, we obtained a range of 0.064-0.068 for values of the entrainment coefficient α, which is assumed constant near the orifice. This is half the value of α ≈ 0.12-0.13 that would be expected for plume flow regimes based on existing laboratory results and field measurements at lower temperatures and pressures. In fact, α = 0.064-0.068 is even smaller than the value of α ≈ 0.075 characteristic of jet flow regimes and appears to be the lowest reported in the literature. Assuming that the mean value α = 0.066 is typical for hydrothermal venting at ocean floor depths, we then characterized the flow regimes of 63 black smoker plumes located on the Endeavour Segment of the Juan de Fuca Ridge. Work with the obtained data is ongoing, but current results indicate that approximately half of these black smokers are lazy in the sense that their plumes exhibit momentum deficits compared to the pure plume flow that develops as the plume rises. The remaining half produce forced plumes that show a momentum excess compared to pure plumes. 
The lower value of the entrainment coefficient has important implications for measurements of mass and heat output at mid-oceanic ridges. For example, determining heat output from the maximum height of plume rise has become a common method of measuring the heat flux produced by hydrothermal circulation at mid-oceanic ridges. The fundamental theory for the rise and spreading of turbulent buoyant plumes suggests that the heat output in this method is proportional to α² and is, therefore, sensitive to the value of α. The considerably different entrainment rates in lazy and forced black smoker plumes may also be important for understanding larval transport mechanisms in the life cycle of macrofauna near hydrothermal vents.
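    Because heat output inferred from a fixed plume-rise height scales as α², a revised entrainment coefficient directly rescales earlier flux estimates. A one-line illustration of that scaling (the reference flux value is hypothetical):

    ```python
    def rescale_heat_output(q_ref, alpha_new, alpha_ref):
        """Heat output inferred from a given maximum plume-rise height
        scales as alpha**2 (turbulent buoyant plume theory), so a smaller
        entrainment coefficient implies a proportionally smaller flux."""
        return q_ref * (alpha_new / alpha_ref) ** 2
    ```

    With α dropping from ≈0.13 to 0.066, a previous heat-output estimate for the same observed rise height would shrink by roughly a factor of four.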

  9. Estimating extreme stream temperatures by the standard deviate method

    NASA Astrophysics Data System (ADS)

    Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz

    2006-02-01

    It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (the standard deviate). Various KE values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE = 1.0 (range 0.5-1.5) and an error in projected high air temperature dTa = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs = 0.8 °C.
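    The standard deviate method described above can be sketched in a few lines (a generic implementation of the stated formula, not the authors' code):

    ```python
    def extreme_temperature(partial_max_series, ke):
        """Standard deviate method: the estimated extreme stream temperature
        is the mean of the partial maximum temperature series plus KE times
        its sample standard deviation."""
        n = len(partial_max_series)
        mean = sum(partial_max_series) / n
        var = sum((x - mean) ** 2 for x in partial_max_series) / (n - 1)
        return mean + ke * var ** 0.5
    ```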

  10. Average snowcover density values in Eastern Alps mountain

    NASA Astrophysics Data System (ADS)

    Valt, M.; Moro, D.

    2009-04-01

    The Italian Avalanche Warning Services monitor snow cover characteristics through networks evenly distributed over the alpine chain. Measurements of snow stratigraphy and density are performed frequently, with sampling rates of 1-2 times per week. Snow cover density values are used to compute the dimensions of building roofs as well as to design avalanche barriers. Based on the measured snow densities, the Electricity Board can predict the amount of water resources deriving from snow melt in high-relief drainage basins. In this work characteristic density values of the snow cover in the Eastern Alps were computed using the information contained in the databases of ARPA (Agenzia Regionale Protezione Ambiente)-Centro Valanghe di Arabba and Ufficio Valanghe-Udine. Among other things, these databases include 15 years of stratigraphic measurements. More than 6,000 snow stratigraphic logs were analysed in order to derive typical values by geographical area, altitude, exposure, snow cover thickness and season. Computed values were compared to those established by current Italian law. Finally, the correlations between the seasonal variations of the average snow density and variations in the snowfall rate in the period 1994-2008 in the Eastern Alps mountain range were identified and evaluated.

  11. Particulate Matter Mass Concentration in Residential Prefabricated Buildings Related to Temperature and Moisture

    NASA Astrophysics Data System (ADS)

    Kraus, Michal; Juhásová Šenitková, Ingrid

    2017-10-01

    A building environmental audit and the assessment of indoor air quality (IAQ) in typical residential buildings are necessary to ensure users’ health and well-being. The paper deals with the concentrations of indoor dust particles (PM10) in the context of the hygrothermal microclimate of the indoor environment. Indoor temperature, relative humidity and air movement are the basic factors determining the PM10 concentration [μg/m3]. The experimental measurements in this contribution characterize the impact of indoor physical parameters on particulate matter mass concentration. The occurrence of dust particles is typical for almost two-thirds of building interiors. Other parameters of the indoor environment, such as air change rate, volume of the room, roughness and porosity of the building material surfaces, static electricity, light ions and others, were held constant and are not taken into account in this study. The mass concentration of PM10 was measured during the summer season in an apartment of a residential prefabricated building. The values of globe temperature [°C] and relative humidity of indoor air [%] were also monitored. The quantity of particulate matter mass is determined gravimetrically by weighing according to CSN EN 12 341 (2014). The obtained results show that the temperature difference of the internal environment does not have a significant effect on the PM10 concentration. Conversely, differences in relative humidity produce differences in the concentration of dust particles: higher levels of indoor particulates are observed at low values of relative humidity, and a decrease in relative air humidity of about 10% caused an increase of about 10 μg/m3 in PM10 concentration. The hygienic limit value of PM10 concentration was not exceeded at any point of the experimental measurement.

  12. Seasonal variations measured by TDR and GPR on an anthropogenic sandy soil and the implications for utility detection

    NASA Astrophysics Data System (ADS)

    Curioni, Giulio; Chapman, David N.; Metje, Nicole

    2017-06-01

    The electromagnetic (EM) soil properties are dynamic variables that can change considerably over time, and they fundamentally affect the performance of Ground Penetrating Radar (GPR). However, long-term field studies are remarkably rare, and records of the EM soil properties and their seasonal variation are largely absent from the literature. This research explores the extent of the seasonal variation of the apparent permittivity (Ka) and bulk electrical conductivity (BEC) measured by Time Domain Reflectometry (TDR) and their impact on GPR results, with a particularly important application to utility detection. A bespoke TDR field monitoring station was developed and installed in an anthropogenic sandy soil in the UK for 22 months. The relationship between the temporal variation of the EM soil properties and GPR performance was qualitatively assessed, highlighting notable degradation of the GPR images during wet periods and for a few days after significant rainfall events following dry periods. Significantly, it was shown that assuming arbitrary average values (i.e. not extreme values) of Ka and BEC, which often do not reflect the actual conditions of the soil, can lead to significant inaccuracies in the estimation of the depth of buried targets, with errors potentially up to approximately 30% even over a depth of 0.50 m (where GPR is expected to be most accurate). It is therefore recommended to measure or assess the soil conditions during GPR surveys and, if this is not possible, to use typical wet and dry Ka values reported in the literature for the soil expected at the site, to improve confidence in estimations of target depths.
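
    The depth errors quoted above follow from how GPR converts travel time to depth. A minimal sketch, assuming a uniform soil; the "true" and "assumed" Ka values below are hypothetical, not the site's measurements:

```python
import math

C = 0.2998  # speed of light in vacuum, m/ns

def gpr_depth(two_way_time_ns, ka):
    """Target depth from a GPR two-way travel time, assuming a uniform
    soil with apparent relative permittivity Ka."""
    v = C / math.sqrt(ka)            # wave velocity in the soil, m/ns
    return v * two_way_time_ns / 2   # two-way time -> one-way depth

# Hypothetical scenario: a target at 0.50 m in wet soil (Ka = 16),
# interpreted with an assumed average Ka = 9.
t = 2 * 0.50 * math.sqrt(16) / C     # true two-way travel time, ns
wrong = gpr_depth(t, 9.0)
print(round(wrong, 3))  # depth overestimated by ~33%
```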

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hubaud, Aude A.; Schroeder, David J.; Ingram, Brian J.

    The thermal expansion (TE) coefficients of the lithium-stable lithium-ion conducting garnet lithium lanthanum zirconium oxide (LLZ) and the effect of aluminum substitution were measured from room temperature up to 700 °C by synchrotron-based X-ray diffraction. The typical TE value measured for the most reported composition (LLZ doped with 0.3 wt.% or 0.093 mol% aluminum) was 15.498 × 10⁻⁶ K⁻¹, which is approximately twice the value reported for other garnet-type structures. As the Al(III) concentration has been observed to strongly affect the structure and the ionic conductivity, we also assessed its role in thermal expansion and noted only a small variation with increasing dopant concentration. The materials implications for using LLZ in a solid-state battery are discussed.

  14. Outpatient imaging center valuations: do you need a fair-market value analysis?

    PubMed

    Koonsman, G S

    2001-01-01

    Typically, outpatient diagnostic imaging centers are formed as partnerships between radiologists, radiologists and hospitals, and/or radiologists and diagnostic imaging center management companies. As a result of these partnership structures, the question of equity valuation frequently arises. It is important to understand not only when an independent valuation is required, but also what type of valuation needs to be performed; the type may vary based upon the use of the valuation. In partnerships that involve hospitals and physicians, the federal anti-kickback statutes (fraud and abuse laws) require that all transactions between referring physicians and hospitals be consummated at fair-market value. In addition, tax-exempt hospitals that enter into partnerships with physicians are required to enter into those transactions at fair-market value or risk losing their tax-exempt status. Fair-market value is also typically the standard of value under which partnerships strive to conduct equity transactions with shareholders. Those who perform independent fair-market value opinions should have proper business valuation training, a primary business focus on valuations, and a focus on the healthcare industry and specifically on the valuation of diagnostic imaging centers. In order to perform a reasonable business valuation analysis, the appraiser must have access to a significant amount of financial, operational and legal information. The analyst must be able to understand the history of the imaging center as well as the projected future of the center. Ultimately, a valuation is a measurement of the estimated future cash flows of the center, risk-adjusted, in order to quantify the present value of those cash flows.

  15. Mars hemispherical albedo map: absolute value and interannual variability inferred from OMEGA data.

    NASA Astrophysics Data System (ADS)

    Vincendon, M.; Audouard, J.; Langevin, Y.; Poulet, F.; Bellucci, G.; Bibring, J.-P.; Gondet, B.

    2012-04-01

    The surface reflectance integrated over all directions and solar wavelengths ("hemispherical albedo") controls the radiative budget at the surface of Mars, and hence its climate. Reference albedo maps are usually derived from nadir observations of surface reflectance through clear atmospheric conditions. However, the atmosphere of Mars is permanently loaded with a significant amount of aerosols (typical visible optical depths of 0.5 under clear atmospheric conditions), which impacts the evaluation of "aerosol-free" surface reflectances from remote sensing data. Moreover, the Martian surface is usually assumed to be Lambertian, both for simplicity and due to the lack of robust constraints on its bidirectional properties. We used OMEGA visible and near-IR measurements, with an appropriate UV extrapolation, to calculate the hemispherical surface albedo of Mars as a function of space and time. The contribution of aerosols is removed using a radiative transfer model and recent aerosol properties. Uncertainties associated with this procedure are calculated. The aerosol correction increases the bright/dark surface contrast. Typical mean bidirectional reflectance properties of the Martian surface are estimated using MER surface measurements and CRISM remote "EPF" observations. From these constraints, we have derived a typical relationship that makes it possible to convert single nadir measurements of reflectance into hemispherical albedo. Accounting for the BRDF of the Martian surface typically modifies the derived albedo by ±15%, depending on solar zenith angle. We will present our methods and preliminary results regarding seasonal and interannual variations of the surface albedo of Mars during the years 2004-2011.

  16. Improved measurements of turbulence in the hot gaseous atmospheres of nearby giant elliptical galaxies

    DOE PAGES

    Ogorzalek, A.; Zhuravleva, I.; Allen, S. W.; ...

    2017-08-12

    Here, we present significantly improved measurements of turbulent velocities in the hot gaseous haloes of nearby giant elliptical galaxies. Using deep XMM–Newton Reflection Grating Spectrometer (RGS) observations and a combination of resonance scattering and direct line broadening methods, we obtain well-bounded constraints for 13 galaxies. Assuming that the turbulence is isotropic, we obtain a best-fitting mean 1D turbulent velocity of 110 km s⁻¹. This implies a typical 3D Mach number of ~0.45 and a typical non-thermal pressure contribution of ~6 per cent in the cores of nearby massive galaxies. The intrinsic scatter around these values is modest (consistent with zero, albeit with large statistical uncertainty), hinting at a common and quasi-continuous mechanism sourcing the velocity structure in these objects. Using conservative estimates of the spatial scales associated with the observed turbulent motions, we find that turbulent heating can be sufficient to offset radiative cooling in the inner regions of these galaxies (<10 kpc, typically 2-3 kpc). The full potential of our analysis methods will be enabled by future X-ray micro-calorimeter observations.

  17. The VALS: A new tool to measure people's general valued attributes of landscapes.

    PubMed

    Kendal, Dave; Ford, Rebecca M; Anderson, Nerida M; Farrar, Alison

    2015-11-01

    Research on values for natural areas has largely focussed on theoretical concerns such as distinguishing the different kinds of values held by people. However, practice, policymaking, planning and management are typically focused on more tangible valued attributes of the landscape, such as biodiversity and recreation infrastructure, that can be manipulated by management actions. There is a need for valid psychometric measures of such values that are suited to informing land management policies. A Valued Attributes of Landscape Scale (VALS) was developed, derived from a document analysis of values expressed in public land policy documents. The validity of the VALS was tested in an online survey comparing values across one of three randomly presented landscape contexts in Victoria, Australia: all publicly managed natural land, coastal areas, and large urban parks. A purposive snowball sample was used to recruit participants with a range of views and professional experience with land management, including members of the urban public. Factor analysis of responses (n = 646) separated concepts relating to natural attributes, social functions, the experience of being in natural areas, cultural attributes and productive uses. The relative importance of valued attribute factors was similar across all landscape contexts, although there were small but significant differences in the way people valued social functions (higher in urban parks) and productive uses (lower in urban parks). We conclude that the concept of valued attributes is useful for linking theoretical understandings of people's environmental values to the way values are considered by land managers, and that these attributes can be measured using the VALS instrument to produce data that should be useful for the policy and planning of natural resources. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Health care economic analyses and value-based medicine.

    PubMed

    Brown, Melissa M; Brown, Gary C; Sharma, Sanjay; Landy, Jennifer

    2003-01-01

    Health care economic analyses are becoming increasingly important in the evaluation of health care interventions, including many within ophthalmology. Encompassed within the realm of health care economic studies are cost-benefit analysis, cost-effectiveness analysis, cost-minimization analysis, and cost-utility analysis. Cost-utility analysis is the most sophisticated form of economic analysis and typically incorporates utility values. Utility values measure the preference for a health state and range from 0.0 (death) to 1.0 (perfect health). When the change in utility conferred by a health care intervention is multiplied by the duration of the benefit, the number of quality-adjusted life-years (QALYs) gained from the intervention is ascertained. This methodology incorporates the improvement in quality of life and/or length of life (the value) occurring as a result of the intervention. This improvement in value can then be combined with discounted costs to yield the expenditure per quality-adjusted life-year ($/QALY) gained. $/QALY gained is a measure that allows a comparison of the patient-perceived value of virtually all health care interventions for the dollars expended. A review of the literature on health care economic analyses, with particular emphasis on cost-utility analysis, is included in the present review. It is anticipated that cost-utility analysis will play a major role in health care within the coming decade.
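
    The $/QALY arithmetic described above can be sketched as follows; the intervention numbers are hypothetical and discounting is omitted for simplicity:

```python
def dollars_per_qaly(cost, utility_before, utility_after, years):
    """Cost-utility ratio: expenditure per quality-adjusted life-year
    gained. QALYs gained = change in utility x duration of benefit."""
    qaly_gained = (utility_after - utility_before) * years
    return cost / qaly_gained

# Hypothetical intervention: utility improves from 0.6 to 0.8,
# sustained for 10 years, at a cost of $40,000:
# 0.2 x 10 = 2.0 QALYs gained, about $20,000 per QALY.
print(dollars_per_qaly(40_000, 0.6, 0.8, 10))
```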

  19. Estimating the uncertainty in thermochemical calculations for oxygen-hydrogen combustors

    NASA Astrophysics Data System (ADS)

    Sims, Joseph David

    The thermochemistry program CEA2 was combined with the statistical thermodynamics program PAC99 in a Monte Carlo simulation to determine the uncertainty in several CEA2 output variables due to uncertainty in thermodynamic reference values for the reactant and combustion species. In all, six typical performance parameters were examined, along with the required intermediate calculations (five gas properties and eight stoichiometric coefficients), for three hydrogen-oxygen combustors: a main combustor, an oxidizer preburner and a fuel preburner. The three combustors were analyzed in two different modes: design mode, where, for the first time, the uncertainty in thermodynamic reference values---taken from the literature---was considered (inputs to CEA2 were specified and so had no uncertainty); and data reduction mode, where inputs to CEA2 did have uncertainty. The inputs to CEA2 were contrived experimental measurements that were intended to represent the typical combustor testing facility. In design mode, uncertainties in the performance parameters were on the order of 0.1% for the main combustor, on the order of 0.05% for the oxidizer preburner and on the order of 0.01% for the fuel preburner. Thermodynamic reference values for H2O were the dominant sources of uncertainty, as was the assigned enthalpy for liquid oxygen. In data reduction mode, uncertainties in performance parameters increased significantly as a result of the uncertainties in experimental measurements compared to uncertainties in thermodynamic reference values. Main combustor and fuel preburner theoretical performance values had uncertainties of about 0.5%, while the oxidizer preburner had nearly 2%. Associated experimentally-determined performance values for all three combustors were 3% to 4%. The dominant sources of uncertainty in this mode were the propellant flowrates. These results only apply to hydrogen-oxygen combustors and should not be generalized to every propellant combination. 
Species for a hydrogen-oxygen system are relatively simple, thereby resulting in low thermodynamic reference value uncertainties. Hydrocarbon combustors, solid rocket motors and hybrid rocket motors have combustion gases containing complex molecules that will likely have thermodynamic reference values with large uncertainties. Thus, every chemical system should be analyzed in a similar manner as that shown in this work.
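
    A minimal sketch of the Monte Carlo approach described above, with a toy linear model standing in for the actual CEA2/PAC99 chain; the reference values and their uncertainties below are invented for illustration:

```python
import random
import statistics

def monte_carlo_uncertainty(model, ref_means, ref_sigmas, n=20_000):
    """Propagate Gaussian uncertainty in thermodynamic reference values
    through a performance model by repeated sampling."""
    samples = []
    for _ in range(n):
        perturbed = [random.gauss(m, s) for m, s in zip(ref_means, ref_sigmas)]
        samples.append(model(perturbed))
    return statistics.mean(samples), statistics.stdev(samples)

# Toy stand-in for a combustor performance calculation: a linear
# combination of two hypothetical reference enthalpies.
model = lambda refs: 2.0 * refs[0] - 0.5 * refs[1]
random.seed(0)
mean, sigma = monte_carlo_uncertainty(model, [100.0, 50.0], [0.1, 0.2])
# For a linear model, sigma should approach
# sqrt((2.0*0.1)**2 + (0.5*0.2)**2) ~ 0.224.
print(mean, sigma)
```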

  20. Assessment of natural radioactivity and gamma-ray dose in monazite rich black Sand Beach of Penang Island, Malaysia.

    PubMed

    Shuaibu, Hauwau Kulu; Khandaker, Mayeen Uddin; Alrefae, Tareq; Bradley, D A

    2017-06-15

    Activity concentrations of primordial radionuclides in sand samples collected from the coastal beaches surrounding Penang Island have been measured using conventional γ-ray spectrometry, while in-situ γ-ray doses have been measured through use of a portable radiation survey meter. The mean activity concentrations for ²²⁶Ra, ²³²Th and ⁴⁰K at different locations were found to be less than the world average values, while the Miami Bay values for ²²⁶Ra and ²³²Th were found to be greater, at 1023 ± 47 and 2086 ± 96 Bq kg⁻¹ respectively. The main contributor to radionuclide enrichment in Miami Bay is the presence of monazite-rich black sands. The measured data were compared against literature values and also recommended limits set by the relevant international bodies. With the exception of Miami Bay, considered an elevated background radiation area that would benefit from regular monitoring, Penang island beach sands typically pose no significant radiological risk to the local populace and tourists visiting the leisure beaches. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. An Improved Method of Predicting Extinction Coefficients for the Determination of Protein Concentration.

    PubMed

    Hilario, Eric C; Stern, Alan; Wang, Charlie H; Vargas, Yenny W; Morgan, Charles J; Swartz, Trevor E; Patapoff, Thomas W

    2017-01-01

    Concentration determination is an important method of protein characterization required in the development of protein therapeutics. There are many known methods for determining the concentration of a protein solution, but the easiest to implement in a manufacturing setting is absorption spectroscopy in the ultraviolet region. For typical proteins composed of the standard amino acids, absorption at wavelengths near 280 nm is due to the three amino acid chromophores tryptophan, tyrosine, and phenylalanine, in addition to a contribution from disulfide bonds. According to the Beer-Lambert law, absorbance is proportional to concentration and path length, with the proportionality constant being the extinction coefficient. Typically, the extinction coefficient of a protein is determined by measuring the absorbance of a solution and then independently determining its concentration, a measurement with some inherent variability depending on the method used. In this study, extinction coefficients were calculated based on the measured absorbance of model compounds of the four chromophores. These calculated values for an unfolded protein were then compared with an experimental concentration determination based on enzymatic digestion of proteins. The experimentally determined extinction coefficient for the native proteins was consistently found to be 1.05 times the calculated value for the unfolded proteins for a wide range of proteins, with good accuracy and precision under well-controlled experimental conditions. The value of 1.05 times the calculated value was termed the predicted extinction coefficient. Statistical analysis shows that the differences between predicted and experimentally determined coefficients are scattered randomly, indicating no systematic bias among the proteins measured. The predicted extinction coefficient was found to be accurate and not subject to the inherent variability of experimental methods. 
We propose the use of a predicted extinction coefficient for determining the protein concentration of therapeutic proteins starting from early development through the lifecycle of the product. LAY ABSTRACT: Knowing the concentration of a protein in a pharmaceutical solution is important to the drug's development and posology. There are many ways to determine the concentration, but the easiest one to use in a testing lab employs absorption spectroscopy. Absorbance of ultraviolet light by a protein solution is proportional to its concentration and path length; the proportionality constant is the extinction coefficient. The extinction coefficient of a protein therapeutic is usually determined experimentally during early product development and has some inherent method variability. In this study, extinction coefficients of several proteins were calculated based on the measured absorbance of model compounds. These calculated values for an unfolded protein were then compared with experimental concentration determinations based on enzymatic digestion of the proteins. The experimentally determined extinction coefficient for the native protein was 1.05 times the calculated value for the unfolded protein with good accuracy and precision under controlled experimental conditions, so the value of 1.05 times the calculated coefficient was called the predicted extinction coefficient. Comparison of predicted and measured extinction coefficients indicated that the predicted value was very close to the experimentally determined values for the proteins. The predicted extinction coefficient was accurate and removed the variability inherent in experimental methods. © PDA, Inc. 2017.
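
    A sketch of the predicted-coefficient calculation, assuming the widely used literature values for chromophore molar absorptivities at 280 nm (these specific numbers are not given in the abstract, and the example protein composition is hypothetical):

```python
# Commonly cited molar extinction coefficients at 280 nm (M^-1 cm^-1);
# assumed literature values, not taken from this paper.
EPS_280 = {"Trp": 5500, "Tyr": 1490, "cystine": 125}

def predicted_extinction(n_trp, n_tyr, n_cystine, factor=1.05):
    """Calculated unfolded-protein coefficient times the empirical 1.05
    native/unfolded factor reported in the abstract."""
    calc = (n_trp * EPS_280["Trp"]
            + n_tyr * EPS_280["Tyr"]
            + n_cystine * EPS_280["cystine"])
    return factor * calc

# Hypothetical protein with 2 Trp, 8 Tyr, and 4 disulfide bonds:
# calculated value 23,420 M^-1 cm^-1, predicted value 1.05x that.
print(predicted_extinction(2, 8, 4))
```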

  2. The variance of the locally measured Hubble parameter explained with different estimators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odderskov, Io; Hannestad, Steen; Brandbyge, Jacob, E-mail: isho07@phys.au.dk, E-mail: sth@phys.au.dk, E-mail: jacobb@phys.au.dk

    We study the expected variance of measurements of the Hubble constant, H₀, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H₀ from CMB measurements and the value measured in the local universe, these considerations are important in light of the percent-level determination of the Hubble constant in the local universe.

  3. A Framework for Establishing Standard Reference Scale of Texture by Multivariate Statistical Analysis Based on Instrumental Measurement and Sensory Evaluation.

    PubMed

    Zhi, Ruicong; Zhao, Lei; Xie, Nan; Wang, Houyin; Shi, Bolin; Shi, Jingye

    2016-01-13

    A framework for establishing a standard reference scale of texture is proposed, using multivariate statistical analysis of instrumental measurements and sensory evaluation. Multivariate statistical analysis is conducted to rapidly select typical reference samples with the characteristics of universality, representativeness, stability, substitutability, and traceability. The reasonableness of the framework is verified by establishing a standard reference scale for the texture attribute of hardness with well-known Chinese foods. More than 100 food products in 16 categories were tested using instrumental measurement (TPA test), and the results were analyzed with clustering analysis, principal component analysis, relative standard deviation, and analysis of variance. As a result, nine kinds of foods were selected to construct the hardness standard reference scale. The results indicate that the regression between the estimated sensory value and the instrumentally measured value is significant (R² = 0.9765), which fits well with Stevens's theory. The research provides a reliable theoretical basis and practical guide for establishing quantitative standard reference scales for food texture characteristics.

  4. Wolf Attack Probability: A Theoretical Security Measure in Biometric Authentication Systems

    NASA Astrophysics Data System (ADS)

    Une, Masashi; Otsuka, Akira; Imai, Hideki

    This paper proposes the wolf attack probability (WAP) as a new measure for evaluating the security of biometric authentication systems. The wolf attack is an attempt to impersonate a victim by feeding “wolves” into the system under attack, where a “wolf” is an input value that can be falsely accepted as a match with multiple templates. WAP is defined as the maximum success probability of the wolf attack with one wolf sample. In this paper, we give a rigorous definition of this new security measure, which provides a strength estimate for an individual biometric authentication system against impersonation attacks. We show that, re-evaluated using our WAP measure, a typical fingerprint algorithm turns out to be much weaker than theoretically estimated by Ratha et al. Moreover, we apply the wolf attack to a finger-vein-pattern based algorithm. Surprisingly, we show that there exists an extremely strong wolf which falsely matches all templates for any threshold value.
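
    The WAP idea can be sketched with a toy matcher. Here, as a simplifying assumption, the success probability of one wolf is approximated by the fraction of enrolled templates it falsely matches; the matcher and feature values are entirely hypothetical:

```python
def wolf_attack_probability(candidates, templates, matcher):
    """Approximate WAP: the maximum, over candidate wolf inputs, of the
    fraction of enrolled templates the candidate falsely matches."""
    return max(
        sum(matcher(c, t) for t in templates) / len(templates)
        for c in candidates
    )

# Toy matcher on 1-D integer 'features': accept if within a threshold.
matcher = lambda a, b: abs(a - b) <= 2
templates = [1, 5, 9, 13]
candidates = [0, 7, 11]
wap = wolf_attack_probability(candidates, templates, matcher)
print(wap)  # 0.5: candidate 7 matches templates 5 and 9
```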

  5. Low-intensity calibration source for optical imaging systems

    NASA Astrophysics Data System (ADS)

    Holdsworth, David W.

    2017-03-01

    Laboratory optical imaging systems for fluorescence and bioluminescence imaging have become widely available for research applications. These systems use an ultra-sensitive CCD camera to produce quantitative measurements of very low light intensity, detecting signals from small-animal models labeled with optical fluorophores or luminescent emitters. Commercially available systems typically provide quantitative measurements of light output, in units of radiance (photons s⁻¹ cm⁻² sr⁻¹) or intensity (photons s⁻¹ cm⁻²). One limitation of current systems is that there is often no provision for routine quality assurance and performance evaluation. We describe such a quality assurance system, based on an LED-illuminated thin-film transistor (TFT) liquid-crystal display module. The light intensity is controlled by pulse-width modulation of the backlight, producing radiance values ranging from 1.8 × 10⁶ photons s⁻¹ cm⁻² sr⁻¹ to 4.2 × 10¹³ photons s⁻¹ cm⁻² sr⁻¹. The lowest light intensity values are produced by very short backlight pulses (approximately 10 μs), repeated every 300 s. This very low duty cycle is appropriate for laboratory optical imaging systems, which typically operate with long-duration exposures (up to 5 minutes). The low-intensity light source provides a stable, traceable radiance standard that can be used for routine quality assurance of laboratory optical imaging systems.
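
    The quoted radiance range is consistent with simple duty-cycle scaling of the backlight, which can be checked as follows (assuming, as a simplification, that time-averaged radiance scales linearly with duty cycle):

```python
def pwm_radiance(full_on_radiance, pulse_width_s, period_s):
    """Time-averaged radiance of a backlight pulsed at a given
    duty cycle (pulse width / period)."""
    return full_on_radiance * (pulse_width_s / period_s)

# Figures from the abstract: ~10 microsecond pulses every 300 s,
# against the quoted maximum radiance of 4.2e13 photons/s/cm^2/sr.
low = pwm_radiance(4.2e13, 10e-6, 300.0)
print(low)  # ~1.4e6, the order of the quoted 1.8e6 minimum
```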

  6. Correlational study on atmospheric concentrations of fine particulate matter and children cough variant asthma.

    PubMed

    Zhang, Y-X; Liu, Y; Xue, Y; Yang, L-Y; Song, G-D; Zhao, L

    2016-06-01

    We explored the relationship between atmospheric concentrations of fine particulate matter and cough variant asthma in children. 48 children diagnosed with cough variant asthma were placed in the cough asthma group, while 50 children suffering from typical asthma were placed in the typical asthma group. We also had 50 cases of chronic pneumonia (the pneumonia group) and 50 healthy children (the control group). We calculated the average PM2.5 and temperature values during spring, summer, autumn and winter, and monitored serum lymphocyte ratio, CD4+/CD8+ T, immunoglobulin IgE, ventilatory indices and high-sensitivity C-reactive protein (hs-CRP) levels. Our results showed that PM2.5 values in spring and winter were remarkably higher than in the other seasons. Correlation analysis demonstrated that onset in the cough asthma group occurred mostly in spring, while onset in the typical asthma group occurred mostly in winter, followed by spring. We established a positive correlation between asthma onset in the cough asthma group and the PM2.5 value (r = 0.623, p = 0.017), and there was also a positive correlation between asthma onset in the typical asthma group and the PM2.5 value (r = 0.714, p = 0.015). Lymphocyte ratio and IgE level in the cough asthma group and the typical asthma group were significantly higher, while CD4+/CD8+ T was significantly lower in these two groups. The hs-CRP levels in the cough asthma, typical asthma and pneumonia groups were significantly higher than that of the control group. The FEV1/predicted value, FEV1/FVC and MMEF/predicted value in the cough asthma group and the typical asthma group were significantly lower than those in the other groups; however, the difference between these two groups was not statistically significant. Our findings showed that PM2.5 was related to the onset of cough variant asthma in children, and that PM2.5 reduced immune regulation and ventilatory function.

  7. Cathode fall measurement in a dielectric barrier discharge in helium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Yanpeng; Zheng, Bin; Liu, Yaoge

    2013-11-15

    A method based on the “zero-length voltage” extrapolation is proposed to measure cathode fall in a dielectric barrier discharge. Starting, stable, and discharge-maintaining voltages were measured to obtain the extrapolation zero-length voltage. Under our experimental conditions, the “zero-length voltage” gave a cathode fall of about 185 V. Based on the known thickness of the cathode fall region, the spatial distribution of the electric field strength in dielectric barrier discharge in atmospheric helium is determined. The strong cathode fall with a maximum field value of approximately 9.25 kV/cm was typical for the glow mode of the discharge.

  8. Physical data measurements and mathematical modelling of simple gas bubble experiments in glass melts

    NASA Technical Reports Server (NTRS)

    Weinberg, Michael C.

    1986-01-01

    In this work consideration is given to the problem of the extraction of physical data information from gas bubble dissolution and growth measurements. The discussion is limited to the analysis of the simplest experimental systems consisting of a single, one component gas bubble in a glassmelt. It is observed that if the glassmelt is highly under- (super-) saturated, then surface tension effects may be ignored, simplifying the task of extracting gas diffusivity values from the measurements. If, in addition, the bubble rise velocity is very small (or very large) the ease of obtaining physical property data is enhanced. Illustrations are given for typical cases.

  9. Determination of perpendicular magnetic anisotropy based on the magnetic droplet nucleation

    NASA Astrophysics Data System (ADS)

    Nishimura, Tomoe; Kim, Duck-Ho; Okuno, Takaya; Hirata, Yuushou; Futakawa, Yasuhiro; Yoshikawa, Hiroki; Kim, Sanghoon; Tsukamoto, Arata; Shiota, Yoichi; Moriyama, Takahiro; Ono, Teruo

    2018-05-01

    We propose an alternative method of determining the magnetic anisotropy field μ0 H K in ferro-/ferrimagnets. On the basis of the droplet nucleation model, there exists linearity between domain-wall (DW) energy density and in-plane magnetic field. We find that the slope is simply represented by μ0 H K and Dzyaloshinskii–Moriya interaction (DMI). By measuring the in-plane magnetic field dependence of the coercivity field, closely corresponding to the DW energy density, a robust value for μ0 H K can be quantified. This robust value can be used to determine μ0 H K over a wide range of values, overcoming the limitations caused by the small strength of the external magnetic field typically used in experiments.

  10. A vibration model for centrifugal contactors

    NASA Astrophysics Data System (ADS)

    Leonard, R. A.; Wasserman, M. O.; Wygmans, D. G.

    1992-11-01

Using the transfer matrix method, we created the Excel worksheet 'Beam' for analyzing vibrations in centrifugal contactors. With this worksheet, a user can calculate the first natural frequency of the motor/rotor system for a centrifugal contactor. We determined a typical value for the bearing stiffness (k(sub B)) of a motor after measuring the k(sub B) value for three different motors. The k(sub B) value is an important parameter in this model, but it is not normally available for motors. The assumptions that we made in creating the Beam worksheet were verified by comparing the calculated results with those from a VAX computer program, BEAM IV. The Beam worksheet was applied to several contactor designs for which we have experimental data and was found to work well.
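The worksheet itself is not reproduced here, but the role of the bearing stiffness k(sub B) in setting the first natural frequency can be illustrated with a much cruder single-degree-of-freedom estimate; all numbers below are assumptions, not values from the report:

```python
import math

# Single-degree-of-freedom estimate f1 = sqrt(k/m) / (2*pi) for a
# motor/rotor of effective mass m supported by a bearing of stiffness k_B.
# This is far simpler than the transfer-matrix 'Beam' worksheet; it only
# shows why k_B is a critical input to the vibration model.
k_B = 2.0e6   # bearing stiffness, N/m (assumed)
m = 20.0      # effective motor/rotor mass, kg (assumed)

f1 = math.sqrt(k_B / m) / (2.0 * math.pi)  # first natural frequency, Hz
print(f"first natural frequency ~ {f1:.1f} Hz")
```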

  11. SU-F-T-08: Brachytherapy Film Dosimetry in a Water Phantom for a Ring and Tandem HDR Applicator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, B; Grelewicz, Z; Kang, Z

    2016-06-15

Purpose: The feasibility of dose measurement using new generation EBT3 film was explored in a water phantom for a ring and tandem HDR applicator, for measurements tracking mucosal dose during cervical brachytherapy. Methods: An experimental fixture was assembled to position the applicator in a water phantom. Prior to measurement, calibration curves for EBT3 film in water and in solid water were verified. EBT3 film was placed at different known locations around the applicator in the water tank. A CT scan of the phantom with applicator was performed using the clinical protocol. A typical cervical cancer treatment plan was then generated by the Oncentra brachytherapy planning system. A dose of 500 cGy was prescribed to point A (2 cm, 2 cm). Locations measured by film included the outer surface of the ring, measurement point A-m (2.2 cm, 2.2 cm), and profiles extending from point A-m parallel to the tandem. Three independent measurements were conducted. The doses recorded by film were carefully analyzed and compared with values calculated by the treatment planning system. Results: Assessment of the EBT3 films indicates that the dose at point A matches the values predicted by the planning system. Dose to point A-m was 411.5 cGy, and the outer circumferential surface dose of the ring was between 500 and 1150 cGy. It was found that from point A-m, the dose drops 60% within 4.5 cm on the line parallel to the tandem. The measured doses agree with the treatment planning system. Conclusion: Use of EBT3 film is feasible for in-water measurements for brachytherapy. A carefully machined apparatus will likely improve measurement accuracy. In a typical plan, our study found that the ring surface dose can be 2.5 times larger than the point A prescription dose. EBT3 film can be used to monitor mucosal dose in brachytherapy treatments.

  12. Typical values of the electric drift E × B/B² in the inner radiation belt and slot region as determined from Van Allen Probe measurements

    NASA Astrophysics Data System (ADS)

    Lejosne, Solène; Mozer, F. S.

    2016-12-01

The electric drift E × B/B² plays a fundamental role in the description of plasma flow and particle acceleration. Yet it is not well known in the inner belt and slot region because of a lack of reliable in situ measurements. In this article, we present an analysis of the electric drifts measured below L = 3 by both Van Allen Probes A and B from September 2012 to December 2014. The objective is to determine the typical components of the equatorial electric drift in both the radial and azimuthal directions. The dependences of the components on radial distance, magnetic local time, and geographic longitude are examined. The results from Van Allen Probe A agree with those from Van Allen Probe B. They show, among other things, a typical corotation lag of the order of 5 to 10% below L = 2.6, as well as a slight radial transport of the order of 20 m/s. The magnetic local time dependence of the electric drift is consistent with that of the ionospheric wind dynamo below L = 2 and with that of a solar wind-driven convection electric field above L = 2. A secondary longitudinal dependence of the electric field is also found. Therefore, this work also demonstrates that the instruments on board the Van Allen Probes are able to perform accurate measurements of the electric drift below L = 3.
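The drift itself is a simple vector operation; a sketch with rough illustrative field magnitudes for the inner belt region (assumed values, not actual probe data):

```python
import numpy as np

# Electric drift velocity v = (E x B) / |B|^2.
# Field magnitudes below are rough illustrative assumptions.
E = np.array([4.0e-5, 0.0, 0.0])   # electric field, V/m (assumed)
B = np.array([0.0, 0.0, 2.0e-6])   # magnetic field, T (assumed)

v_drift = np.cross(E, B) / np.dot(B, B)   # drift velocity vector, m/s
speed = np.linalg.norm(v_drift)
print(f"drift speed ~ {speed:.0f} m/s")
```

With these assumed magnitudes the drift speed is E/B = 20 m/s, the same order as the radial transport quoted in the abstract.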

  13. Unusual Physical Properties of the Chicxulub Crater Peak Ring: Results from IODP/ICDP Expedition 364

    NASA Astrophysics Data System (ADS)

    Christeson, G. L.; Gebhardt, C.; Gulick, S. P. S.; Le Ber, E.; Lofi, J.; Morgan, J. V.; Nixon, C.; Rae, A.; Schmitt, D. R.

    2017-12-01

    IODP/ICDP Expedition 364 Hole M0077A drilled into the peak ring of the Chicxulub impact crater, recovering core between 505.7 and 1334.7 m below the seafloor (mbsf). Physical property measurements include wireline logging data, a vertical seismic profile (VSP), Multi-Sensor Core Logger (MSCL) measurements, and discrete sample measurements. The Hole M0077A peak ring rocks have unusual physical properties. Across the boundary between post-impact sediment and crater breccia we measure a sharp decrease in velocities and densities, and an increase in porosity. Mean crater breccia values are 3000-3300 m/s, 2.14-2.15 g/cm3, and 31% for velocity, density, and porosity, respectively. This zone is also associated with a low-frequency reflector package on MCS profiles and a low-velocity layer in FWI images, both confirmed from the VSP dataset. The thin (24 m) crater melt unit has mean velocity measurements of 3800-4150 m/s, density measurements of 2.32-2.34 g/cm3, and porosity measurements of 20%; density and porosity values are intermediate between the overlying impact breccia and underlying granitic basement, while the velocity values are similar to those for the underlying basement. The Hole M0077A crater melt unit velocities and densities are considerably less than values of 5800 m/s and 2.68 g/cm3 measured at an onshore well located in the annular trough. The uplifted granitic peak ring materials have mean values of 4100-4200 m/s, 2.39-2.44 g/cm3, and 11% for compressional wave velocity, density, and porosity, respectively; these values differ significantly from typical granite which has higher velocities (5400-6000 m/s) and densities (2.62-2.67 g/cm3), and lower porosities (<1%). All Hole M0077A peak-ring velocity, density, and porosity measurements indicate considerable fracturing, and are consistent with numerical models for peak-ring formation.

  14. Improving xylem hydraulic conductivity measurements by correcting the error caused by passive water uptake.

    PubMed

    Torres-Ruiz, José M; Sperry, John S; Fernández, José E

    2012-10-01

Xylem hydraulic conductivity (K) is typically defined as K = F/(P/L), where F is the flow rate through a xylem segment associated with an applied pressure gradient (P/L) along the segment. This definition assumes a linear flow-pressure relationship with a flow intercept (F0) of zero. While linearity is typically the case, there is often a non-zero F0 that persists in the absence of leaks or evaporation and is caused by passive uptake of water by the sample. In this study, we determined the consequences of failing to account for non-zero F0 for both K measurements and the use of K to estimate the vulnerability to xylem cavitation. We generated vulnerability curves for olive root samples (Olea europaea) by the centrifuge technique, measuring a maximally accurate reference Kref as the slope of a four-point F vs P/L relationship. The Kref was compared with three more rapid ways of estimating K. When F0 was assumed to be zero, K was significantly under-estimated (average of -81.4 ± 4.7%), especially when Kref was low. Vulnerability curves derived from these under-estimated K values overestimated the vulnerability to cavitation. When non-zero F0 was taken into account, whether it was measured or estimated, more accurate K values (relative to Kref) were obtained, and vulnerability curves indicated greater resistance to cavitation. We recommend accounting for non-zero F0 for obtaining accurate estimates of K and cavitation resistance in hydraulic studies. Copyright © Physiologia Plantarum 2012.
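Measuring K as the slope of a multi-point F vs P/L line, rather than from a single F/(P/L) ratio, can be sketched as follows. The data are synthetic, with K = 2.0 and a negative intercept F0 = -0.2 in arbitrary units (chosen so the naive estimate underestimates K, as reported above); they are not measurements from the study:

```python
import numpy as np

# Synthetic flow measurements F = K*(P/L) + F0 with K = 2.0, F0 = -0.2
# (arbitrary units).
grad = np.array([0.2, 0.3, 0.4, 0.5])   # applied pressure gradient P/L
flow = 2.0 * grad - 0.2                  # measured flow F

# Slope of the F vs P/L line gives K; the intercept recovers F0.
K_slope, F0 = np.polyfit(grad, flow, 1)

# Naive single-point estimate that wrongly assumes F0 = 0:
K_naive = flow[-1] / grad[-1]
print(K_slope, F0, K_naive)
```

Here the naive estimate (1.6) falls 20% short of the true slope (2.0), illustrating how ignoring F0 biases K and, downstream, the vulnerability curves.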

  15. Results of the NIST National Ball Plate Round Robin.

    PubMed

    Caskey, G W; Phillips, S D; Borchardt, B R

    1997-01-01

    This report examines the results of the ball plate round robin administered by NIST. The round robin was part of an effort to assess the current state of industry practices for measurements made using coordinate measuring machines. Measurements of a two-dimensional ball plate (240 mm by 240 mm) on 41 coordinate measuring machines were collected and analyzed. Typically, the deviations of the reported X and Y coordinates from the calibrated values were within ± 5 μm, with some coordinate deviations exceeding 20.0 μm. One of the most significant observations from these data was that over 75 % of the participants failed to correctly estimate their measurement error on one or more of the ball plate spheres.

  16. Thermal insulation and clothing area factors of typical Arabian Gulf clothing ensembles for males and females: measurements using thermal manikins.

    PubMed

    Al-ajmi, F F; Loveday, D L; Bedwell, K H; Havenith, G

    2008-05-01

The thermal insulation of clothing is one of the most important parameters used in the thermal comfort model adopted by the International Standards Organisation (ISO) [BS EN ISO 7730, 2005. Ergonomics of the thermal environment. Analytical determination and interpretation of thermal comfort using calculation of the PMV and PPD indices and local thermal comfort criteria. International Standardisation Organisation, Geneva.] and by ASHRAE [ASHRAE Handbook, 2005. Fundamentals. Chapter 8. American Society of Heating Refrigeration and Air-conditioning Engineers, Inc., 1791 Tullie Circle N.E., Atlanta, GA.]. To date, thermal insulation values of mainly Western clothing have been published, with only minimal data available for non-Western clothing. Thus, the objective of the present study is to measure and present the thermal insulation (clo) values of a number of Arabian Gulf garments as worn by males and females. The clothing ensembles and garments of Arabian Gulf males and females presented in this study are representative of those typically worn in the region during both the summer and winter seasons. Measurements of total thermal insulation values (clo) were obtained using male- and female-shaped thermal manikins in accordance with the definition of insulation given in ISO 9920. In addition, the clothing area factors (fcl) determined in two different ways were compared. The first method used a photographic technique and the second a regression equation as proposed in ISO 9920, based on the insulation values of the Arabian Gulf male and female garments and ensembles as determined in this study. In addition, the fibre content, descriptions and weights of the Arabian Gulf clothing have been recorded and tabulated. The findings of this study are presented as additions to the existing knowledge base of clothing insulation, and provide for the first time data for Arabian Gulf clothing. The analysis showed that for these non-Western clothing designs, the most widely used regression calculation of fcl is not valid. However, despite the very large errors in fcl made with the regression method, the errors this causes in the intrinsic clothing insulation value, Icl, are limited.
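One way to see why a large error in the clothing area factor produces only a limited error in the intrinsic insulation is the ISO 9920 relation Icl = IT - Ia/fcl, in which fcl enters only through the boundary air layer term. The numbers below are illustrative assumptions, not the study's manikin data:

```python
# Intrinsic clothing insulation via Icl = IT - Ia / fcl (ISO 9920 relation).
# IT, Ia and the two fcl values are assumed for illustration only.
I_T = 1.2   # total insulation of clothing plus air layer, clo (assumed)
I_a = 0.7   # boundary air layer insulation on the nude manikin, clo (assumed)

def intrinsic_insulation(f_cl):
    return I_T - I_a / f_cl

icl_photo = intrinsic_insulation(1.30)  # fcl from photographs (assumed)
icl_regr = intrinsic_insulation(1.50)   # fcl from the regression (assumed)
print(round(icl_photo, 3), round(icl_regr, 3))
```

Under these assumptions, a roughly 15% disagreement in fcl shifts Icl by only about 11%, consistent with the abstract's point that fcl errors propagate weakly into Icl.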

  17. Measurement of pressure-broadening and lineshift coefficients at 77 and 296 K of methane lines in the 727 nm band using intracavity laser spectroscopy

    NASA Technical Reports Server (NTRS)

    Singh, Kuldip; O'Brien, James J.

    1994-01-01

Pressure-broadening coefficients and pressure-induced lineshifts of several rotational-vibrational lines have been measured in the 727 nm absorption band of methane at temperatures of 77 and 296 K, using nitrogen, hydrogen, and helium as the foreign-gas collision partners. A technique involving intracavity laser spectroscopy is used to record the methane spectra. Average values of the broadening coefficients (/cm/atm) at 77 K are: 0.199, 0.139, 0.055, and 0.29 for collision partners N2, H2, He, and CH4, respectively. Typical average values of the pressure-induced lineshifts (/cm/atm) at 77 K and for the range of foreign gas pressures between 10 and 200 torr are -0.052 for N2, -0.063 for H2, and +0.031 for He. All the values obtained at 296 K are considerably different from the corresponding values at 77 K. This represents the first report of pressure-broadening and shifting coefficients for the methane transitions in the region where the Δν(C-H) = 5 band occurs.

  18. Moisture interaction and stability of ZOT (Zinc Orthotitanate) thermal control spacecraft coating

    NASA Technical Reports Server (NTRS)

    Mon, Gordon R.; Gonzalez, Charles C.; Ross, Ronald G., Jr.; Wen, Liang C.; Odonnell, Timothy

    1988-01-01

Two of the many performance requirements of the zinc orthotitanate (ZOT) ceramic thermal control paint covering parts of the Jupiter-bound Galileo spacecraft are that it be sufficiently electrically conductive so as to prevent electrostatic discharge (ESD) damage to onboard electronics and that it adhere to and protect the substrate from corrosion in terrestrial environments. The bulk electrical resistivity of ZOT on an aluminum substrate was measured over the ranges 22 C to 90 C and 0 percent RH to 100 percent RH, and also in soft (10⁻² Torr) and hard (10⁻⁷ Torr) vacuums. No significant temperature dependence was evident, but measured resistivity values ranged over 9 orders of magnitude: from 10⁵ ohm-cm at 100 percent RH to greater than 10¹² ohm-cm in a hard vacuum. The latter value violates the ESD criterion for a typical 0.019 cm thick coating. The corrosion study involved exposing typical ZOT/substrate combinations to two moisture environments - 30 C/85 percent RH and 85 C/85 percent RH - for 2000 hours, during which time the samples were periodically removed for front-to-back electrical resistance and scratch/peel test measurements. It was determined that the ZOT/Al and ZOT/Mg systems are stable (no ZOT delamination), although some corrosion (oxide formation) and resistivity increases observed among the ZOT/Mg samples warrant that exposure of some parts to humid environments be minimized.

  19. Environmental monitoring through use of silica-based TLD.

    PubMed

    Rozaila, Z Siti; Khandaker, M U; Abdul Sani, S F; Sabtu, Siti Norbaini; Amin, Y M; Maah, M J; Bradley, D A

    2017-09-25

The sensitivity of a novel silica-based fibre-form thermoluminescence dosimeter was tested off-site of a rare-earths processing plant, investigating the potential for obtaining baseline measurements of naturally occurring radioactive materials. The dosimeter, a Ge-doped collapsed photonic crystal fibre (PCFc) co-doped with B, was calibrated against commercially available thermoluminescent dosimeters (TLD-200 and TLD-100) using a bremsstrahlung (tube-based) x-ray source. Eight sampling sites within 1 to 20 km of the perimeter of the rare-earth facility were identified, the TLDs (silica-based as well as TLD-200 and TLD-100) in each case being buried within the soil at a fixed depth, allowing measurements to be obtained over protracted periods of exposure of between two and eight months. The dose values were then compared against values projected on the basis of radioactivity measurements of the associated soils, obtained via high-purity germanium gamma-ray spectrometry. Accord was found in relative terms between the TL evaluations at each site and the associated spectroscopic results. That said, in absolute terms, the TL-evaluated doses were typically less than those derived from gamma-ray spectroscopy, by ∼50% in the case of PCFc-Ge. Gamma spectrometry analysis typically provided an upper limit to the projected dose, as the Marinelli beaker contents were formed by sieving to provide a homogeneous, well-packed medium. With the radioactivity per unit mass typically greater for smaller particles (preferential adsorption on the surface, and surface area per unit volume increasing with decreasing radius), this made for an elevated dose estimate. Prevailing concentrations of the key naturally occurring radionuclides in soil, ²²⁶Ra, ²³²Th and ⁴⁰K, were also determined, together with a radiological dose evaluation.
To date, the area under investigation, although including a rare-earth processing facility, gives no cause for concern from radiological impact. The current study reveals the suitability of the optical-fibre-based micro-dosimeter for all-weather monitoring of low-level environmental radioactivity.

  20. Established dietary estimates of net acid production do not predict measured net acid excretion in patients with Type 2 diabetes on Paleolithic-Hunter-Gatherer-type diets.

    PubMed

    Frassetto, L A; Shi, L; Schloetter, M; Sebastian, A; Remer, T

    2013-09-01

Formulas developed to estimate diet-dependent net acid excretion (NAE) generally agree with measured values for typical Western diets. Whether they can also appropriately predict NAE for 'Paleolithic-type' (Paleo) diets - which contain very high amounts of fruits and vegetables (F&V) and concurrently high amounts of protein - is unknown. Here, we compare measured NAEs with established NAE estimates in subjects with Type 2 diabetes (T2D). Thirteen subjects with well-controlled T2D were randomized to either a Paleo or American Diabetes Association (ADA) diet for 14 days. Twenty-four hour urine collections were performed at baseline and at the end of the diet period, and analyzed for titratable acid, bicarbonate and ammonium to calculate measured NAE. Three formulas for estimating NAE from dietary intake were used: two (NAE_diet R or L) that include dietary mineral intake and sulfate and organic acid (OA) production, and one that is empirically derived (NAE_diet F), considering only potassium and protein intake. Measured NAE on the Paleo diet was significantly lower than on the ADA diet (+31±22 vs 112±52 mEq/day, P=0.002). Although all formula estimates showed similar and reasonable correlations (r=0.52-0.76) with measured NAE, each one underestimated measured values. The formula with the best correlation did not contain an estimate of dietary OA production. Paleo diets are lower in NAE than typical Western diets. However, commonly used formulas clearly underestimate NAE, especially for diets with very high F&V (as the Paleo diet), and in subjects with T2D. This may be due to an inappropriate estimation of proton loads stemming from OAs, underlining the necessity for improved measures of OA-related proton sources.
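The empirical protein-and-potassium estimate alluded to above is commonly written (following Frassetto et al.) as NEAP [mEq/day] ≈ 54.5 × protein [g/day] / potassium [mEq/day] - 10.2. A sketch with assumed intakes, not the study's measured diets:

```python
# Empirical estimate of net endogenous acid production from protein and
# potassium intake only (Frassetto-type formula). Intakes are assumptions
# chosen to illustrate the direction of the effect.
def neap_frassetto(protein_g_per_day, potassium_meq_per_day):
    return 54.5 * protein_g_per_day / potassium_meq_per_day - 10.2

western = neap_frassetto(95.0, 65.0)    # typical Western intake (assumed)
paleo = neap_frassetto(140.0, 250.0)    # high protein, very high F&V (assumed)
print(round(western, 1), round(paleo, 1))
```

Even this crude estimate reproduces the direction of the finding: the potassium-rich Paleo-style intake yields a much lower acid load than the Western one, despite the higher protein.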

  1. Established dietary estimates of net acid production do not predict measured net acid excretion in patients with Type 2 diabetes on Paleolithic-Hunter-Gatherer-type diets

    PubMed Central

    Frassetto, Lynda A; Shi, Lijie; Schloetter, Monique; Sebastian, Anthony; Remer, Thomas

    2014-01-01

Background Formulas developed to estimate diet-dependent net acid excretion (NAE) generally agree with measured values for typical Western diets. Whether they can also appropriately predict NAE for "Paleolithic-type" (Paleo) diets – which contain very high amounts of fruits and vegetables (F&V) and concurrently high amounts of protein – is unknown. Here we compare measured NAEs with established NAE-estimates in subjects with Type 2 diabetes (T2D). Methods Thirteen subjects with well controlled T2D were randomized to either a Paleo or American Diabetes Association (ADA) diet for 14 days. 24-hour urine collections were performed at baseline and end of the diet period, and analyzed for titratable acid, bicarbonate, and ammonium to calculate measured NAE. Three formulas for estimating NAE from dietary intake were used; two (NAE_diet R or L) that include dietary mineral intake and sulfate and organic acid (OA) production, and one that is empirically derived (NAE_diet F) only considering potassium and protein intake. Results Measured NAE on the Paleo diet was significantly lower than on the ADA diet (+31±22 vs. 112±52 mEq/day, p=0.002). Although all formula estimates showed similar and reasonable correlations (r=0.52–0.76) with measured NAE, each one underestimated measured values. The formula with the best correlation did not contain an estimate of dietary organic acid production. Conclusions Paleo diets are lower in NAE than typical Western diets. However, commonly used formulas clearly underestimate NAE, especially for diets with very high F&V (as the Paleo diet), and in subjects with T2D. This may be due to an inappropriate estimation of proton loads stemming from OAs, underlining the necessity for improved measures of OA-related proton sources. PMID:23859996

  2. The dielectric properties of human pineal gland tissue and RF absorption due to wireless communication devices in the frequency range 400-1850 MHz.

    PubMed

    Schmid, Gernot; Uberbacher, Richard; Samaras, Theodoros; Tschabitscher, Manfred; Mazal, Peter R

    2007-09-07

    In order to enable a detailed analysis of radio frequency (RF) absorption in the human pineal gland, the dielectric properties of a sample of 20 freshly removed pineal glands were measured less than 20 h after death. Furthermore, a corresponding high resolution numerical model of the brain region surrounding the pineal gland was developed, based on a real human tissue sample. After inserting this model into a commercially available numerical head model, FDTD-based computations for exposure scenarios with generic models of handheld devices operated close to the head in the frequency range 400-1850 MHz were carried out. For typical output power values of real handheld mobile communication devices, the obtained results showed only very small amounts of absorbed RF power in the pineal gland when compared to SAR limits according to international safety standards. The highest absorption was found for the 400 MHz irradiation. In this case the RF power absorbed inside the pineal gland (organ mass 96 mg) was as low as 11 microW, when considering a device of 500 mW output power operated close to the ear. For typical mobile phone frequencies (900 MHz and 1850 MHz) and output power values (250 mW and 125 mW) the corresponding values of absorbed RF power in the pineal gland were found to be lower by a factor of 4.2 and 36, respectively. These results indicate that temperature-related biologically relevant effects on the pineal gland induced by the RF emissions of typical handheld mobile communication devices are unlikely.

  3. Stability of Gradient Field Corrections for Quantitative Diffusion MRI.

    PubMed

    Rogers, Baxter P; Blaber, Justin; Welch, E Brian; Ding, Zhaohua; Anderson, Adam W; Landman, Bennett A

    2017-02-11

In magnetic resonance diffusion imaging, gradient nonlinearity causes significant bias in the estimation of quantitative diffusion parameters such as diffusivity, anisotropy, and diffusion direction in areas away from the magnet isocenter. This bias can be substantially reduced if the scanner- and coil-specific gradient field nonlinearities are known. Using a set of field map calibration scans on a large (29 cm diameter) phantom combined with a solid harmonic approximation of the gradient fields, we predicted the obtained b-values and applied gradient directions throughout a typical field of view for brain imaging, for a typical 32-direction diffusion imaging sequence. We measured the stability of these predictions over time. At 80 mm from scanner isocenter, the predicted b-value differed from the intended value by 1-6% due to gradient nonlinearity, and the predicted gradient directions were in error by up to 1 degree. Over the course of one month, the change in these quantities due to calibration-related factors such as scanner drift and variation in phantom placement was <0.5% for b-values and <0.5 degrees for angular deviation. The proposed calibration procedure allows the estimation of gradient nonlinearity to correct b-values and gradient directions ahead of advanced diffusion image processing for high angular resolution data, and requires only a five-minute phantom scan that can be included in a weekly or monthly quality assurance protocol.
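Such corrections are typically applied by scaling each nominal gradient with a spatially varying coil tensor L(r) (the identity at isocenter), so that the effective b-value scales with |L g|². The tensor below is a made-up example, not the calibrated fields from the paper:

```python
import numpy as np

# Made-up local gradient coil tensor at some off-isocenter voxel
# (identity would mean no nonlinearity).
L = np.array([[1.03, 0.01, 0.00],
              [0.00, 0.98, 0.02],
              [0.01, 0.00, 1.01]])

g_nominal = np.array([1.0, 0.0, 0.0])  # intended unit gradient direction
b_nominal = 1000.0                      # intended b-value, s/mm^2

g_eff = L @ g_nominal                        # effective gradient
b_eff = b_nominal * np.dot(g_eff, g_eff)     # b scales with |g_eff|^2
g_dir = g_eff / np.linalg.norm(g_eff)        # corrected direction
print(round(b_eff, 1))
```

With this assumed tensor the effective b-value is about 6% high and the direction tilts by roughly half a degree, the same order as the deviations reported above.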

  4. The health-nutrition dimension: a methodological approach to assess the nutritional sustainability of typical agro-food products and the Mediterranean diet.

    PubMed

    Azzini, Elena; Maiani, Giuseppe; Turrini, Aida; Intorre, Federica; Lo Feudo, Gabriella; Capone, Roberto; Bottalico, Francesco; El Bilali, Hamid; Polito, Angela

    2018-08-01

The aim of this paper is to provide a methodological approach to evaluate the nutritional sustainability of typical agro-food products, representing Mediterranean eating habits and included in the Mediterranean food pyramid. For each group of foods, suitable and easily measurable indicators were identified. Two macro-indicators were used to assess the nutritional sustainability of each product. The first macro-indicator, called 'business distinctiveness', takes into account the application of different regulations and standards regarding quality, safety and traceability, as well as the origin of raw materials. The second macro-indicator, called 'nutritional quality', assesses product nutritional quality taking into account the contents of key compounds, including micronutrients and bioactive phytochemicals. For each indicator a 0-10 scoring system was set up, with scores from 0 (unsustainable) to 10 (very sustainable) and 5 as the sustainability benchmark, i.e. the value from which a product can be considered sustainable. A simple formula was developed to produce a sustainability index. The proposed sustainability index could be considered a useful tool to describe both the qualitative and quantitative value of the micronutrients and bioactive phytochemicals present in foodstuffs. This methodological approach can also be applied beyond the Mediterranean, to food products in other world regions. © 2018 Society of Chemical Industry.
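The abstract does not reproduce the actual index formula; purely as a hypothetical illustration of how 0-10 indicator scores might be aggregated and compared with the benchmark value of 5:

```python
# Hypothetical aggregation of indicator scores into a sustainability index.
# The simple averaging below is an assumption for illustration; the paper's
# own formula is not given in the abstract.
def sustainability_index(scores):
    return sum(scores) / len(scores)

BENCHMARK = 5.0                     # benchmark value from the abstract
product_scores = [6, 7, 5, 8, 4]    # made-up 0-10 indicator scores
index = sustainability_index(product_scores)
print(index, "sustainable" if index >= BENCHMARK else "unsustainable")
```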

  5. Aerosol effects on the UV irradiance in Santiago de Chile

    NASA Astrophysics Data System (ADS)

    Cordero, R. R.; Seckmeyer, G.; Damiani, A.; Jorquera, J.; Carrasco, J.; Muñoz, R.; Da Silva, L.; Labbe, F.; Laroze, D.

    2014-11-01

Santiago de Chile (33°27′ S, 70°41′ W) is a mid-latitude city of 6 million inhabitants with a complicated surrounding topography. Aerosol extinction in Santiago is determined by the semi-arid local climate, the urban pollution, a regional subsidence thermal inversion layer, and the boundary-layer wind airflow. In this paper we report on spectral measurements of the surface irradiance (in the 290-600 nm wavelength range) carried out during 2013 in the heart of the city using a double-monochromator-based spectroradiometer system. These measurements were used to assess the effect of local aerosols, paying particular attention to the ultraviolet (UV) range. We found that the aerosol optical depth (AOD) exhibited variations likely related to changes in the subsidence thermal inversion and in the boundary-layer winds. Although the AOD at 350 nm typically ranged from 0.2 to 0.3, peak values of about 0.7 were measured. The AOD diminished with wavelength and typically ranged from 0.1 to 0.2 at 550 nm. Our AOD data were found to be consistent with measurements of the particulate matter (PM) mass concentration.

  6. Experimental Evaluation of Adaptive Modulation and Coding in MIMO WiMAX with Limited Feedback

    NASA Astrophysics Data System (ADS)

    Mehlführer, Christian; Caban, Sebastian; Rupp, Markus

    2007-12-01

We evaluate the throughput performance of an OFDM WiMAX (IEEE 802.16-2004, Section 8.3) transmission system with adaptive modulation and coding (AMC) by outdoor measurements. The standard-compliant AMC utilizes a 3-bit feedback for SISO and Alamouti-coded MIMO transmissions. By applying a 6-bit feedback and spatial multiplexing with individual AMC on the two transmit antennas, the data throughput can be increased significantly at large SNR values. Our measurements show that at small SNR values, a single-antenna transmission often outperforms an Alamouti transmission. We found that this effect is caused by the asymmetric behavior of the wireless channel and by poor channel knowledge in the two-transmit-antenna case. Our performance evaluation is based on a measurement campaign employing the Vienna MIMO testbed. The measurement scenarios include typical outdoor-to-indoor NLOS, outdoor-to-outdoor NLOS, as well as outdoor-to-indoor LOS connections. We found that in all these scenarios, the measured throughput is far from its achievable maximum; the loss is mainly caused by overly simple convolutional coding.

  7. Magnetic field `flyby' measurement using a smartphone's magnetometer and accelerometer simultaneously

    NASA Astrophysics Data System (ADS)

    Monteiro, Martín; Stari, Cecilia; Cabeza, Cecilia; Marti, Arturo C.

    2017-12-01

The spatial dependence of magnetic fields in simple configurations is a common topic in introductory electromagnetism lessons, both in high school and in university courses. In typical experiments, magnetic field and distance values are obtained point by point using a Hall sensor and a ruler, respectively. Here, we show how to take advantage of a smartphone's capabilities to obtain simultaneous measurements with the built-in accelerometer and magnetometer and thereby determine the spatial dependence of magnetic fields. We consider a simple setup consisting of a smartphone mounted on a track whose direction coincides with the axis of a coil. While the smartphone is moving on the track, both the magnetic field and the distance from the center of the coil (integrated numerically from the acceleration values) are obtained simultaneously. This methodology can easily be extended to more complicated setups.
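The numerical integration step can be sketched as follows, using simulated constant-acceleration data rather than real sensor logs; in the actual experiment each position sample would be paired with the simultaneous magnetometer reading to build B(x):

```python
import numpy as np

# Position along the track from accelerometer samples, by integrating
# twice with the trapezoidal rule. Data are simulated: constant
# acceleration a = 0.2 m/s^2 from rest, sampled at 100 Hz for 2 s.
a_const = 0.2
t = np.linspace(0.0, 2.0, 201)
a = np.full_like(t, a_const)
dt = t[1] - t[0]

# Cumulative trapezoidal integration: acceleration -> velocity -> position.
v = np.concatenate(([0.0], np.cumsum((a[1:] + a[:-1]) / 2 * dt)))
x = np.concatenate(([0.0], np.cumsum((v[1:] + v[:-1]) / 2 * dt)))

# Compare with the analytic result x = a*t^2/2 at t = 2 s:
print(round(x[-1], 4), round(a_const * t[-1] ** 2 / 2, 4))
```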

  8. The Application of 3D Laser Scanning in the Survey and Measuring of Guyue Bridge of Song Dynasty in Yiwu City

    NASA Astrophysics Data System (ADS)

    Lu, N.; Wang, Q.; Wang, S.; Zhang, R.

    2015-08-01

It is believed that the folding-arch is the transitional form from beam to curved arch. Guyue Bridge, built in the sixth year of the Jiading era (A.D. 1213) of the Southern Song Dynasty and located in Yiwu City, Zhejiang Province, China, is one of the typical surviving examples of this transition. It possesses high historical, scientific, artistic, cultural and social value. Facing severe environmental problems and a deteriorating heritage situation, our conservation team selected 3D laser scanning as the basic recording method and acquired a precise three-dimensional model. Having measured the fundamental dimensions and component sizes, we analysed the bridge's state of stability. Moreover, combining the model with historic documents, we reasonably inferred and calculated the original sizes and important proportions at the time of construction. These findings have significant research value as well as evidential meaning for future conservation.

  9. Supporting data for hydrologic studies in San Francisco Bay, California; meteorological measurements at the Port of Redwood City during 1992-1994

    USGS Publications Warehouse

    Schemel, Laurence E.

    1995-01-01

Meteorological data were collected during 1992-94 at the Port of Redwood City, California, to support hydrologic studies in southern San Francisco Bay. The meteorological variables that were measured were air temperature, atmospheric pressure, quantum flux (insolation), and four parameters of wind speed and direction: scalar mean horizontal wind speed, (vector) resultant horizontal wind speed, resultant wind direction, and standard deviation of the wind direction. Hourly mean values based on measurements at five-minute intervals were logged at the site, then transferred to a portable computer monthly. Daily mean values were computed for temperature, insolation, pressure, and scalar wind speed. Hourly-mean and daily-mean values are presented in time-series plots, and daily variability and seasonal and annual cycles are described. All data are provided in ASCII files on an IBM-formatted disk. Observations of temperature and wind speed at the Port of Redwood City were compared with measurements made at the San Francisco International Airport. Most daily mean values for temperature agreed within one- to two-tenths of a degree Celsius between the two locations. Daily mean wind speeds at the Port of Redwood City were typically half the values at the San Francisco International Airport. During summers, the differences resulted from stronger wind speeds at the San Francisco International Airport occurring over longer periods of each day. A comparison of hourly wind speeds at the Palo Alto Municipal Airport with those at the Port of Redwood City showed that values were similar in magnitude.

  10. Measuring milk fat content by random laser emission

    NASA Astrophysics Data System (ADS)

    Abegão, Luis M. G.; Pagani, Alessandra A. C.; Zílio, Sérgio C.; Alencar, Márcio A. R. C.; Rodrigues, José J.

    2016-10-01

    The luminescence spectra of milk containing rhodamine 6G are shown to exhibit typical signatures of random lasing when excited with 532 nm laser pulses. Experiments carried out on whole and skim forms of two commercial brands of UHT milk, with fat volume concentrations ranging from 0 to 4%, presented lasing threshold values dependent on the fat concentration, suggesting that a random laser technique can be developed to monitor such an important parameter.

  11. Measuring milk fat content by random laser emission.

    PubMed

    Abegão, Luis M G; Pagani, Alessandra A C; Zílio, Sérgio C; Alencar, Márcio A R C; Rodrigues, José J

    2016-10-12

    The luminescence spectra of milk containing rhodamine 6G are shown to exhibit typical signatures of random lasing when excited with 532 nm laser pulses. Experiments carried out on whole and skim forms of two commercial brands of UHT milk, with fat volume concentrations ranging from 0 to 4%, presented lasing threshold values dependent on the fat concentration, suggesting that a random laser technique can be developed to monitor such an important parameter.

  12. High breakdown electric field in β-Ga2O3/graphene vertical barristor heterostructure

    NASA Astrophysics Data System (ADS)

    Yan, Xiaodong; Esqueda, Ivan S.; Ma, Jiahui; Tice, Jesse; Wang, Han

    2018-01-01

    In this work, we study the high critical breakdown field in β-Ga2O3 perpendicular to its (100) crystal plane using a β-Ga2O3/graphene vertical heterostructure. Measurements indicate a record breakdown field of 5.2 MV/cm perpendicular to the (100) plane that is significantly larger than the previously reported values on lateral β-Ga2O3 field-effect-transistors (FETs). This result is compared with the critical field typically measured within the (100) crystal plane, and the observed anisotropy is explained through a combined theoretical and experimental analysis.

  13. GRANULATION IN THE PHOTOSPHERE OF ζ CYGNI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, David F., E-mail: dfgray@uwo.ca

    2012-05-15

    A series of 35 high-resolution spectra are used to measure the third-signature plot of the G8 III star ζ Cygni, which shows convective velocities only 8% larger than the Sun's. Bisector mapping yields a flux deficit, a measure of granulation contrast, typical of other giants. The observations also give radial velocities with errors of ≈30 m s⁻¹ and allow the orbit to be refined. Velocity excursions relative to the smooth orbital motion, possibly from the granulation, have values exceeding 200 m s⁻¹. Temperature variations were looked for using line-depth ratios, but none were found.

  14. A Neighborhood Wealth Metric for Use in Health Studies

    PubMed Central

    Moudon, Anne Vernez; Cook, Andrea J.; Ulmer, Jared; Hurvitz, Philip M.; Drewnowski, Adam

    2011-01-01

    Background Measures of neighborhood deprivation used in health research are typically based on conventional area-based SES. Purpose The aim of this study is to examine new data and measures of SES for use in health research. Specifically, assessed property values are introduced as a new individual-level metric of wealth and tested for their ability to substitute for conventional area-based SES as measures of neighborhood deprivation. Methods The analysis was conducted in 2010 using data from 1922 participants in the 2008–2009 survey of the Seattle Obesity Study (SOS). It compared the relative strength of the association between the individual-level neighborhood wealth metric (assessed property values) and area-level SES measures (including education, income, and percentage above poverty as single variables, and as the composite Singh index) on the binary outcome fair/poor general health status. Analyses were adjusted for gender, categorical age, race, employment status, home ownership, and household income. Results The neighborhood wealth measure was more predictive of fair/poor health status than area-level SES measures, calculated either as single variables or as indices (lower DIC measures for all models). The odds of having a fair/poor health status decreased by a factor of 0.85 [0.77, 0.93] per $50,000 increase in neighborhood property values after adjusting for individual-level SES measures. Conclusions The proposed individual-level metric of neighborhood wealth, if replicated in other areas, could replace area-based SES measures, thus simplifying analyses of contextual effects on health. PMID:21665069
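The odds ratio above is reported per one $50,000 step of the predictor, so rescaling it to a different increment is just exponentiation of the per-step value. The function below is a hypothetical illustration of that interpretation, not code from the study:

```python
def rescale_odds_ratio(or_per_step, n_steps):
    """Rescale a logistic-regression odds ratio reported per one step of
    the predictor (here, one $50,000 increase in neighborhood property
    value) to a span of several steps."""
    return or_per_step ** n_steps

# Two $50,000 steps, i.e., a $100,000 increase:
# 0.85 ** 2 = 0.7225, roughly a 28% reduction in the odds of
# reporting fair/poor health, under the fitted model's assumptions.
```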

  15. Atmospheric trace element concentrations in total suspended particles near Paris, France

    NASA Astrophysics Data System (ADS)

    Ayrault, Sophie; Senhou, Abderrahmane; Moskura, Mélanie; Gaudry, André

    2010-09-01

    To evaluate present-day trace element atmospheric concentrations in large urban areas, an atmospheric survey was carried out for 18 months, from March 2002 to September 2003, in Saclay, near Paris. The total suspended particulate matter (TSP) was collected continuously on quartz fibre filters. The TSP contents were determined for 36 elements (including Ag, Bi, Mo and Sb) using two analytical methods: Instrumental Neutron Activation Analysis (INAA) and Inductively Coupled Plasma Mass Spectrometry (ICP-MS). The measured concentrations agreed, within the uncertainties, with the certified values for the polycarbonate reference material filter SRM-2783 (National Institute of Standards and Technology, NIST, USA). The measured concentrations were significantly lower than the recommended atmospheric concentrations. In 2003, the Pb atmospheric level at Saclay was 15 ng/m³, compared to the 500 ng/m³ guideline level and the 200 ng/m³ value observed in 1994. Typical urban background TSP values of 1-2, 0.2-1, 4-6, 10-30 and 3-5 ng/m³ for As, Co, Cr, Cu and Sb, respectively, were inferred from this study and compared with the literature data. Typical urban background TSP concentrations could not be established for Cd, Pb and Zn, since their air concentrations are highly influenced by local features. The Zn concentrations and the Zn/Pb ratio observed in Saclay are a characteristic fingerprint of the exceptionally large extent of zinc roofing in Paris and its suburbs. The traffic-related origin of Ba, Cr, Cu, Pb and Sb was demonstrated, while the atmospheric source(s) of Ag could not be identified.

  16. Measurement of glottal cycle characteristics between children and adults: Physiological Variations

    PubMed Central

    Patel, Rita R.; Dubrovskiy, Denis; Döllinger, Michael

    2014-01-01

    Objective The aim of this study is to quantify phases of the vibratory cycle using measurements of glottal cycle quotients and glottal cycle derivatives in typically developing pre-pubertal children and young adults, with the use of high-speed digital imaging (HSDI). Method Vocal fold vibrations were recorded with HSDI at 4000 frames per second during sustained phonation from 27 children (age range 5–9 years) and 35 adults (age range 21–45 years). Glottal area waveform (GAW) measures of Open Quotient (OQ), Closing Quotient (CQ), Speed Index (SI), Rate Quotient (RQ) and Asymmetry Quotient (AsyQ) were computed. Glottal cycle derivatives of Amplitude Quotient (AQ) and Maximum Area Declination Rate (MADR) were also computed. Group differences (adult females, adult males, and children) were statistically investigated for mean and standard deviation values of the glottal cycle quotients and glottal cycle derivatives. Results Children exhibited higher values of Speed Index and Asymmetry Quotient and lower MADR compared to adult males. Children exhibited the highest mean value and lowest variability in Amplitude Quotient compared to adult males and females. Adult males showed lower values of Speed Index, Asymmetry Quotient and Amplitude Quotient and higher values of MADR compared to adult females. Conclusion Glottal cycle vibratory motion in children is functionally different from that of adult males and females, suggesting the need to develop child-specific norms for both normal and disordered voice qualities. PMID:24629646

  17. Consensus building for interlaboratory studies, key comparisons, and meta-analysis

    NASA Astrophysics Data System (ADS)

    Koepke, Amanda; Lafarge, Thomas; Possolo, Antonio; Toman, Blaza

    2017-06-01

    Interlaboratory studies in measurement science, including key comparisons, and meta-analyses in several fields, including medicine, serve to intercompare measurement results obtained independently, and typically produce a consensus value for the common measurand that blends the values measured by the participants. Since interlaboratory studies and meta-analyses reveal and quantify differences between measured values, regardless of the underlying causes for such differences, they also provide so-called ‘top-down’ evaluations of measurement uncertainty. Measured values are often substantially over-dispersed by comparison with their individual, stated uncertainties, thus suggesting the existence of yet unrecognized sources of uncertainty (dark uncertainty). We contrast two different approaches to take dark uncertainty into account both in the computation of consensus values and in the evaluation of the associated uncertainty, which have traditionally been preferred by different scientific communities. One inflates the stated uncertainties by a multiplicative factor. The other adds laboratory-specific ‘effects’ to the value of the measurand. After distinguishing what we call recipe-based and model-based approaches to data reductions in interlaboratory studies, we state six guiding principles that should inform such reductions. These principles favor model-based approaches that expose and facilitate the critical assessment of validating assumptions, and give preeminence to substantive criteria, rather than purely statistical considerations, in determining which measurement results to include or exclude and how to weigh them.
Following an overview of maximum likelihood methods, three general purpose procedures for data reduction are described in detail, including explanations of how the consensus value and degrees of equivalence are computed, and the associated uncertainty evaluated: the DerSimonian-Laird procedure; a hierarchical Bayesian procedure; and the Linear Pool. These three procedures have been implemented and made widely accessible in a Web-based application (NIST Consensus Builder). We illustrate principles, statistical models, and data reduction procedures in four examples: (i) the measurement of the Newtonian constant of gravitation; (ii) the measurement of the half-lives of radioactive isotopes of caesium and strontium; (iii) the comparison of two alternative treatments for carotid artery stenosis; and (iv) a key comparison where the measurand was the calibration factor of a radio-frequency power sensor.
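Of the three procedures named, the DerSimonian-Laird consensus value has a compact closed form. The sketch below follows the textbook method-of-moments formulation; the NIST Consensus Builder's exact implementation may differ in details such as how the uncertainty is evaluated:

```python
import math

def dersimonian_laird(values, uncertainties):
    """DerSimonian-Laird random-effects consensus value: estimate the
    between-laboratory (dark-uncertainty) variance tau^2 by the method
    of moments, then form an inverse-variance weighted mean."""
    w = [1.0 / u**2 for u in uncertainties]
    sw = sum(w)
    xbar = sum(wi * x for wi, x in zip(w, values)) / sw
    # Cochran's Q statistic measures over-dispersion relative to the
    # stated uncertainties.
    q = sum(wi * (x - xbar) ** 2 for wi, x in zip(w, values))
    n = len(values)
    tau2 = max(0.0, (q - (n - 1)) / (sw - sum(wi**2 for wi in w) / sw))
    # Re-weight using the inflated variances u_i^2 + tau^2.
    wstar = [1.0 / (u**2 + tau2) for u in uncertainties]
    consensus = sum(wi * x for wi, x in zip(wstar, values)) / sum(wstar)
    std_unc = math.sqrt(1.0 / sum(wstar))
    return consensus, std_unc, tau2
```

When the measured values scatter more than their stated uncertainties explain, tau² comes out positive, capturing the dark uncertainty and widening the uncertainty of the consensus value accordingly.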

  18. CO2-Controllable Foaming and Emulsification Properties of the Stearic Acid Soap Systems.

    PubMed

    Xu, Wenlong; Gu, Hongyao; Zhu, Xionglu; Zhong, Yingping; Jiang, Liwen; Xu, Mengxin; Song, Aixin; Hao, Jingcheng

    2015-06-02

    Fatty acids, of which stearic acid is a typical example, are a kind of cheap surfactant with important applications; the challenging problem for industrial applications is their solubility. Herein, three organic amines, ethanolamine (EA), diethanolamine (DEA), and triethanolamine (TEA), were used as counterions to increase the solubility of stearic acid, and the phase behaviors were investigated systematically. The phase diagrams were delineated at 25 and 50 °C, respectively. The phase-transition temperatures were measured by differential scanning calorimetry (DSC), and cryogenic transmission electron microscopy (cryo-TEM) showed the microstructures to be vesicles and planar sheets. The apparent viscosity of the samples was determined by rheological characterization. The surface-tension values at the critical micelle concentration, γcmc, for the three systems were less than 30 mN·m⁻¹. Typical bilayer samples used as foaming agents and emulsifiers were investigated in foaming and emulsification assays. CO2 was introduced to change the solubility of stearic acid, inducing a transition in surface activity and thereby achieving defoaming and demulsification.

  19. Computing diffuse fraction of global horizontal solar radiation: A model comparison.

    PubMed

    Dervishi, Sokol; Mahdavi, Ardeshir

    2012-06-01

    For simulation-based prediction of buildings' energy use or of expected gains from building-integrated solar energy systems, information on both the direct and the diffuse component of solar radiation is necessary. Available measured data are, however, typically restricted to global horizontal irradiance. There have thus been many past efforts to develop algorithms for deriving the diffuse fraction of solar irradiance. In this context, the present paper compares eight models for estimating the diffuse fraction of irradiance based on a database of measured irradiance from Vienna, Austria. These models generally involve mathematical formulations with multiple coefficients whose values are typically valid for a specific location. Following a first comparison of the eight models, the three better-performing models were selected for a more detailed analysis, in which the models' coefficients were modified to account for the Vienna data. The results suggest that some models can provide relatively reliable estimations of the diffuse fraction of global irradiance. The calibration procedure only slightly improved the models' performance.
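The abstract does not name the eight models, but they belong to a well-known family of clearness-index correlations. The Erbs et al. piecewise polynomial is a widely cited representative and shows the typical shape of such models; the coefficients below are the commonly published ones and are included for illustration only:

```python
def erbs_diffuse_fraction(kt):
    """Diffuse fraction kd of global horizontal irradiance as a function
    of the clearness index kt, after the Erbs et al. correlation -- one
    representative member of the model family this record compares."""
    if kt <= 0.22:
        # Overcast: almost all irradiance is diffuse.
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        # Intermediate skies: fourth-order polynomial in kt.
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    # Clear skies: small, nearly constant diffuse fraction.
    return 0.165
```

Overcast skies (low kt) give a diffuse fraction near 1, while clear skies (high kt) give a small, nearly constant one; recalibrating such coefficients to local data is exactly the kind of adjustment the paper performs for Vienna.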

  20. Impact of haze-fog days to radon progeny equilibrium factor and discussion of related factors.

    PubMed

    Hou, Changsong; Shang, Bing; Zhang, Qingzhao; Cui, Hongxing; Wu, Yunyun; Deng, Jun

    2015-11-01

    The equilibrium factor F between radon and its short-lived progeny is an important parameter for estimating human radon exposure. Therefore, indoor and outdoor concentrations of radon and its short-lived progeny were measured in the Beijing area using a continuously measuring device, in an effort to obtain information on the F value. The results showed that the mean values of F were 0.58 ± 0.13 (0.25-0.95, n = 305) indoors and 0.52 ± 0.12 (0.31-0.91, n = 64) outdoors. The indoor F value during haze-fog days was higher than the typical value of 0.4 recommended by the United Nations Scientific Committee on the Effects of Atomic Radiation, and also higher than the values of 0.47 and 0.49 reported in the literature. A positive correlation was observed between indoor F values and PM2.5 concentrations (R² = 0.71). Since 2013, owing to frequent heavy haze-fog events in Beijing and surrounding areas, the number of days with severe pollution has remained at a high level. Future studies on the impact of ambient fine particulate matter on the indoor radon progeny equilibrium factor F could be important.
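For reference, the equilibrium factor is defined as the ratio of the equilibrium-equivalent radon concentration (EEC) to the actual radon concentration, with the short-lived progeny weighted by their potential-alpha-energy contributions. A minimal sketch using the commonly tabulated UNSCEAR weighting factors (the paper's measuring device computes F from its own measured concentrations, so this is illustrative only):

```python
def equilibrium_factor(c_rn, c_po218, c_pb214, c_bi214):
    """Radon progeny equilibrium factor F = EEC / C_Rn.

    The equilibrium-equivalent concentration (EEC) weights the
    short-lived progeny by the standard UNSCEAR potential-alpha-energy
    factors; all concentrations in Bq/m^3."""
    eec = 0.105 * c_po218 + 0.515 * c_pb214 + 0.380 * c_bi214
    return eec / c_rn
```

At full equilibrium the three weights sum to 1.0 and F = 1; indoor values around 0.4-0.6, as reported here, reflect partial removal of progeny by deposition and ventilation.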

  1. The comparative effectiveness and cost-effectiveness of vitreoretinal interventions.

    PubMed

    Brown, Melissa M; Brown, Gary C; Brown, Heidi C; Irwin, Blair; Brown, Kathryn S

    2008-05-01

    The comparative effectiveness of medical interventions has recently been emphasized in the literature, typically for interventions in a similar class. Value-based medicine, the practice of medicine based on the value (improvement in quality of life and/or length of life) conferred by medical interventions, allows a measure of comparative effectiveness of interventions across all of health care, no matter how disparate. This report discusses recent comparative effectiveness studies in the vitreoretinal literature. Vitreoretinal interventions have good to excellent comparative effectiveness compared with commonly utilized interventions across health care, such as treatment for osteoporosis and hyperlipidemia. They also tend to be cost-effective when an upper limit of $100 000/quality-adjusted life-year is utilized. Value can be measured using either or both of two outcomes - the quality-adjusted life-year gain and/or the percentage improvement in value - both of which allow for an evaluation of comparative effectiveness, which can be compared on the same scale for every intervention. This value can also be integrated with costs using the outcome of dollars expended per quality-adjusted life-year ($/quality-adjusted life-year, or the cost-utility ratio), which allows a comparison of cost-effectiveness across all interventions. The majority of vitreoretinal interventions confer considerable value and are cost-effective.

  2. Compendium of Experimental Cetane Numbers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yanowitz, J.; Ratcliff, M. A.; McCormick, R. L.

    This report is an updated version of the 2004 Compendium of Experimental Cetane Number Data and presents a compilation of measured cetane numbers for pure chemical compounds. It includes all available single-compound cetane number data found in the scientific literature up until March 2014, as well as a number of unpublished values, most measured over the past decade at the National Renewable Energy Laboratory. This Compendium contains cetane values for 389 pure compounds, including 189 hydrocarbons and 201 oxygenates. More than 250 individual measurements are new to this version of the Compendium. For many compounds, numerous measurements are included, often collected by different researchers using different methods. Cetane number is a relative ranking of a fuel's autoignition characteristics for use in compression ignition engines; it is based on the amount of time between fuel injection and ignition, also known as ignition delay. The cetane number is typically measured either in a single-cylinder engine or a constant-volume combustion chamber. Values in the previous Compendium derived from octane numbers have been removed and replaced with a brief analysis of the correlation between cetane numbers and octane numbers. The discussion of the accuracy and precision of the most commonly used methods for measuring cetane number has been expanded, and the data have been annotated extensively to provide additional information that will help the reader judge the relative reliability of individual results.

  3. Marine Stratocumulus Cloud Fields off the Coast of Southern California Observed Using LANDSAT Imagery. Part II: Textural Analysis.

    NASA Astrophysics Data System (ADS)

    Welch, R. M.; Sengupta, S. K.; Kuo, K. S.

    1988-04-01

    Statistical measures of the spatial distributions of gray levels (cloud reflectivities) are determined for LANDSAT Multispectral Scanner digital data. Textural properties of twelve stratocumulus cloud fields, seven cumulus fields, and two cirrus fields are examined using the Spatial Gray Level Co-Occurrence Matrix method. The co-occurrence statistics are computed for pixel separations ranging from 57 m to 29 km and at angles of 0°, 45°, 90° and 135°. Nine different textural measures are used to define the cloud field spatial relationships; however, the measures of contrast and correlation appear to be most useful in distinguishing cloud structure. Cloud field macrotexture describes general cloud field characteristics at distances greater than the size of typical cloud elements. It is determined from the spatial asymptotic values of the texture measures. The slope of the texture curves at small distances provides a measure of the microtexture of individual cloud cells. Cloud fields composed primarily of small cells have very steep slopes and reach their asymptotic values at short distances from the origin. As the cells composing the cloud field grow larger, the slope becomes more gradual and the asymptotic distance increases accordingly. Low asymptotic values of correlation show that stratocumulus cloud fields have no large-scale organized structure. Besides the ability to distinguish cloud field structure, texture appears to be a potentially valuable tool in cloud classification. Stratocumulus clouds are characterized by low values of angular second moment and large values of entropy. Cirrus clouds have extremely low values of contrast, low values of entropy, and very large values of correlation. Finally, we propose that sampled high spatial resolution satellite data be used in conjunction with coarser-resolution operational satellite data to detect and identify cloud field structure and directionality and to locate regions of subresolution-scale cloud contamination.
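The Spatial Gray Level Co-Occurrence Matrix method underlying this analysis can be sketched compactly. The function below computes the matrix for one pixel offset and the texture measures the abstract highlights; it is a simplified illustration, not the authors' code:

```python
import numpy as np

def glcm_features(image, offset=(0, 1), levels=4):
    """Gray Level Co-Occurrence Matrix (GLCM) texture measures:
    contrast, correlation, angular second moment, and entropy.
    `image` is a 2-D array of integer gray levels in [0, levels)."""
    img = np.asarray(image)
    dy, dx = offset
    p = np.zeros((levels, levels))
    h, w = img.shape
    # Count co-occurring gray-level pairs at the given pixel separation.
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                p[img[y, x], img[y2, x2]] += 1
    p /= p.sum()  # normalize counts to joint probabilities
    i, j = np.indices((levels, levels))
    contrast = np.sum((i - j) ** 2 * p)
    asm = np.sum(p ** 2)  # angular second moment
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * p))
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * p))
    correlation = np.sum((i - mu_i) * (j - mu_j) * p) / (sd_i * sd_j)
    return {"contrast": contrast, "asm": asm,
            "entropy": entropy, "correlation": correlation}
```

On a fine checkerboard, for instance, a one-pixel offset yields maximal contrast and a correlation of -1, the signature of small-cell microtexture; as cell size grows relative to the offset, contrast falls and correlation rises, matching the slope behavior described above.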

  4. The validation of a swimming turn wall-contact-time measurement system: a touchpad application reliability study.

    PubMed

    Brackley, Victoria; Ball, Kevin; Tor, Elaine

    2018-05-12

    The effectiveness of the swimming turn is highly influential to overall performance in competitive swimming. The push-off, or wall contact, within the turn phase is directly involved in determining the speed at which the swimmer leaves the wall. It is therefore paramount to develop reliable methods for measuring wall-contact-time during the turn phase for training and research purposes. The aim of this study was to determine the concurrent validity and reliability of the Pool Pad App for measuring wall-contact-time during the freestyle and backstroke tumble turn. The wall-contact-times of nine elite and sub-elite participants were recorded during their regular training sessions. Concurrent validity statistics included the standardised typical error estimate, linear analysis and effect sizes, while the intraclass correlation coefficient (ICC) was used for the reliability statistics. The standardised typical error estimate resulted in a moderate Cohen's d effect size with an R² value of 0.80, and the ICC between the Pool Pad and 2D video footage was 0.89. Despite these measurement differences, the results of these concurrent validity and reliability analyses demonstrated that the Pool Pad is suitable for measuring wall-contact-time during the freestyle and backstroke tumble turn within a training environment.

  5. Study on the initial value for the exterior orientation of the mobile vision system

    NASA Astrophysics Data System (ADS)

    Yu, Zhi-jing; Li, Shi-liang

    2011-10-01

    The single mobile vision coordinate measurement system obtains three-dimensional coordinates at the measurement site using a single camera body and a notebook computer. Obtaining accurate approximate values of the exterior orientation for the subsequent calculation is very important in the measurement process. The problem is a typical one of space resection, and it has been widely studied. Single-photo space resection mainly follows two approaches: methods based on the co-angular constraint, represented by the co-angular pose-estimation algorithm and the cone-angle method; and the direct linear transformation (DLT). One common drawback of both methods is that CCD lens distortion is not considered. When the initial value is calculated with the direct linear transformation method, relatively high demands are placed on the distribution and abundance of control points: the control points cannot all lie in the same plane, and at least six non-coplanar control points are required. Its usefulness is therefore limited. The initial value directly influences the convergence and the convergence speed of the calculation. In this paper, the nonlinear collinearity equations, including distortion terms, are linearized by Taylor series expansion to calculate the initial value of the camera's exterior orientation. Finally, experiments show that the resulting initial value is improved.

  6. Value Production in a Collaborative Environment. Sociophysical Studies of Wikipedia

    NASA Astrophysics Data System (ADS)

    Yasseri, Taha; Kertész, János

    2013-05-01

    We review some recent endeavors and add some new results to characterize and understand underlying mechanisms in Wikipedia (WP), the paradigmatic example of collaborative value production. We analyzed the statistics of editorial activity in different languages and observed typical circadian and weekly patterns, which enabled us to estimate the geographical origins of contributions to WPs in languages spoken in several time zones. Using a recently introduced measure we showed that the editorial activities have intrinsic dependencies in the burstiness of events. A comparison of the English and Simple English WPs revealed important aspects of language complexity and showed how peer cooperation solved the task of enhancing readability. One of our focus issues was characterizing the conflicts or edit wars in WPs, which helped us to automatically filter out controversial pages. When studying the temporal evolution of the controversiality of such pages we identified typical patterns and classified conflicts accordingly. Our quantitative analysis provides the basis for modeling conflicts and their resolution in collaborative environments and contributes to the understanding of this issue, which becomes increasingly important with the development of information communication technology.

  7. Yeast and mammalian metabolism continuous monitoring by using pressure recording as an assessment technique for xenobiotic agent effects

    NASA Astrophysics Data System (ADS)

    Milani, Marziale; Ballerini, Monica; Ferraro, Lorenzo; Marelli, E.; Mazza, Francesca; Zabeo, Matteo

    2002-06-01

    Our work is devoted to the study of the cellular metabolism of Saccharomyces cerevisiae and human lymphocytes, in order to develop a reference model for assessing the responses of biological systems to chemical or physical agents. CO2 variations inside test-tubes are measured by differential pressure sensors; the pressure values are subsequently converted into voltages. The system can test up to 16 samples at the same time, with sampling at up to 100 acquisitions per second. Values are recorded by a data acquisition card connected to a computer. This procedure yields a standard curve (pressure variation versus time), typical of the cell line, that describes cellular metabolism. The longest time lapse used was 170 h. Different phases appear in this curve: an initial growth up to a maximum, followed by a decrease leading to a characteristic depression (the pressure inside the test-tube falls below its initial value) about 35 h after inoculation of the yeast cells. The curve is reproducible within an experimental error of 4%. The analysis of many samples and the low cost of the devices allow good statistical significance of the data. In particular, as a test we compare the effects of two sterilizing agents: UV radiation and Amuchina.

  8. Reliance on God, prayer, and religion reduces influence of perceived norms on drinking.

    PubMed

    Neighbors, Clayton; Brown, Garrett A; Dibello, Angelo M; Rodriguez, Lindsey M; Foster, Dawn W

    2013-05-01

    Previous research has shown that perceived social norms are among the strongest predictors of drinking among young adults. Research has also consistently found religiousness to be protective against risk and negative health behaviors. The present research evaluates the extent to which reliance on God, prayer, and religion moderates the association between perceived social norms and drinking. Participants (n = 1,124 undergraduate students) completed a cross-sectional survey online, which included measures of perceived norms, religious values, and drinking. Perceived norms were assessed by asking participants their perceptions of typical student drinking. Drinking outcomes included drinks per week, drinking frequency, and typical quantity consumed. Regression analyses indicated that religiousness and perceived norms had significant unique associations in opposite directions for all three drinking outcomes. Significant interactions were evident between religiousness and perceived norms in predicting drinks per week, frequency, and typical quantity. In each case, the interactions indicated weaker associations between norms and drinking among those who assigned greater importance to religiousness. The extent of the relationship between perceived social norms and drinking was buffered by the degree to which students identified with religiousness. A growing body of literature has shown interventions including personalized feedback regarding social norms to be an effective strategy in reducing drinking among college students. The present research suggests that incorporating religious or spiritual values into student interventions may be a promising direction to pursue.

  9. Reliance on God, Prayer, and Religion Reduces Influence of Perceived Norms on Drinking

    PubMed Central

    Neighbors, Clayton; Brown, Garrett A.; Dibello, Angelo M.; Rodriguez, Lindsey M.; Foster, Dawn W.

    2013-01-01

    Objective: Previous research has shown that perceived social norms are among the strongest predictors of drinking among young adults. Research has also consistently found religiousness to be protective against risk and negative health behaviors. The present research evaluates the extent to which reliance on God, prayer, and religion moderates the association between perceived social norms and drinking. Method: Participants (n = 1,124 undergraduate students) completed a cross-sectional survey online, which included measures of perceived norms, religious values, and drinking. Perceived norms were assessed by asking participants their perceptions of typical student drinking. Drinking outcomes included drinks per week, drinking frequency, and typical quantity consumed. Results: Regression analyses indicated that religiousness and perceived norms had significant unique associations in opposite directions for all three drinking outcomes. Significant interactions were evident between religiousness and perceived norms in predicting drinks per week, frequency, and typical quantity. In each case, the interactions indicated weaker associations between norms and drinking among those who assigned greater importance to religiousness. Conclusions: The extent of the relationship between perceived social norms and drinking was buffered by the degree to which students identified with religiousness. A growing body of literature has shown interventions including personalized feedback regarding social norms to be an effective strategy in reducing drinking among college students. The present research suggests that incorporating religious or spiritual values into student interventions may be a promising direction to pursue. PMID:23490564

  10. Distribution of Ω_k from the scale-factor cutoff measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Simone, Andrea; Salem, Michael P.

    2010-04-15

    Our Universe may be contained in one among a diverging number of bubbles that nucleate within an eternally inflating multiverse. A promising measure to regulate the diverging spacetime volume of such a multiverse is the scale-factor cutoff, one feature of which is that bubbles are not rewarded for having a longer duration of slow-roll inflation. Thus, depending on the landscape distribution of the number of e-folds of inflation among bubbles like ours, we might hope to measure spatial curvature. We study a recently proposed cartoon model of inflation in the landscape and find a reasonable chance (about 10%) that the curvature in our Universe is well above the value expected from cosmic variance. Anthropic selection does not strongly select for curvature as small as is observed (relative to somewhat larger values), meaning the observational bound on curvature can be used to rule out landscape models that typically give too little inflation.

  11. Measurement of biochemical oxygen demand of the leachates.

    PubMed

    Fulazzaky, Mohamad Ali

    2013-06-01

    Biochemical oxygen demand (BOD) of leachates originating from different types of landfill sites was studied based on data measured using the two manometric methods. Measurements of BOD using the dilution method were carried out to assess the typical physicochemical and biological characteristics of the leachates, together with some other parameters. Linear regression analysis was used to predict rate constants for the biochemical reactions and ultimate BOD values of the different leachates. The rate of a biochemical reaction implicated in microbial biodegradation of pollutants depends on the leachate characteristics, the mass of contaminant in the leachate, and the nature of the leachate. The character of leachate samples analyzed for BOD using the different methods may differ significantly over the experimental period, resulting in different BOD values. This work verifies the effect of different dilutions in the manometric method tests on the BOD concentrations of the leachate samples, contributing to the assessment of reaction rates and microbial consumption of oxygen.
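A rate constant and ultimate BOD are commonly extracted from a BOD time series by a linearization such as the Thomas slope method, in which (t/BOD_t)^(1/3) is approximately linear in t. A minimal sketch on synthetic data; the Thomas method is a standard textbook approach, not necessarily the exact regression used in the paper:

```python
import math

def fit_bod_thomas(days, bod):
    """Estimate the first-order rate constant k (1/day) and ultimate BOD L0
    from a BOD time series via the Thomas slope method: regress
    y = (t/BOD_t)^(1/3) on t, then k = 6*slope/intercept and
    L0 = 1/(k*intercept^3)."""
    xs = list(days)
    ys = [(t / y) ** (1.0 / 3.0) for t, y in zip(days, bod)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    k = 6.0 * slope / intercept
    L0 = 1.0 / (k * intercept ** 3)
    return k, L0

# Synthetic leachate-like series generated from BOD_t = L0*(1 - exp(-k*t)).
true_k, true_L0 = 0.23, 250.0
days = [1, 2, 3, 4, 5]
bod = [true_L0 * (1 - math.exp(-true_k * t)) for t in days]
k, L0 = fit_bod_thomas(days, bod)  # recovers k and L0 to within a few percent
```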

  12. The influence of socio-cultural background and product value in usability testing.

    PubMed

    Sonderegger, Andreas; Sauer, Juergen

    2013-05-01

    This article examines the influence of socio-cultural background and product value on different outcomes of usability tests. A study was conducted in two different socio-cultural regions, Switzerland and East Germany, which differed in a number of aspects (e.g. economic power, price sensitivity and culture). Product value (high vs. low) was varied by manipulating the price of the product. Sixty-four test participants were asked to carry out five typical user tasks in the context of coffee machine usage, measuring performance, perceived usability, and emotion. The results showed that in Switzerland, high-value products were rated higher in usability than low-value products whereas in East Germany, high-value products were evaluated lower in usability. A similar interaction effect of socio-cultural background and product value was observed for user emotion. Implications are that the outcomes of usability tests do not allow for a simple transfer across cultures and that the mediating influence of perceived product value needs to be taken into consideration. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  13. Modification of a rainfall-runoff model for distributed modeling in a GIS and its validation

    NASA Astrophysics Data System (ADS)

    Nyabeze, W. R.

    A rainfall-runoff model that can be interfaced with a Geographical Information System (GIS), integrating the definition and measurement of spatial features with the calculation of their parameter values, presents considerable advantages. The modification of the GWBasic Wits Rainfall-Runoff Erosion Model (GWBRafler) to enable parameter value estimation in a GIS (GISRafler) is presented in this paper. Algorithms are applied to estimate parameter values, reducing the number of input parameters and the effort required to populate them. The use of a GIS makes the relationship between parameter estimates and cover characteristics more evident. This paper has been produced as part of research to generalize the GWBRafler on a spatially distributed basis. Modular data structures are assumed, and parameter values are weighted relative to the module area and centroid properties. Modifications to the GWBRafler enable better estimation of low flows, which are typical in drought conditions.

  14. Winter measurements of trace gas and aerosol composition at a rural site in southern ontario

    NASA Astrophysics Data System (ADS)

    Daum, P. H.; Kelly, T. J.; Tanner, R. L.; Tang, X.; Anlauf, K.; Bottenheim, J.; Brice, K. A.; Wiebe, H. A.

    This paper reports the results of continuous measurements of concentrations of trace gas and aerosol species at Powassan, Ontario, a rural location in southern Ontario, from 20 January to 24 February 1984. The measurements included aerosol H+, NH4+, Na+, Ca2+, NO3-, SO4^2- and Cl-; and gaseous SO2, NO, NOy (= NO + NO2 + PAN + HNO3), HNO3, PAN, and O3. Average concentrations of key species during the project were: SO2, 7.3 ppb; NOy, 7.5 ppb; HNO3, 0.85 ppb; O3, 33 ppb; NH4+, 1.5 ppb; NO3-, 0.4 ppb; and SO4^2-, 0.9 ppb. Concentrations of primary pollutants (e.g. SO2) were typically much higher, and concentrations of secondary species (e.g. SO4^2-) typically lower, than observed at this location in summer. However, clear-air total NO3-/SO4^2- ratios averaged 5-10 times higher in winter than in summer, which suggests that HNO3 is a more important source of atmospheric acidity, relative to SO4^2- aerosol, in winter than in summer. Pollutant concentrations were highly variable; back-trajectory calculations indicate that periods of high concentrations of both primary and secondary species were typically associated with air-mass back trajectories from the southern sectors, while periods of low concentrations of secondary species were associated with back trajectories from the north. Comparison of these measurements with those at other locations suggests that concentrations at Powassan were characteristic of those prevailing over a much larger, possibly regional, area.

  15. Classification of autism spectrum disorder using supervised learning of brain connectivity measures extracted from synchrostates

    NASA Astrophysics Data System (ADS)

    Jamal, Wasifa; Das, Saptarshi; Oprescu, Ioana-Anastasia; Maharatna, Koushik; Apicella, Fabio; Sicca, Federico

    2014-08-01

    Objective. The paper investigates the presence of autism using functional brain connectivity measures derived from the electroencephalogram (EEG) of children during face perception tasks. Approach. Phase-synchronized patterns from 128-channel EEG signals are obtained for typical children and children with autism spectrum disorder (ASD). The phase-synchronized states, or synchrostates, temporally switch amongst themselves as an underlying process for the completion of a particular cognitive task. We used 12 subjects in each group (ASD and typical) and analyzed their EEG while they processed fearful, happy and neutral faces. The minimally and maximally occurring synchrostates for each subject are chosen for extraction of brain connectivity features, which are used for classification between these two groups of subjects. Among different supervised learning techniques, we explored discriminant analysis and support vector machines, both with polynomial kernels, for the classification task. Main results. Leave-one-out cross-validation of the classification algorithm gives 94.7% accuracy as the best performance, with corresponding sensitivity and specificity values of 85.7% and 100%, respectively. Significance. The proposed method gives high classification accuracies and outperforms other contemporary research results. The effectiveness of the proposed method for classification of autistic and typical children suggests the possibility of using it on a larger population to validate it for clinical practice.
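The reported accuracy, sensitivity, and specificity all follow from the confusion counts accumulated over leave-one-out folds. A minimal sketch, with a toy nearest-neighbour classifier standing in for the paper's SVM and discriminant-analysis classifiers, and made-up one-dimensional features:

```python
def loocv_metrics(features, labels, classify):
    """Leave-one-out cross-validation: hold out each subject in turn,
    train on the rest, and tally a 2x2 confusion table (label 1 = ASD)."""
    tp = tn = fp = fn = 0
    for i in range(len(labels)):
        train_x = features[:i] + features[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        pred = classify(train_x, train_y, features[i])
        if labels[i] == 1:
            if pred == 1: tp += 1
            else:         fn += 1
        else:
            if pred == 0: tn += 1
            else:         fp += 1
    accuracy    = (tp + tn) / len(labels)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

def nearest_neighbour(train_x, train_y, x):
    # Simple stand-in for the paper's supervised classifiers.
    d = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in train_x]
    return train_y[d.index(min(d))]

# Toy "connectivity features": two well-separated groups of 3 subjects each.
feats = [[0.1], [0.2], [0.3], [0.9], [1.0], [1.1]]
labs  = [0, 0, 0, 1, 1, 1]
acc, sens, spec = loocv_metrics(feats, labs, nearest_neighbour)
```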

  16. Flow resistance dynamics in step‐pool channels: 2. Partitioning between grain, spill, and woody debris resistance

    USGS Publications Warehouse

    Wilcox, Andrew C.; Nelson, Jonathan M.; Wohl, Ellen E.

    2006-01-01

    In step‐pool stream channels, flow resistance is created primarily by bed sediments, spill over step‐pool bed forms, and large woody debris (LWD). In order to measure resistance partitioning between grains, steps, and LWD in step‐pool channels we completed laboratory flume runs in which total resistance was measured with and without grains and steps, with various LWD configurations, and at multiple slopes and discharges. Tests of additive approaches to resistance partitioning found that partitioning estimates are highly sensitive to the order in which components are calculated and that such approaches inflate the values of difficult‐to‐measure components that are calculated by subtraction from measured components. This effect is especially significant where interactions between roughness features create synergistic increases in resistance such that total resistance measured for combinations of resistance components greatly exceeds the sum of those components measured separately. LWD contributes large proportions of total resistance by creating form drag on individual pieces and by increasing the spill resistance effect of steps. The combined effect of LWD and spill over steps was found to dominate total resistance, whereas grain roughness on step treads was a small component of total resistance. The relative contributions of grain, spill, and woody debris resistance were strongly influenced by discharge and to a lesser extent by LWD density. Grain resistance values based on published formulas and debris resistance values calculated using a cylinder drag approach typically underestimated analogous flume‐derived values, further illustrating sources of error in partitioning methods and the importance of accounting for interaction effects between resistance components.

  17. Flow resistance dynamics in step-pool channels: 2. Partitioning between grain, spill, and woody debris resistance

    NASA Astrophysics Data System (ADS)

    Wilcox, Andrew C.; Nelson, Jonathan M.; Wohl, Ellen E.

    2006-05-01

    In step-pool stream channels, flow resistance is created primarily by bed sediments, spill over step-pool bed forms, and large woody debris (LWD). In order to measure resistance partitioning between grains, steps, and LWD in step-pool channels we completed laboratory flume runs in which total resistance was measured with and without grains and steps, with various LWD configurations, and at multiple slopes and discharges. Tests of additive approaches to resistance partitioning found that partitioning estimates are highly sensitive to the order in which components are calculated and that such approaches inflate the values of difficult-to-measure components that are calculated by subtraction from measured components. This effect is especially significant where interactions between roughness features create synergistic increases in resistance such that total resistance measured for combinations of resistance components greatly exceeds the sum of those components measured separately. LWD contributes large proportions of total resistance by creating form drag on individual pieces and by increasing the spill resistance effect of steps. The combined effect of LWD and spill over steps was found to dominate total resistance, whereas grain roughness on step treads was a small component of total resistance. The relative contributions of grain, spill, and woody debris resistance were strongly influenced by discharge and to a lesser extent by LWD density. Grain resistance values based on published formulas and debris resistance values calculated using a cylinder drag approach typically underestimated analogous flume-derived values, further illustrating sources of error in partitioning methods and the importance of accounting for interaction effects between resistance components.
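The order sensitivity of additive partitioning described above can be seen with a few hypothetical friction values (illustrative only, not the study's measurements): whichever component is computed last, by subtraction, absorbs the entire interaction term and is inflated relative to its directly measured value.

```python
# Hypothetical total flow-resistance values (e.g. Darcy-Weisbach friction
# factors) for different flume configurations; numbers are illustrative.
f_total  = 1.60   # grains + steps + LWD together
f_grain  = 0.10   # grains alone
f_steps  = 0.70   # steps alone (no grains, no LWD)
f_no_lwd = 0.90   # grains + steps, LWD removed

# Additive partitioning computes the hard-to-measure component by
# subtraction from measured totals, so it absorbs all interaction effects:
f_lwd_residual = f_total - f_no_lwd                    # -> 0.70

# Partitioning in a different order shifts the interaction elsewhere:
f_grain_residual = f_total - f_steps - f_lwd_residual  # -> 0.20, vs 0.10 measured

# The synergy (interaction) term is what additivity cannot explain:
synergy = f_total - (f_grain + f_steps + f_lwd_residual)  # -> 0.10
```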

  18. Installation Effects on Heat Transfer Measurements for a Turbine Vane

    DTIC Science & Technology

    2003-03-01

    turbine vanes and blades in order to acquire high accuracy, high frequency response data. Typically the installation procedure has involved either mounting...length scale such as blade chord) again matches the engine value. Also before the start of the run a choke valve downstream of the turbine is set to...of Engineering for Gas Turbines and Power (January 1984), Volume 106. p 229 - 240. Gibbings, J.C. “On boundary Layer Transition Wires.” Aeronautical

  19. Estimation of ion competition via correlated responsivity offset in linear ion trap mass spectrometry analysis: theory and practical use in the analysis of cyanobacterial hepatotoxin microcystin-LR in extracts of food additives.

    PubMed

    Urban, Jan; Hrouzek, Pavel; Stys, Dalibor; Martens, Harald

    2013-01-01

    Responsivity is a conversion qualification of a measurement device given by the functional dependence between the input and output quantities. A concentration-response-dependent calibration curve represents the simplest experiment for the measurement of responsivity in mass spectrometry. The cyanobacterial hepatotoxin microcystin-LR content in complex biological matrices of food additives was chosen as a model example of a typical problem. The calibration curves for pure microcystin and its mixtures with extracts of green alga and fish meat were reconstructed from the series of measurements. A novel approach for the quantitative estimation of ion competition in ESI is proposed in this paper. We define the correlated responsivity offset in the intensity values using the approximation of the minimal correlation given by the matrix to the target mass values of the analyte. The estimation of the matrix influence enables the approximation of the position of the a priori unknown responsivity and was easily evaluated using a simple algorithm. The method itself is directly derived from the basic attributes of the theory of measurements. There is sufficient agreement between the theoretical and experimental values. However, some theoretical issues are discussed to avoid misinterpretations and excessive expectations.

  20. Spectral multivariate calibration without laboratory prepared or determined reference analyte values.

    PubMed

    Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H

    2013-02-05

    An essential part of calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory-prepared or laboratory-determined reference values. Instead, an analyte pure-component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can come from different sources, including pure-component interference samples, blanks, and constant-analyte samples. The approach is also applicable to calibration maintenance when the analyte pure-component spectrum is measured in one set of conditions and nonanalyte spectra are measured in new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.
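A minimal sketch of the PCTR idea in its fully orthogonal limit, assuming hypothetical three-channel spectra: the regression vector is built from the analyte pure-component spectrum made orthogonal to a nonanalyte spectrum and scaled to predict one unit of pure analyte, so nonanalyte contributions cancel in prediction. The actual PCTR method trades this orthogonality off against model shrinkage via a Tikhonov regularization parameter, which this sketch omits.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pctr_like_vector(pure, nonanalyte):
    """Project the analyte pure-component spectrum orthogonal to the
    nonanalyte spectrum, then scale so the model predicts 1.0 for one
    unit of pure analyte (fully-orthogonal limit of the PCTR idea)."""
    alpha = dot(pure, nonanalyte) / dot(nonanalyte, nonanalyte)
    b = [p - alpha * n for p, n in zip(pure, nonanalyte)]
    scale = dot(b, pure)
    return [bi / scale for bi in b]

pure        = [1.0, 2.0, 0.5]  # hypothetical analyte pure-component spectrum
interferent = [0.5, 0.5, 2.0]  # hypothetical nonanalyte (blank) spectrum
b = pctr_like_vector(pure, interferent)

# A mixture of 0.3 units of analyte plus an arbitrary interferent amount:
mix = [0.3 * p + 1.7 * n for p, n in zip(pure, interferent)]
pred = dot(b, mix)  # the interferent contribution cancels exactly
```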

  1. Estimation of Ion Competition via Correlated Responsivity Offset in Linear Ion Trap Mass Spectrometry Analysis: Theory and Practical Use in the Analysis of Cyanobacterial Hepatotoxin Microcystin-LR in Extracts of Food Additives

    PubMed Central

    Hrouzek, Pavel; Štys, Dalibor; Martens, Harald

    2013-01-01

    Responsivity is a conversion qualification of a measurement device given by the functional dependence between the input and output quantities. A concentration-response-dependent calibration curve represents the simplest experiment for the measurement of responsivity in mass spectrometry. The cyanobacterial hepatotoxin microcystin-LR content in complex biological matrices of food additives was chosen as a model example of a typical problem. The calibration curves for pure microcystin and its mixtures with extracts of green alga and fish meat were reconstructed from the series of measurements. A novel approach for the quantitative estimation of ion competition in ESI is proposed in this paper. We define the correlated responsivity offset in the intensity values using the approximation of the minimal correlation given by the matrix to the target mass values of the analyte. The estimation of the matrix influence enables the approximation of the position of the a priori unknown responsivity and was easily evaluated using a simple algorithm. The method itself is directly derived from the basic attributes of the theory of measurements. There is sufficient agreement between the theoretical and experimental values. However, some theoretical issues are discussed to avoid misinterpretations and excessive expectations. PMID:23586036

  2. Highly accurate surface maps from profilometer measurements

    NASA Astrophysics Data System (ADS)

    Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.

    2013-04-01

    Many aspheres and free-form optical surfaces are measured using a single-line-trace profilometer, which is limiting because accurate 3D corrections are not possible with a single trace. We show a method to produce an accurate, fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low-order form error only, the first 36 Zernike terms. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley value. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and, to a small extent, by choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements. The part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.

  3. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
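The repeated-reconstruction estimates of image bias and standard deviation combine into the image MSE through the standard decomposition MSE = bias^2 + variance. A sketch on synthetic "images" (flat pixel vectors with a hypothetical shrinkage factor mimicking an over-regularized estimator):

```python
import random

def mse_decomposition(truth, reconstructions):
    """Pixel-wise image bias^2, variance, and MSE from repeated noisy
    reconstructions; by construction MSE = bias^2 + variance."""
    n = len(reconstructions)
    npix = len(truth)
    mean = [sum(r[j] for r in reconstructions) / n for j in range(npix)]
    bias2 = sum((mean[j] - truth[j]) ** 2 for j in range(npix)) / npix
    var = sum(sum((r[j] - mean[j]) ** 2 for r in reconstructions) / n
              for j in range(npix)) / npix
    mse = sum(sum((r[j] - truth[j]) ** 2 for r in reconstructions) / n
              for j in range(npix)) / npix
    return bias2, var, mse

random.seed(1)
truth = [1.0] * 50
# Over-regularized estimator: values shrunk toward 0 (bias) plus small
# measurement-noise-driven fluctuation (variance), over 100 repetitions.
recons = [[0.8 + random.gauss(0, 0.05) for _ in range(50)] for _ in range(100)]
bias2, var, mse = mse_decomposition(truth, recons)
# Here bias^2 (~0.04) dominates variance (~0.0025), the regime the paper
# associates with high regularization parameter values.
```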

  4. Antibacterial activity of antibacterial cutting boards in household kitchens.

    PubMed

    Kounosu, Masayuki; Kaneko, Seiichi

    2007-12-01

    We examined antibacterial cutting boards with antibacterial activity values of either "2" or "4" in compliance with the JIS Z 2801 standard, and compared their performance with that of cutting boards with no antibacterial activity. These cutting boards were used in ten different households, and we measured changes in the viable cell counts of several types of bacteria with the drop plate method. We also identified the detected bacterial flora and measured the minimum antimicrobial concentrations of several commonly used antibacterial agents against the kinds of bacteria identified to determine the expected antibacterial activity of the respective agents. Cutting boards with activity values of both "2" and "4" proved to be antibacterial in actual use, although no correlation between the viable cell counts and the antibacterial activity values was observed. In the kitchen environment, large quantities of Pseudomonas, Flavobacterium, Micrococcus, and Bacillus were detected, and it was confirmed that common antibacterial agents used in many antibacterial products are effective against these bacterial species. In addition, we measured the minimum antimicrobial concentrations of the agents against Lactobacillus, a typical beneficial bacterium, and discovered that this bacterium is less sensitive to these antibacterial agents compared to more common bacteria.

  5. Boundary overlap for medical image segmentation evaluation

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina

    2017-03-01

    All medical image segmentation algorithms need to be validated and compared, and yet no evaluation framework is widely accepted within the imaging community. Collections of segmentation results often need to be compared and ranked by their effectiveness. Evaluation measures which are popular in the literature are based on region overlap or boundary distance. None of these are consistent in the way they rank segmentation results: they tend to be sensitive to one or another type of segmentation error (size, location, shape) but no single measure covers all error types. We introduce a new family of measures, with hybrid characteristics. These measures quantify similarity/difference of segmented regions by considering their overlap around the region boundaries. This family is more sensitive than other measures in the literature to combinations of segmentation error types. We compare measure performance on collections of segmentation results sourced from carefully compiled 2D synthetic data, and also on 3D medical image volumes. We show that our new measure: (1) penalises errors successfully, especially those around region boundaries; (2) gives a low similarity score when existing measures disagree, thus avoiding overly inflated scores; and (3) scores segmentation results over a wider range of values. We consider a representative measure from this family and the effect of its only free parameter on error sensitivity, typical value range, and running time.
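A minimal sketch of the boundary-overlap idea, assuming a simple representative variant: Dice overlap computed on dilated bands around the region boundaries, with the band radius playing the role of the measure's free parameter. This is an illustration of the family's structure, not the paper's exact definition.

```python
def boundary(mask):
    """Pixels of a binary mask that touch the background (4-neighbourhood)."""
    h, w = len(mask), len(mask[0])
    out = set()
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < h and 0 <= nj < w) or not mask[ni][nj]:
                    out.add((i, j))
    return out

def dilate(pixels, radius, h, w):
    """Expand a pixel set by a square structuring element of given radius."""
    return {(i + di, j + dj) for i, j in pixels
            for di in range(-radius, radius + 1)
            for dj in range(-radius, radius + 1)
            if 0 <= i + di < h and 0 <= j + dj < w}

def boundary_dice(a, b, radius=1):
    """Dice overlap restricted to bands around the region boundaries;
    `radius` controls band width (the measure's free parameter)."""
    h, w = len(a), len(a[0])
    ba = dilate(boundary(a), radius, h, w)
    bb = dilate(boundary(b), radius, h, w)
    return 2 * len(ba & bb) / (len(ba) + len(bb))

# Two 4x4 squares on an 8x8 grid, the second shifted one column right:
a = [[1 if 2 <= i <= 5 and 2 <= j <= 5 else 0 for j in range(8)] for i in range(8)]
b = [[1 if 2 <= i <= 5 and 3 <= j <= 6 else 0 for j in range(8)] for i in range(8)]
score = boundary_dice(a, b)  # < 1: the one-pixel location error is penalised
```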

  6. Scanning confocal slit photon counter measurements of post-PRK haze in two-year study

    NASA Astrophysics Data System (ADS)

    Taboada, John; Gaines, David; Perez, Mary A.; Waller, Steve G.; Ivan, Douglas J.; Baldwin, J. Bruce; LoRusso, Frank; Tutt, Ronald C.; Thompson, B.; Perez, Jose; Tredici, Thomas; Johnson, Dan A.

    2001-06-01

    In our study, a group of 80 United States Air Force non-flying personnel will undergo photorefractive corneal surgery for moderate levels of myopia (< 6 diopters), and 20 will serve as controls. As of this report, approximately 56 have had the treatment. Of these, only about 59% of the treated eyes showed even a trace (0.5) level of clinically assessed haze at any time. We report on the use of a recently developed instrument designed for the objective measurement of these low levels of haze in treated corneas. The sensitivity of the instrument derives from the use of a scanning confocal slit photon counter. The use of a physical standard for calibration secures accuracy and reproducibility over an extensive period of time. Our haze measurements in this study revealed a very low-level increase from baseline values for these patients. The typical increase over baseline was of the same magnitude as the variability in the observations, although the inherent variability in the measurements was approximately 0.25 times the value of the patient's haze variability.

  7. 3-Hz postural tremor in multiple system atrophy cerebellar type (MSA-C)-a static posturography study.

    PubMed

    Li, Xiaodi; Wang, Yuzhou; Wang, Zhanhang; Xu, Yan; Zheng, Wenhua

    2018-01-01

    The objective of the study is to evaluate postural dysfunction of multiple system atrophy-parkinsonian type (MSA-P) and cerebellar type (MSA-C) by static posturography exam. A total of 29 MSA-P patients, 40 MSA-C patients, and 23 healthy controls (HC) were recruited and engaged in a sensory organization test (SOT). The amplitude of the postural sway was measured and transformed into energy value by Fourier analyzer. SOT scores, frequency of falls and typical 3-Hz postural tremors during the four stance tasks, and energy value in three different frequency bands were recorded and compared. Compared with HC, SOT scores were significantly lower in MSA groups (P < 0.01). Compared with MSA-P, the vestibular scores were further reduced in MSA-C patients (P < 0.05). Falls were more frequent in MSA groups, especially in SOT4 task (foam surface with eyes closed) or in MSA-C group (P < 0.05). Typical 3-Hz postural tremor was observed in 97.5% MSA-C patients, in 24.1% MSA-P patients but in none of the HC (P < 0.05). Compared with HC, much more energy was consumed in every task, every direction, and nearly every frequency band in MSA groups. Energy value of MSA-C group was significantly higher than that of MSA-P, especially in higher frequency band (2 ~ 20 Hz) or in more difficult stance tasks (SOT 3 ~ 4, foam surface with eyes open or closed) (P < 0.05). Both MSA-P and MSA-C were characterized by severe static postural dysfunction. However, typical 3-Hz postural tremor was predominant in MSA-C and was very useful in the differential diagnosis between MSA-P and MSA-C.

  8. Evaluation of a Commercial Tractor Safety Monitoring System Using a Reverse Engineering Procedure.

    PubMed

    Casazza, Camilla; Martelli, Roberta; Rondelli, Valda

    2016-10-17

    There is a high rate of work-related deaths in agriculture. In Italy, despite the obligatory installation of ROPS, fatal accidents involving tractors represent more than 40% of work-related deaths in agriculture. As death is often due to an overturn that the driver is incapable of predicting, driver assistance devices that can signal critical stability conditions have been studied and marketed to prevent accidents. These devices measure the working parameters of the tractor through sensors and elaborate the values using an algorithm that, taking into account the geometric characteristics of the tractor, provides a risk index based on models elaborated on a theoretical basis. This research aimed to verify one of these stability indexes in the field, using a commercial driver assistance device to monitor five tractors on the University of Bologna experimental farm. The setup of the device involved determining the coordinates of the center of gravity of the tractor and the implement mounted on the tractor. The analysis of the stability index, limited to events with a significant risk level, revealed a clear separation into two groups: events with high values of roll or pitch and low speeds, typical of a tractor when working, and events with low values of roll and pitch and high steering angle and forward speed, typical of travel on the road. The equation for calculating the critical speed when turning provided a significant contribution only for events that were typical of travel rather than field work, suggesting a diversified calculation approach according to the work phase. Copyright© by the American Society of Agricultural Engineers.

  9. Small Aircraft RF Interference Path Loss

    NASA Technical Reports Server (NTRS)

    Nguyen, Truong X.; Koppen, Sandra V.; Ely, Jay J.; Szatkowski, George N.; Mielnik, John J.; Salud, Maria Theresa P.

    2007-01-01

    Interference to aircraft radio receivers is an increasing concern as more portable electronic devices are allowed onboard. Interference signals are attenuated as they propagate from inside the cabin to aircraft radio antennas mounted on the outside of the aircraft. The attenuation level is referred to as the interference path loss (IPL) value. Significant published IPL data exists for transport and regional category airplanes. This report fills a void by providing data for small business/corporate and general aviation aircraft. In this effort, IPL measurements are performed on ten small aircraft of different designs and manufacturers. Multiple radio systems are addressed. Along with the typical worst-case coupling values, statistical distributions are also reported that could lead to better interference risk assessment.
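A sketch of how a worst-case coupling value and a distribution-based figure might be extracted from a set of IPL measurements. The dB values below are hypothetical, and the nearest-rank percentile is one simple choice of distribution summary, not necessarily the statistic used in the report:

```python
import math

def ipl_statistics(ipl_db, percentile=25.0):
    """Summarize interference path loss (IPL) measurements in dB.
    The worst case is the minimum IPL (least attenuation between cabin
    and antenna); a low nearest-rank percentile gives a less conservative
    figure for statistical interference risk assessment."""
    vals = sorted(ipl_db)
    worst = vals[0]
    rank = max(1, math.ceil(percentile / 100.0 * len(vals)))
    return worst, vals[rank - 1]

# Hypothetical IPL samples (dB) for one radio system on one airframe:
samples = [48.2, 52.1, 55.7, 50.3, 61.0, 58.4, 49.9, 53.5, 56.2, 60.1]
worst, p25 = ipl_statistics(samples)
```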

  10. Determination of carbonyl compounds generated from the E-cigarette using coupled silica cartridges impregnated with hydroquinone and 2,4-dinitrophenylhydrazine, followed by high-performance liquid chromatography.

    PubMed

    Uchiyama, Shigehisa; Ohta, Kazushi; Inaba, Yohei; Kunugita, Naoki

    2013-01-01

    Carbonyl compounds in E-cigarette smoke mist were measured using coupled silica cartridges impregnated with hydroquinone and 2,4-dinitrophenylhydrazine, followed by high-performance liquid chromatography. A total of 363 E-cigarettes (13 brands) were examined. Four of the 13 E-cigarette brands did not generate any carbonyl compounds, while the other nine E-cigarette brands generated various carbonyl compounds. However, the carbonyl concentrations of the E-cigarette products did not show typical distributions, and the mean values were largely different from the median values. It was elucidated that E-cigarettes incidentally generate high concentrations of carbonyl compounds.

  11. Dynamic characteristics of organic bulk-heterojunction solar cells

    NASA Astrophysics Data System (ADS)

    Babenko, S. D.; Balakai, A. A.; Moskvin, Yu. L.; Simbirtseva, G. V.; Troshin, P. A.

    2010-12-01

    Transient characteristics of organic bulk-heterojunction solar cells have been studied using pulsed laser probing. An analysis of the photoresponse waveforms of a typical solar cell measured by varying load resistance within broad range at different values of the bias voltage provided detailed information on the photocell parameters that characterize electron-transport properties of active layers. It is established that the charge carrier mobility is sufficient to ensure high values of the fill factor (˜0.6) in the obtained photocells. On approaching the no-load voltage, the differential capacitance of the photocell exhibits a sixfold increase as compared to the geometric capacitance. A possible mechanism of recombination losses in the active medium is proposed.

  12. Role of delay-based reward in the spatial cooperation

    NASA Astrophysics Data System (ADS)

    Wang, Xu-Wen; Nie, Sen; Jiang, Luo-Luo; Wang, Bing-Hong; Chen, Shi-Ming

    2017-01-01

    Strategy selection in games, a typical decision-making process, usually brings a noticeable reward for players, whose value is discounted if delay appears. The discounted value poses a trade-off: earn sooner with a small reward, or later with a larger delayed reward. Here, we investigate the effects of delayed rewards on cooperation in structured populations. We find that delayed reward supports the spreading of cooperation in square-lattice, small-world and random networks. In particular, intermediate reward differences between delays produce the highest cooperation level. Interestingly, cooperative individuals with the same delay time steps form clusters to resist the invasion of defectors, and cooperative individuals with the lowest delay reward survive because they form the largest clusters in the lattice.

  13. Air quality measurements in urban green areas - a case study

    NASA Astrophysics Data System (ADS)

    Kuttler, W.; Strassburger, A.

The influence of traffic-induced pollutants (e.g. CO, NO, NO2 and O3) on the air quality of urban areas was investigated in the city of Essen, North Rhine-Westphalia (NRW), Germany. Twelve air hygiene profile measuring trips were made to analyse the trace gas distribution in the urban area with high spatial resolution and to compare the air hygiene situation of urban green areas with the overall situation of urban pollution. Seventeen measurements were made to determine the diurnal concentration courses within urban parks (summer conditions: 13 measurements, 530 30-min mean values; winter conditions: 4 measurements, 128 30-min mean values). The measurements were carried out during mainly calm-wind and cloudless conditions between February 1995 and March 1996. It was possible to establish highly differentiated spatial concentration patterns within the urban area. These patterns were correlated with five general types of land use (motorway, main road, secondary road, residential area, green area) which were influenced to varying degrees by traffic emissions. Urban parks downwind from the main emission sources show the following typical temporal concentration courses: In summer, rush-hour-dependent CO, NO and NO2 maxima occurred only in the morning. A high NO2/NO ratio was established during weather conditions with high global radiation intensities (K > 800 W m-2), which may result in a high O3 formation potential. Some of the values measured in one of the parks investigated (Gruga Park, Essen, area: 0.7 km2), which were as high as 275 μg m-3 O3 (30-min mean value), were significantly higher than the air quality standard of 120 μg m-3 (30-min mean value, VDI Guideline 2310, 1996) which currently applies in Germany, and about 20% above the maximum values measured on the same day by the network of the North Rhine-Westphalian State Environment Agency. In winter, high CO and NO concentrations occur in the morning and during the afternoon rush hour. The highest concentrations (CO = 4.3 mg m-3, NO = 368 μg m-3, 30-min mean values) coincide with the build-up of the evening inversion. The maximum measured values for CO, NO and NO2 do not, however, exceed the German air quality standards in winter or summer.

  14. Ultrasonic friction power during Al wire wedge-wedge bonding

    NASA Astrophysics Data System (ADS)

    Shah, A.; Gaul, H.; Schneider-Ramelow, M.; Reichl, H.; Mayer, M.; Zhou, Y.

    2009-07-01

Al wire bonding, also called ultrasonic wedge-wedge bonding, is a microwelding process used extensively in the microelectronics industry for interconnections to integrated circuits. The bonding wire used is a 25 μm diameter AlSi1 wire. A friction power model is used to derive the ultrasonic friction power during Al wire bonding. Auxiliary measurements include the current delivered to the ultrasonic transducer, the vibration amplitude of the bonding tool tip in free air, and the ultrasonic force acting on the bonding pad during the bond process. The ultrasonic force measurement is like a signature of the bond, as it allows for a detailed insight into mechanisms during various phases of the process. It is measured using piezoresistive force microsensors integrated close to the Al bonding pad (Al-Al process) on a custom-made test chip. A clear break-off in the force signal is observed, which is followed by a relatively constant force for a short duration. A large second harmonic content is observed, describing a nonsymmetric deviation of the signal waveform from the sinusoidal shape. This deviation might be due to the reduced geometrical symmetry of the wedge tool. For bonds made with typical process parameters, several characteristic values used in the friction power model are determined. The ultrasonic compliance of the bonding system is 2.66 μm/N. A typical maximum value of the relative interfacial amplitude of ultrasonic friction is at least 222 nm. The maximum interfacial friction power is at least 11.5 mW, which is only about 4.8% of the total electrical power delivered to the ultrasonic generator.

  15. Extreme dissolved oxygen variability in urbanised tropical wetlands: The need for detailed monitoring to protect nursery ground values

    NASA Astrophysics Data System (ADS)

    Dubuc, Alexia; Waltham, Nathan; Malerba, Martino; Sheaves, Marcus

    2017-11-01

Little is known about the levels of dissolved oxygen fish are exposed to daily in typical urbanised tropical wetlands found along the Great Barrier Reef coastline. This study investigates diel dissolved oxygen (DO) dynamics in one of these typical urbanised wetlands, in tropical North Queensland, Australia. High-frequency data loggers (DO, temperature, depth) were deployed for several days over the summer months in different tidal pools and channels that fish use as temporary or permanent refuges. DO was extremely variable over a 24 h cycle, and across the small-scale wetland. The high spatial and temporal DO variability measured was affected by time of day and tidal factors, namely water depth, tidal range and tidal direction (flood vs ebb). For the duration of the logging time, DO was mainly above the adopted threshold for hypoxia (50% saturation); however, for around 11% of the time, and on almost every logging day, DO values fell below the threshold, including a severe hypoxic event (<5% saturation) that continued for several hours. Fish still use this wetland intensively, so must be able to cope with low DO periods. Despite the ability of fish to tolerate extreme conditions, continuing urban expansion is likely to lead to further water quality degradation and so potential loss of nursery ground value. There is a substantial discontinuity between the recommended DO values in the Australian and New Zealand Guidelines for Fresh and Marine Water Quality and the values observed in this wetland, highlighting the limited value of these guidelines for management purposes. Local and regional high-frequency data monitoring programs, in conjunction with local exposure risk studies, are needed to underpin the development of the management that will ensure the sustainability of coastal wetlands.
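The kind of summary the loggers support, such as the "~11% of the time below 50% saturation" figure, can be sketched as a fraction-below-threshold computation; the DO series here is a synthetic diel cycle, not the study's measurements.

```python
import numpy as np

# Synthetic diel DO cycle standing in for logger data (assumed numbers).
t = np.arange(0.0, 96.0, 0.25)                       # 4 days, 15-min steps
do_sat = 80.0 + 35.0 * np.sin(2 * np.pi * t / 24.0)  # DO, % saturation

THRESHOLD = 50.0                                     # adopted hypoxia threshold
frac_below = float(np.mean(do_sat < THRESHOLD))
print(f"{100 * frac_below:.1f}% of readings below {THRESHOLD}% saturation")
```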

  16. SU-E-T-161: Characterization and Validation of CT Simulator Hounsfield Units to Relative Stopping Power Values for Proton Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnell, E; Ahmad, S; De La Fuente Herman, T

    2015-06-15

Purpose: To develop a calibration curve that includes and minimizes the variations of Hounsfield Units (HU) from a CT scanner to Relative Stopping Power (RSP) of tissues along the proton beam path. The variations are due to scanner and proton energy, technique, phantom size and placement, and tissue arrangement. Methods: A CIRS 062M phantom with 10 plugs of known relative electron density (RED) was scanned with a 16-slice GE Discovery CT Simulator scanner. Three setup combinations of plug distributions and techniques clinically implemented for five treatment regions were scanned with energies of 100, 120, and 140 kV. Volumetric HU values were measured for each plug and scan. The RSP values derived through the Bethe-Bloch formula are currently being verified with parallel-plate ionization chamber measurements in water using 80, 150, and 225 MeV proton beams. Typical treatment plans for the brain, head-and-neck, chest, abdomen, and pelvis are being planned, and the dose delivered will be compared with film and Optically Stimulated Luminescence (OSL) measurements. Results: Percentage variations were determined for each variable. For tissues close to water, variations were <1% from any given parameter. Tissues far from water equivalence (lung and bone) showed the greatest sensitivity to change (7.4% maximum) with scanner energy and up to 5.3% with positioning of the phantom. No major variations were observed for proton energies within the treatment range. Conclusion: When deriving a calibration curve, attention should be paid to low and high HU values. A thorough verification of calculated vs. water-phantom-measured RSP values at different proton energies, followed by dose validation of planned vs. measured doses in phantom with film and OSL detectors, is currently being undertaken.
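In treatment-planning systems such a calibration is typically applied as a piecewise-linear HU-to-RSP lookup. A minimal sketch follows; the anchor points are illustrative placeholders, not the measured clinical calibration described in the abstract.

```python
import numpy as np

# Illustrative HU -> RSP anchor points (assumed, not clinical data).
hu_points  = np.array([-1000.0, -700.0, -100.0, 0.0, 300.0, 1200.0, 3000.0])
rsp_points = np.array([0.001, 0.29, 0.95, 1.0, 1.15, 1.70, 2.40])

def hu_to_rsp(hu):
    """Relative stopping power from a Hounsfield unit via interpolation."""
    return float(np.interp(hu, hu_points, rsp_points))

print(hu_to_rsp(0.0))     # water-equivalent point of the curve
print(hu_to_rsp(-750.0))  # lung-like region of the curve
```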

  17. Network analysis of surgical innovation: Measuring value and the virality of diffusion in robotic surgery.

    PubMed

    Garas, George; Cingolani, Isabella; Panzarasa, Pietro; Darzi, Ara; Athanasiou, Thanos

    2017-01-01

Existing surgical innovation frameworks suffer from a unifying limitation: their qualitative nature. A rigorous approach to measuring surgical innovation is needed that extends beyond simple publication, citation, and patent counts and instead uncovers an implementation-based value from the structure of the entire adoption cascades produced over time by diffusion processes. Based on the principles of evidence-based medicine and existing surgical regulatory frameworks, the surgical innovation funnel is described. This illustrates the different stages through which innovation in surgery typically progresses. The aim is to propose a novel and quantitative network-based framework that permits modeling and visualizing innovation diffusion cascades in surgery and measuring the virality and value of innovations. Network analysis of constructed citation networks of all articles concerned with robotic surgery (n = 13,240, Scopus®) was performed (1974-2014). The virality of each cascade was measured, as was innovation value (measured by the innovation index) derived from the evidence-based stage occupied by the corresponding seed article in the surgical innovation funnel. The network-based surgical innovation metrics were also validated against real-world big data (National Inpatient Sample, NIS®). Rankings of surgical innovation across specialties by cascade size and structural virality (structural depth and width) were found to correlate closely with the ranking by innovation value (Spearman's rank correlation coefficient = 0.758 (p = 0.01), 0.782 (p = 0.008), and 0.624 (p = 0.05), respectively), which in turn matches the ranking based on real-world big data from the NIS® (Spearman's coefficient = 0.673; p = 0.033). Network analysis offers unique new opportunities for understanding, modeling and measuring surgical innovation, and ultimately for assessing and comparing generative value between different specialties.
The novel surgical innovation metrics developed may prove valuable especially in guiding policy makers, funding bodies, surgeons, and healthcare providers in the current climate of competing national priorities for investment.
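One plausible reading of "structural virality" is the mean pairwise shortest-path distance over a cascade tree (a common definition in diffusion research; the authors' exact metric may differ, and the cascade below is invented).

```python
from collections import deque

def bfs_distances(adj, src):
    """Breadth-first shortest-path distances from src in an adjacency dict."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def structural_virality(adj):
    """Mean pairwise distance over all ordered node pairs of the cascade."""
    nodes = list(adj)
    n = len(nodes)
    total = sum(sum(bfs_distances(adj, s).values()) for s in nodes)
    return total / (n * (n - 1))

# Seed article 0 adopted by 1; article 1 adopted by 2 and 3 (undirected tree).
cascade = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
print(structural_virality(cascade))  # 1.5
```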

  18. Network analysis of surgical innovation: Measuring value and the virality of diffusion in robotic surgery

    PubMed Central

    Cingolani, Isabella; Panzarasa, Pietro; Darzi, Ara; Athanasiou, Thanos

    2017-01-01

Background Existing surgical innovation frameworks suffer from a unifying limitation: their qualitative nature. A rigorous approach to measuring surgical innovation is needed that extends beyond simple publication, citation, and patent counts and instead uncovers an implementation-based value from the structure of the entire adoption cascades produced over time by diffusion processes. Based on the principles of evidence-based medicine and existing surgical regulatory frameworks, the surgical innovation funnel is described. This illustrates the different stages through which innovation in surgery typically progresses. The aim is to propose a novel and quantitative network-based framework that permits modeling and visualizing innovation diffusion cascades in surgery and measuring the virality and value of innovations. Materials and methods Network analysis of constructed citation networks of all articles concerned with robotic surgery (n = 13,240, Scopus®) was performed (1974–2014). The virality of each cascade was measured, as was innovation value (measured by the innovation index) derived from the evidence-based stage occupied by the corresponding seed article in the surgical innovation funnel. The network-based surgical innovation metrics were also validated against real-world big data (National Inpatient Sample–NIS®). Results Rankings of surgical innovation across specialties by cascade size and structural virality (structural depth and width) were found to correlate closely with the ranking by innovation value (Spearman's rank correlation coefficient = 0.758 (p = 0.01), 0.782 (p = 0.008), and 0.624 (p = 0.05), respectively), which in turn matches the ranking based on real-world big data from the NIS® (Spearman's coefficient = 0.673; p = 0.033). Conclusion Network analysis offers unique new opportunities for understanding, modeling and measuring surgical innovation, and ultimately for assessing and comparing generative value between different specialties.
The novel surgical innovation metrics developed may prove valuable especially in guiding policy makers, funding bodies, surgeons, and healthcare providers in the current climate of competing national priorities for investment. PMID:28841648

  19. Implications of observed inconsistencies in carbonate chemistry measurements for ocean acidification studies

    NASA Astrophysics Data System (ADS)

    Hoppe, C. J. M.; Langer, G.; Rokitta, S. D.; Wolf-Gladrow, D. A.; Rost, B.

    2012-07-01

    The growing field of ocean acidification research is concerned with the investigation of organism responses to increasing pCO2 values. One important approach in this context is culture work using seawater with adjusted CO2 levels. As aqueous pCO2 is difficult to measure directly in small-scale experiments, it is generally calculated from two other measured parameters of the carbonate system (often AT, CT or pH). Unfortunately, the overall uncertainties of measured and subsequently calculated values are often unknown. Especially under high pCO2, this can become a severe problem with respect to the interpretation of physiological and ecological data. In the few datasets from ocean acidification research where all three of these parameters were measured, pCO2 values calculated from AT and CT are typically about 30% lower (i.e. ~300 μatm at a target pCO2 of 1000 μatm) than those calculated from AT and pH or CT and pH. This study presents and discusses these discrepancies as well as likely consequences for the ocean acidification community. Until this problem is solved, one has to consider that calculated parameters of the carbonate system (e.g. pCO2, calcite saturation state) may not be comparable between studies, and that this may have important implications for the interpretation of CO2 perturbation experiments.
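The two-parameter calculation at issue can be sketched with a toy carbonate-system solver: pCO2 from (CT, pH) versus pCO2 from (AT, CT). The equilibrium constants are rough illustrative values and alkalinity is approximated by carbonate alkalinity only; this is a didactic sketch, not a substitute for a full package such as CO2SYS.

```python
# Assumed, mol/kg-scale equilibrium constants for illustration only.
K0, K1, K2 = 3.0e-2, 1.4e-6, 1.1e-9

def co2_star(ct, h):
    """Dissolved CO2 concentration from CT and [H+] via speciation."""
    return ct / (1.0 + K1 / h + K1 * K2 / h ** 2)

def pco2_from_ct_ph(ct, ph):
    h = 10.0 ** (-ph)
    return co2_star(ct, h) / K0 * 1.0e6   # uatm

def pco2_from_at_ct(at, ct):
    # Solve AT ~ [HCO3-] + 2[CO3--] for [H+] by bisection (residual
    # decreases monotonically with h over this bracket).
    def residual(h):
        co2 = co2_star(ct, h)
        hco3 = co2 * K1 / h
        co3 = hco3 * K2 / h
        return hco3 + 2.0 * co3 - at
    lo, hi = 1.0e-10, 1.0e-4
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return co2_star(ct, 0.5 * (lo + hi)) / K0 * 1.0e6

ct, ph, at = 2.1e-3, 7.6, 2.25e-3   # illustrative seawater-like values
p1, p2 = pco2_from_ct_ph(ct, ph), pco2_from_at_ct(at, ct)
print(p1, p2)   # the two input pairs need not give the same pCO2
```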

  20. Implications of observed inconsistencies in carbonate chemistry measurements for ocean acidification studies

    NASA Astrophysics Data System (ADS)

    Hoppe, C. J. M.; Langer, G.; Rokitta, S. D.; Wolf-Gladrow, D. A.; Rost, B.

    2012-02-01

The growing field of ocean acidification research is concerned with the investigation of organisms' responses to increasing pCO2 values. One important approach in this context is culture work using seawater with adjusted CO2 levels. As aqueous pCO2 is difficult to measure directly in small-scale experiments, it is generally calculated from two other measured parameters of the carbonate system (often AT, CT or pH). Unfortunately, the overall uncertainties of measured and subsequently calculated values are often unknown. Especially under high pCO2, this can become a severe problem with respect to the interpretation of physiological and ecological data. In the few datasets from ocean acidification research where all three of these parameters were measured, pCO2 values calculated from AT and CT are typically about 30% lower (i.e. ~300 μatm at a target pCO2 of 1000 μatm) than those calculated from AT and pH or CT and pH. This study presents and discusses these discrepancies as well as likely consequences for the ocean acidification community. Until this problem is solved, one has to consider that calculated parameters of the carbonate system (e.g. pCO2, calcite saturation state) may not be comparable between studies, and that this may have important implications for the interpretation of CO2 perturbation experiments.

  1. Using a 'value-added' approach for contextual design of geographic information.

    PubMed

    May, Andrew J

    2013-11-01

    The aim of this article is to demonstrate how a 'value-added' approach can be used for user-centred design of geographic information. An information science perspective was used, with value being the difference in outcomes arising from alternative information sets. Sixteen drivers navigated a complex, unfamiliar urban route, using visual and verbal instructions representing the distance-to-turn and junction layout information presented by typical satellite navigation systems. Data measuring driving errors, navigation errors and driver confidence were collected throughout the trial. The results show how driver performance varied considerably according to the geographic context at specific locations, and that there are specific opportunities to add value with enhanced geographical information. The conclusions are that a value-added approach facilitates a more explicit focus on 'desired' (and feasible) levels of end user performance with different information sets, and is a potentially effective approach to user-centred design of geographic information. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  2. Performance evaluation of image-intensifier-TV fluoroscopy systems

    NASA Astrophysics Data System (ADS)

    van der Putten, Wilhelm J.; Bouley, Shawn

    1995-05-01

Using a computer model and an aluminum low-contrast phantom developed in-house, a method has been developed that grades the imaging performance of fluoroscopy systems through a single variable, K. This parameter was derived from Rose's model of image perception and is used here as a figure of merit for fluoroscopy systems. From Rose's model, for an ideal system a typical value of K for the perception of low-contrast details should be between 3 and 7, assuming threshold vision by human observers. Thus, various fluoroscopy systems are graded with different values of K, with a lower value of K indicating better imaging performance. A series of fluoroscopy systems has been graded, where the best system produces a value in the low teens, while the poorest systems produce values in the low twenties. Correlation with conventional image quality measurements is good, and the method has the potential for automated assessment of image quality.
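The underlying Rose relation can be sketched as a perception SNR k = C·sqrt(N), with N the photon count inside the detail area. The contrast, detail size, and fluence below are assumptions, and the paper's K is a system grade derived from this model rather than this exact number.

```python
import math

def rose_k(contrast, diameter_mm, fluence_per_mm2):
    """Rose-model SNR for a circular low-contrast detail (assumed inputs)."""
    n_photons = fluence_per_mm2 * math.pi * (diameter_mm / 2.0) ** 2
    return contrast * math.sqrt(n_photons)

k = rose_k(contrast=0.02, diameter_mm=2.0, fluence_per_mm2=1.0e5)
print(f"k = {k:.1f}")   # compare with the 3-7 threshold-vision band
```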

  3. Characterization of Water Quality in Unmonitored Streams in the Mississippi Alluvial Plain, Northwestern Mississippi, May-June 2006

    USGS Publications Warehouse

    Bryson, Jeannie R.; Coupe, Richard H.; Manning, Michael A.

    2007-01-01

The Mississippi Department of Environmental Quality is required to develop restoration and remediation plans for water bodies not meeting their designated uses, as stated in section 303(d) of the U.S. Environmental Protection Agency's Clean Water Act. The majority of streams in northwestern Mississippi are on the 303(d) list of water-quality limited waters. Agricultural effects on streams in northwestern Mississippi have reduced the number of unimpaired streams (reference streams) available for water-quality comparisons. As part of an effort to develop an index to assess impairment, the U.S. Geological Survey collected water samples from 52 stream sites on the 303(d) list during May-June 2006 and analyzed the samples for nutrients and chlorophyll. The data were analyzed by trophic group as determined by total nitrogen concentrations. Seven constituents (nitrite plus nitrate, total Kjeldahl nitrogen, total phosphorus, orthophosphorus, total organic carbon, chlorophyll a, and pheophytin a) and four physical property measurements (specific conductance, pH, turbidity, and dissolved oxygen) were determined to be significantly different (p < 0.05) between trophic groups. Total Kjeldahl nitrogen, turbidity, and dissolved oxygen were used as indicators of stream productivity with which to infer stream health. Streams having high total Kjeldahl nitrogen values and high turbidity values along with low dissolved oxygen concentrations were typically eutrophic (abundant in nutrients), whereas streams having low total Kjeldahl nitrogen values and low turbidity values along with high dissolved oxygen concentrations were typically oligotrophic (deficient in nutrients).

  4. Estimation of proliferative potentiality of central neurocytoma: correlational analysis of minimum ADC and maximum SUV with MIB-1 labeling index.

    PubMed

    Sakamoto, Ryo; Okada, Tomohisa; Kanagaki, Mitsunori; Yamamoto, Akira; Fushimi, Yasutaka; Kakigi, Takahide; Arakawa, Yoshiki; Takahashi, Jun C; Mikami, Yoshiki; Togashi, Kaori

    2015-01-01

Central neurocytoma was initially believed to be a benign tumor type, although atypical cases with more aggressive behavior have been reported. Preoperative estimation of the proliferative activity of central neurocytoma is one of the most important considerations for determining tumor management. To investigate the predictive value of image characteristics and quantitative measurements of minimum apparent diffusion coefficient (ADCmin) and maximum standardized uptake value (SUVmax) for the proliferative activity of central neurocytoma measured by the MIB-1 labeling index (LI). Twelve cases of central neurocytoma, including one recurrence, from January 2001 to December 2011 were included. Preoperative scans were conducted in 11, nine, and five patients for computed tomography (CT), diffusion-weighted imaging (DWI), and fluorine-18-fluorodeoxyglucose positron emission tomography (FDG-PET), respectively, and ADCmin and SUVmax of the tumors were measured. Image characteristics were investigated using CT, T2-weighted (T2W) imaging, and contrast-enhanced T1-weighted (T1W) imaging, and their differences were examined using Fisher's exact test between cases with MIB-1 LI below and above 2%, the cutoff separating typical from atypical central neurocytoma. Correlational analysis was conducted for ADCmin and SUVmax with MIB-1 LI. A P value <0.05 was considered significant. Morphological appearances showed large variety, and there was no significant correlation with MIB-1 LI except a tendency for strong enhancement to be observed in central neurocytomas with higher MIB-1 LI (P = 0.061). High linearity with MIB-1 LI was observed for ADCmin and SUVmax (r = -0.91 and 0.74, respectively), but only ADCmin was statistically significant (P = 0.0006). Central neurocytoma had a wide variety of image appearances, and assessment of proliferative potential was considered difficult by morphological aspects alone. ADCmin was recognized as a potential marker for differentiating atypical central neurocytomas from typical ones. © The Foundation Acta Radiologica 2014.
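The reported ADCmin result can be checked for plausibility with the usual significance test for a correlation coefficient: with n = 9 DWI cases and r = -0.91, t = r·sqrt(n-2)/sqrt(1-r²) on n-2 degrees of freedom.

```python
import math

def corr_t_stat(r, n):
    """t statistic for testing a Pearson correlation against zero."""
    return r * math.sqrt(n - 2) / math.sqrt(1.0 - r * r)

t = corr_t_stat(-0.91, 9)
print(f"t = {t:.2f} with 7 degrees of freedom")  # |t| ~ 5.8 -> p ~ 0.0006
```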

  5. In Search of Multi-Peaked Reflective Spectrum with Optic Fiber Bragg Grating Sensor for Dynamic Strain Measurement

    NASA Technical Reports Server (NTRS)

    Tai, Hsiang

    2006-01-01

In a typical optic fiber Bragg grating (FBG) strain measurement, unless in an ideal static laboratory environment, vibration or other disturbances are usually present, which often create spurious multiple peaks in the reflected spectrum, resulting in a non-unique determination of the strain value. In this report we attempt to investigate the origin of this phenomenon through physical arguments and simple numerical simulation. We postulate that the fiber gratings execute small-amplitude transverse vibrations, changing the optical path traversed by the reflected light slightly and non-uniformly. Ultimately, this causes the multi-peak reflected spectrum.
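For background, the standard FBG relations (textbook material, not this report's model) are the Bragg condition λ_B = 2·n_eff·Λ and the strain response Δλ/λ_B = (1 - p_e)·ε. The photo-elastic coefficient below is a typical silica-fiber number, assumed for illustration.

```python
# Typical effective photo-elastic coefficient for silica fiber (assumed).
P_E = 0.22

def strain_from_shift(lambda_b_nm, shift_nm, p_e=P_E):
    """Axial strain inferred from a measured Bragg-wavelength shift."""
    return shift_nm / (lambda_b_nm * (1.0 - p_e))

eps = strain_from_shift(1550.0, 0.012)   # 12 pm shift at 1550 nm
print(f"strain = {eps * 1e6:.2f} microstrain")
```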

  6. Survey of A'LT asymmetries in semi-exclusive electron scattering on 4He and 12C

    NASA Astrophysics Data System (ADS)

    Protopopescu, D.; Hersman, F. W.; Holtrop, M.; Adams, G.; Ambrozewicz, P.; Anciant, E.; Anghinolfi, M.; Asavapibhop, B.; Asryan, G.; Audit, G.; Auger, T.; Avakian, H.; Bagdasaryan, H.; Ball, J. P.; Barrow, S.; Battaglieri, M.; Beard, K.; Bektasoglu, M.; Bellis, M.; Benmouna, N.; Berman, B. L.; Bertozzi, W.; Bianchi, N.; Biselli, A. S.; Boiarinov, S.; Bonner, B. E.; Bouchigny, S.; Bradford, R.; Branford, D.; Briscoe, W. J.; Brooks, W. K.; Burkert, V. D.; Butuceanu, C.; Calarco, J. R.; Carman, D. S.; Carnahan, B.; Cetina, C.; Chen, S.; Cole, P. L.; Coleman, A.; Cords, D.; Corvisiero, P.; Crabb, D.; Crannell, H.; Cummings, J. P.; Debruyne, D.; De Sanctis, E.; DeVita, R.; Degtyarenko, P. V.; Dennis, L.; Dharmawardane, K. V.; Dhuga, K. S.; Djalali, C.; Dodge, G. E.; Doughty, D.; Dragovitsch, P.; Dugger, M.; Dytman, S.; Dzyubak, O. P.; Egiyan, H.; Egiyan, K. S.; Elouadrhiri, L.; Empl, A.; Eugenio, P.; Fatemi, R.; Feuerbach, R. J.; Forest, T. A.; Funsten, H.; Gavalian, G.; Gilad, S.; Gilfoyle, G. P.; Giovanetti, K. L.; Girard, P.; Gordon, C. I. O.; Gothe, R. W.; Griffioen, K. A.; Guidal, M.; Guillo, M.; Guler, N.; Guo, L.; Gyurjyan, V.; Hadjidakis, C.; Hakobyan, R. S.; Hardie, J.; Heddle, D.; Hicks, K.; Hleiqawi, I.; Hu, J.; Hyde-Wright, C. E.; Ingram, W.; Ireland, D.; Ito, M. M.; Jenkins, D.; Joo, K.; Juengst, H. G.; Kelley, J. H.; Kellie, J. D.; Khandaker, M.; Kim, K. Y.; Kim, K.; Kim, W.; Klein, A.; Klein, F. J.; Klimenko, A. V.; Klusman, M.; Kossov, M.; Kramer, L. H.; Kuhn, S. E.; Kuhn, J.; Lachniet, J.; Laget, J. M.; Langheinrich, J.; Lawrence, D.; Lee, T.; Li, Ji; Livingston, K.; Lukashin, K.; Manak, J. J.; Marchand, C.; McAleer, S.; McLauchlan, S. T.; McNabb, J. W. C.; Mecking, B. A.; Melone, J. J.; Mestayer, M. D.; Meyer, C. A.; Mikhailov, K.; Minehart, R.; Mirazita, M.; Miskimen, R.; Morand, L.; Morrow, S. A.; Muccifora, V.; Mueller, J.; Mutchler, G. S.; Napolitano, J.; Nasseripour, R.; Nelson, S. O.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Niczyporuk, B. 
B.; Niyazov, R. A.; Nozar, M.; O'Rielly, G. V.; Osipenko, M.; Ostrovidov, A.; Park, K.; Pasyuk, E.; Peterson, G.; Philips, S. A.; Pivnyuk, N.; Pocanic, D.; Pogorelko, O.; Polli, E.; Pozdniakov, S.; Preedom, B. M.; Price, J. W.; Prok, Y.; Qin, L. M.; Raue, B. A.; Riccardi, G.; Ricco, G.; Ripani, M.; Ritchie, B. G.; Ronchetti, F.; Rosner, G.; Rossi, P.; Rowntree, D.; Rubin, P. D.; Ryckebusch, J.; Sabatié, F.; Sabourov, K.; Salgado, C.; Santoro, J. P.; Sapunenko, V.; Schumacher, R. A.; Serov, V. S.; Sharabian, Y. G.; Shaw, J.; Simionatto, S.; Skabelin, A. V.; Smith, E. S.; Smith, L. C.; Sober, D. I.; Spraker, M.; Stavinsky, A.; Stepanyan, S.; Stokes, B. E.; Stoler, P.; Strauch, S.; Taiuti, M.; Taylor, S.; Tedeschi, D. J.; Thoma, U.; Thompson, R.; Tkabladze, A.; Todor, L.; Tur, C.; Ungaro, M.; Vineyard, M. F.; Vlassov, A. V.; Wang, K.; Weinstein, L. B.; Weller, H.; Weygand, D. P.; Whisnant, C. S.; Williams, M.; Wolin, E.; Wood, M. H.; Yegneswaran, A.; Yun, J.; Zana, L.; Zhang, B.; CLAS Collaboration

    2005-02-01

Single-spin azimuthal asymmetries A'LT were measured at Jefferson Lab using 2.2 and 4.4 GeV longitudinally polarised electrons incident on 4He and 12C targets in the CLAS detector. A'LT is related to the imaginary part of the longitudinal-transverse interference, and in quasifree nucleon knockout it provides an unambiguous signature for final state interactions (FSI). Experimental values of A'LT were found to be below 5%, typically |A'LT| ⩽ 3% for data with good statistical precision. Optical model in eikonal approximation (OMEA) and relativistic multiple-scattering Glauber approximation (RMSGA) calculations are shown to be consistent with the measured asymmetries.
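A beam-spin asymmetry of this kind is extracted from helicity-sorted counts as A = (N+ - N-)/(P_b·(N+ + N-)); the counts and beam polarization below are invented for illustration, not CLAS data.

```python
import math

def beam_spin_asymmetry(n_plus, n_minus, beam_pol=0.7):
    """Polarization-corrected asymmetry and its Poisson uncertainty."""
    total = n_plus + n_minus
    a = (n_plus - n_minus) / (beam_pol * total)
    da = 2.0 * math.sqrt(n_plus * n_minus / total ** 3) / beam_pol
    return a, da

a, da = beam_spin_asymmetry(10200, 9800)
print(f"A = {a:.3f} +/- {da:.3f}")   # a few-percent asymmetry
```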

  7. Optical emission of directly contacted copper/sapphire interface under shock compression of megabar

    NASA Astrophysics Data System (ADS)

    Hao, G. Y.; Liu, F. S.; Zhang, D. Y.; Zhang, M. J.

    2007-06-01

The shock-induced optical emission histories from a copper/sapphire interface were measured under two different contact conditions, which simulated the typical situations of pyrometry experiments. Results showed that the "peak" feature of the radiation, previously interpreted as the appearance of a so-called high-temperature layer, was nearly eliminated by fine polishing and uniform prepressing techniques, and that it is possible to directly measure the equilibrium temperature of a bulk metal/window interface. The study also demonstrated that the saturated value of the apparent temperature in the nonideal contact situation is related to the color temperature of the shock-induced "bright spot" in the sapphire window under megabar pressures.

  8. Shear waves in vegetal tissues at ultrasonic frequencies

    NASA Astrophysics Data System (ADS)

    Fariñas, M. D.; Sancho-Knapik, D.; Peguero-Pina, J. J.; Gil-Pelegrín, E.; Gómez Álvarez-Arenas, T. E.

    2013-03-01

Shear waves are investigated in leaves of two plant species using air-coupled ultrasound. Magnitude and phase spectra of the transmission coefficient around the first two orders of the thickness resonances (normal and oblique incidence) have been measured. A bilayer acoustic model for plant leaves (comprising the palisade parenchyma and the spongy mesophyll) is proposed to extract, from the measured spectra, properties of these tissues such as the velocity and attenuation of longitudinal and shear waves, and hence Young's modulus, the rigidity modulus, and Poisson's ratio. The elastic moduli values are typical of cellular solids, and both shear and longitudinal waves exhibit classical viscoelastic losses. The influence of leaf water content is also analyzed.
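The moduli follow from the wave speeds through the textbook isotropic-elasticity relations; the density and speeds below are assumed placeholders, not the leaf-tissue values measured in the paper.

```python
def elastic_moduli(rho, v_long, v_shear):
    """Return (Young, rigidity, Poisson) from density and bulk wave speeds."""
    g = rho * v_shear ** 2                      # rigidity (shear) modulus, Pa
    nu = (v_long ** 2 - 2.0 * v_shear ** 2) / (
        2.0 * (v_long ** 2 - v_shear ** 2))     # Poisson's ratio
    e = 2.0 * g * (1.0 + nu)                    # Young's modulus, Pa
    return e, g, nu

E, G, nu = elastic_moduli(rho=800.0, v_long=400.0, v_shear=150.0)
print(E, G, nu)
```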

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartman, E. Frederick; Zarick, Thomas Andrew; Sheridan, Timothy J.

We performed measurements and analyses of the prompt radiation-induced conductivity (RIC) in thin samples of polyurethane foam and glass microballoon foam at the Little Mountain Medusa LINAC facility in Ogden, UT. The RIC coefficient was non-linear with dose rate for polyurethane foam; however, typical values at a dose rate of 1E11 rad(Si)/s were measured as 0.8E-11 mho/m/rad/s for 5 lb/cu ft foam and 0.3E-11 mho/m/rad/s for 10 lb/cu ft density polyurethane foam. For encapsulated glass microballoons (GMB) the RIC coefficient was approximately 1E-15 mho/m/rad/s and was not a strong function of dose rate.
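As simple arithmetic on the reported numbers, taking sigma = k·(dose rate) at the quoted operating point for the 5 lb/cu ft foam (a simplification, since the coefficient is reported to be non-linear with dose rate):

```python
# Reported typical coefficient and operating dose rate (from the abstract).
k_ric = 0.8e-11          # mho/m per rad(Si)/s
dose_rate = 1.0e11       # rad(Si)/s

sigma = k_ric * dose_rate    # induced conductivity, S/m
print(f"sigma = {sigma:.2f} S/m at {dose_rate:.0e} rad(Si)/s")
```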

  10. Diagnosing pure-electron plasmas with internal particle flux probes.

    PubMed

    Kremer, J P; Pedersen, T Sunn; Marksteiner, Q; Lefrancois, R G; Hahn, M

    2007-01-01

Techniques for measuring the local plasma potential, density, and temperature of pure-electron plasmas using emissive and Langmuir probes are described. The plasma potential is measured as the least negative potential at which a hot tungsten filament emits electrons. Temperature is measured, as is commonly done in quasineutral plasmas, through the interpretation of a Langmuir probe current-voltage characteristic. Due to the lack of ion-saturation current, the density must also be measured through the interpretation of this characteristic, thereby greatly complicating the measurement. Measurements are further complicated by the low densities, low cross-field transport rates, and large flows typical of pure-electron plasmas. This article describes the use of these techniques on pure-electron plasmas in the Columbia Non-neutral Torus (CNT) stellarator. Measured values for present baseline experimental parameters in CNT are φp = -200 ± 2 V, Te = 4 ± 1 eV, and ne on the order of 10^12 m^-3 in the interior.
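The temperature inference can be sketched as follows: below the plasma potential the collected electron current follows I ∝ exp((V - φp)/Te) (Te in eV), so the log-linear slope of the I-V characteristic gives 1/Te. The data here are synthetic, seeded with the CNT baseline values quoted in the abstract.

```python
import numpy as np

TE_TRUE, PHI_P = 4.0, -200.0                  # eV, volts (quoted baseline)
v = np.linspace(PHI_P - 20.0, PHI_P, 50)      # bias sweep below phi_p
i = 1.0e-9 * np.exp((v - PHI_P) / TE_TRUE)    # synthetic collected current, A

slope = np.polyfit(v, np.log(i), 1)[0]        # d(ln I)/dV = 1/Te
te_est = 1.0 / slope
print(f"Te = {te_est:.2f} eV")
```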

  11. FAIR exempting separate T1 measurement (FAIREST): a novel technique for online quantitative perfusion imaging and multi-contrast fMRI.

    PubMed

    Lai, S; Wang, J; Jahng, G H

    2001-01-01

    A new pulse sequence, dubbed FAIR exempting separate T(1) measurement (FAIREST), in which a slice-selective saturation recovery acquisition is added to the standard FAIR (flow-sensitive alternating inversion recovery) scheme, was developed for quantitative perfusion imaging and multi-contrast fMRI. The technique allows for clean separation between, and thus simultaneous assessment of, BOLD and perfusion effects, while quantitative cerebral blood flow (CBF) and tissue T(1) values are monitored online. Online CBF maps were obtained using the FAIREST technique, and the measured CBF values were consistent with the off-line CBF maps obtained using the FAIR technique in combination with a separate sequence for T(1) measurement. Finger-tapping activation studies were carried out to demonstrate the applicability of the FAIREST technique in a typical fMRI setting for multi-contrast fMRI. The relative CBF and BOLD changes induced by finger-tapping were 75.1 ± 18.3% and 1.8 ± 0.4%, respectively, and the relative oxygen consumption rate change was 2.5 ± 7.7%. The results from correlating the T(1) maps with the activation images on a pixel-by-pixel basis show that the mean T(1) value of the CBF activation pixels is close to the T(1) of gray matter, while the mean T(1) value of the BOLD activation pixels is close to the T(1) range of blood and cerebrospinal fluid. Copyright 2001 John Wiley & Sons, Ltd.
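
    For readers unfamiliar with how CBF is quantified from an arterial spin labeling difference signal, the sketch below shows a commonly used simplified single-compartment FAIR expression. This is a generic literature formula with assumed parameter values (partition coefficient, blood T1, inversion time, inversion efficiency), not the FAIREST equations themselves.

```python
# Hedged sketch of a generic single-compartment FAIR perfusion model:
#   CBF ≈ lambda * dM / (2 * alpha * M0 * TI * exp(-TI / T1b))
# lambda: blood-brain partition coefficient (ml/g), TI: inversion time (s),
# T1b: blood T1 (s), alpha: inversion efficiency. All values illustrative.
import math

def fair_cbf(dM, M0, TI=1.4, T1b=1.5, lam=0.9, alpha=1.0):
    """Return CBF in ml/100 g/min from the control-label difference dM."""
    cbf_ml_per_g_per_s = lam * dM / (2 * alpha * M0 * TI * math.exp(-TI / T1b))
    return cbf_ml_per_g_per_s * 100 * 60  # convert to ml/100 g/min
```

    The model is linear in dM, so a 75% relative CBF change during activation corresponds directly to a 75% change in the difference signal.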

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin, E-mail: godyalin@163.com; Singh, Uttam, E-mail: uttamsingh@hri.res.in; Pati, Arun K., E-mail: akpati@hri.res.in

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states, invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with increasing dimension. As an important application, we establish the typicality of the relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computing these entanglement measures for this specific class of mixed states.
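
    The coherence measure discussed here can be computed directly for small examples. The sketch below uses the standard definition from the coherence literature, C_r(ρ) = S(ρ_diag) − S(ρ), where S is the von Neumann entropy and ρ_diag keeps only the diagonal of ρ in the fixed reference basis; it is a minimal illustration, not the paper's averaging calculation.

```python
# Minimal sketch: relative entropy of coherence C_r(rho) = S(diag(rho)) - S(rho).
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits; zero eigenvalues contribute nothing."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def relative_entropy_of_coherence(rho):
    return von_neumann_entropy(np.diag(np.diag(rho))) - von_neumann_entropy(rho)

# The maximally coherent qubit state |+><+| has C_r = 1:
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
print(relative_entropy_of_coherence(plus))  # ≈ 1.0
```

    A diagonal (incoherent) state, such as the maximally mixed state, gives C_r = 0, as expected.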

  13. A simple method for the measurement of reflective foil emissivity

    NASA Astrophysics Data System (ADS)

    Ballico, M. J.; van der Ham, E. W. M.

    2013-09-01

    Reflective metal foil is widely used to reduce radiative heat transfer within the roof space of buildings. Such foils are typically mass-produced by vapor-deposition of a thin metallic coating onto a variety of substrates, ranging from plastic-coated reinforced paper to "bubble-wrap". Although the emissivity of such surfaces is almost negligible in the thermal infrared, typically less than 0.03, an insufficiently thick metal coating, or organic contamination of the surface, can significantly increase this value. To ensure that the quality of the installed insulation is satisfactory, Australian building code AS/NZS 4201.5:1994 requires a practical agreed method for measurement of the emissivity, and the standard ASTM-E408 is implied. Unfortunately this standard is not a "primary method" and requires the use of specified expensive apparatus and calibrated reference materials. At NMIA we have developed a simple primary technique, based on an apparatus to thermally modulate the sample and record the apparent modulation in infra-red radiance with commercially available radiation thermometers. The method achieves an absolute accuracy in the emissivity of approximately 0.004 (k=2). This paper theoretically analyses the equivalence between the thermal emissivity measured in this manner, the effective thermal emissivity in application, and the apparent emissivity measured in accordance with ASTM-E408.
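
    The modulation principle can be pictured with a simplified sketch (an illustration only, not the NMIA procedure or its uncertainty analysis): if the sample temperature is modulated by a small amount, the apparent modulation in radiance seen by the radiation thermometer is roughly the emissivity times the modulation a blackbody at the same temperatures would show.

```python
# Simplified sketch of the thermal-modulation principle:
# epsilon ≈ (observed radiance modulation) / (blackbody radiance modulation).

def emissivity_from_modulation(dL_sample, dL_blackbody):
    """Estimate emissivity from modulated radiance amplitudes (same units)."""
    return dL_sample / dL_blackbody

print(emissivity_from_modulation(0.6, 20.0))  # 0.03, a typical foil value
```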

  14. A simple method for the measurement of reflective foil emissivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballico, M. J.; Ham, E. W. M. van der

    Reflective metal foil is widely used to reduce radiative heat transfer within the roof space of buildings. Such foils are typically mass-produced by vapor-deposition of a thin metallic coating onto a variety of substrates, ranging from plastic-coated reinforced paper to 'bubble-wrap'. Although the emissivity of such surfaces is almost negligible in the thermal infrared, typically less than 0.03, an insufficiently thick metal coating, or organic contamination of the surface, can significantly increase this value. To ensure that the quality of the installed insulation is satisfactory, Australian building code AS/NZS 4201.5:1994 requires a practical agreed method for measurement of the emissivity, and the standard ASTM-E408 is implied. Unfortunately this standard is not a 'primary method' and requires the use of specified expensive apparatus and calibrated reference materials. At NMIA we have developed a simple primary technique, based on an apparatus to thermally modulate the sample and record the apparent modulation in infra-red radiance with commercially available radiation thermometers. The method achieves an absolute accuracy in the emissivity of approximately 0.004 (k=2). This paper theoretically analyses the equivalence between the thermal emissivity measured in this manner, the effective thermal emissivity in application, and the apparent emissivity measured in accordance with ASTM-E408.

  15. The origin of blue-green window and the propagation of radiation in ocean waters

    NASA Astrophysics Data System (ADS)

    Reghunath, A. T.; Venkataramanan, V.; Suviseshamuthu, D. Victor; Krishnamohan, R.; Prasad, B. Raghavendra

    1991-01-01

    A review of the present knowledge about the origin of the blue-green window in the attenuation spectrum of ocean waters is presented. The various physical mechanisms which contribute to the formation of the window are dealt with separately and discussed. The typical values of the attenuation coefficient arising from the various processes are compiled to obtain the total beam attenuation coefficient. These values are then compared with measured values of the attenuation coefficient for ocean waters collected from the Arabian Sea and the Bay of Bengal. The region of minimum attenuation in pure particle-free sea water is found to be at 450 to 500 nm. It is shown that in the presence of suspended particles and chlorophyll, the window shifts to the longer-wavelength side. Some suggestions for future work in this area are also given in the concluding section.

  16. Light backscattering efficiency and related properties of some phytoplankters

    NASA Astrophysics Data System (ADS)

    Ahn, Yu-Hwan; Bricaud, Annick; Morel, André

    1992-11-01

    By using a set-up that combines an integrating sphere with a spectroradiometer LI-1800 UW, the backscattering properties of nine different phytoplankters grown in culture have been determined experimentally over the wavelength domain λ = 400 up to 850 nm. Simultaneously, the absorption and attenuation properties, as well as the size distribution function, have been measured. This set of measurements allowed the spectral values of the refractive index, and subsequently the volume scattering functions (VSF) of the cells, to be derived, by operating a scattering model previously developed for spherical and homogeneous cells. The backscattering properties, measured within a restricted angular domain (approximately between 132 and 174°), have been compared to theoretical predictions. Although there appear some discrepancies between experimental and predicted values (probably due to experimental errors as well as deviations of actual cells from computational hypotheses), the overall agreement is good; in particular the observed interspecific variations of backscattering values, as well as the backscattering spectral variation typical of each species, are well accounted for by theory. Using the computed VSF, the measured backscattering properties can be converted (assuming spherical and homogeneous cells) into efficiency factors for backscattering (Q̄_bb). The spectral behavior of Q̄_bb appears to be radically different from that for total scattering, Q̄_b. For small cells, Q̄_bb(λ) is practically constant over the spectrum, whereas Q̄_b(λ) varies approximately according to a power law (λ^-2). As the cell size increases, Q̄_bb, conversely, becomes increasingly featured, whilst Q̄_b becomes spectrally flat. The chlorophyll-specific backscattering coefficients (b_b*) appear highly variable and span nearly two orders of magnitude. The chlorophyll-specific absorption and scattering coefficients, a* and b*, are mainly ruled by the interspecific variations in cell size (D) and intracellular pigment concentration (Ci) (actually by the variations of the product DCi). Though b_b* is involved in the modelling of the diffuse reflectance of waters, the impact of its actual variation is greatly limited because typical b_b* values, even at their maximum (10^-3 m^2 mg^-1), are very low. This result confirms that living algae have a negligible influence on the backscattering process by oceanic waters; other particles (bacteria, detritus, etc.) associated with algae are mainly responsible for this process.
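
    The conversion from a backscattering coefficient to an efficiency factor follows the standard optics convention (assuming, as the paper's model does, monodisperse spherical cells): the coefficient divided by the total geometric cross-section of cells per unit volume. The function below is a minimal sketch of that relation with illustrative units, not the paper's polydisperse computation.

```python
# Sketch of the standard coefficient-to-efficiency conversion:
#   Q_bb = b_b / (N * pi * (D/2)**2)
# b_b: backscattering coefficient (1/m), N: cell concentration (cells/m^3),
# D: cell diameter (m). Assumes monodisperse spherical cells.
import math

def backscattering_efficiency(b_b, N, D):
    """Return the dimensionless backscattering efficiency factor Q_bb."""
    return b_b / (N * math.pi * (D / 2) ** 2)
```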

  17. Data collection handbook to support modeling the impacts of radioactive material in soil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, C.; Cheng, J.J.; Jones, L.G.

    1993-04-01

    A pathway analysis computer code called RESRAD has been developed for implementing US Department of Energy Residual Radioactive Material Guidelines. Hydrogeological, meteorological, geochemical, geometrical (size, area, depth), and material-related (soil, concrete) parameters are used in the RESRAD code. This handbook discusses parameter definitions, typical ranges, variations, measurement methodologies, and input screen locations. Although this handbook was developed primarily to support the application of RESRAD, the discussions and values are valid for other model applications.

  18. Production of a large, quiescent, magnetized plasma

    NASA Technical Reports Server (NTRS)

    Landt, D. L.; Ajmera, R. C.

    1976-01-01

    An experimental device is described which produces a large homogeneous quiescent magnetized plasma. In this device, the plasma is created in an evacuated brass cylinder by ionizing collisions between electrons emitted from a large-diameter electron gun and argon atoms in the chamber. Typical experimentally measured values of the electron temperature and density are presented which were obtained with a glass-insulated planar Langmuir probe. It is noted that the present device facilitates the study of phenomena such as waves and diffusion in magnetized plasmas.

  19. Indoor radon survey in Visegrad countries.

    PubMed

    Műllerová, Monika; Kozak, Krzysztof; Kovács, Tibor; Smetanová, Iveta; Csordás, Anita; Grzadziel, Dominik; Holý, Karol; Mazur, Jadwiga; Moravcsík, Attila; Neznal, Martin; Neznal, Matej

    2016-04-01

    The indoor radon measurements were carried out in 123 residential buildings and 33 schools in the Visegrad countries (Slovakia, Hungary and Poland). In 13.2% of rooms the radon concentration exceeded 300 Bq m^-3, the reference value recommended in the Council Directive 2013/59/EURATOM. Indoor radon in houses shows the typical radon behavior, with a minimum in the summer and a maximum in the winter season, whereas in 32% of schools the maximum indoor radon was reached in the summer months. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. The NASA Experience in Aeronautical R&D: Three Case Studies with Analysis

    DTIC Science & Technology

    1989-03-01

    A noise exposure calculation typically starts with flyover values for a given ground station (measured in EPNdB), adds a penalty of 10 dB for events...ft sideline station for a 150,000-pound aircraft. Extrapolating this to the standard FAR-36 sideline station implies 82.5 EPNdB. QCGAT. In 1975...(SAM) developed at Lewis and tested in their Plumbrook hypersonic test facility. The SAM was regeneratively cooled with liquid hydrogen and was tested

  1. International Workshop on Millimeter Waves (1992) Held in Orvieto, Italy on April 22-24, 1992

    DTIC Science & Technology

    1992-04-24

    … designed for the sky radiation measurement. … consideration of typical flight altitudes of 300 m … Its output delivers a mean-value of the relevant … Electrolytic Processes: Anodic Etching and Cathodic metal deposition; increase of Surface-to-Volume Ratio … no damage

  2. Comparison of non-invasive MRI measurements of cerebral blood flow in a large multisite cohort.

    PubMed

    Dolui, Sudipto; Wang, Ze; Wang, Danny Jj; Mattay, Raghav; Finkel, Mack; Elliott, Mark; Desiderio, Lisa; Inglis, Ben; Mueller, Bryon; Stafford, Randall B; Launer, Lenore J; Jacobs, David R; Bryan, R Nick; Detre, John A

    2016-07-01

    Arterial spin labeling and phase contrast magnetic resonance imaging provide independent non-invasive methods for measuring cerebral blood flow. We compared global cerebral blood flow measurements obtained using pseudo-continuous arterial spin labeling and phase contrast in 436 middle-aged subjects acquired at two sites in the NHLBI CARDIA multisite study. Cerebral blood flow measured by phase contrast (CBF_PC: 55.76 ± 12.05 ml/100 g/min) was systematically higher (p < 0.001) and more variable than cerebral blood flow measured by pseudo-continuous arterial spin labeling (CBF_PCASL: 47.70 ± 9.75 ml/100 g/min). The correlation between global cerebral blood flow values obtained from the two modalities was 0.59 (p < 0.001), explaining less than half of the observed variance in cerebral blood flow estimates. Well-established correlations of global cerebral blood flow with age and sex were similarly observed in both CBF_PCASL and CBF_PC. CBF_PC also demonstrated statistically significant site differences, whereas no such differences were observed in CBF_PCASL. No consistent velocity-dependent effects on pseudo-continuous arterial spin labeling were observed, suggesting that pseudo-continuous labeling efficiency does not vary substantially across typical adult carotid and vertebral velocities, as has previously been suggested. Although CBF_PCASL and CBF_PC values show substantial similarity across the entire cohort, these data do not support calibration of CBF_PCASL using CBF_PC in individual subjects. The wide-ranging cerebral blood flow values obtained by both methods suggest that cerebral blood flow values are highly variable in the general population. © The Author(s) 2016.

  3. Activation energy of extracellular enzymes in soils from different biomes.

    PubMed

    Steinweg, J Megan; Jagadamma, Sindhu; Frerichs, Joshua; Mayes, Melanie A

    2013-01-01

    Enzyme dynamics are being incorporated into soil carbon cycling models, and accurate representation of enzyme kinetics is an important step in predicting belowground nutrient dynamics. Few studies have measured activation energy (Ea) in soils, and fewer still have measured Ea in arctic and tropical soils, or in subsurface soils. We determined the Ea for four typical lignocellulose-degrading enzymes in the A and B horizons of seven soils covering six different soil orders. We also elucidated which soil properties predicted any measurable differences in Ea. β-glucosidase, cellobiohydrolase, phenol oxidase and peroxidase activities were measured at five temperatures: 4, 21, 30, 40, and 60°C. Ea was calculated using the Arrhenius equation. β-glucosidase and cellobiohydrolase Ea values for both A and B horizons in this study were similar to previously reported values; however, we could not make a direct comparison for B horizon soils because of the lack of data. There was no consistent relationship between hydrolase enzyme Ea and the environmental variables we measured. Phenol oxidase was the only enzyme that had a consistent positive relationship between Ea and pH in both horizons. The Ea in the arctic and subarctic zones for peroxidase was lower than the hydrolase and phenol oxidase values, indicating peroxidase may be a rate-limited enzyme in environments under warming conditions. By including these six soil types we have increased the number of soil oxidative enzyme Ea values reported in the literature by 50%. This study is a step towards better quantifying enzyme kinetics in different climate zones.
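
    The Arrhenius analysis named above follows from ln(k) = ln(A) − Ea/(R·T): plotting ln(rate) against 1/T gives a line whose slope is −Ea/R. The sketch below applies that fit to synthetic rates generated at the abstract's five assay temperatures with an assumed Ea of 40 kJ/mol; it illustrates the method, not the paper's data.

```python
# Sketch of the Arrhenius fit: Ea from the slope of ln(rate) vs 1/T.
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(temps_c, rates):
    """Temperatures in Celsius, rates in any consistent unit; Ea in kJ/mol."""
    inv_T = 1.0 / (np.asarray(temps_c) + 273.15)
    slope, _ = np.polyfit(inv_T, np.log(rates), 1)
    return -slope * R / 1000.0

# Synthetic rates generated with Ea = 40 kJ/mol (an assumed value):
T = np.array([4.0, 21.0, 30.0, 40.0, 60.0])  # assay temperatures, Celsius
k = np.exp(-40000.0 / (R * (T + 273.15)))
print(activation_energy(T, k))  # ≈ 40.0
```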

  4. Water quality characterization and mathematical modeling of dissolved oxygen in the East and West Ponds, Jamaica Bay Wildlife Refuge.

    PubMed

    Maillacheruvu, Krishnanand; Roy, D; Tanacredi, J

    2003-09-01

    The current study was undertaken to characterize the East and West Ponds and develop a mathematical model of the effects of nutrient and BOD loading on dissolved oxygen (DO) concentrations in these ponds. The model predicted that both ponds will recover adequately given the average expected range of nutrient and BOD loading due to waste from surface runoff and migratory birds. The predicted dissolved oxygen levels in both ponds were greater than 5.0 mg/L, and were supported by DO levels in the field, which were typically above 5.0 mg/L during the period of this study. The model predicted a steady-state NBOD concentration of 12.0-14.0 mg/L in the East Pond, compared to an average measured value of 3.73 mg/L in 1994 and an average measured value of 12.51 mg/L in a 1996-97 study. The model predicted that the NBOD concentration in the West Pond would be under 3.0 mg/L, compared to average measured values of 7.50 mg/L in 1997 and 8.51 mg/L in 1994. The model predicted that the phosphorus (as PO4^3-) concentration in the East Pond will approach 4.2 mg/L in 4 months, compared to a measured average value of 2.01 mg/L in a 1994 study. The model predicted that the phosphorus concentration in the West Pond will approach 1.00 mg/L, compared to a measured average phosphorus (as PO4^3-) concentration of 1.57 mg/L in a 1994 study.
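
    The relationship between BOD loading and a DO sag is often introduced through the classic Streeter-Phelps deficit equation. The sketch below is that generic textbook model with assumed illustrative rate constants, not the authors' pond model, which included nutrient dynamics as well.

```python
# Classic Streeter-Phelps oxygen-sag sketch (generic textbook model):
#   D(t) = kd*L0/(ka - kd) * (exp(-kd*t) - exp(-ka*t)) + D0*exp(-ka*t)
# kd: deoxygenation rate, ka: reaeration rate (1/day) -- assumed values.
import math

def do_deficit(t, L0, D0=0.0, kd=0.23, ka=0.40):
    """Return the DO deficit (mg/L) at time t (days) for a BOD load L0 (mg/L)."""
    return kd * L0 / (ka - kd) * (math.exp(-kd * t) - math.exp(-ka * t)) \
        + D0 * math.exp(-ka * t)

# With a saturation DO of about 9 mg/L, DO stays above 5 mg/L as long as
# the deficit never exceeds about 4 mg/L.
```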

  5. Physical Activity in Vietnam: Estimates and Measurement Issues.

    PubMed

    Bui, Tan Van; Blizzard, Christopher Leigh; Luong, Khue Ngoc; Truong, Ngoc Le Van; Tran, Bao Quoc; Otahal, Petr; Srikanth, Velandai; Nelson, Mark Raymond; Au, Thuy Bich; Ha, Son Thai; Phung, Hai Ngoc; Tran, Mai Hoang; Callisaya, Michele; Gall, Seana

    2015-01-01

    Our aims were to provide the first national estimates of physical activity (PA) for Vietnam, and to investigate issues affecting their accuracy. Measurements were made using the Global Physical Activity Questionnaire (GPAQ) on a nationally-representative sample of 14706 participants (46.5% males, response 64.1%) aged 25-64 years selected by multi-stage stratified cluster sampling. Approximately 20% of Vietnamese people had no measurable PA during a typical week, but 72.9% (men) and 69.1% (women) met WHO recommendations for PA by adults for their age. On average, 52.0 (men) and 28.0 (women) Metabolic Equivalent Task (MET)-hours/week (largely from work activities) were reported. Work and total PA were higher in rural areas and varied by season. Less than 2% of respondents provided incomplete information, but an additional one-in-six provided unrealistically high values of PA. Those responsible for reporting errors included persons from rural areas and those with unstable work patterns. Box-Cox transformation (with an appropriate constant added) was the most successful method of reducing the influence of large values, but energy-scaled values were most strongly associated with pathophysiological outcomes. Around seven-in-ten Vietnamese people aged 25-64 years met WHO recommendations for total PA, which was mainly from work activities and higher in rural areas. Nearly all respondents were able to report their activity using the GPAQ, but with some exaggerated values and seasonal variation in reporting. Data transformation provided plausible summary values, but energy-scaling fared best in association analyses.
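
    The transformation step described above can be sketched as follows. The Box-Cox transform requires positive inputs, so a constant is added first to accommodate zero-activity respondents; the constant, the λ value, and the data below are all assumed for illustration (the paper's exact choices are not given here).

```python
# Sketch of a Box-Cox transform after adding a constant so zeros are allowed.
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform; x must be positive. lam = 0 gives log(x)."""
    x = np.asarray(x, dtype=float)
    if lam == 0:
        return np.log(x)
    return (x ** lam - 1.0) / lam

met_hours = np.array([0.0, 4.0, 12.0, 28.0, 52.0, 180.0, 400.0])  # illustrative
shifted = met_hours + 1.0   # constant added so zero activity can be transformed
y = boxcox(shifted, 0.25)   # lambda = 0.25 is an assumed illustrative value
```

    The transform compresses the right tail, which is how it reduces the influence of the unrealistically high reported values.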

  6. Relaxation and turbulence effects on sonic boom signatures

    NASA Technical Reports Server (NTRS)

    Pierce, Allan D.; Sparrow, Victor W.

    1992-01-01

    The rudimentary theory of sonic booms predicts that the pressure signatures received at the ground begin with an abrupt shock, such that the rise in overpressure is nearly instantaneous. This discontinuity actually has some structure, and a finite time is required for the waveform to reach its peak value. This portion of the waveform is here termed the rise phase, and it is with this portion that this presentation is primarily concerned. Any time characterizing the duration of the rise phase is loosely called the 'rise time.' Various definitions are used in the literature for this rise time. In the present discussion the rise time can be taken as the time for the waveform to rise from 10 percent of its peak value to 90 percent of its peak value. The available data on sonic booms that appear in the open literature suggest that typical values of shock overpressure lie in the range of 30 Pa to 200 Pa, typical values of shock duration lie in the range of 150 ms to 250 ms, and typical values of the rise time lie in the range of 1 ms to 5 ms. The understanding of the rise phase of sonic booms is important because the perceived loudness of a shock depends primarily on the structure of the rise phase. A longer rise time typically implies a less loud shock. A primary question is just what physical mechanisms are most important for the determination of the detailed structure of the rise phase.
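
    The 10%-90% definition given above is straightforward to apply to a sampled waveform. The sketch below uses a synthetic linear ramp rather than a measured boom signature, purely to illustrate the definition.

```python
# Sketch of the 10%-90% rise-time definition applied to a sampled waveform.
import numpy as np

def rise_time(t, p):
    """Time from the first crossing of 10% of peak to the first crossing
    of 90% of peak."""
    p = np.asarray(p, dtype=float)
    peak = p.max()
    i10 = np.argmax(p >= 0.1 * peak)   # index of first sample above 10%
    i90 = np.argmax(p >= 0.9 * peak)   # index of first sample above 90%
    return t[i90] - t[i10]

t = np.linspace(0.0, 10.0, 1001)          # time in milliseconds (synthetic)
p = np.clip(t / 5.0, 0.0, 1.0) * 100.0    # linear rise to 100 Pa over 5 ms
print(rise_time(t, p))  # ≈ 4.0 ms for this linear 5 ms ramp
```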

  7. A New Electromagnetic Instrument for Thickness Gauging of Conductive Materials

    NASA Technical Reports Server (NTRS)

    Fulton, J. P.; Wincheski, B.; Nath, S.; Reilly, J.; Namkung, M.

    1994-01-01

    Eddy current techniques are widely used to measure the thickness of electrically conducting materials. The approach, however, requires an extensive set of calibration standards and can be quite time-consuming to set up and perform. Recently, an electromagnetic sensor was developed which eliminates the need for impedance measurements. The ability to monitor the magnitude of a voltage output independent of the phase enables the use of extremely simple instrumentation. Using this new sensor, a portable hand-held instrument was developed. The device makes single-point measurements of the thickness of nonferromagnetic conductive materials. The technique utilized by this instrument requires calibration with two samples of known thickness that are representative of the upper and lower thickness values to be measured. The accuracy of the instrument depends upon the calibration range, with a larger range giving a larger error. The measured thicknesses are typically accurate to within 2-3% of the calibration range (the difference between the thin and thick samples). In this paper the design, operational, and performance characteristics of the instrument, along with a detailed description of the thickness gauging algorithm used in the device, are presented.
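
    The two-sample calibration described above can be sketched as a simple two-point mapping from sensor output to thickness. The linear interpolation and the numbers below are assumptions for illustration; the instrument's actual transfer curve and gauging algorithm are described in the paper itself.

```python
# Sketch of a two-point calibration (assumed linear response between the
# thin and thick reference samples; illustrative values only).

def make_thickness_gauge(v_thin, t_thin, v_thick, t_thick):
    """Return a function mapping sensor voltage to thickness, built from
    two reference samples of known thickness."""
    slope = (t_thick - t_thin) / (v_thick - v_thin)
    return lambda v: t_thin + slope * (v - v_thin)

gauge = make_thickness_gauge(v_thin=1.2, t_thin=0.5, v_thick=3.0, t_thick=2.0)
print(gauge(2.1))  # ≈ 1.25: midpoint voltage maps halfway between the samples
```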

  8. Team knowledge representation: a network perspective.

    PubMed

    Espinosa, J Alberto; Clark, Mark A

    2014-03-01

    We propose a network perspective of team knowledge that offers both conceptual and methodological advantages, expanding explanatory value through representation and measurement of component structure and content. Team knowledge has typically been conceptualized and measured with relatively simple aggregates, without fully accounting for differing knowledge configurations among team members. Teams with similar aggregate values of team knowledge may have very different team dynamics depending on how knowledge isolates, cliques, and densities are distributed across the team; which members are the most knowledgeable; who shares knowledge with whom; and how knowledge clusters are distributed. We illustrate our proposed network approach through a sample of 57 teams, including how to compute, analyze, and visually represent team knowledge. Team knowledge network structures (isolation, centrality) are associated with outcomes of, respectively, task coordination, strategy coordination, and the proportion of team knowledge cliques, all after controlling for shared team knowledge. Network analysis helps to represent, measure, and understand the relationship of team knowledge to outcomes of interest to team researchers, members, and managers. Our approach complements existing team knowledge measures. Researchers and managers can apply network concepts and measures to help understand where team knowledge is held within a team and how this relational structure may influence team coordination, cohesion, and performance.

  9. Longitudinally Jointed Edge-wise Compression Honeycomb Composite Sandwich Coupon Testing and FE Analysis: Three Methods of Strain Measurement, and Comparison

    NASA Technical Reports Server (NTRS)

    Farrokh, Babak; AbdulRahim, Nur Aida; Segal, Ken; Fan, Terry; Jones, Justin; Hodges, Ken; Mashni, Noah; Garg, Naman; Sang, Alex; Gifford, Dawn

    2013-01-01

    Three means (i.e., typical foil strain gages, fiber optic sensors, and a digital image correlation (DIC) system) were implemented to measure strains on the back and front surfaces of a longitudinally jointed curved test article subjected to edge-wise compression testing at NASA Goddard Space Flight Center, according to ASTM C364. A pre-test finite element analysis (FEA) was conducted to assess the ultimate failure load and to predict the strain distribution pattern throughout the test coupon. The predicted strain contours were then utilized as guidelines for installing the strain measurement instrumentation. The strain gages and fiber optic sensors were bonded on the specimen at locations with nearly the same strain values, as close as possible to each other, so that comparisons between the strains measured by the strain gages, the fiber optic sensors, and the DIC system are justified. The test article was loaded to failure (at approximately 38 kips), at a strain value of approximately 10,000 με. As part of this study, the validity of the strains measured by the fiber optic sensors is examined against the strain gage and DIC data, and compared with the FEA predictions.

  10. Measurements of Enthalpy of Sublimation of Ne, N2, O2, Ar, CO2, Kr, Xe, and H2O using a Double Paddle Oscillator.

    PubMed

    Shakeel, Hamza; Wei, Haoyan; Pomeroy, Joshua M

    2018-03-01

    We report precise experimental values of the enthalpy of sublimation (ΔHs) of quenched condensed films of neon (Ne), nitrogen (N2), oxygen (O2), argon (Ar), carbon dioxide (CO2), krypton (Kr), xenon (Xe), and water (H2O) vapor using a single consistent measurement platform. The experiments are performed well below the triple point temperature of each gas and fall in the temperature range where existing experimental data are very limited. A 6 cm^2, 400 µm thick double paddle oscillator (DPO) with a high quality factor (Q ≈ 4 × 10^5 at 298 K) and high frequency stability (33 parts per billion) is utilized for the measurements. The enthalpies of sublimation are derived by measuring the rate of mass loss during temperature programmed desorption. The mass change is detected through the change in the resonance frequency of the self-tracking oscillator. Our measurements typically remain within 10% of the available literature, theory, and National Institute of Standards and Technology (NIST) Web Thermo Tables (WTT) values, but are performed using an internally consistent method across different gases.

  11. Simulating the production and dispersion of environmental pollutants in aerosol phase in an urban area of great historical and cultural value.

    PubMed

    Librando, Vito; Tringali, Giuseppe; Calastrini, Francesca; Gualtieri, Giovanni

    2009-11-01

    Mathematical models were developed to simulate the production and dispersion of aerosol-phase atmospheric pollutants, which are the main cause of the deterioration of monuments of great historical and cultural value. This work focuses on Particulate Matter (PM), considered the primary cause of monument darkening. Road traffic is the greatest contributor to PM in urban areas. Specific emission and dispersion models were used to study typical urban configurations. The area selected for this study was the city of Florence, a suitable test bench considering the magnitude of its architectural heritage together with the remarkable effect of PM pollution from road traffic. The COPERT model, to calculate emissions, and the street canyon model coupled with the CALINE model, to simulate pollutant dispersion, were used. The PM concentrations estimated by the models were compared to actual PM concentration measurements, as well as related to the trend of some meteorological variables. The results obtained may be considered encouraging even though the models correlated poorly: the estimated daily-average concentration trends moderately reproduce the trends of the measured values.

  12. Variation in biogeochemical parameters across intertidal seagrass meadows in the central Great Barrier Reef region.

    PubMed

    Mellors, Jane; Waycott, Michelle; Marsh, Helene

    2005-01-01

    This survey provides baseline information on sediment characteristics, porewater, adsorbed and plant tissue nutrients from intertidal coastal seagrass meadows in the central region of the Great Barrier Reef World Heritage Area. Data collected from 11 locations, representative of intertidal coastal seagrass beds across the region, indicated that the chemical environment was typical of other tropical intertidal areas. Results using two different extraction methods highlight the need for caution when choosing an adsorbed phosphate extraction technique, as sediment type affects the analytical outcome. Comparison with published values indicates that the range of nutrient parameters measured is equivalent to those measured across tropical systems globally. However, the nutrient values in seagrass leaves and their molar ratios for Halophila ovalis and Halodule uninervis were much higher than the values from the literature from this and other regions, obtained using the same techniques, suggesting that these species act as nutrient sponges, in contrast with Zostera capricorni. The limited historical data from this region suggest that the nitrogen and phosphorus content of seagrass leaves has increased since the 1970s concomitant with changing land use practice.

  13. System for characterizing semiconductor materials and photovoltaic devices through calibration

    DOEpatents

    Sopori, Bhushan L.; Allen, Larry C.; Marshall, Craig; Murphy, Robert C.; Marshall, Todd

    1998-01-01

    A method and apparatus for measuring characteristics of a piece of material, typically semiconductor materials including photovoltaic devices. The characteristics may include dislocation defect density, grain boundaries, reflectance, external LBIC, internal LBIC, and minority carrier diffusion length. The apparatus includes a light source, an integrating sphere, and a detector communicating with a computer. The measurement or calculation of the characteristics is calibrated to provide accurate, absolute values. The calibration is performed by substituting a standard sample for the piece of material, the sample having a known quantity of one or more of the relevant characteristics. The quantity of the relevant characteristic measured by the system is compared to the known quantity, and a calibration constant is thereby created.
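
    The calibration step described above reduces to a ratio between the known and measured values of the standard sample. A minimal sketch of that idea; the function names and numbers are invented for illustration, not taken from the patent:

    ```python
    # Hypothetical illustration of deriving and applying a calibration
    # constant from a standard sample with a known characteristic value.
    def calibration_constant(known, measured):
        """Ratio that converts raw system readings into absolute values."""
        return known / measured

    def calibrated(raw, k):
        """Apply the calibration constant to a raw reading."""
        return raw * k

    # e.g. a reflectance standard: known value 0.35, system reads 0.31
    k = calibration_constant(known=0.35, measured=0.31)
    print(round(calibrated(0.28, k), 4))
    ```

    The same constant would then be applied to every subsequent reading of that characteristic until the system is recalibrated.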

  14. System for characterizing semiconductor materials and photovoltaic devices through calibration

    DOEpatents

    Sopori, B.L.; Allen, L.C.; Marshall, C.; Murphy, R.C.; Marshall, T.

    1998-05-26

    A method and apparatus are disclosed for measuring characteristics of a piece of material, typically semiconductor materials including photovoltaic devices. The characteristics may include dislocation defect density, grain boundaries, reflectance, external LBIC, internal LBIC, and minority carrier diffusion length. The apparatus includes a light source, an integrating sphere, and a detector communicating with a computer. The measurement or calculation of the characteristics is calibrated to provide accurate, absolute values. The calibration is performed by substituting a standard sample for the piece of material, the sample having a known quantity of one or more of the relevant characteristics. The quantity of the relevant characteristic measured by the system is compared to the known quantity, and a calibration constant is thereby created. 44 figs.

  15. High precision isotope ratio measurements of mercury isotopes in cinnabar ores using multi-collector inductively coupled plasma mass spectrometry.

    PubMed

    Hintelmann, Holger; Lu, ShengYong

    2003-06-01

    Variations in Hg isotope ratios in cinnabar ores obtained from different countries were detected by high precision isotope ratio measurements using multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS). Values of δ198/202Hg varied from 0.0 to 1.3 per mil relative to a NIST SRM 1641d Hg solution. The typical external uncertainty of the delta values was 0.06 to 0.26 per mil. Hg was introduced into the plasma as elemental Hg after reduction by sodium borohydride. A significant fractionation of lead isotopes was observed during the simultaneous generation of lead hydride, preventing normalization of the Hg isotope ratios using the measured 208/206Pb ratio. Hg ratios were instead corrected employing the simultaneously measured 205/203Tl ratio. Using a 10 ng ml(-1) Hg solution and 10 min of sampling, introducing 60 ng of Hg, the internal precision of the isotope ratio measurements was as low as 14 ppm. Absolute Hg ratios deviated from the representative IUPAC values by approximately 0.2% per u. This observation is explained by the inadequacy of the exponential law to correct for mass bias in MC-ICP-MS measurements. In the absence of a precisely characterized Hg isotope ratio standard, we were not able to determine unambiguously the absolute Hg ratios of the ore samples, highlighting the urgent need for certified standard materials.
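
    The delta notation and the exponential mass-bias law referred to above can be sketched as follows. All numeric values below are invented for illustration; only the formulas (delta in per mil, and the exponential-law correction transferred from the Tl pair to an Hg pair) reflect the abstract:

    ```python
    import math

    def mass_bias_beta(r_true, r_meas, m_heavy, m_light):
        """Exponential-law mass-bias factor from a reference isotope pair
        (here Tl 205/203, measured alongside the Hg)."""
        return math.log(r_true / r_meas) / math.log(m_heavy / m_light)

    def correct_ratio(r_meas, m_heavy, m_light, beta):
        """Apply the exponential law to another isotope pair."""
        return r_meas * (m_heavy / m_light) ** beta

    def delta_permil(r_sample, r_standard):
        """Delta value in per mil relative to a standard solution."""
        return (r_sample / r_standard - 1.0) * 1000.0

    # Invented example numbers (not measurement data from the study)
    beta = mass_bias_beta(r_true=2.3871, r_meas=2.3950,
                          m_heavy=204.9744, m_light=202.9723)
    r_hg = correct_ratio(1.6870, m_heavy=201.9706, m_light=197.9668, beta=beta)
    print(round(delta_permil(r_hg, 1.6850), 2))
    ```

    The abstract's point is that this exponential law is itself imperfect, which is why the absolute ratios still deviated from the IUPAC values.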

  16. Body condition predicts energy stores in apex predatory sharks

    PubMed Central

    Gallagher, Austin J.; Wagner, Dominique N.; Irschick, Duncan J.; Hammerschlag, Neil

    2014-01-01

    Animal condition typically reflects the accumulation of energy stores (e.g. fatty acids), which can influence an individual's decision to undertake challenging life-history events, such as migration and reproduction. Accordingly, researchers often use measures of animal body size and/or weight as an index of condition. However, condition indices may not always accurately reflect an animal's physiological state, such as its fatty acid levels. While the relationships between condition indices and energy stores have been explored in some species (e.g. birds), they have yet to be examined in top predatory fishes, which often undertake extensive and energetically expensive migrations. We used an apex predatory shark (Galeocerdo cuvier, the tiger shark) as a model species to evaluate the relationship between triglycerides (an energy metabolite) and a metric of overall body condition. We captured, blood sampled, measured and released 28 sharks (size range 125–303 cm pre-caudal length). In the laboratory, we assayed each plasma sample for triglyceride values. We detected a positive and significant relationship between condition and triglyceride values (P < 0.02). This result may have conservation implications if the largest and highest-condition sharks are exploited in fisheries, because these individuals are likely to have the highest potential for successful reproduction. Our results suggest that researchers may use either plasma triglyceride values or an appropriate measure of body condition for assessing health in large sharks. PMID:27293643

  17. Development of a Field-Deployable Methane Carbon Isotope Analyzer

    NASA Astrophysics Data System (ADS)

    Dong, Feng; Baer, Douglas

    2010-05-01

    Methane is a potent greenhouse gas whose atmospheric surface mixing ratio has almost doubled relative to preindustrial values. Methane can be produced by biogenic processes, thermogenic processes or biomass burning, each with a different isotopic signature. As a key molecule in the radiative forcing of the atmosphere, methane is thus one of the most important molecules linking the biosphere and atmosphere. Precise measurements of mixing ratios and isotopic compositions will therefore help scientists better understand methane sources and sinks. To date, high precision isotope measurements have been performed almost exclusively with conventional isotope ratio mass spectrometry, which is labor intensive and not readily field deployable. Optical studies using infrared laser spectroscopy have also been reported for measuring isotopic ratios; however, the precision of optical analyses has so far been unsatisfactory without pre-concentration procedures. We present a characterization of the performance of a portable Methane Carbon Isotope Analyzer (MCIA), based on a cavity-enhanced laser absorption spectroscopy technique, that provides in-situ measurements of the carbon isotope ratio (13C/12C, or δ13C) and the methane (CH4) mixing ratio. The sample is introduced into the analyzer directly, with no pretreatment or preconcentration required. A typical precision of better than 1 per mil (<0.1%) with a 10-ppm methane sample can be achieved in a measurement time of less than 100 seconds. The MCIA can report carbon isotope ratio and concentration measurements over a very wide range of methane concentrations. Results of laboratory tests and field measurements will be presented.
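
    The δ13C value reported by such an analyzer is defined relative to the VPDB standard. A brief sketch; the VPDB ratio below is the commonly quoted value, and the biogenic/thermogenic cutoff is a rough assumption of ours, not a figure from the abstract:

    ```python
    # Assumed reference: commonly quoted 13C/12C of the VPDB standard.
    R_VPDB = 0.011180

    def delta13C(r_sample):
        """δ13C in per mil relative to VPDB."""
        return 1000.0 * (r_sample / R_VPDB - 1.0)

    def rough_source_guess(d13c):
        """Very coarse classification: biogenic methane is typically more
        13C-depleted than thermogenic methane (cutoff is an assumption)."""
        return "biogenic" if d13c < -55.0 else "thermogenic/other"

    d = delta13C(0.010500)  # invented sample ratio
    print(round(d, 1), rough_source_guess(d))
    ```

    A per-mil-level precision is what makes this kind of source attribution possible in the field.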

  18. Color measurement of plastics - From compounding via pelletizing, up to injection molding and extrusion

    NASA Astrophysics Data System (ADS)

    Botos, J.; Murail, N.; Heidemeyer, P.; Kretschmer, K.; Ulmer, B.; Zentgraf, T.; Bastian, M.; Hochrein, T.

    2014-05-01

    The typical offline color measurement on injection molded or pressed specimens is an expensive and time-consuming process. To optimize productivity and quality, it is desirable to measure color during production itself. Several systems have therefore been developed to monitor color during the process, e.g. on melts, strands, pellets, the extrudate or the injection molded part. Different kinds of inline, online and atline methods are compared, with their respective advantages and disadvantages. Criteria include the testing time, which ranges from real time to some minutes, the required calibration procedure, the spectral resolution and the final measuring precision; the latter ranges from 0.05 to 0.5 in the CIE L*a*b* system, depending on the particular measurement system. Because of the high temperatures in typical plastics processes, thermochromism of polymers and dyes has to be taken into account. This effect can shift color values by on the order of 10% and is so far poorly understood. Different methods suitable for compensating thermochromic effects during compounding or injection molding, using calibration curves or artificial neural networks, are presented. Furthermore, it is even possible to control the color during extrusion and compounding almost in real time. The goal is specially developed software for adjusting the color recipe automatically, with the final objective of closed-loop control.
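
    The quoted measuring precision of 0.05 to 0.5 refers to distances in CIE L*a*b* space. As a minimal sketch, the classic 1976 color difference ΔE*ab is simply the Euclidean distance between two L*a*b* triples; the sample colors below are invented:

    ```python
    import math

    def delta_e_ab(lab1, lab2):
        """CIE 1976 color difference ΔE*ab: Euclidean distance in L*a*b*."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

    melt = (52.0, 10.3, -4.1)    # hypothetical color measured on the melt
    pellet = (52.3, 10.1, -4.0)  # hypothetical atline value after cooling
    print(round(delta_e_ab(melt, pellet), 3))
    ```

    A thermochromic shift of a few ΔE units between melt and cooled part is exactly what the calibration-curve or neural-network compensation described above would have to remove.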

  19. Surface temperatures and glassy state investigations in tribology, part 1

    NASA Technical Reports Server (NTRS)

    Winer, W. O.; Sanborn, D. M.

    1978-01-01

    The research in this report is divided into two categories: (1) lubricant rheological behavior, and (2) thermal behavior of a simulated elastohydrodynamic (EHD) contact. The studies of lubricant rheological behavior consist of high-pressure, low-shear-rate viscosity measurements, viscoelastic transition measurements by volume dilatometry, dielectric transitions at atmospheric pressure, and light-scattering transitions. Lubricant shear stress-strain behavior in the amorphous glassy state was measured for several fluids. It appears clear from these investigations that many lubricants undergo viscoplastic transitions in typical EHD contacts and that the lubricant has a limiting maximum shear stress it can support, which in turn determines the traction in the contact except at very low slide-roll ratios. Surface temperature measurements were made for a naphthenic mineral oil and a polyphenyl ether. The maximum surface temperature in these experiments was approximately symmetrical about zero slide-roll ratio except for absolute values of slide-roll ratio greater than about 0.9. Additional surface temperature measurements were made in contacts with rough surfaces where the composite surface roughness was approximately equal to the EHD film thickness. A regression analysis was done to obtain a predictive equation for surface temperature as a function of pressure, sliding speed, and surface roughness. A correction factor for surface roughness effects to the typical flash temperature analysis was found.

  20. Electrical-transport properties and microwave device performance of sputtered TlCaBaCuO superconducting thin films

    NASA Technical Reports Server (NTRS)

    Subramanyam, G.; Kapoor, V. J.; Chorey, C. M.; Bhasin, K. B.

    1992-01-01

    The paper describes the processing and electrical transport measurements for achieving reproducible high-Tc and high-Jc sputtered TlCaBaCuO thin films on LaAlO3 substrates for microelectronic applications. The microwave properties of TlCaBaCuO thin films were investigated by designing, fabricating, and characterizing microstrip ring resonators with a fundamental resonance frequency of 12 GHz on 10-mil-thick LaAlO3 substrates. Typical unloaded quality factors for ring resonators with a 0.3-micron-thick superconducting ground plane and with a 1-micron-thick gold ground plane were above 1500 at 65 K. Typical values of the penetration depth at 0 K in the TlCaBaCuO thin films were between 7000 and 8000 Å.

  1. Application of Blue Laser Triangulation Sensors for Displacement Measurement Through Fire.

    PubMed

    Hoehler, Matthew S; Smith, Christopher M

    2016-11-01

    This paper explores the use of blue laser triangulation sensors to measure the displacement of a target located behind, or in close proximity to, natural gas diffusion flames. Such measurement is critical for providing high-quality data in structural fire tests. The position of the laser relative to the flame envelope can significantly affect the measurement scatter but has little influence on the mean values. We observe that the measurement scatter is normally distributed and increases linearly with the distance of the target from the flame along the beam path. Based on these observations, we demonstrate how time-averaging can be used to achieve a standard uncertainty associated with the displacement error of less than 0.1 mm, which is typically sufficient for structural fire testing applications. Measurements with the investigated blue laser sensors were not impeded by the thermal radiation emitted from the flame or by the soot generated from the relatively clean-burning natural gas.
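
    The time-averaging argument above rests on a standard result: for normally distributed scatter, the standard uncertainty of the mean falls as σ/√N. A small sketch with illustrative numbers (not the paper's data):

    ```python
    import random
    import statistics

    random.seed(1)
    sigma = 0.5       # mm, hypothetical per-sample scatter behind the flame
    n = 100           # number of samples averaged
    true_disp = 12.0  # mm, hypothetical target displacement

    # Simulated normally distributed readings of the target position
    samples = [random.gauss(true_disp, sigma) for _ in range(n)]
    mean = statistics.fmean(samples)

    # Standard uncertainty of the mean: sigma / sqrt(N)
    u_mean = sigma / n ** 0.5
    print(round(u_mean, 3))  # 0.05 mm, below the 0.1 mm target
    ```

    Averaging 100 readings thus brings a 0.5 mm per-sample scatter down to a 0.05 mm standard uncertainty of the mean, consistent with the sub-0.1 mm figure quoted above.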

  2. The theoretical ultimate magnetoelectric coefficients of magnetoelectric composites by optimization design

    NASA Astrophysics Data System (ADS)

    Wang, H.-L.; Liu, B.

    2014-03-01

    This paper investigates the largest achievable magnetoelectric (ME) coefficient of ME composites and how to realize it. From the standpoint of energy conservation, a theoretical analysis is carried out on an imaginary lever structure consisting of a magnetostrictive phase, a piezoelectric phase, and a rigid lever. This structure is a generalization of various composite layouts for optimizing the ME effect. The predicted theoretical ultimate ME coefficient plays a role similar to that of the ideal heat engine efficiency in thermodynamics, and is used to evaluate existing typical ME layouts, such as the parallel sandwiched layout and the serial layout. These two typical layouts exhibit ME coefficients much lower than the theoretical largest values because, in the general analysis, the stress amplification ratio and the volume ratio can be optimized independently and freely, whereas in typical layouts they are dependent or fixed. To overcome this shortcoming and achieve the theoretical largest ME coefficient, a new design is presented. In addition, it is found that the most commonly used electric field ME coefficient can be designed to be infinitely large. We doubt the validity of this coefficient as a reasonable index of the ME effect and consider three further ME coefficients, namely the electric charge ME coefficient, the voltage ME coefficient, and the static electric energy ME coefficient. We note that the theoretical ultimate value of the static electric energy ME coefficient is finite and might be a more proper measure of the ME effect.

  3. The theoretical ultimate magnetoelectric coefficients of magnetoelectric composites by optimization design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, H.-L.; Liu, B., E-mail: liubin@tsinghua.edu.cn

    2014-03-21

    This paper investigates the largest achievable magnetoelectric (ME) coefficient of ME composites and how to realize it. From the standpoint of energy conservation, a theoretical analysis is carried out on an imaginary lever structure consisting of a magnetostrictive phase, a piezoelectric phase, and a rigid lever. This structure is a generalization of various composite layouts for optimizing the ME effect. The predicted theoretical ultimate ME coefficient plays a role similar to that of the ideal heat engine efficiency in thermodynamics, and is used to evaluate existing typical ME layouts, such as the parallel sandwiched layout and the serial layout. These two typical layouts exhibit ME coefficients much lower than the theoretical largest values because, in the general analysis, the stress amplification ratio and the volume ratio can be optimized independently and freely, whereas in typical layouts they are dependent or fixed. To overcome this shortcoming and achieve the theoretical largest ME coefficient, a new design is presented. In addition, it is found that the most commonly used electric field ME coefficient can be designed to be infinitely large. We doubt the validity of this coefficient as a reasonable index of the ME effect and consider three further ME coefficients, namely the electric charge ME coefficient, the voltage ME coefficient, and the static electric energy ME coefficient. We note that the theoretical ultimate value of the static electric energy ME coefficient is finite and might be a more proper measure of the ME effect.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer; Clifton, Andrew; Bonin, Timothy

    As wind turbine sizes increase and wind energy expands to more complex and remote sites, remote-sensing devices such as lidars are expected to play a key role in wind resource assessment and power performance testing. The switch to remote-sensing devices represents a paradigm shift in the way the wind industry typically obtains and interprets measurement data for wind energy. For example, the measurement techniques and sources of uncertainty for a remote-sensing device are vastly different from those associated with a cup anemometer on a meteorological tower. Current IEC standards for quantifying remote-sensing device uncertainty for power performance testing consider uncertainty due to mounting, calibration, and classification of the remote-sensing device, among other parameters. Values of the uncertainty are typically given as a function of the mean wind speed measured by a reference device and are generally fixed, leading to climatic uncertainty values that apply to the entire measurement campaign. However, real-world experience and a consideration of the fundamentals of the measurement process have shown that lidar performance is highly dependent on atmospheric conditions, such as wind shear, turbulence, and aerosol content. At present, these conditions are not directly incorporated into the estimated uncertainty of a lidar device. In this presentation, we describe the development of a new dynamic lidar uncertainty framework that adapts to current flow conditions and more accurately represents the actual uncertainty inherent in lidar measurements under different conditions. In this new framework, sources of uncertainty are identified for estimation of the line-of-sight wind speed and reconstruction of the three-dimensional wind field. These sources are then related to physical processes caused by the atmosphere and lidar operating conditions. The framework is applied to lidar data from a field measurement site to assess the ability of the framework to predict errors in lidar-measured wind speed. The results show how uncertainty varies over time and can be used to help select data with different levels of uncertainty for different applications, for example, low-uncertainty data for power performance testing versus all data for plant performance monitoring.
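
    A common way to combine independent standard uncertainty components, in the spirit of the framework described above, is root-sum-square (quadrature) addition. The component names and magnitudes below are hypothetical; in a dynamic framework they would be functions of shear, turbulence, and aerosol content rather than fixed numbers:

    ```python
    import math

    def combined_uncertainty(components):
        """Root-sum-square of independent standard uncertainties (m/s)."""
        return math.sqrt(sum(u ** 2 for u in components.values()))

    # Hypothetical per-component standard uncertainties under two conditions
    calm = {"calibration": 0.05, "los_estimation": 0.03, "reconstruction": 0.04}
    turbulent = {"calibration": 0.05, "los_estimation": 0.10, "reconstruction": 0.12}

    print(round(combined_uncertainty(calm), 3),
          round(combined_uncertainty(turbulent), 3))
    ```

    The point of the dynamic framework is that the second dictionary, not the first, applies during turbulent periods, so a single campaign-wide uncertainty value understates the spread.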

  5. Optimizing skin protection with semipermeable gloves.

    PubMed

    Wulfhorst, Britta; Schwanitz, Hans Joachim; Bock, Meike

    2004-12-01

    Occlusion due to gloves is one important cause of glove irritation: macerated, softened skin gives poor protection against microbes and chemical injuries. The introduction of a breathable protective glove material would represent a significant step toward improved prevention of occupational skin disease. The performance of semipermeable and occlusive gloves was examined under conditions typical of the hairdressing profession. In two studies, tests comparing breathable semipermeable gloves with single-use gloves made of occlusive materials were conducted. In an initial study, a user survey was carried out in conjunction with bioengineering examinations: baseline values and values after glove wear were recorded by measuring transepidermal water loss (TEWL), skin humidity (SH), and skin surface hydrogen ion concentration (pH) in 20 healthy volunteers. In a second study, the gloves were tested for penetrability and permeability with three chemical compounds typically used in the hairdressing profession. The bioengineering examination objectively confirmed users' reports of reduced hand perspiration when semipermeable gloves were worn. The TEWL, SH, and skin surface pH values remained largely stable after 20 minutes of wearing semipermeable gloves, in contrast to the reactions observed with gloves of occlusive materials. Permeability tests indicated that the semipermeable material is effective, with some restrictions. Air leakage testing revealed that none of the 50 gloves tested was airtight. Following optimization of the manufacturing methods, additional tests of the penetrability of semipermeable gloves will be necessary.

  6. The presence of radioactive materials in soil, sand and sediment samples of Potenga sea beach area, Chittagong, Bangladesh: Geological characteristics and environmental implication

    NASA Astrophysics Data System (ADS)

    Yasmin, Sabina; Barua, Bijoy Sonker; Uddin Khandaker, Mayeen; Kamal, Masud; Abdur Rashid, Md.; Abdul Sani, S. F.; Ahmed, H.; Nikouravan, Bijan; Bradley, D. A.

    2018-03-01

    Accurate quantification of naturally occurring radioactive materials in soil provides information on geological characteristics, the potential for petroleum and mineral exploration, and radiation hazards to the local populace. Earth-surface media (soil, sand and sediment) collected from the densely populated coastal area of Chittagong city, Bangladesh, were analysed using a high-purity germanium γ-ray spectrometer in a low-background radiation environment. The mean activities of 226Ra (238U), 232Th and 40K in the studied materials are higher than the respective world averages of 33, 36 and 474 Bq/kg reported by UNSCEAR (2000). The deduced mass concentrations of the primordial radionuclides 238U, 232Th and 40K in the investigated samples correspond to granite rocks, crustal minerals and typical rocks, respectively. The estimated mean values of 232Th/238U for soil (3.98) and sediment (3.94) are in line with the continental crustal average of 3.82 for the typical rock range reported by the National Council on Radiation Protection and Measurements (NCRP). However, tonalites and more silicic rocks elevate the mean value of 232Th/238U in the sand samples to 4.69, indicating significant fractionation during weathering or associated metasomatic activity in the area where the sand was collected.

  7. Equilibrium 2H/1H fractionation in organic molecules: III. Cyclic ketones and hydrocarbons

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Sessions, Alex L.; Nielsen, Robert J.; Goddard, William A.

    2013-04-01

    Quantitative interpretation of stable hydrogen isotope ratios (2H/1H) in organic compounds is greatly aided by knowledge of the relevant equilibrium fractionation factors (ɛeq). Previous efforts have combined experimental measurements and hybrid Density Functional Theory (DFT) calculations to accurately predict equilibrium fractionations in linear (acyclic) organic molecules (Wang et al., 2009a,b), but the calibration produced by that study is not applicable to cyclic compounds. Here we report experimental measurements of equilibrium 2H/1H fractionation in six cyclic ketones, and use those data to evaluate DFT calculations of fractionation in diverse monocyclic and polycyclic compounds commonly found in sedimentary organic matter and petroleum. At 25, 50, and 75 °C, the experimentally measured ɛeq values for secondary and tertiary Hα in isotopic equilibrium with water are in the ranges of -130‰ to -150‰ and +10‰ to -40‰, respectively. Measured data are similar to DFT calculations of ɛeq for axial Hα but not equatorial Hα. In tertiary Cα positions with methyl substituents, this can be understood as a result of the methyl group forcing Hα atoms into a dominantly axial position. For secondary Cα positions containing both axial and equatorial Hα atoms, we propose that axial Hα exchanges with water significantly faster than equatorial Hα does, due to the hyperconjugation-stabilized transition state. Interconversion of axial and equatorial positions via ring flipping is much faster than isotopic exchange at either position, and as a result the steady-state isotopic composition of both H atoms is strongly weighted toward that of axial Hα. Based on comparison with measured ɛeq values, a total uncertainty of 10-30‰ remains for theoretical ɛeq values. Using DFT, we systematically estimated the ɛeq values for individual H positions in various cyclic structures. By summing over all individual H positions, the molecular equilibrium fractionation was estimated to be -75‰ to -95‰ for steroids, -90‰ to -105‰ for hopanoids, and -65‰ to -100‰ for typical cycloparaffins between 0 and 100 °C relative to water. These are distinct from the typical biosynthetic fractionations of -150‰ to -300‰, but are similar to equilibrium fractionations for linear hydrocarbons (Wang et al., 2009b). Thus post-burial H exchange will generally remove the ~50-100‰ biosynthetic fractionations between cyclic isoprenoid and n-alkyl lipid molecules, a contrast that can be used to evaluate the extent of H exchange in sedimentary organic matter and oils.
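
    The "summing over all individual H positions" step amounts, to a good approximation, to a hydrogen-abundance-weighted mean of the per-position fractionations. A sketch of that bookkeeping; the position breakdown below is hypothetical, not taken from the paper's tables:

    ```python
    def molecular_epsilon(positions):
        """Abundance-weighted mean of per-position fractionations.

        positions: list of (n_hydrogens, epsilon_permil) tuples.
        """
        total_h = sum(n for n, _ in positions)
        return sum(n * eps for n, eps in positions) / total_h

    # Hypothetical breakdown for a cyclohexanone-like molecule:
    # 4 exchange-active Halpha, 4 mid-ring H, 2 remote H (values invented)
    positions = [(4, -140.0), (4, -70.0), (2, -60.0)]
    print(round(molecular_epsilon(positions), 1))
    ```

    Weighting by the number of hydrogens at each position is what lets a few strongly fractionated Hα sites pull the whole-molecule value toward the -75‰ to -105‰ ranges quoted above.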

  8. Stability enhanced, repeatability improved Parylene-C passivated on QCM sensor for aPTT measurement.

    PubMed

    Yang, Yuchen; Zhang, Wei; Guo, Zhen; Zhang, Zhiqi; Zhu, Hongnan; Yan, Ruhong; Zhou, Lianqun

    2017-12-15

    Determination of blood clotting time is essential in monitoring therapeutic anticoagulants. In this work, a Parylene-C passivated quartz crystal microbalance (P-QCM) was developed for activated partial thromboplastin time (aPTT) measurement. Compared with a typical QCM, the P-QCM possessed a hydrophobic surface and a sensitive frequency response to viscoelastic variations on the electrode surface. Fibrin could be adsorbed effectively owing to the hydrophobicity of the P-QCM surface. Compared with the typical QCM, the peak-to-peak value (PPV) of the P-QCM was increased by 1.94% ± 0.63%, indicating an enhanced signal-to-noise ratio. For the P-QCM, the coefficients of variation (CV) of the frequency decrease and of the aPTT were 2.58% and 1.24%, respectively, demonstrating improved stability and reproducibility. Moreover, the coefficient of determination (R2) against the SYSMEX CS 2000i haematology analyzer was 0.983. In conclusion, the P-QCM shows potential for improving the stability, reproducibility and linearity of piezoelectric sensors and may be promising for point-of-care testing (POCT) applications. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Biological thresholds of nitrogen and phosphorus in a typical urban river system of the Yangtz delta, China.

    PubMed

    Liang, Xinqiang; Zhu, Sirui; Ye, Rongzhong; Guo, Ru; Zhu, Chunyan; Fu, Chaodong; Tian, Guangming; Chen, Yingxu

    2014-09-01

    River health and associated risks depend fundamentally on the levels of primary productivity, i.e., sestonic and benthic chlorophyll-a. We selected a typical urban river system of the Yangtze delta to investigate nutrient and non-nutrient responses of chlorophyll-a contents and to determine biological thresholds of N and P. The mean contents of sestonic and benthic chlorophyll-a across all sampling points reached 10.2 μg L(-1) and 149.3 mg m(-2), respectively. Self-organizing map analysis suggested that both chlorophyll-a contents clearly responded to measurements of N, P, and water temperature. Based on the chlorophyll-a criteria for fresh water and the measured variables, we recommend that the biological thresholds of N and P for this river system be set at 2.4 mg N L(-1) and 0.2 mg P L(-1), and that these be used as initial nutrient reference values for local river managers when implementing strategies to alleviate nutrient loads and trophic status. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Social behavior correlates of cortisol activity in child care: gender differences and time-of-day effects.

    PubMed

    Tout, K; de Haan, M; Campbell, E K; Gunnar, M R

    1998-10-01

    The relations between social behavior and daily patterns of stress-sensitive hormone production were examined in preschool children (N = 75) attending center-based child care. Three behavioral dimensions, shy/anxious/internalizing, angry/aggressive/externalizing, and social competence, were assessed by teacher report and classroom observation, and their relations with two measures of cortisol activity, median (or typical) levels and reactivity (the quartile-range score between the second and third quartile values), were explored. Cortisol-behavior relations differed by gender: significant associations were found for boys but not for girls. Specifically, for boys externalizing behavior was positively associated with cortisol reactivity, while internalizing behavior was negatively associated with median cortisol. Time of day of cortisol measurement affected the results. Surprisingly, median cortisol levels rose from morning to afternoon, a pattern opposite to the typical circadian rhythm of cortisol. This rise in cortisol over the day was positively correlated with internalizing behavior for boys. The methodological and theoretical implications of these findings for the study of the development of hormone-behavior relations are discussed.
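
    The two cortisol measures described above, the median (typical) level and a reactivity score taken as the range between the second and third quartile values, can be sketched as follows; the sample values are invented:

    ```python
    import statistics

    def cortisol_measures(values):
        """Median level and quartile-range reactivity (Q3 - Q2)."""
        q1, q2, q3 = statistics.quantiles(values, n=4)  # three quartile cuts
        return {"median": q2, "reactivity": q3 - q2}

    # Hypothetical repeated cortisol samples for one child (arbitrary units)
    week = [5.2, 6.1, 7.4, 8.0, 9.3, 10.8, 12.5]
    print(cortisol_measures(week))
    ```

    Using the Q3 minus Q2 span as "reactivity" captures upward excursions above the typical level while being robust to a single extreme sample.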

  11. Test Characteristics of Neck Fullness and Witnessed Neck Pulsations in the Diagnosis of Typical AV Nodal Reentrant Tachycardia

    PubMed Central

    Sakhuja, Rahul; Smith, Lisa M; Tseng, Zian H; Badhwar, Nitish; Lee, Byron K; Lee, Randall J; Scheinman, Melvin M; Olgin, Jeffrey E; Marcus, Gregory M

    2011-01-01

    Summary Background Claims in the medical literature suggest that neck fullness and witnessed neck pulsations are useful in the diagnosis of typical AV nodal reentrant tachycardia (AVNRT). Hypothesis Neck fullness and witnessed neck pulsations have a high positive predictive value in the diagnosis of typical AVNRT. Methods We performed a cross-sectional study of consecutive patients with palpitations presenting to a single electrophysiology (EP) laboratory over a 1-year period. Each patient underwent a standard questionnaire regarding neck fullness and/or witnessed neck pulsations during their palpitations. The reference standard for diagnosis was determined by electrocardiogram and invasive EP studies. Results Comparing typical AVNRT to atrial fibrillation (AF) or atrial flutter (AFL) patients, the proportions with neck fullness and witnessed neck pulsations did not significantly differ: in the best-case scenario (using the upper end of the 95% confidence interval [CI]), none of the positive or negative predictive values exceeded 79%. After restricting the population to those with supraventricular tachycardia other than AF or AFL (SVT), neck fullness again exhibited poor test characteristics; however, witnessed neck pulsations exhibited a specificity of 97% (95% CI 90–100%) and a positive predictive value of 83% (95% CI 52–98%). After adjustment for potential confounders, SVT patients with witnessed neck pulsations had 7-fold greater odds of having typical AVNRT (p = 0.029). Conclusions Although neither neck fullness nor witnessed neck pulsations is useful in distinguishing typical AVNRT from AF or AFL, witnessed neck pulsations are specific for the presence of typical AVNRT among those with SVT. PMID:19479968
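
    The test characteristics reported above all derive from a 2x2 table of symptom presence against diagnosis. A minimal sketch of those definitions; the counts below are invented for illustration, not the study's data:

    ```python
    def test_characteristics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV from 2x2 table counts."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # e.g. witnessed neck pulsations vs typical AVNRT among SVT patients
    # (hypothetical counts)
    stats = test_characteristics(tp=5, fp=1, fn=20, tn=30)
    print(round(stats["specificity"], 2), round(stats["ppv"], 2))  # prints 0.97 0.83
    ```

    Note how a symptom can combine high specificity and PPV with low sensitivity, exactly the pattern the study reports for witnessed neck pulsations.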

  12. Disgust sensitivity and the 'non-rational' aspects of a career choice in surgery.

    PubMed

    Consedine, Nathan S; Yu, Tzu-Chieh; Hill, Andrew G; Windsor, John A

    2013-03-15

    Fitting trainee physicians to career paths remains an ongoing challenge in a highly fluid health workforce environment. Studies attempting to explain low interest in surgical careers have typically examined the relative impact of career and lifestyle values. The current work argues that emotional proclivities are potentially more important and that disgust sensitivity may help explain both low surgical interest and the tendency for female students to avoid surgical careers. 216 medical students attending a required course in human behaviour completed measures of career intention, traditional predictors of career intention, and dispositional disgust sensitivity. As predicted, logistic regression showed that greater disgust sensitivity predicted lower surgical career intention even when controlling for traditional career values (OR=0.45, 95%CI=0.21-0.95). Additionally, the gender effect indexing low female interest in surgical careers was no longer significant once disgust sensitivity was added to the model. The impact of disgust sensitivity on surgical interest was substantial and on par with established predictors of career intention. Disgust sensitivity may represent a potentially modifiable factor impacting surgical career choice, particularly among female students, who are typically more disgust-sensitive.

  13. Model-based prediction of myelosuppression and recovery based on frequent neutrophil monitoring.

    PubMed

    Netterberg, Ida; Nielsen, Elisabet I; Friberg, Lena E; Karlsson, Mats O

    2017-08-01

    To investigate whether a more frequent monitoring of the absolute neutrophil counts (ANC) during myelosuppressive chemotherapy, together with model-based predictions, can improve therapy management, compared to the limited clinical monitoring typically applied today. Daily ANC in chemotherapy-treated cancer patients were simulated from a previously published population model describing docetaxel-induced myelosuppression. The simulated values were used to generate predictions of the individual ANC time-courses, given the myelosuppression model. The accuracy of the predicted ANC was evaluated under a range of conditions with reduced amount of ANC measurements. The predictions were most accurate when more data were available for generating the predictions and when making short forecasts. The inaccuracy of ANC predictions was highest around nadir, although a high sensitivity (≥90%) was demonstrated to forecast Grade 4 neutropenia before it occurred. The time for a patient to recover to baseline could be well forecasted 6 days (±1 day) before the typical value occurred on day 17. Daily monitoring of the ANC, together with model-based predictions, could improve anticancer drug treatment by identifying patients at risk for severe neutropenia and predicting when the next cycle could be initiated.
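The published docetaxel myelosuppression model this record builds on is a transit-compartment model with feedback (Friberg-type). A minimal sketch of that structure follows, with placeholder parameter values rather than the published docetaxel estimates, and a crude constant drug effect for illustration:

```python
import math

def simulate_anc(circ0=5.0, mtt=90.0, gamma=0.2, edrug=0.5, t_drug=24.0,
                 dt=0.1, t_end=480.0):
    """Euler integration of a Friberg-type transit-compartment
    myelosuppression model (times in hours; all parameter values are
    placeholders, not the published docetaxel estimates). A constant
    drug effect `edrug` inhibits proliferation for the first `t_drug` h."""
    ktr = 4.0 / mtt              # rate constant for 3 transit compartments
    prol = t1 = t2 = t3 = circ = circ0
    times, ancs = [], []
    t = 0.0
    while t <= t_end:
        e = edrug if t < t_drug else 0.0
        feedback = (circ0 / circ) ** gamma  # drives rebound when circ < circ0
        dprol = ktr * prol * ((1.0 - e) * feedback - 1.0)
        dt1 = ktr * (prol - t1)
        dt2 = ktr * (t1 - t2)
        dt3 = ktr * (t2 - t3)
        dcirc = ktr * (t3 - circ)
        prol += dprol * dt
        t1 += dt1 * dt
        t2 += dt2 * dt
        t3 += dt3 * dt
        circ += dcirc * dt
        times.append(t)
        ancs.append(circ)
        t += dt
    return times, ancs
```

Dense sampling of simulated profiles like this one is what allows the model-based forecasts of nadir and time-to-recovery described in the abstract.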

  14. Two ways to model voltage current curves of adiabatic MgB2 wires

    NASA Astrophysics Data System (ADS)

    Stenvall, A.; Korpela, A.; Lehtonen, J.; Mikkonen, R.

    2007-08-01

    Usually overheating of the sample destroys attempts to measure voltage-current curves of conduction cooled high critical current MgB2 wires at low temperatures. Typically, when a quench occurs a wire burns out due to massive heat generation and negligible cooling. It has also been suggested that high n values measured with MgB2 wires and coils are not an intrinsic property of the material but arise due to heating during the voltage-current measurement. In addition, quite recently low n values for MgB2 wires have been reported. In order to find out the real properties of MgB2 an efficient computational model is required to simulate the voltage-current measurement. In this paper we go back to basics and consider two models to couple electromagnetic and thermal phenomena. In the first model the magnetization losses are computed according to the critical state model and the flux creep losses are considered separately. In the second model the superconductor resistivity is described by the widely used power law. Then the coupled current diffusion and heat conduction equations are solved with the finite element method. In order to compare the models, example runs are carried out with an adiabatic slab. Both models produce a similar significant temperature rise near the critical current which leads to fictitiously high n values.
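The n values discussed in this record come from the power-law E–J characteristic E = Ec (J/Jc)^n widely used for superconductors (the paper's second model). A minimal sketch of extracting n from noise-free voltage-current data, assuming the common 1 µV/cm electric-field criterion:

```python
import math

E_C = 1e-4  # electric-field criterion in V/m (1 uV/cm), a common convention

def e_field(j, jc, n):
    """Power-law E-J characteristic commonly used for superconductors."""
    return E_C * (j / jc) ** n

def fit_n(js, es):
    """Estimate the n value as the slope of log(E) versus log(J),
    via ordinary least squares on the log-log data."""
    xs = [math.log(j) for j in js]
    ys = [math.log(e) for e in es]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

In a real measurement the heating effects described in the abstract distort the measured E(J), which is exactly why a fitted n can come out fictitiously high.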

  15. Output order reflects the cognitive accessibility of goals.

    PubMed

    Grimes, Carrie E; Nes, Lise Solberg; Waldman, Andrea; Segerstrom, Suzanne C

    2012-01-01

    Goal accessibility--the ease or speed with which a goal is activated--increases the likelihood that it will be acted on. The present studies validate output order as a measure of goal accessibility that can be applied to goal lists in both laboratory and naturalistic settings. In three studies, output order (the order in which goals are listed in a free-response format) was related to self-reported goal value but was not redundant with it. Furthermore, output order was affected by motivational priming whereas value was not, and output order was associated with typical student goals (e.g., achievement) compared with atypical goals (e.g., power). Output order is well suited to bring the study of accessibility to naturalistic studies of goals and goal pursuit.

  16. A geostatistical extreme-value framework for fast simulation of natural hazard events

    PubMed Central

    Stephenson, David B.

    2016-01-01

    We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters while a Student’s t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements. PMID:27279768
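The generalized Pareto marginals in a framework like this can be sampled cheaply by inverse-CDF transformation, which is part of what makes simulating many years of events inexpensive. A stdlib-only sketch (the spatial dependence via the Student's t-process is omitted here):

```python
import math
import random

def sample_gpd(n, xi, sigma, seed=0):
    """Draw n generalized Pareto exceedances by inverse-CDF sampling.
    xi is the shape parameter, sigma the scale; xi = 0 reduces to the
    exponential distribution. Illustrative only: the paper's framework
    additionally couples sites through a Student's t-process."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = rng.random()
        if abs(xi) < 1e-12:
            out.append(-sigma * math.log(1.0 - u))  # exponential limit
        else:
            out.append(sigma / xi * ((1.0 - u) ** (-xi) - 1.0))
    return out
```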

  17. Thermal Properties of Capparis Decidua (ker) Fiber Reinforced Phenol Formaldehyde Composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, G. P.; Mangal, Ravindra; Bhojak, N.

    2010-06-29

    Simultaneous measurement of the effective thermal conductivity (λ), effective thermal diffusivity (κ), and specific heat of Ker fiber reinforced phenol formaldehyde composites has been carried out by the transient plane source (TPS) technique. Samples of different fiber weight percentages (5, 10, 15, 20, and 25%) were prepared. It is found that the effective thermal conductivity and effective thermal diffusivity of the composites decrease, relative to pure phenol formaldehyde, as the fraction of fiber loading increases. The experimental data are fitted to the Y. Agari model. Values of thermal conductivity of the composites are also calculated with theoretical models (Rayleigh, Maxwell, and Meredith-Tobias). Good agreement between theoretical and experimental results has been found.
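The Agari model mentioned in the record is commonly written as a log-linear mixing rule for the effective conductivity of a filled polymer. A sketch under that assumption; the empirical factors C1 and C2 below are placeholders, not values fitted in this work:

```python
import math

def agari_conductivity(v_f, lam_m, lam_f, c1=1.0, c2=1.0):
    """One common statement of the Agari mixing rule:
        log(lam_c) = v_f * c2 * log(lam_f) + (1 - v_f) * log(c1 * lam_m)
    where lam_m and lam_f are the matrix and filler conductivities and
    v_f the filler volume fraction. c1, c2 are empirical factors fitted
    to experiment (defaults here are placeholders)."""
    log_lam = v_f * c2 * math.log(lam_f) + (1.0 - v_f) * math.log(c1 * lam_m)
    return math.exp(log_lam)
```

With a filler less conductive than the matrix (lam_f < lam_m), the rule reproduces the trend reported in the abstract: effective conductivity falls as fiber loading rises.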

  18. Application of laser Doppler velocimeter to chemical vapor laser system

    NASA Technical Reports Server (NTRS)

    Gartrell, Luther R.; Hunter, William W., Jr.; Lee, Ja H.; Fletcher, Mark T.; Tabibi, Bagher M.

    1993-01-01

    A laser Doppler velocimeter (LDV) system was used to measure iodide vapor flow fields inside two different-sized tubes. Typical velocity profiles across the laser tubes were obtained with an estimated +/-1 percent bias and +/-0.3 to 0.5 percent random uncertainty in the mean values and +/-2.5 percent random uncertainty in the turbulence-intensity values. Centerline velocities and turbulence intensities for various longitudinal locations ranged from 13 to 17.5 m/sec and 6 to 20 percent, respectively. In view of these findings, the effects of turbulence should be considered for flow field modeling. The LDV system provided calibration data for pressure and mass flow systems used routinely to monitor the research laser gas flow velocity.

  19. No Measured Effect of a Familiar Contextual Object on Color Constancy.

    PubMed

    Kanematsu, Erika; Brainard, David H

    2014-08-01

    Some familiar objects have a typical color, such as the yellow of a banana. The presence of such objects in a scene is a potential cue to the scene illumination, since the light reflected from them should on average be consistent with their typical surface reflectance. Although there are many studies on how the identity of an object affects how its color is perceived, little is known about whether the presence of a familiar object in a scene helps the visual system stabilize the color appearance of other objects with respect to changes in illumination. We used a successive color matching procedure in three experiments designed to address this question. Across the experiments we studied a total of 6 subjects (2 in Experiment 1, 3 in Experiment 2, and 4 in Experiment 3) with partial overlap of subjects between experiments. We compared measured color constancy across conditions in which a familiar object cue to the illuminant was available with conditions in which such a cue was not present. Overall, our results do not reveal a reliable improvement in color constancy with the addition of a familiar object to a scene. An analysis of the experimental power of our data suggests that if there is such an effect, it is small: less than approximately a change of 0.09 in a constancy index where an absence of constancy corresponds to an index value of 0 and perfect constancy corresponds to an index value of 1.

  20. No Measured Effect of a Familiar Contextual Object on Color Constancy

    PubMed Central

    Kanematsu, Erika; Brainard, David H.

    2013-01-01

    Some familiar objects have a typical color, such as the yellow of a banana. The presence of such objects in a scene is a potential cue to the scene illumination, since the light reflected from them should on average be consistent with their typical surface reflectance. Although there are many studies on how the identity of an object affects how its color is perceived, little is known about whether the presence of a familiar object in a scene helps the visual system stabilize the color appearance of other objects with respect to changes in illumination. We used a successive color matching procedure in three experiments designed to address this question. Across the experiments we studied a total of 6 subjects (2 in Experiment 1, 3 in Experiment 2, and 4 in Experiment 3) with partial overlap of subjects between experiments. We compared measured color constancy across conditions in which a familiar object cue to the illuminant was available with conditions in which such a cue was not present. Overall, our results do not reveal a reliable improvement in color constancy with the addition of a familiar object to a scene. An analysis of the experimental power of our data suggests that if there is such an effect, it is small: less than approximately a change of 0.09 in a constancy index where an absence of constancy corresponds to an index value of 0 and perfect constancy corresponds to an index value of 1. PMID:25313267

  1. Motivational Basis of Personality Traits: A Meta-Analysis of Value-Personality Correlations.

    PubMed

    Fischer, Ronald; Boer, Diana

    2015-10-01

    We investigated the relationships between personality traits and basic value dimensions. Furthermore, we developed novel country-level hypotheses predicting that contextual threat moderates value-personality trait relationships. We conducted a three-level v-known meta-analysis of correlations between Big Five traits and Schwartz's (1992) 10 values involving 9,935 participants from 14 countries. Variations in contextual threat (measured as resource threat, ecological threat, and restrictive social institutions) were used as country-level moderator variables. We found systematic relationships between Big Five traits and human values that varied across contexts. Overall, correlations between Openness traits and the Conservation value dimension and Agreeableness traits and the Transcendence value dimension were strongest across all samples. Correlations between values and all personality traits (except Extraversion) were weaker in contexts with greater financial, ecological, and social threats. In contrast, stronger personality-value links are typically found in contexts with low financial and ecological threats and more democratic institutions and permissive social context. These effects explained on average more than 10% of the variability in value-personality correlations. Our results provide strong support for systematic linkages between personality and broad value dimensions, but they also point out that these relations are shaped by contextual factors. © 2014 Wiley Periodicals, Inc.

  2. Technical note: False low turbidity readings from optical probes during high suspended-sediment concentrations

    NASA Astrophysics Data System (ADS)

    Voichick, Nicholas; Topping, David J.; Griffiths, Ronald E.

    2018-03-01

    Turbidity, a measure of water clarity, is monitored for a variety of purposes including (1) to help determine whether water is safe to drink, (2) to establish background conditions of lakes and rivers and detect pollution caused by construction projects and stormwater discharge, (3) to study sediment transport in rivers and erosion in catchments, (4) to manage siltation of water reservoirs, and (5) to establish connections with aquatic biological properties, such as primary production and predator-prey interactions. Turbidity is typically measured with an optical probe that detects light scattered from particles in the water. Probes have defined upper limits of the range of turbidity that they can measure. The general assumption is that when turbidity exceeds this upper limit, the values of turbidity will be constant, i.e., the probe is pegged; however, this assumption is not necessarily valid. In rivers with limited variation in the physical properties of the suspended sediment, at lower suspended-sediment concentrations, an increase in suspended-sediment concentration will cause a linear increase in turbidity. When the suspended-sediment concentration in these rivers is high, turbidity levels can exceed the upper measurement limit of an optical probe and record a constant pegged value. However, at extremely high suspended-sediment concentrations, optical turbidity probes do not necessarily stay pegged at a constant value. Data from the Colorado River in Grand Canyon, Arizona, USA, and a laboratory experiment both demonstrate that when turbidity exceeds instrument-pegged conditions, increasing suspended-sediment concentration (and thus increasing turbidity) may cause optical probes to record decreasing false turbidity values that appear to be within the valid measurement range of the probe. 
Therefore, under high-turbidity conditions, other surrogate measurements of turbidity (e.g., acoustic-attenuation measurements or suspended-sediment samples) are necessary to correct these low false turbidity measurements and accurately measure turbidity.
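The probe behaviour the note describes (linear response, pegging at the instrument limit, then false low readings at extreme concentrations) can be illustrated with a hypothetical piecewise response function; every parameter value below is invented for illustration, not taken from the paper:

```python
def recorded_turbidity(ssc, slope=0.8, peg=1000.0, rollover_ssc=2000.0,
                       fall_rate=0.3):
    """Hypothetical optical-probe response: turbidity rises linearly with
    suspended-sediment concentration (SSC), pegs at the instrument limit,
    then 'rolls over' and returns false low readings at extreme SSC.
    All parameter values are illustrative."""
    true_turbidity = slope * ssc
    if true_turbidity < peg:
        return true_turbidity                 # valid linear range
    if ssc < rollover_ssc:
        return peg                            # pegged at the upper limit
    return max(0.0, peg - fall_rate * (ssc - rollover_ssc))  # false low reading
```

The danger highlighted by the note is the third branch: the recorded value falls back inside the apparently valid range even though the true turbidity keeps increasing.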

  3. Technical note: False low turbidity readings from optical probes during high suspended-sediment concentrations

    USGS Publications Warehouse

    Voichick, Nicholas; Topping, David; Griffiths, Ronald

    2018-01-01

    Turbidity, a measure of water clarity, is monitored for a variety of purposes including (1) to help determine whether water is safe to drink, (2) to establish background conditions of lakes and rivers and detect pollution caused by construction projects and stormwater discharge, (3) to study sediment transport in rivers and erosion in catchments, (4) to manage siltation of water reservoirs, and (5) to establish connections with aquatic biological properties, such as primary production and predator–prey interactions. Turbidity is typically measured with an optical probe that detects light scattered from particles in the water. Probes have defined upper limits of the range of turbidity that they can measure. The general assumption is that when turbidity exceeds this upper limit, the values of turbidity will be constant, i.e., the probe is pegged; however, this assumption is not necessarily valid. In rivers with limited variation in the physical properties of the suspended sediment, at lower suspended-sediment concentrations, an increase in suspended-sediment concentration will cause a linear increase in turbidity. When the suspended-sediment concentration in these rivers is high, turbidity levels can exceed the upper measurement limit of an optical probe and record a constant pegged value. However, at extremely high suspended-sediment concentrations, optical turbidity probes do not necessarily stay pegged at a constant value. Data from the Colorado River in Grand Canyon, Arizona, USA, and a laboratory experiment both demonstrate that when turbidity exceeds instrument-pegged conditions, increasing suspended-sediment concentration (and thus increasing turbidity) may cause optical probes to record decreasing false turbidity values that appear to be within the valid measurement range of the probe. 
Therefore, under high-turbidity conditions, other surrogate measurements of turbidity (e.g., acoustic-attenuation measurements or suspended-sediment samples) are necessary to correct these low false turbidity measurements and accurately measure turbidity.

  4. A longitudinal study on gross motor development in children with learning disorders.

    PubMed

    Westendorp, Marieke; Hartman, Esther; Houwen, Suzanne; Huijgen, Barbara C H; Smith, Joanne; Visscher, Chris

    2014-02-01

    This longitudinal study examined the development of gross motor skills, and sex differences therein, in 7- to 11-year-old children with learning disorders (LD) and compared the results with typically developing children to determine the performance level of children with LD. In children with LD (n=56; 39 boys, 17 girls), gross motor skills were assessed with the Test of Gross Motor Development-2 and measured annually during a 3-year period. Motor scores of 253 typically developing children (125 boys, 112 girls) were collected as reference values. The multilevel analyses showed that the ball skills of children with LD improved with age (p<.001), especially between 7 and 9 years, but the locomotor skills did not (p=.50). Boys had higher ball skill scores than girls (p=.002) and these differences were constant over time. Typically developing children outperformed the children with LD on the locomotor skills and ball skills at all ages, except the locomotor skills at age 7. Children with LD develop their ball skills later in the primary-school period compared to typically developing peers. However, 11-year-old children with LD had a lag in locomotor skills and ball skills of at least four and three years, respectively, compared to their peers. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Validity and reliability of the Hexoskin® wearable biometric vest during maximal aerobic power testing in elite cyclists.

    PubMed

    Elliot, Catherine A; Hamlin, Michael J; Lizamore, Catherine A

    2017-07-28

    The purpose of this study was to investigate the validity and reliability of the Hexoskin® vest for measuring respiration and heart rate (HR) in elite cyclists during a progressive test to exhaustion. Ten male elite cyclists (age 28.8 ± 12.5 yr, height 179.3 ± 6.0 cm, weight 73.2 ± 9.1 kg, V̇O2max 60.7 ± 7.8 mL·kg⁻¹·min⁻¹; mean ± SD) conducted a maximal aerobic cycle ergometer test using a ramped protocol (starting at 100 W with 25 W increments each minute to failure) on two separate occasions 3-4 days apart. Compared to the criterion measure (Metamax 3B), the Hexoskin® vest showed mainly small typical errors (1.3-6.2%) for HR and breathing frequency (f), but larger typical errors (9.5-19.6%) for minute ventilation (V̇E) during the progressive test to exhaustion. The typical error indicating the reliability of the Hexoskin® vest at moderate intensity exercise between tests was small for HR (2.6-2.9%) and f (2.5-3.2%) but slightly larger for V̇E (5.3-7.9%). We conclude that the Hexoskin® vest is sufficiently valid and reliable for measurements of HR and f in elite athletes during high intensity cycling, but the calculated V̇E value the Hexoskin® vest produces during such exercise should be used with caution due to the lower validity and reliability of this variable.
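The "typical error" statistic used in sports-science reliability work of this kind is conventionally computed as the standard deviation of the test-retest differences divided by √2 (Hopkins), often expressed as a percentage. A sketch with invented heart-rate data, not the study's measurements:

```python
import math

def typical_error_percent(test1, test2):
    """Test-retest 'typical error' (Hopkins convention): the standard
    deviation of the between-trial differences divided by sqrt(2),
    expressed here as a percentage of the grand mean."""
    diffs = [a - b for a, b in zip(test1, test2)]
    m = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - m) ** 2 for d in diffs) / (len(diffs) - 1))
    te = sd / math.sqrt(2)
    grand_mean = (sum(test1) + sum(test2)) / (len(test1) + len(test2))
    return 100.0 * te / grand_mean
```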

  6. Optimizing Vowel Formant Measurements in Four Acoustic Analysis Systems for Diverse Speaker Groups

    PubMed Central

    Derdemezis, Ekaterini; Kent, Ray D.; Fourakis, Marios; Reinicke, Emily L.; Bolt, Daniel M.

    2016-01-01

    Purpose This study systematically assessed the effects of select linear predictive coding (LPC) analysis parameter manipulations on vowel formant measurements for diverse speaker groups using 4 trademarked Speech Acoustic Analysis Software Packages (SAASPs): CSL, Praat, TF32, and WaveSurfer. Method Productions of 4 words containing the corner vowels were recorded from 4 speaker groups with typical development (male and female adults and male and female children) and 4 speaker groups with Down syndrome (male and female adults and male and female children). Formant frequencies were determined from manual measurements using a consensus analysis procedure to establish formant reference values, and from the 4 SAASPs (using both the default analysis parameters and with adjustments or manipulations to select parameters). Smaller differences between values obtained from the SAASPs and the consensus analysis implied more optimal analysis parameter settings. Results Manipulations of default analysis parameters in CSL, Praat, and TF32 yielded more accurate formant measurements, though the benefit was not uniform across speaker groups and formants. In WaveSurfer, manipulations did not improve formant measurements. Conclusions The effects of analysis parameter manipulations on accuracy of formant-frequency measurements varied by SAASP, speaker group, and formant. The information from this study helps to guide clinical and research applications of SAASPs. PMID:26501214

  7. Evidence-based ethics? On evidence-based practice and the "empirical turn" from normative bioethics

    PubMed Central

    Goldenberg, Maya J

    2005-01-01

    Background The increase in empirical methods of research in bioethics over the last two decades is typically perceived as a welcome broadening of the discipline, with increased integration of social and life scientists into the field and of ethics consultants into the clinical setting. However, it also represents a loss of confidence in the typical normative and analytic methods of bioethics. Discussion The recent incipiency of "Evidence-Based Ethics" attests to this phenomenon and should be rejected as a solution to the current ambivalence toward the normative resolution of moral problems in a pluralistic society. While "evidence-based" is typically read in medicine and other life and social sciences as the empirically-adequate standard of reasonable practice and a means for increasing certainty, I propose that the evidence-based movement in fact gains consensus by displacing normative discourse with aggregate or statistically-derived empirical evidence as the "bottom line". Therefore, along with wavering on the fact/value distinction, evidence-based ethics threatens bioethics' normative mandate. The appeal of the evidence-based approach is that it offers a means of negotiating the demands of moral pluralism. Rather than appealing to explicit values that are likely not shared by all, "the evidence" is proposed to adjudicate between competing claims. Quantified measures are notably more "neutral" and democratic than liberal markers like "species normal functioning". Yet the positivist notion that claims stand or fall in light of the evidence is untenable; furthermore, the legacy of positivism entails the quieting of empirically non-verifiable (or at least non-falsifiable) considerations like moral claims and judgments. As a result, evidence-based ethics proposes to operate, unchecked, with the implicit normativity that accompanies the production and presentation of all biomedical and scientific facts. 
Summary The "empirical turn" in bioethics signals a need for reconsideration of the methods used for moral evaluation and resolution; however, the options should not include obscuring normative content behind seemingly neutral technical measures. PMID:16277663

  8. Improving quantum state transfer efficiency and entanglement distribution in binary tree spin network through incomplete collapsing measurements

    NASA Astrophysics Data System (ADS)

    Behzadi, Naghi; Ahansaz, Bahram

    2018-04-01

    We propose a mechanism for quantum state transfer (QST) over a binary tree spin network on the basis of incomplete collapsing measurements. To this aim, we initially perform a weak measurement (WM) on the central qubit of the binary tree network, where the state of our concern has been prepared on that qubit. After the time evolution of the whole system, a quantum measurement reversal (QMR) is performed on a chosen target qubit. By taking the optimal value for the strength of the QMR, it is shown that the quality of QST from the sending qubit to any typical target qubit on the binary tree is considerably improved as a function of the WM strength. Also, we show how high-quality entanglement distribution over the binary tree network is achievable using this approach.
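On a single qubit, a WM/QMR pair of this kind can be written as diagonal Kraus operators. The toy sketch below omits the spin-network evolution between the two steps (in that degenerate case the optimal reversal strength simply equals the WM strength, and the state is recovered exactly, at the cost of a success probability below one):

```python
import math

def apply_diag(k00, k11, state):
    """Apply a diagonal Kraus operator diag(k00, k11) to a qubit state
    (a, b), renormalize, and return the new amplitudes together with
    the probability of that measurement outcome."""
    a, b = state
    a2, b2 = k00 * a, k11 * b
    p = abs(a2) ** 2 + abs(b2) ** 2
    norm = math.sqrt(p)
    return (a2 / norm, b2 / norm), p

# Toy protocol with no evolution between the two measurements:
state = (0.6, 0.8)                 # real amplitudes for simplicity
p_wm = 0.5                         # weak-measurement strength
weak, _ = apply_diag(1.0, math.sqrt(1 - p_wm), state)          # WM
recovered, prob = apply_diag(math.sqrt(1 - p_wm), 1.0, weak)   # QMR, q = p
fidelity = abs(recovered[0] * state[0] + recovered[1] * state[1]) ** 2
```

In the actual protocol the intermediate network evolution changes the optimal QMR strength, which is what the paper optimizes.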

  9. Linking nursing unit's culture to organizational effectiveness: a measurement tool.

    PubMed

    Casida, Jesus

    2008-01-01

    Organizational culture consists of the deep underlying assumptions, beliefs, and values that are shared by members of the organization and typically operate unconsciously. The four organizational culture traits of the Denison Organizational Culture Model (DOCM) are characteristics of organizational effectiveness, which include adaptability, involvement, consistency, and mission. Effective organizations demonstrate high levels of the four cultural traits which reflect their ability to balance the dynamic tension between the need for stability and the need for flexibility within the organization. The Denison Organizational Culture Survey (DOCS) is a measurement tool that was founded on the theoretical framework of the DOCM, and in the field of business, is one of the most commonly used tools for measuring organizational culture. The DOCS offers a promising approach to operationalizing and measuring the link between organizational culture and organizational effectiveness in the context of nursing units.

  10. Development of an Austenitization Kinetics Model for 22MnB5 Steel

    NASA Astrophysics Data System (ADS)

    Di Ciano, M.; Field, N.; Wells, M. A.; Daun, K. J.

    2018-03-01

    This paper presents a first-order austenitization kinetics model for 22MnB5 steel, commonly used in hot forming die quenching. Model parameters are derived from constant-heating-rate dilatometry measurements. Vickers hardness measurements made on coupons that were quenched at intermediate stages of the process were used to verify the model, and the Ac1 and Ac3 temperatures inferred from dilatometry are consistent with correlations found in the literature. The austenitization model was extended to consider non-constant heating rates typical of industrial furnaces and again showed reasonable agreement between predictions and measurements. Finally, the model is used to predict latent heat evolution during industrial heating and is shown to be consistent with values inferred from thermocouple measurements of furnace-heated 22MnB5 coupons reported in the literature.
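A first-order kinetics model of this general kind can be sketched as dX/dt = k(T)(1 − X) with an Arrhenius rate constant, integrated over a heating ramp. The k0 and Q values below are placeholders, not the fitted 22MnB5 parameters:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def austenite_fraction(heat_rate, t_start=950.0, t_end=1200.0,
                       k0=1.0e8, q=2.0e5, dt=0.01):
    """First-order austenitization kinetics dX/dt = k(T) * (1 - X) with an
    Arrhenius rate k(T) = k0 * exp(-Q / (R*T)), Euler-integrated over a
    constant heating ramp from t_start to t_end (kelvin). k0 (1/s) and
    Q (J/mol) are placeholder values, not fitted 22MnB5 constants."""
    x, temp, t = 0.0, t_start, 0.0
    while temp < t_end:
        k = k0 * math.exp(-q / (R * temp))
        x += k * (1.0 - x) * dt
        t += dt
        temp = t_start + heat_rate * t
    return x

# Slower heating leaves more time at temperature, so more transformation:
x_slow = austenite_fraction(heat_rate=5.0)    # 5 K/s ramp
x_fast = austenite_fraction(heat_rate=50.0)   # 50 K/s ramp
```

Extending to the non-constant furnace heating rates mentioned in the abstract only requires replacing the linear ramp with the measured temperature history.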

  11. An upgraded interferometer-polarimeter system for broadband fluctuation measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parke, E., E-mail: eparke@ucla.edu; Ding, W. X.; Brower, D. L.

    2016-11-15

    Measuring high-frequency fluctuations (above tearing mode frequencies) is important for diagnosing instabilities and transport phenomena. The Madison Symmetric Torus interferometer-polarimeter system has been upgraded to utilize improved planar-diode mixer technology. The new mixers reduce phase noise and allow more sensitive measurements of fluctuations at high frequency. Typical polarimeter rms phase noise values of 0.05°–0.07° are obtained with 400 kHz bandwidth. The low phase noise enables the resolution of fluctuations up to 250 kHz for polarimetry and 600 kHz for interferometry. The importance of probe beam alignment for polarimetry is also verified; previously reported tolerances of ≤0.1 mm displacement for equilibrium and tearing mode measurements minimize contamination due to spatial misalignment to within acceptable levels for chords near the magnetic axis.

  12. A system for verifying models and classification maps by extraction of information from a variety of data sources

    NASA Technical Reports Server (NTRS)

    Norikane, L.; Freeman, A.; Way, J.; Okonek, S.; Casey, R.

    1992-01-01

    Recent updates to a geographical information system (GIS) called VICAR (Video Image Communication and Retrieval)/IBIS are described. The system is designed to handle data from many different formats (vector, raster, tabular) and many different sources (models, radar images, ground truth surveys, optical images). All the data are referenced to a single georeference plane, and average or typical values for parameters defined within a polygonal region are stored in a tabular file, called an info file. The info file format allows tracking of data in time, maintenance of links between component data sets and the georeference image, conversion of pixel values to `actual' values (e.g., radar cross-section, luminance, temperature), graph plotting, data manipulation, generation of training vectors for classification algorithms, and comparison between actual measurements and model predictions (with ground truth data as input).

  13. Effect of temper rolling on the bake-hardening behavior of low carbon steel

    NASA Astrophysics Data System (ADS)

    Kuang, Chun-fu; Zhang, Shen-gen; Li, Jun; Wang, Jian; Li, Pei

    2015-01-01

    In a typical process, low carbon steel was annealed at two different temperatures (660°C and 750°C), and then was temper rolled to improve the mechanical properties. Pre-straining and baking treatments were subsequently carried out to measure the bake-hardening (BH) values. The influences of annealing temperature and temper rolling on the BH behavior of the steel were investigated. The results indicated that the microstructure evolution during temper rolling was related to carbon atoms and dislocations. After an apparent increase, the BH value of the steel significantly decreased when the temper rolling reduction was increased from 0% to 5%. This was attributed to the increase in solute carbon concentration and dislocation density. The maximum BH values of the steel annealed at 660°C and 750°C were 80 MPa and 89 MPa at the reductions of 3% and 4%, respectively. Moreover, increasing the annealing temperature from 660 to 750°C resulted in an obvious increase in the BH value due to carbide dissolution.

  14. Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin

    2016-04-01

    Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. 
As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one cannot apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues, including parallelization and problem dimension reduction.
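The Pareto-optimal selection described above can be illustrated with a minimal dominance filter. This is only a sketch of the selection criterion, not the authors' PMOGO implementation, which uses global optimizers over real inversion objectives; the candidate values here are invented.

```python
def dominates(a, b):
    """True if objective tuple a is no worse than b in every objective and
    strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective tuples."""
    return [p for p in solutions
            if not any(dominates(q, p) for q in solutions if q is not p)]

# Each tuple is (data misfit, regularization term) for one candidate model
candidates = [(1.0, 5.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5), (4.0, 4.0)]
front = pareto_front(candidates)
# (2.5, 2.5) and (4.0, 4.0) are dominated by (2.0, 2.0) and drop out
```

The surviving suite of models is what a PMOGO inversion presents for assessment, rather than a single weighted-sum minimizer.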

  15. Use of the 'real-ear to dial difference' to derive real-ear SPL from hearing level obtained with insert earphones.

    PubMed

    Munro, K J; Lazenby, A

    2001-10-01

The electroacoustic characteristics of a hearing instrument are normally selected for individuals using data obtained during audiological assessment. The precise inter-relationship between the electroacoustic and audiometric variables is most readily appreciated when they have been measured at the same reference point, such as the tympanic membrane. However, it is not always possible to obtain the real-ear sound pressure level (SPL) directly if this is below the noise floor of the probe-tube microphone system or if the subject is uncooperative. The real-ear SPL may be derived by adding the subject's real-ear to dial difference (REDD) acoustic transform to the audiometer dial setting. The aim of the present study was to confirm the validity of the Audioscan RM500 for measuring the REDD with the ER-3A insert earphone. A probe-tube microphone was used to measure the real-ear SPL and REDD from the right ears of 16 adult subjects ranging in age from 22 to 41 years (mean age 27 years). Measurements were made from 0.25 kHz to 6 kHz at a dial setting of 70 dB with an ER-3A insert earphone and two earmould configurations: the EAR-LINK foam ear-tip and the subjects' customized skeleton earmoulds. Mean REDD varied as a function of frequency but was typically approximately 12 dB, with a standard deviation (SD) of ±1.7 dB and ±2.7 dB for the foam ear-tip and customized earmould, respectively. The mean test-retest difference of the REDD varied with frequency but was typically 0.5 dB (SD 1 dB). Over the frequency range 0.5-4 kHz, the derived values were found to be within 5 dB of the measured values in 95% of subjects when using the EAR-LINK foam ear-tip and within 4 dB when using the skeleton earmould. The individually measured REDD transform can be used in clinical practice to derive a valid estimate of real-ear SPL when it has not been possible to measure this directly.
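The derivation at the heart of the study is simple additive arithmetic per frequency: real-ear SPL = audiometer dial setting + REDD. A minimal sketch, using hypothetical per-frequency REDD values (the paper reports only that they were typically about 12 dB):

```python
def derived_real_ear_spl(dial_db, redd_db_by_freq):
    """Derived real-ear SPL (dB) at each frequency:
    audiometer dial setting (dB) plus the measured REDD transform (dB)."""
    return {f: dial_db + redd for f, redd in redd_db_by_freq.items()}

# Hypothetical REDD values (dB) keyed by frequency in kHz
redd = {0.25: 11.0, 0.5: 12.5, 1.0: 12.0, 2.0: 13.5, 4.0: 10.5, 6.0: 9.0}

# Dial setting of 70 dB, as used in the study
spl = derived_real_ear_spl(70, redd)
```

With an individually measured REDD, this gives a usable SPL estimate when direct probe-tube measurement is not possible.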

  16. Quartz crystal resonator g sensitivity measurement methods and recent results

    NASA Astrophysics Data System (ADS)

    Driscoll, M. M.

    1990-09-01

    A technique for accurate measurements of quartz crystal resonator vibration sensitivity is described. The technique utilizes a crystal oscillator circuit in which a prescribed length of coaxial cable is used to connect the resonator to the oscillator sustaining stage. A method is provided for determination and removal of measurement errors normally introduced as a result of cable vibration. In addition to oscillator-type measurements, it is also possible to perform similar vibration sensitivity measurements using a synthesized signal generator with the resonator installed in a passive phase bridge. Test results are reported for 40 and 50 MHz, fifth overtone AT-cut, and third overtone SC-cut crystals. Acceleration sensitivity (gamma vector) values for the SC-cut resonators were typically four times smaller (5 x 10 to the -10th/g) than for the AT-cut units. However, smaller unit-to-unit gamma vector magnitude variation was exhibited by the AT-cut resonators.

  17. Measurements of vector fields with diode array

    NASA Technical Reports Server (NTRS)

    Wiehr, E. J.; Scholiers, W.

    1985-01-01

A polarimeter was designed for high spatial and spectral resolution. It consists of a quarter-wave plate alternately operating in two positions for Stokes-V measurements and an additional quarter-wave plate for Stokes-U and -Q measurements. The spatial range covers 75 arcsec; the spectral window of about 1.8 Å allows the simultaneous observation of neighboring lines. The data processing and acquisition system consists of five memories, each with a capacity of 10^4 16-bit words. The total time to acquire profiles of Stokes parameters can be chosen by selecting the number of successive measurements added in the memories, each individual measurement corresponding to an integration time of 0.5 sec. Typical values range between 2 and 60 sec, depending on the brightness of the structure, the amount of polarization, and a compromise between the desired signal-to-noise ratio and spatial resolution.

  18. Online evolution reconstruction from a single measurement record with random time intervals for quantum communication

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Su, Yang; Wang, Rong; Zhu, Yong; Shen, Huiping; Pu, Tao; Wu, Chuanxin; Zhao, Jiyong; Zhang, Baofu; Xu, Zhiyong

    2017-10-01

Online reconstruction of a time-variant quantum state from the encoding/decoding results of quantum communication is addressed by developing a method of evolution reconstruction from a single measurement record with random time intervals. A time-variant two-dimensional state is reconstructed by recovering its expectation value functions of three nonorthogonal projectors from a random single measurement record, which is composed of the discarded qubits of the six-state protocol. The simulated results prove that our method is robust to typical metro quantum channels. Our work extends the Fourier-based method of evolution reconstruction from the version for a regular single measurement record with equal time intervals to a unified one that can be applied to arbitrary single measurement records. The proposed protocol of evolution reconstruction runs concurrently with that of quantum communication, facilitating online quantum tomography.

  19. Small Sample Reactivity Measurements in the RRR/SEG Facility: Reanalysis using TRIPOLI-4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hummel, Andrew; Palmiotti, Guiseppe

    2016-08-01

This work involved reanalyzing the RRR/SEG integral experiments performed at the Rossendorf facility in Germany throughout the 1970s and 80s. These small sample reactivity worth measurements were carried out using the pile oscillator technique for many different fission products, structural materials, and standards. The coupled fast-thermal system was designed such that the measurements would provide insight into elemental data, specifically the competing effects between neutron capture and scatter. Comparing the measured to calculated reactivity values can then provide adjustment criteria to ultimately improve nuclear data for fast reactor designs. Due to the extremely small reactivity effects measured (typically less than 1 pcm) and the specific heterogeneity of the core, the tool chosen for this analysis was TRIPOLI-4. This code allows for high fidelity 3-dimensional geometric modeling, and the most recent, unreleased version is capable of exact perturbation theory.

  20. Type-I superconductivity in YbSb2 single crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Liang L.; Lausberg, Stefan; Kim, Hyunsoo

    2012-06-25

    We present evidence of type-I superconductivity in YbSb2 single crystals from dc and ac magnetization, heat capacity, and resistivity measurements. The critical temperature and critical field are determined to be Tc≈ 1.3 K and Hc≈ 55 Oe. A small Ginzburg-Landau parameter κ= 0.05, together with typical magnetization isotherms of type-I superconductors, small critical field values, a strong differential paramagnetic effect signal, and a field-induced change from second- to first-order phase transition, confirms the type-I nature of the superconductivity in YbSb2. A possible second superconducting state is observed in the radio-frequency susceptibility measurements, with Tc(2)≈ 0.41 K and Hc(2)≈ 430 Oe.

  1. RCT: Module 2.06, Air Sampling Program and Methods, Course 8772

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hillmer, Kurt T.

The inhalation of radioactive particles is the largest cause of an internal radiation dose. Airborne radioactivity measurements are necessary to ensure that the control measures are, and continue to be, effective. Regulations govern the allowable effective dose equivalent to an individual. The effective dose equivalent is determined by combining the external and internal dose equivalent values. Typically, airborne radioactivity levels are maintained well below allowable levels to keep the total effective dose equivalent small. This course will prepare the student with the skills necessary for RCT qualification by passing quizzes, tests, and the RCT Comprehensive Phase 1, Unit 2 Examination (TEST 27566) and will provide in-the-field skills.

  2. RHIC ABORT KICKER WITH REDUCED COUPLING IMPEDANCE.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HAHN,H.; DAVINO,D.

    2002-06-02

Kicker magnets typically represent the most important contributors to the transverse impedance budget of accelerators and storage rings. Methods of reducing the impedance value of the SNS extraction kicker presently under construction and, in view of a future performance upgrade, that of the RHIC abort kicker have been thoroughly studied at this laboratory. In this paper, the investigation of a potential improvement from using ferrite different from the BNL standard CMD5005 is reported. Permeability measurements of several ferrite types have been performed. Measurements on two kicker magnets using CMD5005 and C2050 suggest that the impedance of a magnet without external resistive damping, such as the RHIC abort kicker, would benefit.

  3. A dynamic Thurstonian item response theory of motive expression in the picture story exercise: solving the internal consistency paradox of the PSE.

    PubMed

    Lang, Jonas W B

    2014-07-01

The measurement of implicit or unconscious motives using the picture story exercise (PSE) has long been a target of debate in the psychological literature. Most debates have centered on the apparent paradox that PSE measures of implicit motives typically show low internal consistency reliability on common indices like Cronbach's alpha but nevertheless predict behavioral outcomes. I describe a dynamic Thurstonian item response theory (IRT) model that builds on dynamic system theories of motivation, theorizing on the PSE response process, and recent advancements in Thurstonian IRT modeling of choice data. To assess the model's capability to explain the internal consistency paradox, I first fitted the model to archival data (Gurin, Veroff, & Feld, 1957) and then simulated data based on bias-corrected model estimates from the real data. Simulation results revealed that the average squared correlation reliability for the motives in the Thurstonian IRT model was .74 and that Cronbach's alpha values were similar to the real data (<.35). These findings suggest that PSE motive measures have long been reliable and increase the scientific value of extant evidence from motivational research using PSE motive measures. (c) 2014 APA, all rights reserved.

  4. Measurement of liquid film thickness by optical fluorescence and its application to an oscillating piston positive displacement flowmeter

    NASA Astrophysics Data System (ADS)

    Morton, Charlotte E.; Baker, Roger C.; Hutchings, Ian M.

    2011-12-01

    The movement of the circular piston in an oscillating piston positive displacement flowmeter is important in understanding the operation of the flowmeter, and the leakage of liquid past the piston plays a key role in the performance of the meter. The clearances between the piston and the chamber are small, typically less than 60 µm. In order to measure this film thickness a fluorescent dye was added to the water passing through the meter, which was illuminated with UV light. Visible light images were captured with a digital camera and analysed to give a measure of the film thickness with an uncertainty of less than 7%. It is known that this method lacks precision unless careful calibration is undertaken. Methods to achieve this are discussed in the paper. The grey level values for a range of film thicknesses were calibrated in situ with six dye concentrations to select the most appropriate one for the range of liquid film thickness. Data obtained for the oscillating piston flowmeter demonstrate the value of the fluorescence technique. The method is useful, inexpensive and straightforward and can be extended to other applications where measurement of liquid film thickness is required.

  5. A field technique for estimating aquifer parameters using flow log data

    USGS Publications Warehouse

    Paillet, Frederick L.

    2000-01-01

A numerical model is used to predict flow along intervals between producing zones in open boreholes for comparison with measurements of borehole flow. The model gives flow under quasi-steady conditions as a function of the transmissivity and hydraulic head in an arbitrary number of zones communicating with each other along open boreholes. The theory shows that the amount of inflow to or outflow from the borehole under any one flow condition may not indicate relative zone transmissivity. A unique inversion for both hydraulic-head and transmissivity values is possible if flow is measured under two different conditions, such as ambient and quasi-steady pumping, and if the difference in open-borehole water level between the two flow conditions is measured. The technique is shown to give useful estimates of water levels and transmissivities of two or more water-producing zones intersecting a single interval of open borehole under typical field conditions. Although the modeling technique involves some approximation, the principal limit on the accuracy of the method under field conditions is the measurement error in the flow log data. Flow measurements and pumping conditions are usually adjusted so that transmissivity estimates are most accurate for the most transmissive zones, and relative measurement error is proportionately larger for less transmissive zones. The most effective general application of the borehole-flow model results when the data are fit to models that systematically include more production zones of progressively smaller transmissivity values until model results show that all accuracy in the data set is exhausted.
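The two-condition inversion can be sketched under a deliberately simplified quasi-steady model in which each zone's inflow is Q = c·T·(h − h_w), with h_w the open-borehole water level. This linear toy model and the constant c are assumptions for illustration, not the paper's full numerical borehole-flow model; measuring Q under ambient and pumped water levels then determines both T and h for the zone.

```python
def invert_zone(q_ambient, q_pumped, hw_ambient, hw_pumped, c=1.0):
    """Solve the pair Q1 = c*T*(h - hw1), Q2 = c*T*(h - hw2) for (T, h),
    given one zone's inflow Q under two known open-borehole water levels.
    Simplified linear model; c is an assumed geometry constant."""
    # Subtracting the two equations eliminates h:
    T = (q_pumped - q_ambient) / (c * (hw_ambient - hw_pumped))
    # Back-substitute into the ambient-condition equation to recover h:
    h = hw_ambient + q_ambient / (c * T)
    return T, h

# Synthetic zone with true T = 2.0 and true head h = 10.0 m;
# pumping lowers the open-borehole water level from 9.0 m to 6.0 m
q1 = 2.0 * (10.0 - 9.0)   # ambient inflow
q2 = 2.0 * (10.0 - 6.0)   # pumped inflow
T, h = invert_zone(q1, q2, 9.0, 6.0)
```

As the abstract notes, a single condition alone could not separate T from h; the drawdown between the two conditions is what makes the inversion unique.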

  6. Integrated optical sensing of dissolved oxygen in microtiter plates: a novel tool for microbial cultivation.

    PubMed

    John, Gernot T; Klimant, Ingo; Wittmann, Christoph; Heinzle, Elmar

    2003-03-30

    Microtiter plates with integrated optical sensing of dissolved oxygen were developed by immobilization of two fluorophores at the bottom of 96-well polystyrene microtiter plates. The oxygen-sensitive fluorophore responded to dissolved oxygen concentration, whereas the oxygen-insensitive one served as an internal reference. The sensor measured dissolved oxygen accurately in optically well-defined media. Oxygen transfer coefficients, k(L)a, were determined by a dynamic method in a commercial microtiter plate reader with an integrated shaker. For this purpose, the dissolved oxygen was initially depleted by the addition of sodium dithionite and, by oxygen transfer from air, it increased again after complete oxidation of dithionite. k(L)a values in one commercial reader were about 10 to 40 h(-1). k(L)a values were inversely proportional to the filling volume and increased with increasing shaking intensity. Dissolved oxygen was monitored during cultivation of Corynebacterium glutamicum in another reader that allowed much higher shaking intensity. Growth rates determined from optical density measurement were identical to those observed in shaking flasks and in a stirred fermentor. Oxygen uptake rates measured in the stirred fermentor and dissolved oxygen concentrations measured during cultivation in the microtiter plate were used to estimate k(L)a values in a 96-well microtiter plate. The resulting values were about 130 h(-1), which is in the lower range of typical stirred fermentors. The resulting maximum oxygen transfer rate was 26 mM h(-1). Simulations showed that the errors caused by the intermittent measurement method were insignificant under the prevailing conditions. Copyright 2003 Wiley Periodicals, Inc. Biotechnol Bioeng 81: 829-836, 2003.
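The dynamic k(L)a determination described above can be sketched as a log-linear fit: after dithionite is fully oxidized, dissolved oxygen recovers as C(t) = C_sat·(1 − exp(−k(L)a·t)), so the slope of ln(1 − C/C_sat) against time gives −k(L)a. The data below are synthetic, not the study's measurements.

```python
import math

def estimate_kla(times_h, conc, c_sat):
    """Least-squares slope of ln(1 - C/C_sat) versus time (hours);
    returns the oxygen transfer coefficient k_L_a in 1/h."""
    y = [math.log(1.0 - c / c_sat) for c in conc]
    n = len(times_h)
    t_mean = sum(times_h) / n
    y_mean = sum(y) / n
    slope = (sum((t - t_mean) * (yy - y_mean) for t, yy in zip(times_h, y))
             / sum((t - t_mean) ** 2 for t in times_h))
    return -slope

# Synthetic recovery curve with k_L_a = 30 1/h and saturation at 0.25 mM
c_sat = 0.25
ts = [0.01, 0.02, 0.03, 0.05, 0.08]                      # hours after depletion
cs = [c_sat * (1.0 - math.exp(-30.0 * t)) for t in ts]   # dissolved O2, mM
kla = estimate_kla(ts, cs, c_sat)                        # recovers ~30 1/h
```

The recovered value falls inside the 10 to 40 1/h range the study reports for one commercial reader.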

  7. Alternative metrics for real-ear-to-coupler difference average values in children.

    PubMed

    Blumsack, Judith T; Clark-Lewis, Sandra; Watts, Kelli M; Wilson, Martha W; Ross, Margaret E; Soles, Lindsey; Ennis, Cydney

    2014-10-01

    Ideally, individual real-ear-to-coupler difference (RECD) measurements are obtained for pediatric hearing instrument-fitting purposes. When RECD measurements cannot be obtained, age-related average RECDs based on typically developing North American children are used. Evidence suggests that these values may not be appropriate for populations of children with retarded growth patterns. The purpose of this study was to determine if another metric, such as head circumference, height, or weight, can be used for prediction of RECDs in children. Design was a correlational study. For all participants, RECD values in both ears, head circumference, height, and weight were measured. The sample consisted of 68 North American children (ages 3-11 yr). Height, weight, head circumference, and RECDs were measured and were analyzed for both ears at 500, 750, 1000, 1500, 2000, 3000, 4000, and 6000 Hz. A backward elimination multiple-regression analysis was used to determine if age, height, weight, and/or head circumference are significant predictors of RECDs. For the left ear, head circumference was retained as the only statistically significant variable in the final model. For the right ear, head circumference was retained as the only statistically significant independent variable at all frequencies except at 2000 and 4000 Hz. At these latter frequencies, weight was retained as the only statistically significant independent variable after all other variables were eliminated. Head circumference can be considered as a metric for RECD prediction in children when individual measurements cannot be obtained. In developing countries where equipment is often unavailable and stunted growth can reduce the value of using age as a metric, head circumference can be considered as an alternative metric in the prediction of RECDs. American Academy of Audiology.

  8. Design and evaluation of a sensor fail-operational control system for a digitally controlled turbofan engine

    NASA Technical Reports Server (NTRS)

    Hrach, F. J.; Arpasi, D. J.; Bruton, W. M.

    1975-01-01

A self-learning, sensor fail-operational control system for the TF30-P-3 afterburning turbofan engine was designed and evaluated. The sensor fail-operational control system includes a digital computer program designed to operate in conjunction with the standard TF30-P-3 bill-of-materials control. Four engine measurements and two compressor face measurements are tested. If any engine measurements are found to have failed, they are replaced by values synthesized from computer-stored information. The control system was evaluated by using a real-time, nonlinear, hybrid computer engine simulation at the sea level static condition, at a typical cruise condition, and at several extreme flight conditions. Results indicate that the addition of such a system can improve the reliability of an engine digital control system.

  9. Ionization-chamber smoke detector system

    DOEpatents

    Roe, Robert F.

    1976-10-19

    This invention relates to an improved smoke-detection system of the ionization-chamber type. In the preferred embodiment, the system utilizes a conventional detector head comprising a measuring ionization chamber, a reference ionization chamber, and a normally non-conductive gas triode for discharging when a threshold concentration of airborne particulates is present in the measuring chamber. The improved system is designed to reduce false alarms caused by fluctuations in ambient temperature. Means are provided for periodically firing the gas discharge triode and each time recording the triggering voltage required. A computer compares each triggering voltage with its predecessor. The computer is programmed to energize an alarm if the difference between the two compared voltages is a relatively large value indicative of particulates in the measuring chamber and to disregard smaller differences typically resulting from changes in ambient temperature.
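The patent's alarm logic reduces to a threshold comparison on successive triggering voltages. The sketch below illustrates only that decision rule; the voltage values and the threshold are hypothetical, not from the patent.

```python
def alarm_decision(prev_trigger_v, curr_trigger_v, particulate_threshold=0.5):
    """Energize the alarm only when successive triggering voltages differ by a
    relatively large value (particulates in the measuring chamber); smaller
    differences, typical of ambient temperature drift, are disregarded.
    Threshold is an assumed illustrative value."""
    return abs(curr_trigger_v - prev_trigger_v) > particulate_threshold

# Periodic triggering-voltage readings (V, hypothetical): slow thermal drift,
# then a large jump when smoke enters the measuring chamber
readings = [100.0, 100.1, 100.2, 101.5]
alarms = [alarm_decision(a, b) for a, b in zip(readings, readings[1:])]
```

Comparing each reading only with its predecessor is what makes the system insensitive to slow ambient-temperature changes while still catching abrupt particulate events.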

  10. Physical Activity in Vietnam: Estimates and Measurement Issues

    PubMed Central

    Bui, Tan Van; Blizzard, Christopher Leigh; Luong, Khue Ngoc; Truong, Ngoc Le Van; Tran, Bao Quoc; Otahal, Petr; Srikanth, Velandai; Nelson, Mark Raymond; Au, Thuy Bich; Ha, Son Thai; Phung, Hai Ngoc; Tran, Mai Hoang; Callisaya, Michele; Gall, Seana

    2015-01-01

Introduction Our aims were to provide the first national estimates of physical activity (PA) for Vietnam, and to investigate issues affecting their accuracy. Methods Measurements were made using the Global Physical Activity Questionnaire (GPAQ) on a nationally-representative sample of 14706 participants (46.5% males, response 64.1%) aged 25−64 years selected by multi-stage stratified cluster sampling. Results Approximately 20% of Vietnamese people had no measurable PA during a typical week, but 72.9% (men) and 69.1% (women) met WHO recommendations for PA by adults for their age. On average, 52.0 (men) and 28.0 (women) Metabolic Equivalent Task (MET)-hours/week (largely from work activities) were reported. Work and total PA were higher in rural areas and varied by season. Less than 2% of respondents provided incomplete information, but an additional one-in-six provided unrealistically high values of PA. Those responsible for reporting errors included persons from rural areas and all those with unstable work patterns. Box-Cox transformation (with an appropriate constant added) was the most successful method of reducing the influence of large values, but energy-scaled values were most strongly associated with pathophysiological outcomes. Conclusions Around seven-in-ten Vietnamese people aged 25–64 years met WHO recommendations for total PA, which was mainly from work activities and higher in rural areas. Nearly all respondents were able to report their activity using the GPAQ, but with some exaggerated values and seasonal variation in reporting. Data transformation provided plausible summary values, but energy-scaling fared best in association analyses. PMID:26485044
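The Box-Cox transformation "with an appropriate constant added" mentioned above can be sketched directly. The fixed lambda and constant here are illustrative assumptions; in practice lambda is estimated from the data (e.g. by maximum likelihood), and the constant only needs to make all shifted values positive.

```python
import math

def boxcox_shifted(values, lam, const=1.0):
    """Box-Cox transform ((x + const)**lam - 1) / lam, with a constant added
    so zero-activity records remain valid; natural log when lam == 0."""
    shifted = [v + const for v in values]
    if lam == 0:
        return [math.log(x) for x in shifted]
    return [((x ** lam) - 1.0) / lam for x in shifted]

# Weekly totals in MET-hours, including zeros and one exaggerated report
met_hours = [0.0, 12.0, 28.0, 52.0, 400.0]
transformed = boxcox_shifted(met_hours, lam=0.25)
# Order is preserved, but the extreme value is pulled in sharply
```

This is how a handful of unrealistically high PA reports can be kept in the analysis without dominating summary statistics.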

  11. Oscillating and pulsed gradient diffusion magnetic resonance microscopy over an extended b-value range: implications for the characterization of tissue microstructure.

    PubMed

    Portnoy, S; Flint, J J; Blackband, S J; Stanisz, G J

    2013-04-01

Oscillating gradient spin-echo (OGSE) pulse sequences have been proposed for acquiring diffusion data with very short diffusion times, which probe tissue structure at the subcellular scale. OGSE sequences are an alternative to pulsed gradient spin echo measurements, which typically probe longer diffusion times due to gradient limitations. In this investigation, a high-strength (6600 G/cm) gradient designed for small-sample microscopy was used to acquire OGSE and pulsed gradient spin echo data in a rat hippocampal specimen at microscopic resolution. Measurements covered a broad range of diffusion times (TDeff = 1.2-15.0 ms), frequencies (ω = 67-1000 Hz), and b-values (b = 0-3.2 ms/μm²). Variations in apparent diffusion coefficient with frequency and diffusion time provided microstructural information at a scale much smaller than the imaging resolution. For a more direct comparison of the techniques, OGSE and pulsed gradient spin echo data were acquired with similar effective diffusion times. Measurements with similar TDeff were consistent at low b-value (b < 1 ms/μm²), but diverged at higher b-values. Experimental observations suggest that the effective diffusion time can be helpful in the interpretation of low b-value OGSE data. However, caution is required at higher b, where enhanced sensitivity to restriction and exchange render the effective diffusion time an unsuitable representation. Oscillating and pulsed gradient diffusion techniques offer unique, complementary information. In combination, the two methods provide a powerful tool for characterizing complex diffusion within biological tissues. Copyright © 2012 Wiley Periodicals, Inc.

  12. Image quality, meteorological optical range, and fog particulate number evaluation using the Sandia National Laboratories fog chamber

    DOE PAGES

    Birch, Gabriel C.; Woo, Bryana L.; Sanchez, Andres L.; ...

    2017-08-24

The evaluation of optical system performance in fog conditions typically requires field testing. This can be challenging due to the unpredictable nature of fog generation and the temporal and spatial nonuniformity of the phenomenon itself. We describe the Sandia National Laboratories fog chamber, a new test facility that enables the repeatable generation of fog within a 55 m×3 m×3 m (L×W×H) environment, and demonstrate the fog chamber through a series of optical tests. These tests are performed to evaluate system image quality, determine meteorological optical range (MOR), and measure the number of particles in the atmosphere. Relationships between typical optical quality metrics, MOR values, and total number of fog particles are described using the data obtained from the fog chamber and repeated over a series of three tests.

  13. Single transmission line interrogated multiple channel data acquisition system

    DOEpatents

    Fasching, George E.; Keech, Jr., Thomas W.

    1980-01-01

A single transmission line interrogated multiple channel data acquisition system is provided in which a plurality of remote station/sensor circuits each monitors a specific process variable and each transmits measurement values over a single transmission line to a master interrogating station when addressed by said master interrogating station. Typically, as many as 330 remote stations may be connected in parallel to the transmission line, which may exceed 7,000 feet in length. The interrogation rate is typically 330 stations/second. The master interrogating station samples each station according to a shared, charging transmit-receive cycle. All remote station address signals, all data signals from the remote stations/sensors, and all power for all of the remote station/sensors are transmitted via a single continuous terminated coaxial cable. A means is provided for periodically and remotely calibrating all remote sensors for zero and span. A provision is available to remotely disconnect any selected sensor station from the main transmission line.

  14. Image quality, meteorological optical range, and fog particulate number evaluation using the Sandia National Laboratories fog chamber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birch, Gabriel C.; Woo, Bryana L.; Sanchez, Andres L.

The evaluation of optical system performance in fog conditions typically requires field testing. This can be challenging due to the unpredictable nature of fog generation and the temporal and spatial nonuniformity of the phenomenon itself. We describe the Sandia National Laboratories fog chamber, a new test facility that enables the repeatable generation of fog within a 55 m×3 m×3 m (L×W×H) environment, and demonstrate the fog chamber through a series of optical tests. These tests are performed to evaluate system image quality, determine meteorological optical range (MOR), and measure the number of particles in the atmosphere. Relationships between typical optical quality metrics, MOR values, and total number of fog particles are described using the data obtained from the fog chamber and repeated over a series of three tests.

  15. Mean glandular dose to patients from stereotactic breast biopsy procedures.

    PubMed

    Paixão, Lucas; Chevalier, Margarita; Hurtado-Romero, Antonio E; Garayoa, Julia

    2018-06-07

The aim of this work is to study the radiation doses delivered to a group of patients who underwent a stereotactic breast biopsy (SBB) procedure. Mean glandular doses (MGD) were estimated from the air kerma measured at the breast surface entrance, multiplied by specific conversion coefficients (DgN) that were estimated using Monte Carlo simulations. DgN values were calculated for the 0° and ±15° projections used in SBB and for the particular beam quality. Data on 61 patients were collected, showing that a typical SBB procedure comprises 10 images. MGD was on average (4 ± 2) mGy, with (0.38 ± 0.06) mGy per image. Using specific conversion coefficients instead of typical mammography/tomosynthesis DgN values yields MGD values for SBB that are on average around 65% lower. © 2018 Institute of Physics and Engineering in Medicine.
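The dose estimate described above is a per-image product summed over the procedure: entrance air kerma times a projection- and beam-quality-specific DgN coefficient. A minimal sketch with hypothetical per-image values (the actual DgN coefficients come from the study's Monte Carlo simulations):

```python
def procedure_mgd(entrance_kerma_mgy, dgn_coeffs):
    """Total mean glandular dose (mGy) for one procedure: each image's
    entrance air kerma (mGy) times its DgN conversion coefficient, summed."""
    return sum(k * c for k, c in zip(entrance_kerma_mgy, dgn_coeffs))

# Ten images, as in a typical SBB procedure; per-image values are assumed
kerma = [2.0] * 10    # entrance air kerma per image (mGy)
dgn = [0.19] * 10     # assumed DgN for the 0 and +/-15 degree projections
mgd = procedure_mgd(kerma, dgn)
```

With these assumed inputs the total comes to 3.8 mGy, on the order of the ~4 mGy average the study reports.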

  16. A dynamic vulnerability evaluation model to smart grid for the emergency response

    NASA Astrophysics Data System (ADS)

    Yu, Zhen; Wu, Xiaowei; Fang, Diange

    2018-01-01

    Smart grids show significant vulnerability to natural disasters and external damage. Based on the influence characteristics of important facilities subjected to typical natural disasters and external damage, this paper builds a vulnerability evaluation index system for important facilities in the smart grid covering eight typical natural disasters, with three levels of static and dynamic indicators and forty indicators in total. A smart grid vulnerability evaluation method is then proposed on top of the index system, including determining the value range of each index, classifying the evaluation grade standards, and defining the evaluation process and integrated index calculation rules. The proposed model can identify the most vulnerable parts of the smart grid and thus help in adopting targeted emergency response measures, developing emergency plans, and increasing the grid's capacity for disaster prevention and mitigation, guaranteeing its safe and stable operation.

  17. Identification of branched-chain amino acid aminotransferases active towards (R)-(+)-1-phenylethylamine among PLP fold type IV transaminases.

    PubMed

    Bezsudnova, Ekaterina Yu; Dibrova, Daria V; Nikolaeva, Alena Yu; Rakitina, Tatiana V; Popov, Vladimir O

    2018-04-10

    New class IV transaminases with activity towards L-Leu, which is typical of branched-chain amino acid aminotransferases (BCAT), and with activity towards (R)-(+)-1-phenylethylamine ((R)-PEA), which is typical of (R)-selective (R)-amine:pyruvate transaminases, were identified by bioinformatics analysis, obtained in recombinant form, and analyzed. The values of catalytic activities in the reaction with L-Leu and (R)-PEA are comparable to those measured for characteristic transaminases with the corresponding specificity. Earlier, (R)-selective class IV transaminases were found to be active, apart from (R)-PEA, only with some other (R)-primary amines and D-amino acids. Sequences encoding new transaminases with mixed type of activity were found by searching for changes in the conserved motifs of sequences of BCAT by different bioinformatics tools. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Bias Reduction in Short Records of Satellite Soil Moisture

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf H.; Koster, Randal D.

    2004-01-01

    Although surface soil moisture data from different sources (satellite retrievals, ground measurements, and land model integrations of observed meteorological forcing data) have been shown to contain consistent and useful information in their seasonal cycle and anomaly signals, they typically exhibit very different mean values and variability. These biases pose a severe obstacle to exploiting the useful information contained in satellite retrievals through data assimilation. A simple method of bias removal is to match the cumulative distribution functions (cdf) of the satellite and model data. However, accurate cdf estimation typically requires a long record of satellite data. We demonstrate here that by using spatial sampling with a 2 degree moving window we can obtain local statistics based on a one-year satellite record that are a good approximation to those that would be derived from a much longer time series. This result should increase the usefulness of relatively short satellite data records.
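
    The cdf-matching step described in the abstract can be sketched in a few lines: each satellite value is replaced by the model value at the same empirical quantile, which removes both mean and variability bias. A minimal illustration with made-up numbers (not actual soil moisture retrievals):

    ```python
    import bisect

    def cdf_match(sat_values, model_values):
        """Map each satellite value to the model value with the same
        empirical quantile (nearest-rank lookup), removing bias in both
        the mean and the variability of the satellite record."""
        sat_sorted = sorted(sat_values)
        model_sorted = sorted(model_values)
        n = len(sat_sorted)
        rescaled = []
        for v in sat_values:
            # empirical quantile of v within the satellite record
            q = bisect.bisect_left(sat_sorted, v) / (n - 1)
            # model value at the same quantile
            idx = min(round(q * (len(model_sorted) - 1)), len(model_sorted) - 1)
            rescaled.append(model_sorted[idx])
        return rescaled

    # Toy example: satellite record biased high and too variable.
    model = [0.10, 0.12, 0.15, 0.18, 0.20, 0.22, 0.25, 0.28, 0.30, 0.33]
    sat   = [0.30, 0.35, 0.42, 0.50, 0.55, 0.60, 0.68, 0.75, 0.80, 0.90]
    matched = cdf_match(sat, model)
    ```

    After matching, the rescaled series inherits the model's climatology while preserving the rank (anomaly) information of the satellite record.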

  19. Lava flow topographic measurements for radar data interpretation

    NASA Technical Reports Server (NTRS)

    Campbell, Bruce A.; Garvin, James B.

    1993-01-01

    Topographic profiles at 25- and 5-cm horizontal resolution for three sites along a lava flow on Kilauea Volcano are presented, and these data are used to illustrate techniques for surface roughness analysis. Height and slope distributions and the height autocorrelation function are evaluated as a function of varying lowpass filter wavelength for the 25-cm data. Rms slopes are found to increase rapidly with decreasing topographic scale and are typically much higher than those found by modeling of Magellan altimeter data for Venus. A more robust description of the surface roughness appears to be the ratio of rms height to surface height correlation length. For all three sites this parameter falls within the range of values typically found from model fits to Magellan altimeter waveforms. The 5-cm profile data are used to estimate the effect of small-scale roughness on quasi-specular scattering.

  20. Process Research on Polycrystalline Silicon Material (PROPSM)

    NASA Technical Reports Server (NTRS)

    Culik, J. S.

    1983-01-01

    The performance-limiting mechanisms in large-grain (greater than 1-2 mm in diameter) polycrystalline silicon were investigated by measuring the illuminated current-voltage (I-V) characteristics of the minicell wafer set. The average short circuit current on different wafers is 3 to 14 percent lower than that of single crystal Czochralski silicon. The scatter was typically less than 3 percent. The average open circuit voltage is 20 to 60 mV less than that of single crystal silicon. The scatter in the open circuit voltage of most of the polycrystalline silicon wafers was 15 to 20 mV, although two wafers had significantly greater scatter than this value. The fill factor of both polycrystalline and single crystal silicon cells was typically in the range of 60 to 70 percent; however, several polycrystalline silicon wafers have fill factor averages which are somewhat lower and show a significantly larger degree of scatter.

  1. Magnetic and dielectric properties of lunar samples

    NASA Technical Reports Server (NTRS)

    Strangway, D. W.; Pearce, G. W.; Olhoeft, G. R.

    1977-01-01

    Dielectric properties of lunar soil and rock samples showed a systematic character when careful precautions were taken to ensure there was no moisture present during measurement. The dielectric constant (K) above 100,000 Hz was directly dependent on density according to the formula K = (1.93 ± 0.17)^ρ, where ρ is the density in g/cc. The dielectric loss tangent was only slightly dependent on density and had values less than 0.005 for typical soils and 0.005 to 0.03 for typical rocks. The loss tangent appeared to be directly related to the metallic ilmenite content. It was shown that magnetic properties of lunar samples can be used to study the distribution of metallic and ferrous iron, which shows systematic variations from soil type to soil type. Other magnetic characteristics can be used to determine the distribution of grain sizes.
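
    The quoted density dependence is straightforward to evaluate numerically. A small sketch of the fitted relation K = (1.93 ± 0.17)^ρ, using illustrative (not measured) lunar soil densities:

    ```python
    def dielectric_constant(rho, base=1.93):
        """Lunar-sample dielectric constant model K = base**rho,
        with base = 1.93 +/- 0.17 and rho the density in g/cc."""
        return base ** rho

    # Illustrative densities: loose soil ~1.5 g/cc, compacted ~1.8 g/cc
    k_soil = dielectric_constant(1.5)    # ~2.7
    k_dense = dielectric_constant(1.8)   # ~3.3
    ```

    The exponential form gives K = 1 at zero density (vacuum), as a physical mixing law should.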

  2. Timber value—a matter of choice: a study of how end use assumptions affect timber values.

    Treesearch

    John H. Beuter

    1971-01-01

    The relationship between estimated timber values and actual timber prices is discussed. Timber values are related to how, where, and when the timber is used. An analysis demonstrates the relative values of a typical Douglas-fir stand under assumptions about timber use.

  3. Retrieval of Ozone Column Content from Airborne Sun Photometer Measurements During SOLVE II: Comparison with SAGE III, POAM III, TOMS and GOME Measurements

    NASA Technical Reports Server (NTRS)

    Livingston, J.; Schmid, B.; Russell, P.; Eilers, J.; Kolyer, R.; Redemann, J.; Yee, J.-H.; Trepte, C.; Thomason, L.; Pitts, M.

    2003-01-01

    During the Second SAGE III Ozone Loss and Validation Experiment (SOLVE II), the 14-channel NASA Ames Airborne Tracking Sunphotometer (AATS-14) was mounted on the NASA DC-8 and successfully measured spectra of total and aerosol optical depth (TOD and AOD) during the sunlit portions of eight science flights. Values of ozone column content above the aircraft have been derived from the AATS-14 data by using a linear least squares method. For each AATS-14 measured TOD spectrum, this method iteratively finds the ozone column content that yields the best match between measured and calculated TOD. The calculations assume the known Chappuis ozone band shape and a three-parameter AOD shape (quadratic in log-log space). Seven of the AATS-14 channels (each employing an interference filter with a nominal full-width at half maximum bandpass of ~5 nm) are within the Chappuis band, with center wavelengths between 452.9 nm and 864.5 nm. One channel (604.4 nm) is near the peak, and three channels (499.4, 519.4 and 675.1 nm) have ozone absorption within 30-40% of that at the peak. For the typical DC-8 SOLVE II cruising altitudes of approx. 8-12 km and the background stratospheric aerosol conditions that prevailed during SOLVE II, absorption of incoming solar radiation by ozone comprised a significant fraction of the aerosol-plus-ozone optical depth measured in the four AATS-14 channels centered between 499.4 and 675.1 nm. Typical AODs above the DC-8 ranged from 0.003 to 0.008 in these channels. For comparison, an ozone overburden of 0.3 atm-cm (300 DU) translates to ozone optical depths of 0.009, 0.014, 0.041, and 0.012, respectively, at these same wavelengths. In this paper, we compare AATS-14 values of ozone column content with temporally and spatially near-coincident values derived from measurements acquired by the Stratospheric Aerosol and Gas Experiment III (SAGE III) and the Polar Ozone and Aerosol Measurement III (POAM III) satellite sensors.
We also compare AATS-14 ozone retrievals during selected DC-8 latitudinal and longitudinal transects with total column ozone data acquired by the Total Ozone Mapping Spectrometer (TOMS) and the Global Ozone Monitoring Experiment (GOME) satellite sensors. To enable this comparison, the amount of ozone in the column below the aircraft is estimated by combining SAGE and/or POAM data with high resolution, fast response in-situ ozone measurements acquired during the DC-8 ascent at the start of each science flight.

  4. ASSESSMENT OF PUBLIC EXPOSURE FROM WLANS IN THE WEST BANK-PALESTINE.

    PubMed

    Lahham, Adnan; Sharabati, Afifeh; ALMasri, Hussein

    2017-11-01

    A total of 271 measurements were conducted at 69 different sites including homes, hospitals, educational institutions and other public places to assess the exposure to radiofrequency emission from wireless local area networks (WLANs). Measurements were conducted at different distances, from 40 cm to 10 m, from the access points (APs) in real-life conditions using a Narda SRM-3000 selective radiation meter. Three measurement modes were considered at 1 m distance from the AP: transmit mode, idle mode, and emission from the client card (laptop computer). All measurements were conducted indoors in the West Bank environment. Power density levels from WLAN systems were found to vary from 0.001 to ~1.9 μW cm-2 with an average of 0.12 μW cm-2. The maximum value was found in a university environment, while the minimum was found in schools. For one measurement case where the AP was 20 cm away while transmitting large files, the measured power density reached a value of ~4.5 μW cm-2. This value is, however, 221 times below the general public exposure limit recommended by the International Commission on Non-Ionizing Radiation Protection, which was not exceeded in any case. Measurements of power density at 1 m around the laptop showed less exposure than from the AP in both transmit and idle modes as well. The specific absorption rate for the head of the laptop user was estimated to vary from 0.1 to 2 mW/kg. The frequency distribution of measured power densities follows a log-normal distribution, which is generally typical in the assessment of exposure resulting from sources of radiofrequency emissions. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
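
    The "221 times below the limit" figure can be checked arithmetically, assuming the ICNIRP (1998) general-public reference level of 10 W/m² for the 2.4 GHz WLAN band (an assumption for illustration; the abstract does not state which limit value was used):

    ```python
    # ICNIRP (1998) general-public power-density reference level
    # in the 2-300 GHz range: 10 W/m^2 (assumed applicable here).
    LIMIT_W_M2 = 10.0
    limit_uw_cm2 = LIMIT_W_M2 * 1e6 / 1e4   # W/m^2 -> uW/cm^2, i.e. 1000

    measured = 4.5                          # uW/cm^2, close-range worst case
    ratio = limit_uw_cm2 / measured         # ~222; paper reports 221,
                                            # consistent with unrounded data
    ```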

  5. CALiPER Exploratory Study: Accounting for Uncertainty in Lumen Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergman, Rolf; Paget, Maria L.; Richman, Eric E.

    2011-03-31

    With a well-defined and shared understanding of uncertainty in lumen measurements, testing laboratories can better evaluate their processes, contributing to greater consistency and credibility of lighting testing, a key component of the U.S. Department of Energy (DOE) Commercially Available LED Product Evaluation and Reporting (CALiPER) program. Reliable lighting testing is a crucial underlying factor contributing toward the success of many energy-efficient lighting efforts, such as the DOE GATEWAY demonstrations, Lighting Facts Label, ENERGY STAR® energy-efficient lighting programs, and many others. Uncertainty in measurements is inherent to all testing methodologies, including photometric and other lighting-related testing. Uncertainty exists for all equipment, processes, and systems of measurement in individual as well as combined ways. A major issue with testing and the resulting accuracy of the tests is the uncertainty of the complete process. Individual equipment uncertainties are typically identified, but their relative value in practice and their combined value with other equipment and processes in the same test are elusive concepts, particularly for complex types of testing such as photometry. The total combined uncertainty of a measurement result is important for repeatable and comparative measurements for light emitting diode (LED) products in comparison with other technologies as well as competing products. This study provides a detailed and step-by-step method for determining uncertainty in lumen measurements, working closely with related standards efforts and key industry experts. This report uses the structure proposed in the Guide to the Expression of Uncertainty in Measurement (GUM) for evaluating and expressing uncertainty in measurements. 
The steps of the procedure are described, and a spreadsheet format adapted for integrating-sphere and goniophotometric uncertainty measurements is provided for entering parameters, ordering the information, calculating intermediate values and, finally, obtaining expanded uncertainties. Using this basis and examining each step of the photometric measurement and calibration methods, mathematical uncertainty models are developed. Determination of estimated values of input variables is discussed. Guidance is provided for the evaluation of the standard uncertainties of each input estimate, covariances associated with input estimates, and the calculation of the measurement result. With this basis, the combined uncertainty of the measurement results and, finally, the expanded uncertainty can be determined.
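
    The core GUM arithmetic the report builds on, combining standard uncertainties in quadrature and expanding with a coverage factor, can be sketched briefly. A minimal sketch for a multiplicative model with uncorrelated inputs, using illustrative (not CALiPER) uncertainty values:

    ```python
    import math

    def combined_standard_uncertainty(rel_uncertainties):
        """GUM law of propagation for a multiplicative model with
        uncorrelated inputs: relative standard uncertainties add in
        quadrature (covariance terms assumed zero)."""
        return math.sqrt(sum(u * u for u in rel_uncertainties))

    def expanded_uncertainty(u_c, k=2.0):
        """Expanded uncertainty U = k * u_c; k = 2 gives roughly
        95% coverage for a normal distribution."""
        return k * u_c

    # Illustrative relative standard uncertainties (assumed values):
    # sphere calibration 0.8%, detector linearity 0.3%, self-absorption 0.5%
    u_c = combined_standard_uncertainty([0.008, 0.003, 0.005])
    U = expanded_uncertainty(u_c)   # ~2% relative, at k = 2
    ```

    Correlated inputs would add covariance terms to the quadrature sum, which is where the report's spreadsheet treatment of covariances comes in.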

  6. Multiverse understanding of cosmological coincidences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bousso, Raphael; Hall, Lawrence J.; Nomura, Yasunori

    2009-09-15

    There is a deep cosmological mystery: although dependent on very different underlying physics, the time scales of structure formation, of galaxy cooling (both radiatively and against the CMB), and of vacuum domination do not differ by many orders of magnitude, but are all comparable to the present age of the universe. By scanning four landscape parameters simultaneously, we show that this quadruple coincidence is resolved. We assume only that the statistical distribution of parameter values in the multiverse grows towards certain catastrophic boundaries we identify, across which there are drastic regime changes. We find order-of-magnitude predictions for the cosmological constant, the primordial density contrast, the temperature at matter-radiation equality, the typical galaxy mass, and the age of the universe, in terms of the fine structure constant and the electron, proton and Planck masses. Our approach permits a systematic evaluation of measure proposals; with the causal patch measure, we find no runaway of the primordial density contrast and the cosmological constant to large values.

  7. Determination of meteor parameters using laboratory simulation techniques

    NASA Technical Reports Server (NTRS)

    Friichtenicht, J. F.; Becker, D. G.

    1973-01-01

    Atmospheric entry of meteoritic bodies is conveniently and accurately simulated in the laboratory by techniques which employ the charging and electrostatic acceleration of macroscopic solid particles. Velocities from below 10 to above 50 km/s are achieved for particle materials which are elemental meteoroid constituents or mineral compounds with characteristics similar to those of meteoritic stone. The velocity, mass, and kinetic energy of each particle are measured nondestructively, after which the particle enters a target gas region. Because of the small particle size, free molecule flow is obtained. At typical operating pressures (0.1 to 0.5 torr), complete particle ablation occurs over distances of 25 to 50 cm; the spatial extent of the atmospheric interaction phenomena is correspondingly small. Procedures have been developed for measuring the spectrum of light from luminous trails and the values of fundamental quantities defined in meteor theory. It is shown that laboratory values for iron are in excellent agreement with those for 9 to 11 km/s artificial meteors produced by rocket injection of iron bodies into the atmosphere.

  8. The Long-Term Performance of Small-Cell Batteries Without Cell-Balancing Electronics

    NASA Technical Reports Server (NTRS)

    Pearson, C.; Thwaite, C.; Curzon, D.; Rao, G.

    2006-01-01

    Tests performed approximately 8 years ago showed that Sony HC cells do not imbalance. AEA developed a theory (ESPC 2002): (a) self-discharge (SD) decreases with state-of-charge (SOC); (b) cells diverge to a state of dynamic equilibrium; (c) the equilibrium spread depends on cell SD uniformity. The balancing model was verified against test data. Short-term measurement of SD in Sony cells is difficult because the values are very small, and results depend on technique. Long-term evidence supports lower SD at low SOC. Battery testing is the best proof of performance, typically via mission-specific tests.

  9. Precision measurements of solar energetic particle elemental composition

    NASA Technical Reports Server (NTRS)

    Breneman, H.; Stone, E. C.

    1985-01-01

    Using data from the Cosmic Ray Subsystem (CRS) aboard the Voyager 1 and 2 spacecraft, solar energetic particle abundances or upper limits for all elements with 3 ≤ Z ≤ 30 from a combined set of 10 solar flares during the 1977 to 1982 time period were determined. Statistically meaningful abundances have been determined for the first time for several rare elements including P, Cl, K, Ti and Mn, while the precision of the mean abundances for the more abundant elements has been improved by typically a factor of approximately 3 over previously reported values.
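
    The roughly threefold precision improvement is what one would expect from combining ~10 independent flares, since the standard error of the mean scales as 1/√N for uncorrelated measurements (√10 ≈ 3.2). A Monte Carlo sketch of that scaling, with illustrative synthetic data rather than flare measurements:

    ```python
    import random
    import statistics

    random.seed(42)

    def std_error_of_mean(n_events, sigma_single=1.0, n_trials=2000):
        """Monte Carlo estimate of the spread of the mean of
        n_events independent Gaussian measurements."""
        means = [statistics.mean(random.gauss(0.0, sigma_single)
                                 for _ in range(n_events))
                 for _ in range(n_trials)]
        return statistics.pstdev(means)

    single = std_error_of_mean(1)       # ~1.0 (one flare)
    ten = std_error_of_mean(10)         # ~0.32 (ten flares combined)
    improvement = single / ten          # close to sqrt(10) ~= 3.2
    ```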

  10. Drop Calibration of Accelerometers for Shock Measurement

    DTIC Science & Technology

    2011-08-01

    [Garbled extraction fragment: excerpts from the report's uncertainty budget for drop calibration of accelerometers, listing component tolerances such as capacitor tolerance ≤ ±0.01%, drop mass reading ≤ ±0.083% (0.1 g over 120 g, typically), and reference mass reading ≤ ±0.1%, with the random uncertainty components combined in quadrature.]

  11. Multipartite nonlocality and random measurements

    NASA Astrophysics Data System (ADS)

    de Rosier, Anna; Gruca, Jacek; Parisio, Fernando; Vértesi, Tamás; Laskowski, Wiesław

    2017-07-01

    We present an exhaustive numerical analysis of violations of local realism by families of multipartite quantum states. As an indicator of nonclassicality we employ the probability of violation for randomly sampled observables. Surprisingly, it rapidly increases with the number of parties or settings and even for relatively small values local realism is violated for almost all observables. We have observed this effect to be typical in the sense that it emerged for all investigated states including some with randomly drawn coefficients. We also present the probability of violation as a witness of genuine multipartite entanglement.

  12. Impact of the 2009 Attica wild fires on the air quality in urban Athens

    NASA Astrophysics Data System (ADS)

    Amiridis, V.; Zerefos, C.; Kazadzis, S.; Gerasopoulos, E.; Eleftheratos, K.; Vrekoussis, M.; Stohl, A.; Mamouri, R. E.; Kokkalis, P.; Papayannis, A.; Eleftheriadis, K.; Diapouli, E.; Keramitsoglou, I.; Kontoes, C.; Kotroni, V.; Lagouvardos, K.; Marinou, E.; Giannakaki, E.; Kostopoulou, E.; Giannakopoulos, C.; Richter, A.; Burrows, J. P.; Mihalopoulos, N.

    2012-01-01

    At the end of August 2009, wild fires ravaged the north-eastern fringes of Athens, destroying invaluable forest wealth of the Greek capital. In this work, the impact of these fires on the air quality of Athens and on surface radiation levels is examined. Satellite imagery, smoke dispersion modeling and meteorological data confirm the advection of smoke under cloud-free conditions over the city of Athens. Lidar measurements showed that the smoke plume dispersed in the free troposphere and was lofted over the city, reaching heights between 2 and 4 km. Ground-based sunphotometric measurements showed extreme aerosol optical depth, reaching nearly 6 in the UV wavelength range, accompanied by a reduction of up to 70% in solar irradiance at the ground. The intensive aerosol optical properties, namely the Ångström exponent, the lidar ratio, and the single scattering albedo, showed typical values for highly absorbing fresh smoke particles. In-situ air quality measurements revealed the impact of the smoke plume down to the surface, with a slight delay, on both the particulate and gaseous phases. The surface aerosol increase was encountered mainly in the fine mode, with prominent elevation of OC and EC levels. Photochemical processes, studied via NOx titration of O3, were also shown to differ from typical urban photochemistry.

  13. Night Sky Brightness at San Pedro Martir Observatory

    NASA Astrophysics Data System (ADS)

    Plauchu-Frayn, I.; Richer, M. G.; Colorado, E.; Herrera, J.; Córdova, A.; Ceseña, U.; Ávila, F.

    2017-03-01

    We present optical UBVRI zenith night sky brightness measurements collected on 18 nights during 2013 to 2016 and SQM measurements obtained daily over 20 months during 2014 to 2016 at the Observatorio Astronómico Nacional on the Sierra San Pedro Mártir (OAN-SPM) in México. The UBVRI data are based upon CCD images obtained with the 0.84 m and 2.12 m telescopes, while the SQM data are obtained with a high-sensitivity, low-cost photometer. The typical moonless night sky brightness at zenith averaged over the whole period is U = 22.68, B = 23.10, V = 21.84, R = 21.04, I = 19.36, and SQM = 21.88 mag arcsec^-2, once corrected for zodiacal light. We find no seasonal variation of the night sky brightness measured with the SQM. The typical night sky brightness values found at OAN-SPM are similar to those reported for other astronomical dark sites at a similar phase of the solar cycle. We find a trend of decreasing night sky brightness with decreasing solar activity during the period of the observations. This trend implies that the sky has become darker by ΔU = 0.7, ΔB = 0.5, ΔV = 0.3, ΔR = 0.5 mag arcsec^-2 since early 2014 due to the present solar cycle.
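
    Surface-brightness changes quoted in mag arcsec^-2 translate to flux ratios through the Pogson relation, flux ratio = 10^(-0.4·Δm); for example, the reported ΔV = 0.3 mag darkening corresponds to a roughly 24% drop in V-band sky brightness. A small sketch:

    ```python
    def flux_ratio(delta_mag):
        """Pogson relation: flux ratio 10**(-0.4 * delta_mag) for a
        surface-brightness increase of delta_mag magnitudes (fainter sky)."""
        return 10 ** (-0.4 * delta_mag)

    # Darkening reported since early 2014 (mag arcsec^-2): V band = 0.3
    drop_V = 1 - flux_ratio(0.3)   # fraction by which the V sky dimmed, ~24%
    ```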

  14. Refining Field Measurements of Methane Flux Rates from Abandoned Oil and Gas Wells

    NASA Astrophysics Data System (ADS)

    Lagron, C. S.; Kang, M.; Riqueros, N. S.; Jackson, R. B.

    2015-12-01

    Recent studies in Pennsylvania demonstrate the potential for significant methane emissions from abandoned oil and gas wells. A subset of tested wells was high emitting, with methane flux rates up to seven orders of magnitude greater than natural fluxes (up to 10^5 mg CH4/hour, or about 2.5 LPM). These wells contribute disproportionately to the total methane emissions from abandoned oil and gas wells. The principles guiding the chamber design have been developed for lower flux rates, typically found in natural environments, and chamber design modifications may reduce uncertainty in flux rates associated with high-emitting wells. Kang et al. estimate errors of a factor of two in measured values based on previous studies. We conduct controlled releases of methane to refine error estimates and improve chamber design with a focus on high emitters. Controlled releases of methane are conducted at 0.05 LPM, 0.50 LPM, 1.0 LPM, 2.0 LPM, 3.0 LPM, and 5.0 LPM, and at two chamber dimensions typically used in field measurement studies of abandoned wells. As most sources of error tabulated by Kang et al. tend to bias the results toward underreporting of methane emissions, a flux-targeted chamber design modification can reduce error margins and/or provide grounds for a potential upward revision of emission estimates.
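
    The equivalence between 10^5 mg CH4/hour and about 2.5 LPM follows from the ideal-gas molar volume. A sketch assuming 24.45 L/mol (25 °C, 1 atm; an assumption, since the abstract does not state its reference conditions):

    ```python
    MOLAR_MASS_CH4 = 16.04    # g/mol
    MOLAR_VOL_25C = 24.45     # L/mol, ideal gas at 25 C and 1 atm (assumed)

    def mg_per_hour_to_lpm(mg_h):
        """Convert a methane mass flux (mg/h) to a volumetric flow (L/min)."""
        mol_per_min = (mg_h / 1000.0) / MOLAR_MASS_CH4 / 60.0
        return mol_per_min * MOLAR_VOL_25C

    lpm = mg_per_hour_to_lpm(1e5)   # ~2.5 LPM, matching the abstract
    ```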

  15. Application of time-division-multiplexed lasers for measurements of gas temperature and CH4 and H2O concentrations at 30 kHz in a high-pressure combustor.

    PubMed

    Caswell, Andrew W; Kraetschmer, Thilo; Rein, Keith; Sanders, Scott T; Roy, Sukesh; Shouse, Dale T; Gord, James R

    2010-09-10

    Two time-division-multiplexed (TDM) sources based on fiber Bragg gratings were applied to monitor gas temperature, H(2)O mole fraction, and CH(4) mole fraction using line-of-sight absorption spectroscopy in a practical high-pressure gas turbine combustor test article. Collectively, the two sources cycle through 14 wavelengths in the 1329-1667 nm range every 33 μs. Although it is based on absorption spectroscopy, this sensing technology is fundamentally different from typical diode-laser-based absorption sensors and has many advantages. Specifically, the TDM lasers allow efficient, flexible acquisition of discrete-wavelength information over a wide spectral range at very high speeds (typically 30 kHz) and thereby provide a multiplicity of precise data at high speeds. For the present gas turbine application, the TDM source wavelengths were chosen using simulated temperature-difference spectra. This approach is used to select TDM wavelengths that are near the optimum values for precise temperature and species-concentration measurements. The application of TDM lasers for other measurements in high-pressure, turbulent reacting flows and for two-dimensional tomographic reconstruction of the temperature and species-concentration fields is also forecast.

  16. Improved explanation of human intelligence using cortical features with second order moments and regression.

    PubMed

    Park, Hyunjin; Yang, Jin-ju; Seo, Jongbum; Choi, Yu-yong; Lee, Kun-ho; Lee, Jong-min

    2014-04-01

    Cortical features derived from magnetic resonance imaging (MRI) provide important information to account for human intelligence. Cortical thickness, surface area, sulcal depth, and mean curvature were considered to explain human intelligence. One region of interest (ROI) of a cortical structure consisting of thousands of vertices contained thousands of measurements, and typically one mean value (first-order moment) was used to represent a chosen ROI, which led to a potentially significant loss of information. We proposed a technological improvement to account for human intelligence in which a second moment (variance) in addition to the mean value was adopted to represent a chosen ROI, so that the loss of information would be less severe. The two computed moments for the chosen ROIs were analyzed with partial least squares regression (PLSR). Cortical features for 78 adults were measured and analyzed in conjunction with the full-scale intelligence quotient (FSIQ). Our results showed that 45% of the variance of the FSIQ could be explained using the combination of four cortical features with two moments per chosen ROI. This is an improvement over using a mean value for each ROI, which explained 37% of the variance of the FSIQ using the same set of cortical measurements. Our results suggest that using additional second-order moments is potentially better than using mean values of chosen ROIs for regression analysis to account for human intelligence. Copyright © 2014 Elsevier Ltd. All rights reserved.
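
    The two-moment representation amounts to replacing each ROI's thousands of vertex measurements with a (mean, variance) pair before regression. A minimal sketch with illustrative vertex values (not the study's data), showing why the second moment adds information:

    ```python
    import statistics

    def roi_features(vertex_values, use_variance=True):
        """Represent one ROI by its first moment (mean) and, optionally,
        its second central moment (variance), as in the two-moment model."""
        feats = [statistics.fmean(vertex_values)]
        if use_variance:
            feats.append(statistics.pvariance(vertex_values))
        return feats

    # Two ROIs with the same mean thickness but different spatial spread:
    roi_a = [2.0, 2.1, 1.9, 2.0]
    roi_b = [1.0, 3.0, 2.5, 1.5]
    fa = roi_features(roi_a)
    fb = roi_features(roi_b)
    # The mean alone cannot distinguish the two ROIs; the variance can,
    # which is the extra information PLSR then exploits.
    ```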

  17. An Eye-Tracking Study of Multiple Feature Value Category Structure Learning: The Role of Unique Features

    PubMed Central

    Liu, Zhiya; Song, Xiaohong; Seger, Carol A.

    2015-01-01

    We examined whether the degree to which a feature is uniquely characteristic of a category can affect categorization above and beyond the typicality of the feature. We developed a multiple feature value category structure with different dimensions within which feature uniqueness and typicality could be manipulated independently. Using eye tracking, we found that the highest attentional weighting (operationalized as number of fixations, mean fixation time, and the first fixation of the trial) was given to a dimension that included a feature that was both unique and highly typical of the category. Dimensions that included features that were highly typical but not unique, or were unique but not highly typical, received less attention. A dimension with neither a unique nor a highly typical feature received least attention. On the basis of these results we hypothesized that subjects categorized via a rule learning procedure in which they performed an ordered evaluation of dimensions, beginning with unique and strongly typical dimensions, and in which earlier dimensions received higher weighting in the decision. This hypothesis accounted for performance on transfer stimuli better than simple implementations of two other common theories of category learning, exemplar models and prototype models, in which all dimensions were evaluated in parallel and received equal weighting. PMID:26274332

  18. An Eye-Tracking Study of Multiple Feature Value Category Structure Learning: The Role of Unique Features.

    PubMed

    Liu, Zhiya; Song, Xiaohong; Seger, Carol A

    2015-01-01

    We examined whether the degree to which a feature is uniquely characteristic of a category can affect categorization above and beyond the typicality of the feature. We developed a multiple feature value category structure with different dimensions within which feature uniqueness and typicality could be manipulated independently. Using eye tracking, we found that the highest attentional weighting (operationalized as number of fixations, mean fixation time, and the first fixation of the trial) was given to a dimension that included a feature that was both unique and highly typical of the category. Dimensions that included features that were highly typical but not unique, or were unique but not highly typical, received less attention. A dimension with neither a unique nor a highly typical feature received least attention. On the basis of these results we hypothesized that subjects categorized via a rule learning procedure in which they performed an ordered evaluation of dimensions, beginning with unique and strongly typical dimensions, and in which earlier dimensions received higher weighting in the decision. This hypothesis accounted for performance on transfer stimuli better than simple implementations of two other common theories of category learning, exemplar models and prototype models, in which all dimensions were evaluated in parallel and received equal weighting.

  19. Temperature control during regeneration of activated carbon fiber cloth with resistance-feedback.

    PubMed

    Johnsen, David L; Rood, Mark J

    2012-10-16

    Electrothermal swing adsorption (ESA) of organic compounds from gas streams with activated carbon fiber cloth (ACFC) reduces emissions to the atmosphere and recovers feedstock for reuse. Local temperature measurement (e.g., with a thermocouple) is typically used to monitor/control adsorbent regeneration cycles. Remote electrical resistance measurement is evaluated here as an alternative to local temperature measurement. ACFC resistance that was modeled based on its physical properties was within 10.5% of the measured resistance values during electrothermal heating. Resistance control was developed based on this measured relationship and used to control temperature to within 2.3% of regeneration set-point temperatures. Isobutane-laden adsorbent was then heated with resistance control. After 2 min of heating, the temperature of the adsorbent with isobutane was 13% less than the adsorbent without isobutane. This difference decreased to 2.1% after 9 min of heating, showing desorption of isobutane. An ACFC cartridge was also heated to 175 °C for 900 cycles with its resistance and adsorption capacity values remaining within 3% and 2%, respectively. This new method to control regeneration power application based on rapid sensing of the adsorbent's resistance removes the need for direct-contact temperature sensors, providing a simple, cost-efficient, and long-term regeneration technique for ESA systems.
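
    Resistance-feedback control of this kind rests on inverting a resistance-temperature calibration for the ACFC: pick the resistance that corresponds to the set-point temperature, then regulate power to hold that resistance. A toy sketch assuming a linear model R = R0·(1 + α(T − T0)) with illustrative constants (the paper's actual model is derived from measured ACFC properties):

    ```python
    R0, ALPHA, T0 = 10.0, -4.0e-4, 20.0   # illustrative values, not measured data

    def resistance_setpoint(t_target):
        """Resistance corresponding to a regeneration set-point temperature,
        under the assumed linear R(T) model (alpha < 0: resistance falls
        as the cloth heats)."""
        return R0 * (1.0 + ALPHA * (t_target - T0))

    def temperature_from_resistance(r):
        """Invert the same model: infer cloth temperature from a remote
        resistance measurement, no contact thermocouple needed."""
        return T0 + (r / R0 - 1.0) / ALPHA

    r_set = resistance_setpoint(175.0)            # regulate power to hold this R
    t_back = temperature_from_resistance(r_set)   # recovers the 175 C set point
    ```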

  20. Effect of body position on vocal tract acoustics: Acoustic pharyngometry and vowel formants.

    PubMed

    Vorperian, Houri K; Kurtzweil, Sara L; Fourakis, Marios; Kent, Ray D; Tillman, Katelyn K; Austin, Diane

    2015-08-01

    The anatomic basis and articulatory features of speech production are often studied with imaging studies that are typically acquired in the supine body position. It is important to determine if changes in body orientation to the gravitational field alter vocal tract dimensions and speech acoustics. The purpose of this study was to assess the effect of body position (upright versus supine) on (1) oral and pharyngeal measurements derived from acoustic pharyngometry and (2) acoustic measurements of fundamental frequency (F0) and the first four formant frequencies (F1-F4) for the quadrilateral point vowels. Data were obtained for 27 male and female participants, aged 17 to 35 yrs. Acoustic pharyngometry showed a statistically significant effect of body position on volumetric measurements, with smaller values in the supine than upright position, but no changes in length measurements. Acoustic analyses of vowels showed significantly larger values in the supine than upright position for the variables of F0, F3, and the Euclidean distance from the centroid to each corner vowel in the F1-F2-F3 space. Changes in body position affected measurements of vocal tract volume but not length. Body position also affected the aforementioned acoustic variables, but the main vowel formants were preserved.

  1. SU-E-T-272: Direct Verification of a Treatment Planning System Megavoltage Linac Beam Photon Spectra Models, and Analysis of the Effects On Patient Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leheta, D; Shvydka, D; Parsai, E

    2015-06-15

Purpose: For photon dose calculation the Philips Pinnacle Treatment Planning System (TPS) uses a collapsed cone convolution algorithm, which relies on the energy spectrum of the beam in computing the scatter component. The spectrum is modeled based on the Linac’s standard commissioning data and typically is not independently verified. We explored a methodology of using transmission measurements in combination with regularization data processing to unfold Linac spectra. The measured spectra were compared to those modeled by the TPS, and the effect on patient plans was evaluated. Methods: Transmission measurements were conducted in narrow-beam geometry using a standard Farmer ionization chamber. Two attenuating materials and two build-up caps, having different atomic numbers, served to enhance discrimination between absorption of the low- and high-energy portions of the spectra, thus improving the accuracy of the results. The data were analyzed using a regularization technique implemented through spreadsheet-based calculations. Results: The unfolded spectra were found to deviate from the TPS beam models. The effect of such deviations on treatment planning was evaluated for patient plans through dose distribution calculations with either TPS-modeled or measured energy spectra. The differences were reviewed through comparison of isodose distributions, and quantified based on maximum dose values for critical structures. While in most cases no drastic differences in the calculated doses were observed, plans with deviations of 4 to 8% in the maximum dose values for critical structures were discovered. The anatomical sites with large scatter contributions are the most vulnerable to inaccuracies in the modeled spectrum. Conclusion: An independent check of the TPS model spectrum is highly desirable and should be included as part of commissioning of a new Linac. The effect is particularly important for dose calculations in high-heterogeneity regions. The developed approach makes acquisition of megavoltage Linac beam spectra achievable in a typical radiation oncology clinic.

  2. Adjusting measured peak discharges from an urbanizing watershed to reflect a stationary land use signal

    NASA Astrophysics Data System (ADS)

    Beighley, R. Edward; Moglen, Glenn E.

    2003-04-01

    A procedure to adjust gauged streamflow data from watersheds urbanized during or after their gauging period is presented. The procedure adjusts streamflow to be representative of a fixed land use condition, which may reflect current or future development conditions. Our intent is to determine what an event resulting in a peak discharge in, for example, 1950 (i.e., before urbanization) would produce on the current urban watershed. While past approaches assumed uniform spatial and temporal changes in urbanization, this study focuses on the use of geographic information systems (GIS) based methodologies for precisely locating in space and time where land use change has occurred. This information is incorporated into a hydrologic model to simulate the change in discharge as a result of changing land use conditions. In this paper, we use historical aerial photographs, GIS linked tax-map data, and recent land use/land cover data to recreate the spatial development history of eight gauged watersheds in the Baltimore-Washington, D. C., metropolitan area. Using our procedure to determine discharge series representative of the current urban watersheds, we found that the increase of the adjusted 2-year discharge ranged from 16 to 70 percent compared with the measured annual maximum discharge series. For the 100-year discharge the adjusted values ranged from 0 to 47 percent greater than the measured values. Additionally, relationships between the increase in flood flows and four measures of urbanization (increase in urban land, decrease in forested land, increase in high-density development, and the spatial development pattern) are investigated for predicting the increase in flood flows for ungauged watersheds. Watersheds with the largest increases in flood flows typically had more extensive development in the areas far removed from the outlet. In contrast, watersheds with development located nearer to the outlet typically had the smallest increases in peak discharge.

  3. Calibration system for radon EEC measurements.

    PubMed

    Mostafa, Y A M; Vasyanovich, M; Zhukovsky, M; Zaitceva, N

    2015-06-01

The measurement of radon equivalent equilibrium concentration (EECRn) is a very simple and quick technique for estimating the radon progeny level in dwellings or working places. The most typical methods of EECRn measurement are alpha radiometry or alpha spectrometry. In such techniques, the influence of alpha particle absorption in filters and of filter effectiveness should be taken into account. In the authors' work, it is demonstrated that a more precise and less complicated calibration of EECRn-measuring equipment can be conducted by using a gamma spectrometer as a reference measuring device. It was demonstrated that for this calibration technique the systematic error does not exceed 3 %. The random error of (214)Bi activity measurements is in the range 3-6 %. In general, both these errors can be decreased. The measurements of EECRn by gamma spectrometry and improved alpha radiometry are in good agreement, but a systematic shift between average values can be observed.

  4. Physiological and Performance Measures for Baseline Concussion Assessment.

    PubMed

    Dobney, Danielle M; Thomas, Scott G; Taha, Tim; Keightley, Michelle

    2017-05-17

Baseline testing is a common strategy for concussion assessment and management. Research continues to evaluate novel measures for their potential to improve baseline testing methods. The primary objectives were to: 1) determine the feasibility of including physiological, neuromuscular and mood measures as part of a baseline concussion testing protocol, 2) describe typical values in a varsity athlete sample, and 3) estimate the influence of concussion history on these baseline measures. Prospective observational study. University Athletic Therapy Clinic. 100 varsity athletes. Frequency and domain measures of heart rate variability (HRV), blood pressure (BP), grip strength, the Profile of Mood States and the Sport Concussion Assessment Tool-2. Physiological, neuromuscular performance and mood measures were feasible at baseline. Participants with a history of two or more previous concussions displayed significantly higher diastolic blood pressure. Females reported higher total mood disturbance compared to males. Physiological and neuromuscular performance measures are safe and feasible as baseline concussion assessment outcomes. History of concussion may have an influence on diastolic blood pressure.

  5. Effects of delay and probability combinations on discounting in humans

    PubMed Central

    Cox, David J.; Dallery, Jesse

    2017-01-01

    To determine discount rates, researchers typically adjust the amount of an immediate or certain option relative to a delayed or uncertain option. Because this adjusting amount method can be relatively time consuming, researchers have developed more efficient procedures. One such procedure is a 5-trial adjusting delay procedure, which measures the delay at which an amount of money loses half of its value (e.g., $1000 is valued at $500 with a 10-year delay to its receipt). Experiment 1 (n = 212) used 5-trial adjusting delay or probability tasks to measure delay discounting of losses, probabilistic gains, and probabilistic losses. Experiment 2 (n = 98) assessed combined probabilistic and delayed alternatives. In both experiments, we compared results from 5-trial adjusting delay or probability tasks to traditional adjusting amount procedures. Results suggest both procedures produced similar rates of probability and delay discounting in six out of seven comparisons. A magnitude effect consistent with previous research was observed for probabilistic gains and losses, but not for delayed losses. Results also suggest that delay and probability interact to determine the value of money. Five-trial methods may allow researchers to assess discounting more efficiently as well as study more complex choice scenarios. PMID:27498073
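
The 5-trial adjusting delay procedure is compact enough to sketch directly: five binary-search steps over delay, run here against a simulated hyperbolic discounter (V = A/(1 + kD)). The starting delay and halving schedule below are illustrative assumptions, not the exact trial values used in these experiments:

```python
# Sketch of a 5-trial adjusting-delay task with a simulated hyperbolic chooser.
# Starting delay, step schedule, and k are illustrative assumptions.

def hyperbolic_value(amount, delay_days, k):
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay_days)

def five_trial_ed50(k, amount=1000.0, start_delay=365.0):
    """Binary-search the delay at which `amount` loses half its value (ED50)."""
    delay, step = start_delay, start_delay / 2.0
    for _ in range(5):
        # If the delayed amount is still worth more than half, push the delay out.
        prefers_delayed = hyperbolic_value(amount, delay, k) > amount / 2.0
        delay = delay + step if prefers_delayed else delay - step
        step /= 2.0
    return delay

# For a hyperbolic discounter the true ED50 is 1/k; five trials get close.
estimated_ed50 = five_trial_ed50(k=0.01)  # true ED50 = 100 days
```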

  6. Laboratory simulation of the effects of overburden stress on the specific storage of shallow artesian aquifers

    USGS Publications Warehouse

    Sepúlveda, Nicasio; Zack, A.L.; Krishna, J.H.; Quinones-Aponte, Vicente; Gomez-Gomez, Fernando; Morris, G.L.

    1990-01-01

    A laboratory experiment to measure the specific storage of an aquifer material was conducted. A known dead load, simulating an overburden load, was applied to a sample of completely saturated aquifer material contained inside a cylinder. After the dead load was applied, water was withdrawn from the sample, causing the hydrostatic pressure to decrease and the effective stress to increase. The resulting compression of the sample and the amount of water withdrawn were measured after equilibrium was reached. The procedure was repeated by increasing the dead load and the hydrostatic pressure followed by withdrawing water to determine new values of effective stress and compaction. The simulated dead loads are typical of those experienced by shallow artesian aquifers. The void ratio and the effective stress of the aquifer sample, as simulated by different dead loads, determine the pore volume compressibility which, in turn, determines the values of specific storage. An analytical algorithm was used to independently determine the stress dependent profile of specific storage. These values are found to be in close agreement with laboratory results. Implications for shallow artesian aquifers, with relatively small overburden stress, are also addressed.
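
The quantity such an experiment ultimately yields is specific storage, conventionally written Ss = ρw·g·(α + n·β), where α is the pore-volume (matrix) compressibility inferred from the void ratio versus effective stress, n is porosity, and β is the compressibility of water. A worked example with generic textbook values, not this paper's measurements:

```python
# Back-of-the-envelope specific storage; alpha and porosity are generic
# textbook assumptions, not values from this experiment.

RHO_W = 1000.0    # water density, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2
BETA_W = 4.6e-10  # compressibility of water, 1/Pa

def specific_storage(alpha, porosity, beta_w=BETA_W):
    """Ss = rho_w * g * (alpha + n * beta_w), in 1/m."""
    return RHO_W * G * (alpha + porosity * beta_w)

ss = specific_storage(alpha=1.0e-8, porosity=0.3)  # on the order of 1e-4 per metre
```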

  7. Iterative direct inversion: An exact complementary solution for inverting fault-slip data to obtain palaeostresses

    NASA Astrophysics Data System (ADS)

    Mostafa, Mostafa E.

    2005-10-01

The present study shows that reconstructing the reduced stress tensor (RST) from the measurable fault-slip data (FSD) and the immeasurable shear stress magnitudes (SSM) is a typical iteration problem. The result of direct inversion of FSD presented by Angelier [1990. Geophysical Journal International 103, 363-376] is considered as a starting point (zero-step iteration) where all SSM are assigned a constant value (λ = √3/2). By iteration, the SSM and RST update each other until they converge to fixed values. Angelier [1990. Geophysical Journal International 103, 363-376] designed the function upsilon (υ) and the two estimators, relative upsilon (RUP) and ANG, to express the divergence between the measured and calculated shear stresses. Plotting individual faults' RUP at successive iteration steps shows that they tend to zero (simulated data) or to fixed values (real data) at a rate depending on the orientation and homogeneity of the data. FSD of related origin tend to aggregate in clusters. Plots of the estimator ANG versus RUP show that by iteration, labeled data points are disposed in clusters about a straight line. These two new plots form the basis of a technique for separating FSD into homogeneous clusters.

  8. Human exposures to monomers resulting from consumer contact with polymers.

    PubMed

    Leber, A P

    2001-06-01

Many consumer products are composed completely, or in part, of polymeric materials. Direct or indirect human contact results in potential exposures to monomers as a result of migration of trace amounts from the polymeric matrix into foods, the skin or other bodily surfaces. Typically, residual monomer levels in these polymers are <100 p.p.m., and represent exposures well below those observable in traditional toxicity testing. These product applications thus require alternative methods for evaluating health risks relating to monomer exposures. A typical approach includes: (a) assessment of potential human contacts for specific polymer uses; (b) utilization of data from toxicity testing of pure monomers, e.g. cancer bioassay results; and (c) mathematical risk assessment methods. Exposure potentials are measured by one of two analytical procedures: (1) migration of monomer from polymer into a simulant solvent (e.g. alcohol, acidic water, vegetable oil) appropriate for the intended use of the product (e.g. beer cans, food jars, packaging adhesive, dairy hose); or (2) total monomer content of the polymer, providing worst-case values for migratable monomer. Application of toxicity data typically involves NOEL or benchmark values for non-cancer endpoints, or tumorigenicity potencies for monomers demonstrated to be carcinogens. Risk assessments provide exposure 'safety margin' ratios between levels that: (1) are projected to be safe according to toxicity information, and (2) are potential monomer exposures posed by the intended use of the consumer product. This paper includes an example of a health risk assessment for a chewing gum polymer for which exposures to trace levels of butadiene monomer occur.
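
The 'safety margin' ratio described above is arithmetically simple: the level projected to be safe from toxicity data divided by the potential exposure posed by the product's intended use. A sketch with purely hypothetical numbers:

```python
# Hypothetical safety-margin calculation; both inputs are invented for
# illustration and do not describe any real monomer or product.

def safety_margin(safe_level_mg_kg_day, exposure_mg_kg_day):
    """Ratio of a toxicologically safe level to the estimated exposure."""
    return safe_level_mg_kg_day / exposure_mg_kg_day

margin = safety_margin(safe_level_mg_kg_day=10.0, exposure_mg_kg_day=1.0e-4)
```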

  9. Dynamical Typicality Approach to Eigenstate Thermalization

    NASA Astrophysics Data System (ADS)

    Reimann, Peter

    2018-06-01

We consider the set of all initial states within a microcanonical energy shell of an isolated many-body quantum system, which exhibit an arbitrary but fixed nonequilibrium expectation value for some given observable A. On the condition that this set is not too small, it is shown by means of a dynamical typicality approach that most such initial states exhibit thermalization if and only if A satisfies the so-called weak eigenstate thermalization hypothesis (wETH). Here, thermalization means that the expectation value of A spends most of its time close to the microcanonical value after initial transients have died out. The wETH means that, within the energy shell, most eigenstates of the pertinent system Hamiltonian exhibit very similar expectation values of A.

  10. Mapping apparent stress and energy radiation over fault zones of major earthquakes

    USGS Publications Warehouse

    McGarr, A.; Fletcher, Joe B.

    2002-01-01

    Using published slip models for five major earthquakes, 1979 Imperial Valley, 1989 Loma Prieta, 1992 Landers, 1994 Northridge, and 1995 Kobe, we produce maps of apparent stress and radiated seismic energy over their fault surfaces. The slip models, obtained by inverting seismic and geodetic data, entail the division of the fault surfaces into many subfaults for which the time histories of seismic slip are determined. To estimate the seismic energy radiated by each subfault, we measure the near-fault seismic-energy flux from the time-dependent slip there and then multiply by a function of rupture velocity to obtain the corresponding energy that propagates into the far-field. This function, the ratio of far-field to near-fault energy, is typically less than 1/3, inasmuch as most of the near-fault energy remains near the fault and is associated with permanent earthquake deformation. Adding the energy contributions from all of the subfaults yields an estimate of the total seismic energy, which can be compared with independent energy estimates based on seismic-energy flux measured in the far-field, often at teleseismic distances. Estimates of seismic energy based on slip models are robust, in that different models, for a given earthquake, yield energy estimates that are in close agreement. Moreover, the slip-model estimates of energy are generally in good accord with independent estimates by others, based on regional or teleseismic data. Apparent stress is estimated for each subfault by dividing the corresponding seismic moment into the radiated energy. Distributions of apparent stress over an earthquake fault zone show considerable heterogeneity, with peak values that are typically about double the whole-earthquake values (based on the ratio of seismic energy to seismic moment). 
The range of apparent stresses estimated for subfaults of the events studied here is similar to the range of apparent stresses for earthquakes in continental settings, with peak values of about 8 MPa in each case. For earthquakes in compressional tectonic settings, peak apparent stresses at a given depth are substantially greater than corresponding peak values from events in extensional settings; this suggests that crustal strength, inferred from laboratory measurements, may be a limiting factor. Lower bounds on shear stresses inferred from the apparent stress distribution of the 1995 Kobe earthquake are consistent with tectonic-stress estimates reported by Spudich et al. (1998), based partly on slip-vector rake changes.
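
The per-subfault step described above (dividing the seismic moment into the radiated energy) is the standard definition of apparent stress, σa = μ·ER/M0. A worked example using a typical crustal shear modulus and hypothetical subfault values:

```python
MU = 3.0e10  # typical crustal shear modulus, Pa (assumed value)

def apparent_stress(radiated_energy_j, seismic_moment_nm):
    """sigma_a = mu * E_R / M_0, returned in Pa."""
    return MU * radiated_energy_j / seismic_moment_nm

# Hypothetical subfault: 1e13 J radiated from a moment of 1e17 N*m -> 3 MPa.
sigma_a = apparent_stress(1.0e13, 1.0e17)
```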

  11. The Impact of Different Absolute Solar Irradiance Values on Current Climate Model Simulations

    NASA Technical Reports Server (NTRS)

    Rind, David H.; Lean, Judith L.; Jonas, Jeffrey

    2014-01-01

Simulations of the preindustrial and doubled CO2 climates are made with the GISS Global Climate Middle Atmosphere Model 3 using two different estimates of the absolute solar irradiance value: a higher value measured by solar radiometers in the 1990s and a lower value measured recently by the Solar Radiation and Climate Experiment. Each of the model simulations is adjusted to achieve global energy balance; without this adjustment the difference in irradiance produces a global temperature change of 0.48°C, comparable to the cooling estimated for the Maunder Minimum. The results indicate that by altering cloud cover the model properly compensates for the different absolute solar irradiance values on a global level when simulating both preindustrial and doubled CO2 climates. On a regional level, the preindustrial climate simulations and the patterns of change with doubled CO2 concentrations are again remarkably similar, but there are some differences. Using a higher absolute solar irradiance value and the requisite cloud cover affects the model's depictions of high-latitude surface air temperature, sea level pressure, and stratospheric ozone, as well as tropical precipitation. In the climate change experiments it leads to an underestimation of North Atlantic warming, reduced precipitation in the tropical western Pacific, and smaller total ozone growth at high northern latitudes. Although significant, these differences are typically modest compared with the magnitude of the regional changes expected for doubled greenhouse gas concentrations. Nevertheless, the model simulations demonstrate that achieving the highest possible fidelity when simulating regional climate change requires that climate models use as input the most accurate (lower) solar irradiance value.

  12. Measuring self-esteem in context: the importance of stability of self-esteem in psychological functioning.

    PubMed

    Kernis, Michael H

    2005-12-01

    In this article, I report on a research program that has focused on the joint roles of stability and level of self-esteem in various aspects of psychological functioning. Stability of self-esteem refers to the magnitude of short-term fluctuations that people experience in their current, contextually based feelings of self-worth. In contrast, level of self-esteem refers to representations of people's general, or typical, feelings of self-worth. A considerable amount of research reveals that self-esteem stability has predictive value beyond the predictive value of self-esteem level. Moreover, considering self-esteem stability provides one way to distinguish fragile from secure forms of high self-esteem. Results from a number of studies are presented and theoretical implications are discussed.

  13. Systems identification using a modified Newton-Raphson method: A FORTRAN program

    NASA Technical Reports Server (NTRS)

    Taylor, L. W., Jr.; Iliff, K. W.

    1972-01-01

    A FORTRAN program is offered which computes a maximum likelihood estimate of the parameters of any linear, constant coefficient, state space model. For the case considered, the maximum likelihood estimate can be identical to that which minimizes simultaneously the weighted mean square difference between the computed and measured response of a system and the weighted square of the difference between the estimated and a priori parameter values. A modified Newton-Raphson or quasilinearization method is used to perform the minimization which typically requires several iterations. A starting technique is used which insures convergence for any initial values of the unknown parameters. The program and its operation are described in sufficient detail to enable the user to apply the program to his particular problem with a minimum of difficulty.
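
The minimization can be illustrated in miniature with a Gauss-Newton (modified Newton-Raphson) iteration on the squared difference between computed and measured responses. This sketch fits a single decay-rate parameter rather than a full state-space model; it illustrates the idea, not the FORTRAN program itself:

```python
# One-parameter Gauss-Newton sketch: fit the decay rate of y(t) = exp(-a*t)
# to "measured" data. Illustrative only; the actual program handles general
# linear state-space models with weighting and a priori parameter values.
import math

def response(a, times):
    return [math.exp(-a * t) for t in times]

def fit_decay(times, measured, a0=0.1, iters=20):
    a = a0
    for _ in range(iters):
        y = response(a, times)
        s = [-t * yi for t, yi in zip(times, y)]  # sensitivities dy/da
        num = sum(si * (mi - yi) for si, yi, mi in zip(s, y, measured))
        den = sum(si * si for si in s)
        a += num / den  # Gauss-Newton update step
    return a

times = [0.5 * i for i in range(1, 9)]
measured = response(0.7, times)      # synthetic, noise-free measurements
a_hat = fit_decay(times, measured)   # recovers a = 0.7
```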

  14. Compendium of Experimental Cetane Numbers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yanowitz, Janet; Ratcliff, Matthew A.; McCormick, Robert L.

This report is an updated version of the 2014 Compendium of Experimental Cetane Number Data and presents a compilation of measured cetane numbers for pure chemical compounds. It includes all available single-compound cetane number data found in the scientific literature up until December 2016 as well as a number of previously unpublished values, most measured over the past decade at the National Renewable Energy Laboratory. This version of the compendium contains cetane values for 496 pure compounds, including 204 hydrocarbons and 292 oxygenates. 176 individual measurements are new to this version of the compendium, all of them collected using ASTM Method D6890, which utilizes an Ignition Quality Tester (IQT), a type of constant-volume combustion chamber. For many compounds, numerous measurements are included, often collected by different researchers using different methods. The text of this document is unchanged from the 2014 version, except for the numbers of compounds in Section 3.1, the Appendices, Table 1. Primary Cetane Number Data Sources and Table 2. Number of Measurements Included in Compendium. Cetane number is a relative ranking of a fuel's autoignition characteristics for use in compression ignition engines. It is based on the amount of time between fuel injection and ignition, also known as ignition delay. The cetane number is typically measured either in a single-cylinder engine or a constant-volume combustion chamber. Values in the previous compendium derived from octane numbers have been removed and replaced with a brief analysis of the correlation between cetane numbers and octane numbers. The discussion on the accuracy and precision of the most commonly used methods for measuring cetane number has been expanded, and the data have been annotated extensively to provide additional information that will help the reader judge the relative reliability of individual results.

  15. Performance of an electrochemical carbon monoxide monitor in the presence of anesthetic gases.

    PubMed

    Dunning, M; Woehlck, H J

    1997-11-01

The passage of volatile anesthetic agents through accidentally dried CO2 absorbents in anesthesia circuits can result in the chemical breakdown of anesthetics with production of greater than 10000 ppm carbon monoxide (CO). This study was designed to evaluate a portable CO monitor in the presence of volatile anesthetic agents. Two portable CO monitors employing electrochemical sensors were tested to determine the effects of anesthetic agents, gas sample flow rates, and high CO concentrations on their electrochemical sensors. The portable CO monitors were exposed to gas mixtures of 0 to 500 ppm CO in either 70% nitrous oxide, 1 MAC concentrations of contemporary volatile anesthetics, or reacted isoflurane or desflurane (containing CO and CHF3) in oxygen. The CO measurements from the electrochemical sensors were compared to simultaneously obtained samples measured by gas chromatography (GC). Data were analyzed by linear regression. Overall correlation between the portable CO monitors and the GC resulted in an r2 value >0.98 for all anesthetic agents. Sequestered samples produced an exponential decay of measured CO with time, whereas stable measurements were maintained during continuous flow across the sensor. Increasing flow rates resulted in higher CO readings. Exposing the CO sensor to 3000 and 19000 ppm CO resulted in maximum reported concentrations of approximately 1250 ppm, with a prolonged recovery. The decrease in measured concentration of the sequestered samples suggests destruction of the sample by the sensor, whereas the dependence of the measured value on flow suggests a diffusion limitation. Any value over 500 ppm must be assumed to represent dangerous concentrations of CO because of the non-linear response of these monitors at very high CO concentrations. These portable electrochemical CO monitors are adequate to measure CO concentrations up to 500 ppm in the presence of typical clinical concentrations of anesthetics.
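
The comparison step above (sensor readings regressed against GC references, summarized by an r² value) can be sketched with ordinary least squares. The readings below are synthetic stand-ins, not data from this study:

```python
# Ordinary least-squares regression and r^2; the CO readings are invented.

def linregress(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot  # r^2 from residuals

gc_ppm = [0.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0]     # GC reference values
sensor_ppm = [2.0, 53.0, 98.0, 204.0, 296.0, 407.0, 495.0]  # hypothetical readings
slope, intercept, r2 = linregress(gc_ppm, sensor_ppm)
```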

  16. Textural and mineralogical study of sandstones from the onshore Gulf of Alaska Tertiary Province, southern Alaska

    USGS Publications Warehouse

    Winkler, Gary R.; McLean, Hugh; Plafker, George

    1976-01-01

Petrographic examination of 74 outcrop samples of Paleocene through Pliocene age from the onshore Gulf of Alaska Tertiary Province indicates that sandstones of the province characteristically are texturally immature and mineralogically unstable. Diagenetic alteration of framework grains throughout the stratigraphic sequence has produced widespread zeolite cement or phyllosilicate grain coatings and pseudomatrix. Multiple deformation and deep burial of the older Tertiary sequence--the Orca Group, the shale of Haydon Peak, and the Kulthieth and Tokun Formations--caused extensive alteration and grain interpenetration, resulting in low porosity values. Less intense deformation and intermediate depth of burial of the younger Tertiary sequence--the Katalla, Poul Creek, Redwood, and Yakataga Formations--has resulted in a greater range in textural properties. Most sandstone samples in the younger Tertiary sequence are poorly sorted, tightly packed, and have strongly appressed framework grains, but some are less tightly packed and contain less matrix. Soft and mineralogically unstable framework grains have undergone considerable alteration, reducing pore space even in the youngest rocks. Measurements of porosity, permeability, grain density, and sonic velocity of outcrop samples of the younger Tertiary sequence indicate a modest up-section improvement in sandstone reservoir characteristics. Nonetheless, porosity and permeability values typically are below 16 percent and 15 millidarcies, respectively, and grain densities are consistently high, about 2.7 gm/cc. Low permeability and porosity values, and high grain densities and sonic velocities, appear to be typical of most outcrop areas throughout the onshore Gulf of Alaska Tertiary Province.

  17. Droplet characteristic measurement in Fourier interferometry imaging and behavior at the rainbow angle.

    PubMed

    Briard, Paul; Saengkaew, Sawitree; Wu, Xuecheng; Meunier-Guttin-Cluzel, Siegfried; Chen, Linghong; Cen, Kefa; Gréhan, Gérard

    2013-01-01

    This paper presents the possibility of measuring the three-dimensional (3D) relative locations and diameters of a set of spherical particles and discusses the behavior of the light recorded around the rainbow angle, an essential step toward refractive index measurements. When a set of particles is illuminated by a pulsed incident wave, the particles act as spherical light wave sources. When the pulse duration is short enough to fix the particle location (typically about 10 ns), interference fringes between these different spherical waves can be recorded. The Fourier transform of the fringes divides the complex fringe systems into a series of spots, with each spot characterizing the interference between a pair of particles. The analyses of these spots (in position and shape) potentially allow the measurement of particle characteristics (3D relative position, particle diameter, and particle refractive index value).

  18. SHORT COMMUNICATION: Time measurement device with four femtosecond stability

    NASA Astrophysics Data System (ADS)

    Panek, Petr; Prochazka, Ivan; Kodet, Jan

    2010-10-01

    We present the experimental results of extremely precise timing in the sense of time-of-arrival measurements in a local time scale. The timing device designed and constructed in our laboratory is based on a new concept using a surface acoustic wave filter as a time interpolator. Construction of the device is briefly described. The experiments described were focused on evaluating the timing precision and stability. Low-jitter test pulses with a repetition frequency of 763 Hz were generated synchronously to the local time base and their times of arrival were measured. The resulting precision of a single measurement was typically 900 fs RMS, and a timing stability TDEV of 4 fs was achieved for time intervals in the range from 300 s to 2 h. To our knowledge this is the best value reported to date for the stability of a timing device. The experimental results are discussed and possible improvements are proposed.

  19. Physical property characterization of Fe-tube encapsulated and vacuum annealed bulk MgB2

    NASA Astrophysics Data System (ADS)

    Awana, V. P. S.; Rawat, Rajeev; Gupta, Anurag; Isobe, M.; Singh, K. P.; Vajpayee, Arpita; Kishan, H.; Takayama-Muromachi, E.; Narlikar, A. V.

    2006-08-01

We report the phase formation, and present a detailed study of the magnetization and resistivity under magnetic field, of MgB2 polycrystalline bulk samples prepared by the Fe-tube encapsulated and vacuum (10⁻⁵ Torr) annealed (750 °C) route. Zero-field-cooled magnetic susceptibility (χ) measurements exhibited a sharp transition to the superconducting state with a sizeable diamagnetic signal at 39 K (Tc). The measured magnetization loops of the samples, despite the presence of flux jumps, exhibited a stable current density (Jc) of around 2.4×10⁵ A/cm² in fields up to 2 T (Tesla) and at temperatures (T) up to 10 K. The upper critical field is estimated from resistivity measurements in various fields and shows a typical value of 8 T at 21 K. Further, χ measurements at an applied field of 0.1 T reveal a paramagnetic Meissner effect (PME) that is briefly discussed.

  20. Responsivity calibration of the LoWEUS spectrometer

    DOE PAGES

    Lepson, J. K.; Beiersdorfer, P.; Kaita, R.; ...

    2016-09-02

    We performed an in situ calibration of the relative responsivity function of the Long-Wavelength Extreme Ultraviolet Spectrometer (LoWEUS) while operating on the Lithium Tokamak Experiment (LTX) at Princeton Plasma Physics Laboratory. The calibration was accomplished by measuring oxygen lines, which are typically present in LTX plasmas. The measured spectral line intensities of each oxygen charge state were then compared to the calculated emission strengths given in the CHIANTI atomic database. Normalizing the strongest line in each charge state to the CHIANTI predictions, we obtained the differences between the measured and predicted values for the relative strengths of the other lines of a given charge state. We find that a 3rd-degree polynomial function provides a good fit to the data points. Lastly, our measurements show that the responsivity between about 120 and 300 Å varies by a factor of ~30.
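
    The 3rd-degree polynomial fit mentioned above amounts to an ordinary least-squares cubic fit. The sketch below is illustrative only: the test points in the assertions are made-up stand-ins rather than LoWEUS data, and wavelengths would need to be normalized (as here, to the unit interval) to keep the normal equations well conditioned:

```python
def polyfit_cubic(xs, ys):
    """Least-squares cubic fit c0 + c1*x + c2*x^2 + c3*x^3 via the
    normal equations A^T A c = A^T y, solved by Gaussian elimination."""
    m = [[sum(x ** (j + k) for x in xs) for k in range(4)] for j in range(4)]
    b = [sum(y * x ** j for x, y in zip(xs, ys)) for j in range(4)]
    for col in range(4):  # forward elimination with partial pivoting
        piv = max(range(col, 4), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 4):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * 4  # back substitution
    for r in range(3, -1, -1):
        coeffs[r] = (b[r] - sum(m[r][c] * coeffs[c]
                                for c in range(r + 1, 4))) / m[r][r]
    return coeffs
```

    In practice one would fit the measured-to-predicted intensity ratios against (normalized) wavelength and take the fitted polynomial as the relative responsivity curve.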

  1. Classicality condition on a system observable in a quantum measurement and a relative-entropy conservation law

    NASA Astrophysics Data System (ADS)

    Kuramochi, Yui; Ueda, Masahito

    2015-03-01

    We consider the information flow on a system observable X corresponding to a positive-operator-valued measure under a quantum measurement process Y described by a completely positive instrument from the viewpoint of the relative entropy. We establish a sufficient condition for the relative-entropy conservation law, which states that the average decrease in the relative entropy of the system observable X equals the relative entropy of the measurement outcome of Y, i.e., the information gain due to measurement. This sufficient condition is interpreted as an assumption of classicality in the sense that there exists a sufficient statistic in a joint successive measurement of Y followed by X such that the probability distribution of the statistic coincides with that of a single measurement of X for the premeasurement state. We show that in the case when X is a discrete projection-valued measure and Y is discrete, the classicality condition is equivalent to the relative-entropy conservation for arbitrary states. The general theory on relative-entropy conservation is applied to typical quantum measurement models, namely, quantum nondemolition measurements, destructive sharp measurements on two-level systems, photon counting, quantum counting, and homodyne and heterodyne measurements. These examples, except for the nondemolition and photon-counting measurements, do not satisfy the known Shannon-entropy conservation law proposed by Ban [M. Ban, J. Phys. A: Math. Gen. 32, 1643 (1999), 10.1088/0305-4470/32/9/012], implying that our approach based on the relative entropy is applicable to a wider class of quantum measurements.

  2. Dynamic assessment of school-age children's narrative ability: an experimental investigation of classification accuracy.

    PubMed

    Peña, Elizabeth D; Gillam, Ronald B; Malek, Melynn; Ruiz-Felter, Roxanna; Resendiz, Maria; Fiestas, Christine; Sabel, Tracy

    2006-10-01

    Two experiments examined reliability and classification accuracy of a narration-based dynamic assessment task. The first experiment evaluated whether parallel results were obtained from stories created in response to 2 different wordless picture books. If so, the tasks and measures would be appropriate for assessing pretest and posttest change within a dynamic assessment format. The second experiment evaluated the extent to which children with language impairments performed differently than typically developing controls on dynamic assessment of narrative language. In the first experiment, 58 1st- and 2nd-grade children told 2 stories about wordless picture books. Stories were rated on macrostructural and microstructural aspects of language form and content, and the ratings were subjected to reliability analyses. In the second experiment, 71 children participated in dynamic assessment. There were 3 phases: a pretest phase, in which children created a story that corresponded to 1 of the wordless picture books from Experiment 1; a teaching phase, in which children attended 2 short mediation sessions that focused on storytelling ability; and a posttest phase, in which children created a story that corresponded to a second wordless picture book from Experiment 1. Analyses compared the pretest and posttest stories that were told by 2 groups of children who received mediated learning (typical and language impaired groups) and a no-treatment control group of typically developing children from Experiment 1. The results of the first experiment indicated that the narrative measures applied to stories about 2 different wordless picture books had good internal consistency. In Experiment 2, typically developing children who received mediated learning demonstrated a greater amount of pretest to posttest change than children in the language impaired and control groups. 
Classification analysis indicated better specificity and sensitivity values for measures of response to intervention (modifiability) and posttest storytelling than for measures of pretest storytelling. Observation of modifiability was the single best indicator of language impairment. Posttest measures and modifiability together yielded no misclassifications. The first experiment supported the use of 2 wordless picture books as stimulus materials for collecting narratives before and after mediation within a dynamic assessment paradigm. The second experiment supported the use of dynamic assessment for accurately identifying language impairments in school-age children.
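
    Sensitivity and specificity, the classification accuracy measures reported above, are computed from a scored measure and a diagnostic cutoff. A minimal sketch follows; the scores, group labels, and cutoff are hypothetical illustrations, not the study's data:

```python
def sensitivity_specificity(scores, impaired, cutoff):
    """Sensitivity = proportion of impaired children correctly flagged;
    specificity = proportion of typically developing children correctly
    passed. A score below the cutoff flags impairment."""
    tp = sum(1 for s, imp in zip(scores, impaired) if imp and s < cutoff)
    fn = sum(1 for s, imp in zip(scores, impaired) if imp and s >= cutoff)
    tn = sum(1 for s, imp in zip(scores, impaired) if not imp and s >= cutoff)
    fp = sum(1 for s, imp in zip(scores, impaired) if not imp and s < cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical modifiability scores: lower = less responsive to mediation.
scores   = [2.1, 2.8, 4.5, 4.9, 3.9, 1.7]
impaired = [True, True, False, False, False, True]
print(sensitivity_specificity(scores, impaired, cutoff=3.0))  # -> (1.0, 1.0)
```

    The study's finding that posttest measures plus modifiability "yielded no misclassifications" corresponds to sensitivity and specificity both reaching 1.0 at the chosen cutoffs.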

  3. The Ling 6(HL) test: typical pediatric performance data and clinical use evaluation.

    PubMed

    Glista, Danielle; Scollie, Susan; Moodie, Sheila; Easwar, Vijayalakshmi

    2014-01-01

    The Ling 6(HL) test offers a calibrated version of naturally produced speech sounds in dB HL for evaluation of detection thresholds. Aided performance has been previously characterized in adults. The purpose of this work was to evaluate and refine the Ling 6(HL) test for use in pediatric hearing aid outcome measurement. This work is presented across two studies incorporating an integrated knowledge translation approach in the characterization of normative and typical performance, and in the evaluation of clinical feasibility, utility, acceptability, and implementation. A total of 57 children, 28 with normal hearing and 29 with binaural sensorineural hearing loss, were included in Study 1. Children wore their own hearing aids fitted using Desired Sensation Level v5.0. Nine clinicians from The Network of Pediatric Audiologists participated in Study 2. A CD-based test format was used in the collection of unaided and aided detection thresholds in laboratory and clinical settings; thresholds were measured clinically as part of routine clinical care. Confidence intervals were derived to characterize normal performance and typical aided performance according to hearing loss severity. Unaided-aided performance was analyzed using a repeated-measures analysis of variance. The audiologists completed an online questionnaire evaluating quality, feasibility/executability, utility/comparative value/relative advantage, acceptability/applicability, and interpretability, in addition to recommendation and general comments sections. Ling 6(HL) thresholds were reliably measured with children 3-18 yr old. Normative and typical performance ranges were translated into a scoring tool for use in pediatric outcome measurement. Questionnaire respondents generally agreed that the Ling 6(HL) test was a high-quality outcome evaluation tool that can be implemented successfully in clinical settings.
By actively collaborating with pediatric audiologists and using an integrated knowledge translation framework, this work supported the creation of an evidence-based clinical tool that has the potential to be implemented in, and useful to, clinical practice. More research is needed to characterize performance in alternative listening conditions, for example to facilitate use with infants. Future efforts focused on monitoring the use of the Ling 6(HL) test in daily clinical practice may help describe whether clinical use has been maintained across time and whether any additional adaptations are necessary to facilitate clinical uptake. American Academy of Audiology.

  4. Dynamic Assessment of School-Age Children’s Narrative Ability

    PubMed Central

    Peña, Elizabeth D.; Gillam, Ronald B.; Malek, Melynn; Ruiz-Felter, Roxanna; Resendiz, Maria; Fiestas, Christine; Sabel, Tracy

    2008-01-01

    Two experiments examined reliability and classification accuracy of a narration-based dynamic assessment task. Purpose The first experiment evaluated whether parallel results were obtained from stories created in response to 2 different wordless picture books. If so, the tasks and measures would be appropriate for assessing pretest and posttest change within a dynamic assessment format. The second experiment evaluated the extent to which children with language impairments performed differently than typically developing controls on dynamic assessment of narrative language. Method In the first experiment, 58 1st- and 2nd-grade children told 2 stories about wordless picture books. Stories were rated on macrostructural and microstructural aspects of language form and content, and the ratings were subjected to reliability analyses. In the second experiment, 71 children participated in dynamic assessment. There were 3 phases: a pretest phase, in which children created a story that corresponded to 1 of the wordless picture books from Experiment 1; a teaching phase, in which children attended 2 short mediation sessions that focused on storytelling ability; and a posttest phase, in which children created a story that corresponded to a second wordless picture book from Experiment 1. Analyses compared the pretest and posttest stories that were told by 2 groups of children who received mediated learning (typical and language impaired groups) and a no-treatment control group of typically developing children from Experiment 1. Results The results of the first experiment indicated that the narrative measures applied to stories about 2 different wordless picture books had good internal consistency. In Experiment 2, typically developing children who received mediated learning demonstrated a greater amount of pretest to posttest change than children in the language impaired and control groups. 
Classification analysis indicated better specificity and sensitivity values for measures of response to intervention (modifiability) and posttest storytelling than for measures of pretest storytelling. Observation of modifiability was the single best indicator of language impairment. Posttest measures and modifiability together yielded no misclassifications. Conclusion The first experiment supported the use of 2 wordless picture books as stimulus materials for collecting narratives before and after mediation within a dynamic assessment paradigm. The second experiment supported the use of dynamic assessment for accurately identifying language impairments in school-age children. PMID:17077213

  5. Characterization of engineered nanoparticles in commercially available spray disinfectant products advertised to contain colloidal silver.

    PubMed

    Rogers, Kim R; Navratilova, Jana; Stefaniak, Aleksandr; Bowers, Lauren; Knepp, Alycia K; Al-Abed, Souhail R; Potter, Phillip; Gitipour, Alireza; Radwan, Islam; Nelson, Clay; Bradham, Karen D

    2018-04-01

    Given the potential for human exposure to silver nanoparticles from spray disinfectants and dietary supplements, we characterized the silver-containing nanoparticles in 22 commercial products that advertised the use of silver or colloidal silver as the active ingredient. Characterization parameters included: total silver, fractionated silver (particulate and dissolved), primary particle size distribution, hydrodynamic diameter, particle number, and plasmon resonance absorbance. A high degree of variability between claimed and measured values for total silver was observed. Only 7 of the products showed total silver concentrations within 20% of their nominally reported values. In addition, significant variations in the relative percentages of particulate vs. soluble silver were also measured in many of the products reported to be colloidal. Primary silver particle size distributions by transmission electron microscopy (TEM) showed two populations of particles: smaller particles (<5 nm) and larger particles between 20 and 40 nm. Hydrodynamic diameter measurements using nanoparticle tracking analysis (NTA) correlated well with TEM analysis for the larger particles. Z-average (Z-Avg) values measured using dynamic light scattering (DLS), however, were typically larger than both NTA and TEM particle diameters. Plasmon resonance absorbance signatures (peak absorbance at around 400 nm, indicative of metallic silver nanoparticles) were only noted in 4 of the 9 yellow-brown colored suspensions. Although the total silver concentrations were variable among products, ranging from 0.54 mg/L to 960 mg/L, silver-containing nanoparticles were identified in all of the product suspensions by TEM. Published by Elsevier B.V.

  6. Typical Vine or International Taste: Wine Consumers' Dilemma Between Beliefs and Preferences.

    PubMed

    Scozzafava, Gabriele; Boncinelli, Fabio; Contini, Caterina; Romano, Caterina; Gerini, Francesca; Casini, Leonardo

    2016-01-01

    The wine-growing sector is probably one of the agricultural areas where the ties between product quality and territory are most evident. Geographical indication is a key element in this context, and previous literature has focused on demonstrating how certification of origin influences the wine purchaser's behavior. However, less attention has been devoted to understanding how the value of a given name of origin may or may not be determined by the various elements that characterize the typicality of the wine product on that territory: vines, production techniques, etc. It thus seems interesting, in this framework, to evaluate the impacts of several characteristic attributes on the preferences of consumers. This paper analyzes, in particular, the role of the presence of autochthonous vines in consumers' choices. The connection between name of origin and autochthonous vines appears to be particularly important in achieving product "recognisability", while introducing "international" vines in considerable measure into blends might result in the loss of the peculiarity of certain characteristic and typical local productions. A standardization of taste could thus risk compromising the reputation of traditional production areas. The objective of this study is to estimate, through an experimental auction on the case study of Chianti, the differences in willingness to pay for wines produced with different shares of typical vines. The results show that consumers have a willingness to pay for wine produced with typical blends 34% greater than for wines with international blends. However, this difference is not confirmed by blind tasting, raising the issue of the relationship between ex-ante expectations about vine typicality and real wine sensorial characteristics. Finally, some recent patents related to wine testing and wine packaging are reviewed.

  7. MR-based measurements and simulations of the magnetic field created by a realistic transcranial magnetic stimulation (TMS) coil and stimulator.

    PubMed

    Mandija, Stefano; Petrov, Petar I; Neggers, Sebastian F W; Luijten, Peter R; van den Berg, Cornelis A T

    2016-11-01

    Transcranial magnetic stimulation (TMS) is an emerging technique that allows non-invasive neurostimulation. However, the correct validation of electromagnetic models of typical TMS coils and the correct assessment of the incident TMS field (B_TMS) produced by standard TMS stimulators are still lacking. Such a validation can be performed by mapping B_TMS produced by a realistic TMS setup. In this study, we show that MRI can provide precise quantification of the magnetic field produced by a realistic TMS coil and a clinically used TMS stimulator in the region in which neurostimulation occurs. Measurements of the phase accumulation created by TMS pulses applied during a tailored MR sequence were performed in a phantom. Dedicated hardware was developed to synchronize a typical, clinically used TMS setup with a 3-T MR scanner. For comparison purposes, electromagnetic simulations of B_TMS were performed. MR-based measurements allow the mapping and quantification of B_TMS starting 2.5 cm from the TMS coil. For closer regions, the intra-voxel dephasing induced by B_TMS prohibits TMS field measurements. For 1% TMS output, the maximum measured value was ~0.1 mT. Simulations reflect the experimental data quantitatively. These measurements can be used to validate electromagnetic models of TMS coils, to guide TMS coil positioning, and for dosimetry and quality assessment of concurrent TMS-MRI studies without the need for crude methods, such as motor threshold, for stimulation dose determination. Copyright © 2016 John Wiley & Sons, Ltd.
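
    The field quantification rests on the phase that a TMS pulse imprints on the spins: for an idealized rectangular pulse, Δφ = γ·B·τ, so the field follows from the measured phase and the pulse duration. A minimal sketch under those idealizations, neglecting intra-voxel dephasing and the real pulse shape; the example numbers are illustrative, not measured data:

```python
GAMMA_H = 2.675e8  # proton gyromagnetic ratio, rad s^-1 T^-1

def b_from_phase(delta_phi_rad, tau_s):
    """Invert delta_phi = gamma * B * tau for the field magnitude B (tesla),
    assuming a rectangular pulse of duration tau_s."""
    return delta_phi_rad / (GAMMA_H * tau_s)

# A 0.1 mT field acting for 100 microseconds accrues ~2.7 rad of phase:
print(b_from_phase(2.675, 1e-4))  # ~1e-4 T, i.e. 0.1 mT
```

    The abstract's observation that intra-voxel dephasing prohibits measurements close to the coil follows from the same relation: where B varies strongly across a voxel, so does Δφ, and the voxel signal cancels.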

  8. Lessons Learned from OMI Observations of Point Source SO2 Pollution

    NASA Technical Reports Server (NTRS)

    Krotkov, N.; Fioletov, V.; McLinden, Chris

    2011-01-01

    The Ozone Monitoring Instrument (OMI) on the NASA Aura satellite makes global daily measurements of the total column of sulfur dioxide (SO2), a short-lived trace gas produced by fossil fuel combustion, smelting, and volcanoes. Although anthropogenic SO2 signals may not be detectable in a single OMI pixel, it is possible to see the source and determine its exact location by averaging a large number of individual measurements. We describe new techniques for spatial and temporal averaging that have been applied to the OMI SO2 data to determine the spatial distributions or "fingerprints" of SO2 burdens from the top 100 pollution sources in North America. The technique requires averaging several years of OMI daily measurements to observe SO2 pollution from typical anthropogenic sources. We found that the largest point sources of SO2 in the U.S. produce elevated SO2 values over a relatively small area, within a 20-30 km radius. Therefore, one needs higher than OMI spatial resolution to monitor typical SO2 sources. The TROPOMI instrument on the ESA Sentinel 5 Precursor mission will have improved ground resolution (approximately 7 km at nadir), but is limited to one measurement per day. A pointable geostationary UVB spectrometer with variable spatial resolution and flexible sampling frequency could potentially achieve the goal of daily monitoring of SO2 point sources and resolve downwind plumes. This concept of taking measurements at high frequency to enhance weak signals needs to be demonstrated with a GEOCAPE precursor mission before 2020, which will help formulate GEOCAPE measurement requirements.

  9. Lead distribution and possible sources along vertical zone spectrum of typical ecosystems in the Gongga Mountain, eastern Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Luo, Ji; Tang, Ronggui; Sun, Shouqin; Yang, Dandan; She, Jia; Yang, Peijun

    2015-08-01

    A total of 383 samples of soil, plants, litterfall and precipitation in four typical ecosystems of Gongga Mountain were collected, and their Pb concentrations were measured and analyzed. The results showed that mean Pb concentrations in the different soil layers were in the order O > A > C, and the mean Pb concentration of the aboveground parts of plants was 3.60 ± 2.54 mg kg⁻¹, with a minimum value of 0.77 mg kg⁻¹ and a maximum value of 10.90 mg kg⁻¹. Pb concentrations in the soil O-horizon and A-horizon showed a downward trend with increasing elevation (the determination coefficients R² were 0.9478, 0.7918 and 0.9759, respectively). In contrast to the other soil layers, the level of Pb concentrations in the O-horizon (incomplete decomposition) was significantly high. Litterfall decomposition, atmospheric deposition and the unique climate could be the main factors leading to high Pb accumulation in the soil O-horizon. Moreover, the significant correlation (R² = 0.8126, P < 0.05) found between Pb concentrations in fine roots and the soil A-horizon confirms that fine roots can adsorb and accumulate Pb from the soil. In general, the HYSPLIT model and the ratio of CPb/CAl in plant leaves and in litterfall confirm that Pb entered the typical ecosystems of the Gongga Mountain via long-range atmospheric transport and deposition from external Pb sources. Mining activities and increasing anthropogenic activities (tourism development) could be the main sources of Pb in this area. In order to better understand Pb sources and eco-risks of these typical ecosystems, litterfall decomposition characteristics, biomass productivity of the forest ecosystem, and Pb isotopic tracing among air mass, twigs, leaves, litterfall and O-horizon soil in this vertical belt should also be taken into consideration.
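
    The elevation trends above are summarized by determination coefficients (R²) of simple linear fits. As a reminder of how such a value is obtained, here is a minimal sketch; the example numbers are placeholders, not the Gongga Mountain measurements:

```python
def r_squared(xs, ys):
    """Coefficient of determination of the least-squares line of ys on xs:
    R^2 = 1 - SS_res / SS_tot."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    icpt = my - slope * mx
    ss_res = sum((y - (slope * x + icpt)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Placeholder example: a Pb concentration falling with elevation.
elev = [1600, 2200, 2800, 3400, 4000]           # m
pb   = [42.0, 35.5, 30.1, 24.8, 18.9]           # mg kg^-1
print(round(r_squared(elev, pb), 4))
```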

  10. Southern Clusters for Standardizing CCD Photometry

    NASA Astrophysics Data System (ADS)

    Moon, T. T.

    2017-06-01

    Standardizing photometric measurements typically involves undertaking all-sky photometry. This can be laborious and time-consuming and, for CCD photometry, particularly challenging. Transforming photometry to a standard system is, however, a crucial step when routinely measuring variable stars, as it allows photoelectric measurements from different observers to be combined. For observers in the northern hemisphere, standardized UBVRI values of stars in open clusters such as M67 and NGC 7790 have been established, greatly facilitating quick and accurate transformation of CCD measurements. Recently the AAVSO added the cluster NGC 3532 for southern hemisphere observers to similarly standardize their photometry. The availability of NGC 3532 standards was announced on the AAVSO Variable Star Observing, Photometry forum on 27 October 2016. Published photometry, along with some new measurements by the author, provide a means of checking these NGC 3532 standards which were determined through the AAVSO's Bright Star Monitor (BSM) program (see: https://www.aavso.org/aavsonet-epoch-photometry-database). New measurements of selected stars in the open clusters M25 and NGC 6067 are also included.
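
    Transformation to a standard system typically amounts to fitting a zero point and a color term against cluster standards, e.g. V = v_inst + zp + ε·(B−V). The sketch below solves that least-squares problem for one filter, ignoring extinction terms; the magnitudes in the assumptions are made-up values, not NGC 3532 photometry:

```python
def solve_transform(v_inst, V_std, color_std):
    """Least-squares fit of V_std - v_inst = zp + eps * (B - V).
    Returns (zp, eps): zero point and color-term coefficient."""
    n = len(v_inst)
    dv = [V - v for V, v in zip(V_std, v_inst)]
    mx = sum(color_std) / n
    my = sum(dv) / n
    sxx = sum((x - mx) ** 2 for x in color_std)
    sxy = sum((x - mx) * (y - my) for x, y in zip(color_std, dv))
    eps = sxy / sxx
    zp = my - eps * mx
    return zp, eps
```

    With zp and ε in hand, any program-star measurement in the same frame transforms as V = v_inst + zp + ε·(B−V), which is what lets measurements from different observers be combined.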

  11. Application of blue laser triangulation sensors for displacement measurement through fire

    NASA Astrophysics Data System (ADS)

    Hoehler, Matthew S.; Smith, Christopher M.

    2016-11-01

    This paper explores the use of blue laser triangulation sensors to measure displacement of a target located behind or in close proximity to natural gas diffusion flames. This measurement is critical for providing high-quality data in structural fire tests. The position of the laser relative to the flame envelope can significantly affect the measurement scatter, but has little influence on the mean values. We observe that the measurement scatter is normally distributed and increases linearly with the distance of the target from the flame along the beam path. Based on these observations, we demonstrate how time-averaging can be used to achieve a standard uncertainty associated with the displacement error of less than 0.1 mm, which is typically sufficient for structural fire testing applications. Measurements with the investigated blue laser sensors were not impeded by the thermal radiation emitted from the flame or the soot generated from the relatively clean-burning natural gas.
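
    The time-averaging argument is the usual 1/√N reduction of the standard uncertainty of a mean of independent readings: averaging N readings of single-shot scatter σ gives σ/√N. A minimal sketch; the 0.5 mm scatter is a hypothetical value, since the paper's scatter grows with distance from the flame:

```python
import math

def samples_for_target(sigma_single, target):
    """Independent readings needed so the standard uncertainty of their
    mean, sigma / sqrt(N), drops below the target."""
    return math.ceil((sigma_single / target) ** 2)

# Hypothetical: 0.5 mm single-shot scatter, 0.1 mm target uncertainty.
print(samples_for_target(0.5, 0.1))  # -> 25
```

    The 1/√N scaling holds only while the readings are statistically independent, which is consistent with the paper's observation that the scatter is normally distributed.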

  12. Application of Blue Laser Triangulation Sensors for Displacement Measurement Through Fire

    PubMed Central

    Hoehler, Matthew S.; Smith, Christopher M.

    2016-01-01

    This paper explores the use of blue laser triangulation sensors to measure displacement of a target located behind or in close proximity to natural gas diffusion flames. This measurement is critical for providing high-quality data in structural fire tests. The position of the laser relative to the flame envelope can significantly affect the measurement scatter, but has little influence on the mean values. We observe that the measurement scatter is normally distributed and increases linearly with the distance of the target from the flame along the beam path. Based on these observations, we demonstrate how time-averaging can be used to achieve a standard uncertainty associated with the displacement error of less than 0.1 mm, which is typically sufficient for structural fire testing applications. Measurements with the investigated blue laser sensors were not impeded by the thermal radiation emitted from the flame or the soot generated from the relatively clean-burning natural gas. PMID:28066131

  13. Wildfire risk assessment in a typical Mediterranean wildland-urban interface of Greece.

    PubMed

    Mitsopoulos, Ioannis; Mallinis, Giorgos; Arianoutsou, Margarita

    2015-04-01

    The purpose of this study was to assess spatial wildfire risk in a typical Mediterranean wildland-urban interface (WUI) in Greece and the potential effect of three different burning condition scenarios on the following four major wildfire risk components: burn probability, conditional flame length, fire size, and source-sink ratio. We applied the Minimum Travel Time fire simulation algorithm using the FlamMap and ArcFuels tools to characterize the potential response of the wildfire risk to a range of different burning scenarios. We created site-specific fuel models of the study area by measuring the field fuel parameters in representative natural fuel complexes, and we determined the spatial extent of the different fuel types and residential structures in the study area using photointerpretation procedures of large-scale natural color orthophotographs. The results included simulated spatially explicit fire risk components along with wildfire risk exposure analysis and the expected net value change. Statistically significant differences in simulation outputs between the scenarios were identified using Tukey's significance test. The results of this study provide valuable information for decision support systems for short-term predictions of wildfire risk potential and inform wildland fire management of typical WUI areas in Greece.

  14. Wildfire Risk Assessment in a Typical Mediterranean Wildland-Urban Interface of Greece

    NASA Astrophysics Data System (ADS)

    Mitsopoulos, Ioannis; Mallinis, Giorgos; Arianoutsou, Margarita

    2015-04-01

    The purpose of this study was to assess spatial wildfire risk in a typical Mediterranean wildland-urban interface (WUI) in Greece and the potential effect of three different burning condition scenarios on the following four major wildfire risk components: burn probability, conditional flame length, fire size, and source-sink ratio. We applied the Minimum Travel Time fire simulation algorithm using the FlamMap and ArcFuels tools to characterize the potential response of the wildfire risk to a range of different burning scenarios. We created site-specific fuel models of the study area by measuring the field fuel parameters in representative natural fuel complexes, and we determined the spatial extent of the different fuel types and residential structures in the study area using photointerpretation procedures of large-scale natural color orthophotographs. The results included simulated spatially explicit fire risk components along with wildfire risk exposure analysis and the expected net value change. Statistically significant differences in simulation outputs between the scenarios were identified using Tukey's significance test. The results of this study provide valuable information for decision support systems for short-term predictions of wildfire risk potential and inform wildland fire management of typical WUI areas in Greece.

  15. "Sniffer"—a novel tool for chasing vehicles and measuring traffic pollutants

    NASA Astrophysics Data System (ADS)

    Pirjola, L.; Parviainen, H.; Hussein, T.; Valli, A.; Hämeri, K.; Aaalto, P.; Virtanen, A.; Keskinen, J.; Pakkanen, T. A.; Mäkelä, T.; Hillamo, R. E.

    To measure traffic pollutants with high temporal and spatial resolution under real conditions, a mobile laboratory was designed and built at Helsinki Polytechnic in close co-operation with the University of Helsinki. The equipment of the van provides gas-phase measurements of CO and NOx, and number size distribution measurements of fine and ultrafine particles by an electrical low-pressure impactor, an ultrafine condensation particle counter and a scanning mobility particle sizer. Two inlet systems, one above the windshield and the other above the bumper, enable the chasing of different types of vehicles. Meteorological and geographical parameters are also recorded. This paper introduces the construction and technical details of the van, and presents data from measurements performed during a LIPIKA campaign on the highway in Helsinki. Approximately 90% of the total particle number concentration on the highway was due to particles smaller than 50 nm. The peak concentrations often exceeded 200,000 particles cm⁻³ and sometimes reached a value of 10⁶ cm⁻³. The typical size distribution of fine particles had a bimodal structure with modal mean diameters of 15-20 nm and ~150 nm. Atmospheric dispersion of traffic pollutants was measured by moving away from the highway along the wind direction. At a distance of 120-140 m from the source, the concentrations were diluted to one-tenth of the values at 9 m from the source.

  16. SYNCHROTRON ORIGIN OF THE TYPICAL GRB BAND FUNCTION—A CASE STUDY OF GRB 130606B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Bin-Bin; Briggs, Michael S.; Uhm, Z. Lucas

    2016-01-10

    We perform a time-resolved spectral analysis of GRB 130606B within the framework of a fast-cooling synchrotron radiation model with magnetic field strength in the emission region decaying with time, as proposed by Uhm and Zhang. The data from all time intervals can be successfully fit by the model. The same data can be equally well fit by the empirical Band function with typical parameter values. Our results, which involve only minimal physical assumptions, offer one natural solution to the origin of the observed GRB spectra and imply that at least some, if not all, Band-like GRB spectra with typical Band parameter values can indeed be explained by synchrotron radiation.
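
    For reference, the empirical Band function that the spectra are compared against is a smoothly broken power law. A minimal sketch with typical parameter values as defaults; the normalization and the 100 keV pivot energy are arbitrary choices here:

```python
import math

def band(E, alpha=-1.0, beta=-2.3, E0=300.0, A=1.0):
    """Empirical Band photon spectrum N(E); E and E0 in keV.
    The two branches join continuously at E_break = (alpha - beta) * E0."""
    E_break = (alpha - beta) * E0
    if E < E_break:
        return A * (E / 100.0) ** alpha * math.exp(-E / E0)
    return (A * (E_break / 100.0) ** (alpha - beta) * math.exp(beta - alpha)
            * (E / 100.0) ** beta)
```

    With these defaults the break sits at (α−β)·E0 = 390 keV; the prefactor on the high-energy branch is chosen so the two power laws meet continuously there.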

  17. Computation of transonic flow past projectiles at angle of attack

    NASA Technical Reports Server (NTRS)

    Reklis, R. P.; Sturek, W. B.; Bailey, F. R.

    1978-01-01

    Aerodynamic properties of artillery shells, such as normal force and pitching moment, reach peak values in a narrow transonic Mach number range. In order to compute these quantities, numerical techniques have been developed to obtain solutions to the three-dimensional transonic small disturbance equation about slender bodies at angle of attack. The computation is based on a plane relaxation technique involving Fourier transforms to partially decouple the three-dimensional difference equations. Particular care is taken to ensure accurate solutions near corners found in shell designs. Computed surface pressures are compared to experimental measurements for circular-arc and cone-cylinder bodies, which have been selected as test cases. Computed pitching moments are compared to range measurements for a typical projectile shape.

  18. Survey of A{sub LT'} asymmetries in semi-exclusive electron scattering on He4 and C12

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Protopopescu, Dan; et al.

    2005-02-21

Single spin azimuthal asymmetries A{sub LT'} were measured at Jefferson Lab using 2.2 and 4.4 GeV longitudinally polarized electrons incident on {sup 4}He and {sup 12}C targets in the CLAS detector. A{sub LT'} is related to the imaginary part of the longitudinal-transverse interference, and in quasifree nucleon knockout it provides an unambiguous signature for final state interactions (FSI). Experimental values of A{sub LT'} were found to be below 5%, typically |A{sub LT'}| < 3% for data with good statistical precision. Optical Model in Eikonal Approximation (OMEA) and Relativistic Multiple-Scattering Glauber Approximation (RMSGA) calculations are shown to be consistent with the measured asymmetries.

  19. The experimental electron mean-free-path in Si under typical (S)TEM conditions.

    PubMed

    Potapov, P L

    2014-12-01

The electron mean-free-path in Si was measured by EELS using a test structure with certified dimensions as a calibration standard. In good agreement with previous CBED measurements, the mean-free-path is 150 nm at 200 keV and 179 nm at 300 keV primary electron energy at large collection angles. These values are accurately predicted by the model of Iakoubovskii et al., while the model of Malis et al., incorporated in common microscopy software, underestimates the mean-free-path by at least 15%. Correspondingly, the thickness of TEM samples reported in many studies of Si-based materials over the last decades may be noticeably underestimated. Copyright © 2014 Elsevier B.V. All rights reserved.
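The mean-free-path enters sample thickness determination through the standard EELS log-ratio relation t = λ·ln(I_total/I_zero-loss). A sketch using the Si values quoted above; the intensities are illustrative:

```python
import numpy as np

# EELS log-ratio thickness measurement: t = lam * ln(I_total / I_zero_loss),
# where lam is the inelastic mean free path. Values below are the Si mean
# free paths quoted in the abstract for large collection angles.
MFP_SI_NM = {200: 150.0, 300: 179.0}  # primary energy (keV) -> lam (nm)

def thickness_from_log_ratio(i_total, i_zero_loss, energy_kev=200):
    """Sample thickness in nm from integrated EELS intensities."""
    lam = MFP_SI_NM[energy_kev]
    return lam * np.log(i_total / i_zero_loss)

# A spectrum whose zero-loss peak holds half the counts gives t = lam*ln 2:
print(thickness_from_log_ratio(2.0, 1.0, 200))   # about 104 nm
# If lam is underestimated by 15%, the reported thickness is too, which is
# the systematic effect the abstract warns about:
print(thickness_from_log_ratio(2.0, 1.0, 200) * 0.85)
```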

  20. Study on the three-station typical network deployments of workspace Measurement and Positioning System

    NASA Astrophysics Data System (ADS)

    Xiong, Zhi; Zhu, J. G.; Xue, B.; Ye, Sh. H.; Xiong, Y.

    2013-10-01

As a novel network coordinate measurement system based on multi-directional positioning, the workspace Measurement and Positioning System (wMPS) has the outstanding advantages of good parallelism, wide measurement range, and high measurement accuracy, which make it a research hotspot and an important development direction in the field of large-scale measurement. Since station deployment has a significant impact on measurement range and accuracy, and also restricts the cost of use, this paper investigates methods for optimizing station deployment. First, a positioning error model was established. Then, focusing on a small network consisting of three stations, the typical deployments and their error distribution characteristics were studied. Finally, by measuring a simulated fuselage with the typical deployments at an industrial site and comparing the results with a laser tracker, several conclusions were obtained. The comparison shows that, under existing prototype conditions, the I_3 deployment, in which the three stations are distributed along a straight line, has an average error of 0.30 mm and a maximum error of 0.50 mm over a range of 12 m. Meanwhile, the C_3 deployment, in which the three stations are uniformly distributed along the half-circumference of a circle, has an average error of 0.17 mm and a maximum error of 0.28 mm. Clearly, the C_3 deployment controls precision better than the I_3 type. This work provides effective theoretical support for future global measurement network optimization.

  1. Benthic food web structure in the Comau fjord, Chile (∼42°S): Preliminary assessment including a site with chemosynthetic activity

    NASA Astrophysics Data System (ADS)

    Zapata-Hernández, Germán; Sellanes, Javier; Mayr, Christoph; Muñoz, Práxedes

    2014-12-01

    Using C and N stable isotopes we analyzed different trophic aspects of the benthic fauna at two sites in the Comau fjord: one with venting of chemically reducing fluids and extensive patches of bacterial mats (XH: X-Huinay), and one control site (PG: Punta Gruesa) with a typical fjord benthic habitat. Given the widespread presence of such microbial patches in the fjord and their recognized trophic role in reducing environments, we hypothesize that these microbial communities could be contributing to the assimilated food of consumers and transferring carbon to higher trophic levels in the food web. Food sources in the area included macroalgae with a wide range of δ13C values (-34.7 to -11.9‰), particulate organic matter (POM, δ13C = -20.1‰), terrestrial organic matter (TOM, δ13C = -32.3‰ to -27.9‰) and chemosynthetic filamentous bacteria (δ13C ≈ -33‰). At both sites, fauna showed typical values indicating photosynthetic production as the main food source (>-20‰). However, at XH selected taxa showed lower δ13C values (e.g. -26.5‰ in Nacella deaurata), suggesting a partial use of chemosynthetic production. Furthermore, enhanced variability at this site in the δ13C values of the polyplacophoran Chiton magnificus, the limpet Fissurella picta and the tanaid Zeuxoides sp. may also reflect the use of a wider range of primary food sources. Trophic position estimates suggest three trophic levels of consumers at both sites. However, low δ15N values in some grazer and suspension-feeder species suggest that these taxa could be using other sources still to be identified (e.g. bacterial films, microalgae and small size-fraction organic particles). Furthermore, between-site comparisons of isotopic niche width in some trophic guilds indicate that grazers from XH have more heterogeneous trophic niches than those at PG (measured as mean distance to centroid and standard deviation of nearest-neighbor distance). This could be ascribed to the utilization of a mixture of photosynthetic and chemosynthetic carbon sources. In addition, corrected standard ellipse area (SEAc) values for suspension-feeders and carnivores at both sites suggest a similar magnitude of exploitation of food sources. However, grazers from XH show a greater expansion of their isotopic niche (SEAc), probably explained by the presence of species with low δ13C and δ15N values directly associated with chemosynthetic carbon incorporation.
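Trophic position estimates of the kind mentioned above are typically computed from δ15N offsets relative to a baseline consumer. A minimal sketch, assuming the commonly used ~3.4‰ per-step enrichment factor; the study's exact baseline species and enrichment value are not given here:

```python
def trophic_position(d15n_consumer, d15n_base, tef=3.4, base_tp=2.0):
    """Trophic position from nitrogen isotopes.

    tef: trophic enrichment factor per step (a commonly assumed ~3.4 permil,
    an assumption, not the paper's value). base_tp: trophic level of the
    baseline organism (2 for a primary consumer)."""
    return base_tp + (d15n_consumer - d15n_base) / tef

# A carnivore 6.8 permil above a primary-consumer baseline sits two
# trophic steps higher, i.e. at trophic position ~4:
print(trophic_position(14.8, 8.0))
```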

  2. Fundamental investigation of ARC interruption in gas flows

    NASA Astrophysics Data System (ADS)

    Benenson, D. M.; Frind, G.; Kinsinger, R. E.; Nagamatsu, H. T.; Noeske, H. O.; Sheer, R. E., Jr.

    1980-07-01

Thermal recovery in gas blast interrupters is discussed. The thermal recovery process was investigated with physical and aerodynamic methods, typically using reduced-size nozzles and short sinusoidal current pulses. Aerodynamic characterization of the cold flow fields in several different nozzle types included measurements of the pressure and flow fields, both steady-state and turbulent components, with special attention given to wakes and shock structures. Special schlieren techniques on DC arcs and high-speed photography of arcs in orifice nozzles show that shock heating broadens the arc independently of turbulence effects and produces a poorly recovering downstream arc section. Measured recovery speeds in both orifice and convergent-divergent nozzles agree with predictions of several arc theories that assume turbulent power losses. However, measured post-zero currents and power losses are much smaller than theoretical predictions. Measurements were made in hydrogen, deuterium, and methane.

  3. LEAKAGE CHARACTERISTICS OF BASE OF RIVERBANK BY SELF POTENTIAL METHOD AND EXAMINATION OF EFFECTIVENESS OF SELF POTENTIAL METHOD TO HEALTH MONITORING OF BASE OF RIVERBANK

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kensaku; Okada, Takashi; Takeuchi, Atsuo; Yazawa, Masato; Uchibori, Sumio; Shimizu, Yoshihiko

A field measurement using the self-potential method with copper sulfate electrodes was performed at the base of a riverbank on the WATARASE River, where leakage is a problem, in order to examine the leakage characteristics. The measurement results showed a typical S-shape, which indicates the existence of flowing groundwater. The results agreed well with measurements made by the Ministry of Land, Infrastructure and Transport. Results of 1-m-depth ground temperature detection and chain-array detection also agreed well with the self-potential results. The correlation between self-potential value and groundwater velocity was examined in a model experiment, which showed a clear correlation. These results indicate that the self-potential method is effective for examining the groundwater characteristics of a riverbank base with leakage problems.

  4. General and food-specific parenting: measures and interplay.

    PubMed

    Kremers, Stef; Sleddens, Ester; Gerards, Sanne; Gubbels, Jessica; Rodenburg, Gerda; Gevers, Dorus; van Assema, Patricia

    2013-08-01

Parental influence on child food intake is typically conceptualized at three levels: parenting practices, feeding style, and parenting style. General parenting style is modeled at the most distal level of influence and food parenting practices are conceptualized as the most proximal level of influence. The goal of this article is to provide insights into contents and explanatory value of instruments that have been applied to assess food parenting practices, feeding style, and parenting style. Measures of food parenting practices, feeding style, and parenting style were reviewed, compared, and contrasted with regard to contents, explanatory value, and interrelationships. Measures that are used in the field often fail to cover the full scope and complexity of food parenting. Healthy parenting dimensions have generally been found to be positively associated with child food intake (i.e., healthier dietary intake and less intake of energy-dense food products and sugar-sweetened beverages), but effect sizes are low. Evidence for the operation of higher-order moderation has been found, in which the impact of proximal parental influences is moderated by more distal levels of parenting. Operationalizing parenting at different levels, while applying a contextual higher-order moderation approach, is advocated to have surplus value in understanding the complex process of parent-child interactions in the area of food intake. A research paradigm is presented that may guide future work regarding the conceptualization and modeling of parental influences on child dietary behavior.
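The higher-order moderation described above corresponds statistically to an interaction term between the proximal and distal parenting measures. A sketch on simulated data; all variables and coefficients are illustrative, not taken from the reviewed studies:

```python
import numpy as np

# "Higher-order moderation" as an interaction term: the effect of a
# proximal food parenting practice on child intake depends on the distal
# general parenting style. All data are simulated for illustration only.
rng = np.random.default_rng(0)
n = 500
practice = rng.normal(size=n)   # proximal influence, e.g. a practice score
style = rng.normal(size=n)      # distal influence, e.g. a style score
intake = (0.1 * practice + 0.05 * style
          + 0.3 * practice * style            # the moderation effect
          + rng.normal(scale=0.5, size=n))    # unexplained variation

# Ordinary least squares with an interaction column; a non-zero
# interaction coefficient is the statistical signature of moderation.
X = np.column_stack([np.ones(n), practice, style, practice * style])
beta, *_ = np.linalg.lstsq(X, intake, rcond=None)
print(beta.round(3))  # [intercept, practice, style, interaction]
```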

  5. Multisite concordance of apparent diffusion coefficient measurements across the NCI Quantitative Imaging Network.

    PubMed

    Newitt, David C; Malyarenko, Dariya; Chenevert, Thomas L; Quarles, C Chad; Bell, Laura; Fedorov, Andriy; Fennessy, Fiona; Jacobs, Michael A; Solaiyappan, Meiyappan; Hectors, Stefanie; Taouli, Bachir; Muzi, Mark; Kinahan, Paul E; Schmainda, Kathleen M; Prah, Melissa A; Taber, Erin N; Kroenke, Christopher; Huang, Wei; Arlinghaus, Lori R; Yankeelov, Thomas E; Cao, Yue; Aryal, Madhava; Yen, Yi-Fen; Kalpathy-Cramer, Jayashree; Shukla-Dave, Amita; Fung, Maggie; Liang, Jiachao; Boss, Michael; Hylton, Nola

    2018-01-01

Diffusion-weighted MRI has become ubiquitous in many areas of medicine, including cancer diagnosis and treatment response monitoring. Reproducibility of diffusion metrics is essential for their acceptance as quantitative biomarkers in these areas. We examined the variability in the apparent diffusion coefficient (ADC) obtained from both postprocessing software implementations utilized by the NCI Quantitative Imaging Network and online scan time-generated ADC maps. Phantom and in vivo breast studies were evaluated for two ([Formula: see text]) and four ([Formula: see text]) [Formula: see text]-value diffusion metrics. Concordance of the majority of implementations was excellent for both phantom ADC measures and in vivo [Formula: see text], with relative biases [Formula: see text] ([Formula: see text]) and [Formula: see text] (phantom [Formula: see text]) but with higher deviations in ADC at the lowest phantom ADC values. In vivo [Formula: see text] concordance was good, with typical biases of [Formula: see text] to 3% but higher for online maps. Multiple b-value ADC implementations were separated into two groups determined by the fitting algorithm. Intergroup mean ADC differences ranged from negligible for phantom data to 2.8% for [Formula: see text] in vivo data. Some higher deviations were found for individual implementations and online parametric maps. Despite generally good concordance, implementation biases in ADC measures are sometimes significant and may be large enough to be of concern in multisite studies.
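Both the two-b-value and multi-b-value ADC metrics assume monoexponential signal decay, S(b) = S0·exp(−b·ADC). A sketch of the two common estimators; the log-linear least-squares fit is only one of the fitting-algorithm choices the compared implementations used, and the signals below are synthetic:

```python
import numpy as np

def adc_two_point(s_low, s_high, b_low, b_high):
    """Two-b-value ADC: log-ratio of signals over the b-value difference."""
    return np.log(s_low / s_high) / (b_high - b_low)

def adc_log_linear(signals, b_values):
    """Multi-b-value ADC via least-squares fit of ln(S) against b."""
    slope, _intercept = np.polyfit(np.asarray(b_values), np.log(signals), 1)
    return -slope

b = np.array([0.0, 100.0, 600.0, 800.0])   # b-values, s/mm^2
true_adc = 1.0e-3                          # mm^2/s, a typical tissue value
s = 1000.0 * np.exp(-b * true_adc)         # noise-free synthetic signals

print(adc_two_point(s[0], s[-1], b[0], b[-1]))  # ≈ 1e-3
print(adc_log_linear(s, b))                     # ≈ 1e-3
```

On noisy data the two estimators generally disagree, which is the kind of implementation-dependent bias the study quantifies.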

  6. Extraction of carrier mobility and interface trap density in InGaAs metal oxide semiconductor structures using gated Hall method

    NASA Astrophysics Data System (ADS)

    Chidambaram, Thenappan

III-V semiconductors are potential candidates to replace Si as a channel material in next-generation CMOS integrated circuits owing to their superior carrier mobilities. The low density of states (DOS) and typically high interface and border trap densities (Dit) in high-mobility III-V semiconductors make it difficult to quantify Dit near the conduction band edge. The trap response above the threshold voltage of a MOSFET can be very fast, and conventional Dit extraction methods, based on the capacitance/conductance response (CV methods) of MOS capacitors at frequencies <1 MHz, cannot distinguish conducting and trapped carriers. In addition, the CV methods have to deal with high dispersion in the accumulation region, which makes it difficult to measure the true oxide capacitance, Cox. Another implication of these properties of III-V interfaces is an ambiguity in determining the electron density in the MOSFET channel. Traditional evaluation of carrier density by integration of the C-V curve gives incorrect values for Dit and mobility. Here we employ the gated Hall method to quantify the Dit spectrum at the high-K oxide/III-V semiconductor interface for buried- and surface-channel devices using Hall measurement and capacitance-voltage data. Determination of electron density directly from Hall measurements allows true mobility values to be obtained.
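The basic Hall relations underlying the gated Hall technique can be sketched as follows; the drive current, field, voltages, and sheet conductance are illustrative numbers, not the paper's data:

```python
# Building blocks of a Hall measurement: sheet carrier density from the
# Hall voltage, then drift mobility from the sheet conductance.
Q_E = 1.602176634e-19  # elementary charge, C

def sheet_density(current_a, b_field_t, hall_voltage_v):
    """n_s = I * B / (q * V_H), carriers per m^2."""
    return current_a * b_field_t / (Q_E * hall_voltage_v)

def hall_mobility(sheet_conductance_s, n_s):
    """mu = sigma_s / (q * n_s), m^2 / (V s)."""
    return sheet_conductance_s / (Q_E * n_s)

n_s = sheet_density(10e-6, 0.5, 1e-3)  # 10 uA drive, 0.5 T, 1 mV Hall voltage
mu = hall_mobility(2e-4, n_s)          # assumed sheet conductance, S per square
print(n_s, mu)
```

Because n_s comes directly from the Hall voltage rather than from C-V integration, trapped (non-conducting) carriers do not inflate it, which is the point of the gated Hall approach described above.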

  7. A geometric approach to non-linear correlations with intrinsic scatter

    NASA Astrophysics Data System (ADS)

    Pihajoki, Pauli

    2017-12-01

We propose a new mathematical model for (n − k)-dimensional non-linear correlations with intrinsic scatter in n-dimensional data. The model is based on Riemannian geometry and is naturally symmetric with respect to the measured variables and invariant under coordinate transformations. We combine the model with a Bayesian approach for estimating the parameters of the correlation relation and the intrinsic scatter. A side benefit of the approach is that censored and truncated data sets and independent, arbitrary measurement errors can be incorporated. We also derive analytic likelihoods for the typical astrophysical use case of linear relations in n-dimensional Euclidean space. We pay particular attention to the case of linear regression in two dimensions and compare our results to existing methods. Finally, we apply our methodology to the well-known MBH-σ correlation between the mass of a supermassive black hole in the centre of a galactic bulge and the corresponding bulge velocity dispersion. The main result of our analysis is that the most likely slope of this correlation is ∼6 for the data sets used, rather than the values in the range of ∼4-5 typically quoted in the literature for these data.
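For intuition, the conventional (non-geometric) likelihood that the paper's symmetric model generalizes treats intrinsic scatter as adding in quadrature to the measurement errors along y. A sketch on synthetic data, with a slope of 6 echoing the MBH-σ result above; unlike the paper's model, this sketch is not symmetric in the variables:

```python
import numpy as np

# Gaussian likelihood for y = a*x + b with intrinsic scatter s added in
# quadrature to the y measurement errors. A coarse grid search keeps the
# sketch free of external optimizers; data are synthetic.
def neg_log_like(a, b, s, x, y, y_err):
    var = y_err ** 2 + s ** 2
    return 0.5 * np.sum((y - a * x - b) ** 2 / var + np.log(2 * np.pi * var))

def fit_grid(x, y, y_err, slopes, scatters):
    best = None
    for a in slopes:
        for s in scatters:
            w = 1.0 / (y_err ** 2 + s ** 2)
            b = np.sum(w * (y - a * x)) / np.sum(w)  # analytic intercept
            nll = neg_log_like(a, b, s, x, y, y_err)
            if best is None or nll < best[0]:
                best = (nll, a, b, s)
    return best[1:]

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 10.0, 300)
y_err = np.full_like(x, 0.2)
y = 6.0 * x + 1.0 + rng.normal(scale=np.hypot(0.2, 0.5), size=x.size)

a_hat, b_hat, s_hat = fit_grid(x, y, y_err,
                               np.linspace(5.5, 6.5, 101),
                               np.linspace(0.0, 1.0, 51))
print(a_hat, s_hat)  # slope near 6, recovered intrinsic scatter near 0.5
```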

  8. Tracking Urban Air Deterioration in San Francisco: Carbon and Nitrogen Isotope Study of Weedy Plants.

    NASA Astrophysics Data System (ADS)

    Colman, A. S.; Wessells, A.; Swaine, M. E.; Fogel, M. L.

    2003-12-01

Stable isotopes of carbon and nitrogen have long been used as indicators of ecosystem structure and nutrient cycling in natural and anthropogenically disturbed terrestrial ecosystems. However, relatively few of these studies have targeted urban environments, where nitrogen and CO2 emissions dramatically impact atmospheric composition. Here we present the results of carbon and nitrogen isotope analyses of herbaceous plants growing in and around San Francisco. These plants were collected mainly as part of a public outreach walking tour of San Francisco ("The Weed Walk - Concrete Jungle") sponsored by the San Francisco Exploratorium. In all cases, the plants were sampled in areas with negligible forest canopy. A consortium of species was collected at each of several distinct sites to examine the localized and regional impact of automobile traffic and proximity to the ocean on the isotopic compositions of carbon and nitrogen. δ13C measurements trend towards relatively light values in the range of -26 to -36 permil. In comparison, the leaves of similar herbaceous species in relatively unpolluted and unforested environments typically have δ13C values in the range of -22 to -28 permil. The observed light carbon isotopic compositions potentially reflect input of isotopically light CO2 emissions from fossil fuel burning, boosting atmospheric CO2 concentrations to >10% above background. δ15N values range from +4 to +9 permil. This is substantially offset from the -4 to +1 permil values that typify vegetation in regions where nitrogen oxides from fossil fuel combustion dominate the nitrogen inputs. The nitrogen isotope compositions might suggest nitrogen contributions from a marine source (typically +6 permil).
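The δ values quoted above follow the standard per-mil definition relative to an international standard (VPDB for carbon). A sketch; the VPDB ratio below is the commonly quoted value, and the sample ratio is purely illustrative:

```python
# Delta notation: per-mil deviation of a sample's isotope ratio from an
# international standard.
def delta_permil(r_sample, r_standard):
    return (r_sample / r_standard - 1.0) * 1000.0

R_VPDB_13C = 0.0112372  # commonly quoted 13C/12C ratio of the VPDB standard

# A leaf whose 13C/12C ratio sits 3% below the standard falls in the
# fossil-fuel-influenced range reported above:
print(round(delta_permil(0.97 * R_VPDB_13C, R_VPDB_13C), 1))  # -> -30.0
```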

  9. Use of a liquid-crystal, heater-element composite for quantitative, high-resolution heat transfer coefficients on a turbine airfoil, including turbulence and surface roughness effects

    NASA Astrophysics Data System (ADS)

    Hippensteele, Steven A.; Russell, Louis M.; Torres, Felix J.

    1987-05-01

Local heat transfer coefficients were measured along the midchord of a three-times-size turbine vane airfoil in a static cascade operated at room temperature over a range of Reynolds numbers. The test surface consisted of a composite of commercially available materials: a Mylar sheet with a layer of cholesteric liquid crystals, which change color with temperature, and a heater made of a polyester sheet coated with vapor-deposited gold, which produces uniform heat flux. After the initial selection and calibration of the composite sheet, accurate, quantitative, and continuous heat transfer coefficients were mapped over the airfoil surface. Tests were conducted at two free-stream turbulence intensities: 0.6 percent, which is typical of wind tunnels, and 10 percent, which is typical of real engine conditions. In addition to a smooth airfoil, the effects of local leading-edge sand roughness were also examined for a value greater than the critical roughness. The local heat transfer coefficients are presented for both free-stream turbulence intensities for inlet Reynolds numbers from 1.20 x 10^5 to 5.55 x 10^5. Comparisons are also made with analytical values of heat transfer coefficients obtained from the STAN5 boundary layer code.
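The reduction from color-change temperature and heater power to a local coefficient is Newton's law of cooling. A sketch with illustrative numbers, not values from the report:

```python
# The gold film supplies a known uniform heat flux q'', the liquid
# crystals mark where the wall reaches their calibrated color-change
# temperature, and the local coefficient follows directly.
def heat_transfer_coefficient(q_flux_w_m2, t_wall_c, t_freestream_c):
    """h = q'' / (T_wall - T_inf), in W/(m^2 K)."""
    return q_flux_w_m2 / (t_wall_c - t_freestream_c)

q = 2000.0        # W/m^2 from the vapor-deposited gold heater (illustrative)
t_color = 35.0    # calibrated liquid-crystal color-play temperature, deg C
t_air = 25.0      # free-stream temperature, deg C
print(heat_transfer_coefficient(q, t_color, t_air))  # -> 200.0
```

Because the heat flux is uniform, the color contour at a fixed wall temperature is directly a contour of constant h, which is what makes the composite-sheet technique a full-surface map rather than a point measurement.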

  10. Use of a liquid-crystal and heater-element composite for quantitative, high-resolution heat-transfer coefficients on a turbine airfoil including turbulence and surface-roughness effects

    NASA Astrophysics Data System (ADS)

    Hippensteele, S. A.; Russell, L. M.; Torres, F. J.

Local heat transfer coefficients were measured along the midchord of a three-times-size turbine vane airfoil in a static cascade operated at room temperature over a range of Reynolds numbers. The test surface consisted of a composite of commercially available materials: a Mylar sheet with a layer of cholesteric liquid crystals, which change color with temperature, and a heater made of a polyester sheet coated with vapor-deposited gold, which produces uniform heat flux. After the initial selection and calibration of the composite sheet, accurate, quantitative, and continuous heat transfer coefficients were mapped over the airfoil surface. Tests were conducted at two free-stream turbulence intensities: 0.6 percent, which is typical of wind tunnels, and 10 percent, which is typical of real engine conditions. In addition to a smooth airfoil, the effects of local leading-edge sand roughness were also examined for a value greater than the critical roughness. The local heat transfer coefficients are presented for both free-stream turbulence intensities for inlet Reynolds numbers from 1.20 x 10^5 to 5.55 x 10^5. Comparisons are also made with analytical values of heat transfer coefficients obtained from the STAN5 boundary layer code.

  11. Use of a liquid-crystal, heater-element composite for quantitative, high-resolution heat transfer coefficients on a turbine airfoil, including turbulence and surface roughness effects

    NASA Technical Reports Server (NTRS)

    Hippensteele, Steven A.; Russell, Louis M.; Torres, Felix J.

    1987-01-01

Local heat transfer coefficients were measured along the midchord of a three-times-size turbine vane airfoil in a static cascade operated at room temperature over a range of Reynolds numbers. The test surface consisted of a composite of commercially available materials: a Mylar sheet with a layer of cholesteric liquid crystals, which change color with temperature, and a heater made of a polyester sheet coated with vapor-deposited gold, which produces uniform heat flux. After the initial selection and calibration of the composite sheet, accurate, quantitative, and continuous heat transfer coefficients were mapped over the airfoil surface. Tests were conducted at two free-stream turbulence intensities: 0.6 percent, which is typical of wind tunnels, and 10 percent, which is typical of real engine conditions. In addition to a smooth airfoil, the effects of local leading-edge sand roughness were also examined for a value greater than the critical roughness. The local heat transfer coefficients are presented for both free-stream turbulence intensities for inlet Reynolds numbers from 1.20 x 10^5 to 5.55 x 10^5. Comparisons are also made with analytical values of heat transfer coefficients obtained from the STAN5 boundary layer code.

  12. The frequency-dependent response of single aerosol particles to vapour phase oscillations and its application in measuring diffusion coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Thomas C.; Davies, James F.; Wilson, Kevin R.

A new method for measuring diffusion in the condensed phase of single aerosol particles is proposed and demonstrated. The technique is based on the frequency-dependent response of a binary particle to oscillations in the vapour phase of one of its chemical components. Here, we discuss how this physical situation allows what would typically be a non-linear boundary value problem to be approximately reduced to a linear boundary value problem. For the case of aqueous aerosol particles, we investigate the accuracy of the closed-form analytical solution to this linear problem through a comparison with the numerical solution of the full problem. Then, using experimentally measured whispering gallery modes to track the frequency-dependent response of aqueous particles to relative humidity oscillations, we determine diffusion coefficients as a function of water activity. The measured diffusion coefficients are compared to previously reported values found using the two common experiments: (i) the analysis of the sorption/desorption of water from a particle after a step-wise change to the surrounding relative humidity and (ii) the isotopic exchange of water between a particle and the vapour phase. The technique presented here has two main strengths: first, when compared to the sorption/desorption experiment, it does not require the numerical evaluation of a boundary value problem during the fitting process, as a closed-form expression is available. Second, when compared to the isotope exchange experiment, it does not require the use of labeled molecules. Therefore, the frequency-dependent experiment retains the advantages of these two commonly used methods but does not suffer from their drawbacks.

  13. The frequency-dependent response of single aerosol particles to vapour phase oscillations and its application in measuring diffusion coefficients

    DOE PAGES

    Preston, Thomas C.; Davies, James F.; Wilson, Kevin R.

    2017-01-13

A new method for measuring diffusion in the condensed phase of single aerosol particles is proposed and demonstrated. The technique is based on the frequency-dependent response of a binary particle to oscillations in the vapour phase of one of its chemical components. Here, we discuss how this physical situation allows what would typically be a non-linear boundary value problem to be approximately reduced to a linear boundary value problem. For the case of aqueous aerosol particles, we investigate the accuracy of the closed-form analytical solution to this linear problem through a comparison with the numerical solution of the full problem. Then, using experimentally measured whispering gallery modes to track the frequency-dependent response of aqueous particles to relative humidity oscillations, we determine diffusion coefficients as a function of water activity. The measured diffusion coefficients are compared to previously reported values found using the two common experiments: (i) the analysis of the sorption/desorption of water from a particle after a step-wise change to the surrounding relative humidity and (ii) the isotopic exchange of water between a particle and the vapour phase. The technique presented here has two main strengths: first, when compared to the sorption/desorption experiment, it does not require the numerical evaluation of a boundary value problem during the fitting process, as a closed-form expression is available. Second, when compared to the isotope exchange experiment, it does not require the use of labeled molecules. Therefore, the frequency-dependent experiment retains the advantages of these two commonly used methods but does not suffer from their drawbacks.

  14. Vertical profiles of selected mean and turbulent characteristics of the boundary layer within and above a large banana screenhouse

    NASA Astrophysics Data System (ADS)

    Tanny, Josef; Lukyanov, Victor; Neiman, Michael; Cohen, Shabtai; Teitel, Meir

    2017-04-01

    The area of agricultural crops covered by screens is constantly increasing worldwide. While irrigation requirements for open canopies are well documented, corresponding information for covered crops is scarce. Therefore much effort in recent years has focused on measuring and modeling evapotranspiration of screen-covered crops. One model that can be utilized for such estimations is the mixing length model. As a first step towards future application of this model, selected mean and turbulent properties of the boundary layer above and below a shading screen were measured and analyzed. Experiments were carried out in a large banana plantation, covered by a light-weight horizontal shading screen deployed 5.5 m high. During the measurement period, plant height increased from 2.5 to 4.1 m. A 3D ultrasonic anemometer and temperature and humidity sensors were mounted on a lifting tower with a manual crank that could measure between 2.8 and 10.2 m height, i.e., both below and above the screen. In each profile, the sensors measured at different heights during consecutive time intervals of about 15 min each. Vertical profiles were measured around noon when external meteorological conditions were relatively stable. An additional stationary tower installed within the screenhouse about 20 m to the north of the lifting tower, continuously measured corresponding reference values at 4.5 m height. Footprint analysis shows that out of the 62 measured time intervals, only in 4 cases the 90% flux contribution originated from outside the screenhouse. Both horizontal air velocity, Uh, and normalized horizontal air velocity increased with height. Air temperature generally decreased with height, indicating that the boundary layer was statically unstable. Specific humidity decreased with height, as is typical for a well irrigated crop. Friction velocity, u∗, was higher above than below the screen, illustrating the role of the screen as a momentum sink. 
The mean ratio between friction velocity below and above the screen was 0.55. Vertical profiles of the surface drag coefficient Cd = (u∗/Uh)² showed a consistent decrease of √Cd with height, mainly above the screen. This result is expected since, within a constant-flux layer, the surface drag is bound to decrease with height. The energy spectrum of each velocity component, both below and above the screen, was calculated separately, and their sum, the 3D spectrum (Tennekes and Lumley, 1972), was plotted as a function of frequency. Slopes of linear fits to the spectra ranged between -1.42 and -1.68, with a mean value of -1.59±0.04. These slopes are close to -5/3 (-1.67), the value typical of the inertial subrange in steady-state turbulent boundary layers (Stull, 1988).
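The drag-coefficient and spectral-slope computations described above can be sketched as follows; the inputs are illustrative, and the synthetic spectrum simply verifies that a -5/3 power law is recovered:

```python
import numpy as np

# Surface drag coefficient Cd = (u*/Uh)^2, and the log-log slope of an
# energy spectrum, which approaches -5/3 in the inertial subrange.
def drag_coefficient(u_star, u_h):
    return (u_star / u_h) ** 2

def spectral_slope(freq, energy):
    slope, _intercept = np.polyfit(np.log(freq), np.log(energy), 1)
    return slope

print(drag_coefficient(0.3, 2.0))  # ≈ 0.0225

# A synthetic f^(-5/3) spectrum recovers the Kolmogorov slope:
f = np.linspace(0.1, 10.0, 100)
print(spectral_slope(f, f ** (-5.0 / 3.0)))  # ≈ -1.667
```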

  15. Application of acoustic doppler current profilers for measuring three-dimensional flow fields and as a surrogate measurement of bedload transport

    USGS Publications Warehouse

    Conaway, Jeffrey S.

    2005-01-01

Acoustic Doppler current profilers (ADCPs) have been in use in the riverine environment for nearly 20 years. Their application has primarily focused on the measurement of streamflow discharge. ADCPs emit high-frequency sound pulses and receive reflected sound echoes from sediment particles in the water column. The Doppler shift between transmitted and return signals is resolved into a velocity component that is measured in three dimensions by simultaneously transmitting four independent acoustical pulses. To measure the absolute velocity magnitude and direction in the water column, the velocity magnitude and direction of the instrument must also be computed. Typically this is accomplished by ensonifying the streambed with an acoustical pulse that also provides a depth measurement for each of the four acoustic beams. Sediment transport on or near the streambed will bias these measurements and requires an external positioning system such as a differentially corrected Global Positioning System (GPS). Although the influence of hydraulic structures such as spur dikes and bridge piers is typically only measured and described in one or two dimensions, the use of differentially corrected GPS with ADCPs provides a fully three-dimensional measurement of the magnitude and direction of the water column at such structures. The measurement of these flow disturbances in a field setting also captures the natural pulsations of river flow that cannot be easily quantified or modeled by numerical simulations or flumes. Several examples of measured three-dimensional flow conditions at bridge sites throughout Alaska are presented. The bias introduced to the bottom-track measurement is being investigated as a surrogate measurement of bedload transport. By fixing the position of the ADCP for a known period of time, the apparent velocity of the streambed at that position can be determined. Initial results and comparisons to traditionally measured bedload values are presented.
These initial results and those by other researchers are helping to determine a direction for further research of noncontact measurements of sediment transport. Copyright ASCE 2005.
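The moving-bed surrogate described above reduces to a simple rate: with the instrument held at a fixed position, the displacement reported by bottom tracking is entirely moving-bed bias, so dividing it by the elapsed time gives the apparent streambed velocity. A sketch with illustrative numbers:

```python
# Apparent streambed velocity from a stationary-ADCP moving-bed test.
def apparent_bed_velocity(bt_displacement_m, elapsed_s):
    """Bottom-track displacement over elapsed time, m/s."""
    return bt_displacement_m / elapsed_s

# 12 m of spurious bottom-track displacement accumulated over 10 minutes:
print(apparent_bed_velocity(12.0, 600.0))  # -> 0.02 (m/s)
```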

  16. Optimizing value utilizing Toyota Kata methodology in a multidisciplinary clinic.

    PubMed

    Merguerian, Paul A; Grady, Richard; Waldhausen, John; Libby, Arlene; Murphy, Whitney; Melzer, Lilah; Avansino, Jeffrey

    2015-08-01

    Value in healthcare is measured in terms of patient outcomes achieved per dollar expended. Outcomes and cost must be measured at the patient level to optimize value. Multidisciplinary clinics have been shown to be effective in providing coordinated and comprehensive care with improved outcomes, yet tend to have higher cost than typical clinics. We sought to lower individual patient cost and optimize value in a pediatric multidisciplinary reconstructive pelvic medicine (RPM) clinic. The RPM clinic is a multidisciplinary clinic that takes care of patients with anomalies of the pelvic organs. The specialties involved include Urology, General Surgery, Gynecology, and Gastroenterology/Motility. From May 2012 to November 2014 we performed time-driven activity-based costing (TDABC) analysis by measuring provider time for each step in the patient flow. Using observed time and the estimated hourly cost of each of the providers we calculated the final cost at the individual patient level, targeting clinic preparation. We utilized Toyota Kata methodology to enhance operational efficiency in an effort to optimize value. Variables measured included cost, time to perform a task, number of patients seen in clinic, percent value-added time (VAT) to patients (face to face time) and family experience scores (FES). At the beginning of the study period, clinic costs were $619 per patient. We reduced conference time from 6 min/patient to 1 min per patient, physician preparation time from 8 min to 6 min and increased Medical Assistant (MA) preparation time from 9.5 min to 20 min, achieving a cost reduction of 41% to $366 per patient. Continued improvements further reduced the MA preparation time to 14 min and the MD preparation time to 5 min with a further cost reduction to $194 (69%) (Figure). During this study period, we increased the number of appointments per clinic. 
We demonstrated sustained improvement in FES with regard to families' overall experience with their providers. Value-added time increased from 60% to 78%, although this change was not statistically significant. Time-based cost analysis effectively measures individualized patient cost. We achieved a 69% reduction in clinic preparation costs. Despite this reduction in costs, we were able to maintain VAT and sustain improvements in family experience. In caring for complex patients, lean management methodology enables optimization of value in a multidisciplinary clinic. Copyright © 2015. Published by Elsevier Ltd.
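The TDABC arithmetic described above reduces to a simple sum: per-patient cost is each process step's time multiplied by that provider's cost rate. The step times and hourly rates below are illustrative placeholders, not the study's figures (a minimal sketch):

```python
# Time-driven activity-based costing (TDABC) sketch: per-patient cost is the
# sum over process steps of (minutes spent) x (provider cost rate per hour) / 60.
# Step times and hourly rates below are hypothetical, for illustration only.

def tdabc_cost(steps):
    """steps: iterable of (minutes, hourly_rate_usd) per process step."""
    return sum(minutes * rate / 60.0 for minutes, rate in steps)

baseline = [(6, 300), (8, 300), (9.5, 40)]  # conference, MD prep, MA prep
improved = [(1, 300), (6, 300), (20, 40)]   # after one improvement cycle

print(round(tdabc_cost(baseline), 2))  # → 76.33
print(round(tdabc_cost(improved), 2))  # → 48.33
```

Shifting preparation minutes from the most expensive provider (MD) to a lower-cost one (MA) lowers the per-patient cost even when total minutes increase, which is the mechanism behind the reported reductions.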

  17. Fundamentals of Research Data and Variables: The Devil Is in the Details.

    PubMed

    Vetter, Thomas R

    2017-10-01

    Designing, conducting, analyzing, reporting, and interpreting the findings of a research study require an understanding of the types and characteristics of data and variables. Descriptive statistics are typically used simply to calculate, describe, and summarize the collected research data in a logical, meaningful, and efficient way. Inferential statistics allow researchers to make a valid estimate of the association between an intervention and the treatment effect in a specific population, based upon their randomly collected, representative sample data. Categorical data can be either dichotomous or polytomous. Dichotomous data have only 2 categories, and thus are considered binary. Polytomous data have more than 2 categories. Unlike dichotomous and polytomous data, ordinal data are rank ordered, typically based on a numerical scale that is composed of a small set of discrete classes or integers. Continuous data are measured on a continuum and can have any numeric value over this continuous range. Continuous data can be meaningfully divided into smaller and smaller or finer and finer increments, depending upon the precision of the measurement instrument. Interval data are a form of continuous data in which equal intervals represent equal differences in the property being measured. Ratio data are another form of continuous data, which have the same properties as interval data, plus a true definition of an absolute zero point, such that the ratios of the values on the measurement scale are meaningful. The normal (Gaussian) distribution ("bell-shaped curve") is one of the most common statistical distributions. Many applied inferential statistical tests are predicated on the assumption that the analyzed data follow a normal distribution. The histogram and the Q-Q plot are 2 graphical methods to assess whether a set of data have a normal distribution (display "normality"). 
The Shapiro-Wilk test and the Kolmogorov-Smirnov test are 2 well-known and historically widely applied quantitative methods to assess data normality. Parametric statistical tests make certain assumptions about the characteristics and/or parameters of the underlying population distribution upon which the test is based, whereas nonparametric tests make fewer or less rigorous assumptions. If the normality test concludes that the study data deviate significantly from a Gaussian distribution, rather than applying a less robust nonparametric test, the problem can potentially be remedied by judiciously and openly: (1) performing a data transformation of all the data values; or (2) eliminating any obvious data outlier(s).
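As a concrete illustration of the checks named above, the sketch below (using SciPy, with hypothetical right-skewed data) applies the Shapiro-Wilk test before and after a log transformation, and computes Q-Q plot coordinates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical right-skewed measurements (lognormal), for which a log
# transformation is the textbook remedy.
raw = rng.lognormal(mean=3.0, sigma=0.8, size=200)

# Shapiro-Wilk: the null hypothesis is that the data come from a normal distribution.
_, p_raw = stats.shapiro(raw)          # very small p: reject normality
_, p_log = stats.shapiro(np.log(raw))  # the log of lognormal data is exactly normal

# Q-Q plot coordinates (plotting omitted); points near a straight line indicate normality.
(osm, osr), (slope, intercept, r) = stats.probplot(np.log(raw), dist="norm")
```

For these transformed data the Q-Q correlation coefficient `r` is close to 1, consistent with the log transformation having restored normality.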

  18. Salton Sea 1° x 2° NTMS area, California and Arizona: data report (abbreviated)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heffner, J.D.

    1980-09-01

    Surface sediment samples were collected at 997 sites. Ground water samples were collected at 76 sites. Neutron activation analysis results are given for uranium and 16 other elements in sediments, and for uranium and 9 other elements in ground water. Mass spectrometry results are given for helium in ground water. Data from ground water sites include (1) water chemistry measurements (pH, conductivity, and alkalinity), (2) physical measurements (water temperature, well description where applicable, and scintillometer reading), and (3) elemental analyses (U, Al, Br, Cl, Dy, F, He, Mg, Mn, Na and V). Data from sediment sites include (1) stream water chemistry measurements from sites where water was available and (2) elemental analyses (U, Th, Hf, Al, Ce, Dy, Eu, Fe, La, Lu, Mn, Sc, Sm, Na, Ti, V, and Yb). Sample site descriptors are given. Areal distribution maps, histograms, and cumulative frequency plots for the elements listed above; U/Th and U/Hf ratios; and scintillometer readings at sediment sample sites are included. Analyses of the sediment fraction finer than 149 μm show high uranium values clustered in the Eagle and Chuckwalla Mountains. High uranium values in the 420 μm to 1000 μm fraction are clustered in the McCoy Mountains. Both fractions show groups of high values in the Chocolate Mountains at the southeastern edge of the Chocolate Mountains Aerial Gunnery Range. Areal distribution of analytical values shows that high values of many elements in both size fractions are grouped around the Eagle Mountains and the Chuckwalla Mountains. Fe, Mn, Ti, V, Sc, Hf, and the rare earth elements, all of which typically occur in high-density minerals, have higher average (log mean) concentrations in the finer fraction than in the coarser fraction.

  19. Temporal evolution of UV opacity and dust particle size at Gale Crater from MSL/REMS measurements

    NASA Astrophysics Data System (ADS)

    Vicente-Retortillo, Álvaro; Martinez, German; Renno, Nilton O.; Lemmon, Mark T.; Mason, Emily; De la Torre, Manuel

    2016-10-01

    A better characterization of the size, radiative properties and temporal variability of suspended dust in the Martian atmosphere is necessary to improve our understanding of the current climate of Mars. The REMS UV sensor onboard the Mars Science Laboratory (MSL) Curiosity rover has performed ground-based measurements of solar radiation in six different UV spectral bands for the first time on Mars. We developed a novel technique to retrieve dust opacity and particle size from REMS UV measurements. We use the electrical output current (TELRDR products) of the six photodiodes and the ancillary data (ADR products) to avoid inconsistencies found in the processed data (units of W/m2) when the solar zenith angle is above 30°. In addition, we use TELRDR and ADR data only in events during which the Sun is temporarily blocked by the rover's masthead or mast, to mitigate uncertainties associated with the degradation of the sensor due to the deposition of dust on it. Then we use a radiative transfer model based on the Monte Carlo method, with updated dust properties, to retrieve the dust opacity and particle size. We find that the seasonal trend of UV opacity is consistent with opacity values at 880 nm derived from Mastcam images of the Sun, with annual maximum values in spring and summer and minimum values in winter. The interannual variability is low, with two local maxima in mid-spring and mid-summer. Finally, dust particle size also varies throughout the year, with typical values of the effective radius in the range between 0.5 and 2 μm. These variations in particle size occur in a similar way to those in dust opacity; the smallest sizes are found when the opacity values are the lowest.

  20. Study of atmospheric CH4 mole fractions at three WMO/GAW stations in China

    NASA Astrophysics Data System (ADS)

    Fang, Shuang-Xi; Zhou, Ling-Xi; Masarie, Kenneth A.; Xu, Lin; Rella, Chris W.

    2013-05-01

    CH4 mole fractions were continuously measured from 2009 to 2011 at three WMO/GAW stations in China (Lin'an, LAN; Longfengshan, LFS; and Waliguan, WLG) using three Cavity Ring Down Spectroscopy instruments. LAN and LFS are GAW regional measurement stations. LAN is located in China's most economically developed region, and LFS is in a rice production area (planting area > 40,000 km2). WLG is a global measurement station in remote northwest China. At LAN, high methane mole fractions are observed in all seasons. Surface winds from the northeast enhance CH4 values, with a maximum increase of 32 ± 15 ppb in summer. The peak-to-peak amplitude of the seasonal cycle is 77 ± 35 ppb. At LFS, the diurnal cycle amplitude is approximately constant throughout the year except in summer, when a value of 196 ± 65 ppb is observed. CH4 values at LFS reach their peak in July, which differs from the seasonal variations typically observed in the northern hemisphere. CH4 mole fractions at WLG show both the smallest values and the lowest variability. Maximum values occur during summer, which is different from other northern hemisphere WMO/GAW global stations. The seasonal cycle amplitude is 17 ± 11 ppb. The linear growth rates at LAN, LFS, and WLG are 8.0 ± 1.2, 7.9 ± 0.9, and 9.4 ± 0.2 ppb yr-1, respectively, all larger than the global mean over the same 3-year period. Results from this study help to improve our basic understanding of observed atmospheric CH4 in China.
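The linear growth rates quoted above are simply the slopes of least-squares fits to the mole-fraction time series. A minimal sketch with hypothetical annual means (not the stations' data):

```python
import numpy as np

# Hypothetical annual-mean CH4 mole fractions (ppb) at one station over 2009-2011.
years = np.array([2009.0, 2010.0, 2011.0])
ch4_ppb = np.array([1850.0, 1858.0, 1866.0])

# Linear growth rate = slope of a first-degree least-squares fit, in ppb per year.
growth_rate_ppb_per_yr = np.polyfit(years, ch4_ppb, 1)[0]
```

With these made-up values the fitted slope is 8.0 ppb per year; on real monthly data one would first deseasonalize or fit annual means, as the seasonal cycle otherwise biases the slope.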

  1. Investigation of differences between field and laboratory pH measurements of national atmospheric deposition program/national trends network precipitation samples

    USGS Publications Warehouse

    Latysh, N.; Gordon, J.

    2004-01-01

    A study was undertaken to investigate differences between laboratory and field pH measurements for precipitation samples collected from 135 weekly precipitation-monitoring sites in the National Trends Network from 12/30/1986 to 12/28/1999. Differences in pH between field and laboratory measurements occurred for 96% of samples collected during this time period. Differences between the two measurements were evaluated for precipitation samples collected before and after January 1994, when modifications to sample-handling protocol and elimination of the contaminating bucket o-ring used in sample shipment occurred. Median hydrogen-ion and pH differences between field and laboratory measurements declined from 3.9 μeq L-1 or 0.10 pH units before the 1994 protocol change to 1.4 μeq L-1 or 0.04 pH units after the 1994 protocol change. Hydrogen-ion differences between field and laboratory measurements had a high correlation with the sample pH determined in the field. The largest pH differences between the two measurements occurred for high-pH samples (>5.6), typical of precipitation collected in the western United States; however, low-pH samples (<5.0) displayed the highest variability in hydrogen-ion differences between field and laboratory analyses. Properly screened field pH measurements are a useful alternative to laboratory pH values for trend analysis, particularly before 1994, when laboratory pH values were influenced by sample-collection equipment.
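The distinction between pH differences and hydrogen-ion differences matters because pH is logarithmic: the same 0.10-unit discrepancy corresponds to a much larger hydrogen-ion difference in acidic samples. A small sketch (hypothetical field/laboratory pairs, not the network's data):

```python
def h_ueq_per_L(pH):
    # Hydrogen-ion concentration in microequivalents per liter:
    # [H+] = 10**(-pH) mol/L, and 1 mol/L of H+ equals 1e6 ueq/L.
    return 10.0 ** (-pH) * 1.0e6

# The same 0.10-pH-unit field/lab discrepancy at low vs high pH:
diff_acidic = h_ueq_per_L(4.45) - h_ueq_per_L(4.55)   # ~7.3 ueq/L
diff_high_pH = h_ueq_per_L(5.55) - h_ueq_per_L(5.65)  # ~0.58 ueq/L
```

This is why hydrogen-ion differences and pH differences can rank samples differently, and why variability in μeq L-1 concentrates in the low-pH samples.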

  2. Modeling and validation of spectral BRDF on material surface of space target

    NASA Astrophysics Data System (ADS)

    Hou, Qingyu; Zhi, Xiyang; Zhang, Huili; Zhang, Wei

    2014-11-01

    The modeling and validation methods for the spectral BRDF of space-target surface materials are presented. First, the microscopic characteristics of the space targets' material surfaces were analyzed, using a fiber-optic spectrometer to measure the directional reflectivity of typical material surfaces. To determine whether the material surface of a space target is isotropic, atomic force microscopy was used to measure the surface structure and obtain a Gaussian distribution model of micro-facet heights. Then, a spectral BRDF model was constructed based on the isotropy of the material surface and the Gaussian micro-facet distribution obtained. The model characterizes both smooth and rough surfaces well, describing the material surface of the space target appropriately. Finally, a spectral BRDF measurement platform was set up in a laboratory, containing a tungsten-halogen-lamp lighting system, a fiber-optic-spectrometer detection system, and a mechanical measuring system, with the entire experimental measurement controlled and the measurement data collected automatically by computer. Yellow thermal-control material and a solar cell were measured; the spectral BRDF results show the relationship between the reflection angle and BRDF values at three wavelengths (380 nm, 550 nm, and 780 nm), and the difference between the theoretical model values and the measured data was evaluated by relative RMS error. Data analysis shows that the relative RMS error is less than 6%, which verifies the correctness of the spectral BRDF model.

  3. Seeing the light: the effects of particles, dissolved materials, and temperature on in situ measurements of DOM fluorescence in rivers and streams

    USGS Publications Warehouse

    Downing, Bryan D.; Pellerin, Brian A.; Bergamaschi, Brian A.; Saraceno, John Franco; Kraus, Tamara E.C.

    2012-01-01

    Field-deployable sensors designed to continuously measure the fluorescence of colored dissolved organic matter (FDOM) in situ are of growing interest. However, the ability to make FDOM measurements that are comparable across sites and over time requires a clear understanding of how instrument characteristics and environmental conditions affect the measurements. In particular, the effects of water temperature and light attenuation by both colored dissolved material and suspended particles may be significant in settings such as rivers and streams. Using natural standard reference materials, we characterized the performance of four commercially-available FDOM sensors under controlled laboratory conditions over ranges of temperature, dissolved organic matter (DOM) concentrations, and turbidity that spanned typical environmental ranges. We also examined field data from several major rivers to assess how often attenuation artifacts or temperature effects might be important. We found that raw (uncorrected) FDOM values were strongly affected by the light attenuation that results from dissolved substances and suspended particles as well as by water temperature. Observed effects of light attenuation and temperature agreed well with theory. Our results show that correction of measured FDOM values to account for these effects is necessary and feasible over much of the range of temperature, DOM concentration, and turbidity commonly encountered in surface waters. In most cases, collecting high-quality FDOM measurements that are comparable through time and between sites will require concurrent measurements of temperature and turbidity, and periodic discrete sample collection for laboratory measurement of DOM.

  4. Evaluation of the accuracy, consistency, and stability of measurements of the Planck constant used in the redefinition of the international system of units

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio; Schlamminger, Stephan; Stoudt, Sara; Pratt, Jon R.; Williams, Carl J.

    2018-02-01

    The Consultative Committee for Mass and Related Quantities (CCM), of the International Committee for Weights and Measures (CIPM), has recently declared the readiness of the community to support the redefinition of the international system of units (SI) at the next meeting of the General Conference on Weights and Measures (CGPM) scheduled for November 2018. This redefinition will replace the International Prototype of the Kilogram (IPK), as the definition and sole primary realization of the unit of mass, with a definition involving the Planck constant, h. This redefinition in terms of a fundamental constant of nature will enable widespread primary realizations not only of the kilogram but also of its multiples and sub-multiples, to best address the full range of practical needs in the measurement of mass. We review and discuss the statistical models and statistical data reductions, uncertainty evaluations, and substantive arguments that support the verification of several technical preconditions for the redefinition that the CCM has established, and whose verification the CCM has affirmed. These conditions relate to the accuracy and mutual consistency of qualifying measurement results. We also review an issue that has surfaced only recently, concerning the convergence toward a stable value of the historical values that the Task Group on Fundamental Constants of the Committee on Data for Science and Technology (CODATA-TGFC) has recommended for h over the years, even though the CCM has not deemed this issue to be relevant. We conclude that no statistically significant trend can be substantiated for these recommended values, but note that cumulative consensus values that may be derived from the historical measurement results for h seem to have converged while continuing to exhibit fluctuations that are typical of a process in statistical control. 
Finally, we argue that the most recent consensus value derived from the best measurements available for h, obtained using either a Kibble balance or the XRCD method, is reliable and has uncertainty no larger than the uncertainties surrounding the current primary and secondary realizations of the unit of mass, hence that no credible technical impediments stand in the way of the redefinition of the unit of mass in terms of a fixed value of h.
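A common way to form such a consensus value from independent measurement results is the inverse-variance weighted mean (a simple estimator that ignores any excess "dark" uncertainty between laboratories). The numbers below are hypothetical, merely shaped like Planck-constant results in units of 1e-34 J s:

```python
import math

def consensus(values, uncertainties):
    """Inverse-variance weighted mean and its standard uncertainty."""
    weights = [1.0 / (u * u) for u in uncertainties]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    u_mean = math.sqrt(1.0 / sum(weights))
    return mean, u_mean

# Hypothetical measurement results (value, standard uncertainty), in 1e-34 J s.
vals = [6.62607015, 6.62607012, 6.62607019]
uncs = [0.00000010, 0.00000020, 0.00000015]
h_mean, h_unc = consensus(vals, uncs)
```

The weighted mean always lies within the span of the inputs and has a smaller uncertainty than the best single result; a chi-squared consistency check on the residuals would be the natural next step before trusting it.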

  5. Measurement Properties of Performance-Specific Pain Ratings of Patients Awaiting Total Joint Arthroplasty as a Consequence of Osteoarthritis

    PubMed Central

    Stratford, Paul W.; Kennedy, Deborah M.; Woodhouse, Linda J.; Spadoni, Gregory

    2008-01-01

    Purpose: To estimate the test–retest reliability of the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) pain sub-scale and performance-specific assessments of pain, as well as the association between these measures for patients awaiting primary total hip or knee arthroplasty as a consequence of osteoarthritis. Methods: A total of 164 patients awaiting unilateral primary hip or knee arthroplasty completed four performance measures (self-paced walk, timed up and go, stair test, six-minute walk) and the WOMAC. Scores for 22 of these patients provided test–retest reliability data. Estimates of test–retest reliability (Type 2,1 intraclass correlation coefficient [ICC] and standard error of measurement [SEM]) and the association between measures were examined. Results: ICC values for individual performance-specific pain ratings were between 0.70 and 0.86; SEM values were between 0.97 and 1.33 pain points. ICC estimates for the four-item performance pain ratings and the WOMAC pain sub-scale were 0.82 and 0.57 respectively. The correlation between the sum of the pain scores for the four performance measures and the WOMAC pain sub-scale was 0.62. Conclusion: Reliability estimates for the performance-specific assessments of pain using the numeric pain rating scale were consistent with values reported for patients with a spectrum of musculoskeletal conditions. The reliability estimate for the WOMAC pain sub-scale was lower than typically reported in the literature. The level of association between the WOMAC pain sub-scale and the various performance-specific pain scales suggests that the scores can be used interchangeably when applied to groups but not for individual patients. PMID:20145758
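The SEM values reported above relate to the ICC through the standard identity SEM = SD × sqrt(1 − ICC), which in turn feeds minimal-detectable-change estimates. A sketch using a hypothetical between-subject SD (the ICC of 0.82 is the four-item value reported above):

```python
import math

def sem(sd, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc90(sem_value):
    """Minimal detectable change at 90% confidence: 1.645 * sqrt(2) * SEM."""
    return 1.645 * math.sqrt(2.0) * sem_value

# Hypothetical between-subject SD of 2.5 pain points on an 11-point scale,
# combined with the four-item performance pain rating ICC of 0.82.
s = sem(2.5, 0.82)        # ~1.06 pain points
change_needed = mdc90(s)  # ~2.47 pain points
```

The MDC makes the practical consequence of the SEM explicit: an individual patient's pain rating would need to change by roughly 2.5 points before the change exceeds measurement error.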

  6. Rheometry of polymer melts using processing machines

    NASA Astrophysics Data System (ADS)

    Friesenbichler, Walter; Neunhäuserer, Andreas; Duretek, Ivica

    2016-08-01

    The technology of slit-die rheometry came into practice in the early 1960s. This technique enables engineers to measure the pressure drop very precisely along the slit die. Furthermore, slit-die rheometry widens the measurable shear-rate range and makes it possible to characterize the rheological properties of complicated materials such as wall-slipping PVCs and highly filled compounds like long-fiber-reinforced thermoplastics and PIM feedstocks. The use of slit-die systems in polymer processing machines (e.g., the Rauwendaal extrusion rheometer, the by-pass extrusion rheometer, and injection molding machine rheometers) has opened up new possibilities for the rheological characterization of thermoplastics and elastomers under practice-relevant processing conditions. Special slit-die systems allow the examination of the pressure-dependent viscosity and the characterization of cross-linking elastomers, because melt preparation and the reachable shear rates are comparable to typical processing conditions. As a result of viscous dissipation in shear and elongational flows, temperature correction of the apparent values has to be made when performing rheological measurements on high-viscosity elastomers. This technique was refined over the last years at Montanuniversitaet. Nowadays it is possible to characterize all sorts of rheologically complicated polymeric materials under process-relevant conditions, with viscosity values fully temperature-corrected.

  7. Acoustical conditions for speech communication in active elementary school classrooms

    NASA Astrophysics Data System (ADS)

    Sato, Hiroshi; Bradley, John

    2005-04-01

    Detailed acoustical measurements were made in 34 active elementary school classrooms with typical rectangular room shape in schools near Ottawa, Canada. There was an average of 21 students per classroom. The measurements were made to obtain accurate indications of the acoustical quality of conditions for speech communication during actual teaching activities. Mean speech and noise levels were determined from the distribution of recorded sound levels, and the average speech-to-noise ratio was 11 dBA. Measured mid-frequency reverberation times (RT) during the same occupied conditions varied from 0.3 to 0.6 s, and were a little less than for the unoccupied rooms. RT values were not related to noise levels. Octave-band speech and noise levels, useful-to-detrimental ratios, and Speech Transmission Index values were also determined. Key results included: (1) the average vocal effort of teachers corresponded to louder than the Pearsons raised-voice level; (2) teachers increase their voice level to overcome ambient noise; (3) effective speech levels can be enhanced by up to 5 dB by early-reflection energy; and (4) student activity is seen to be the dominant noise source, increasing average noise levels by up to 10 dBA during teaching activities. [Work supported by CLLRnet.]

  8. Measurement invariance versus selection invariance: is fair selection possible?

    PubMed

    Borsboom, Denny; Romeijn, Jan-Willem; Wicherts, Jelte M

    2008-06-01

    This article shows that measurement invariance (defined in terms of an invariant measurement model in different groups) is generally inconsistent with selection invariance (defined in terms of equal sensitivity and specificity across groups). In particular, when a unidimensional measurement instrument is used and group differences are present in the location but not in the variance of the latent distribution, sensitivity and positive predictive value will be higher in the group at the higher end of the latent dimension, whereas specificity and negative predictive value will be higher in the group at the lower end of the latent dimension. When latent variances are unequal, the differences in these quantities depend on the size of group differences in variances relative to the size of group differences in means. The effect originates as a special case of Simpson's paradox, which arises because the observed score distribution is collapsed into an accept-reject dichotomy. Simulations show the effect can be substantial in realistic situations. It is suggested that the effect may be partly responsible for overprediction in minority groups as typically found in empirical studies on differential academic performance. A methodological solution to the problem is suggested, and social policy implications are discussed. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
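The incompatibility described above is easy to reproduce by simulation: give two groups the same measurement model but different latent means, select on the observed score, and compare sensitivity and specificity. A minimal sketch with illustrative parameter values (not the article's simulations):

```python
import random

random.seed(1)

def selection_rates(group_mean, n=100000, cutoff=0.0, criterion=0.0, error_sd=0.5):
    """Sensitivity and specificity of selecting on an observed score,
    for a group with the given latent mean (same measurement model for all)."""
    tp = fp = tn = fn = 0
    for _ in range(n):
        theta = random.gauss(group_mean, 1.0)           # latent trait
        observed = theta + random.gauss(0.0, error_sd)  # measurement-invariant score
        suitable = theta > criterion
        accepted = observed > cutoff
        if suitable and accepted:
            tp += 1
        elif suitable:
            fn += 1
        elif accepted:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

sens_hi, spec_hi = selection_rates(+0.5)  # group at higher end of latent dimension
sens_lo, spec_lo = selection_rates(-0.5)  # group at lower end
# Despite identical measurement models: sens_hi > sens_lo, while spec_lo > spec_hi.
```

The asymmetry appears purely from collapsing the observed score into an accept-reject dichotomy, which is the Simpson's-paradox mechanism the article identifies.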

  9. Monte Carlo study of out-of-field exposure in carbon-ion radiotherapy with a passive beam: Organ doses in prostate cancer treatment.

    PubMed

    Yonai, Shunsuke; Matsufuji, Naruhiro; Akahane, Keiichi

    2018-04-23

    The aim of this work was to estimate typical dose equivalents to out-of-field organs during carbon-ion radiotherapy (CIRT) with a passive beam for prostate cancer treatment. Additionally, sensitivity analyses of organ doses for various beam parameters and phantom sizes were performed. Because the CIRT out-of-field dose depends on the beam parameters, the typical values of those parameters were determined from statistical data on the target properties of patients who received CIRT at the Heavy-Ion Medical Accelerator in Chiba (HIMAC). Using these typical beam-parameter values, out-of-field organ dose equivalents during CIRT for typical prostate treatment were estimated by Monte Carlo simulations using the Particle and Heavy-Ion Transport Code System (PHITS) and the ICRP reference phantom. The results showed that the dose decreased with distance from the target, ranging from 116 mSv in the testes to 7 mSv in the brain. The organ dose equivalents per treatment dose were lower than those either in 6-MV intensity-modulated radiotherapy or in brachytherapy with an Ir-192 source for organs within 40 cm of the target. Sensitivity analyses established that the differences from typical values were within ∼30% for all organs, except the sigmoid colon. The typical out-of-field organ dose equivalents during passive-beam CIRT were shown. The low sensitivity of the dose equivalent in organs farther than 20 cm from the target indicated that individual dose assessments required for retrospective epidemiological studies may be limited to organs around the target in cases of passive-beam CIRT for prostate cancer. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  10. Specifying the ISS Plasma Environment

    NASA Technical Reports Server (NTRS)

    Minow, Joseph I.; Diekmann, Anne; Neergaard, Linda; Bui, Them; Mikatarian, Ronald; Barsamian, Hagop; Koontz, Steven

    2002-01-01

    Quantifying the spacecraft charging risks and corresponding hazards for the International Space Station (ISS) requires a plasma environment specification describing the natural variability of ionospheric temperature (Te) and density (Ne). Empirical ionospheric specification and forecast models such as the International Reference Ionosphere (IRI) model typically provide only estimates of long-term (seasonal) mean Te and Ne values for the low Earth orbit environment. Knowledge of the Te and Ne variability, as well as the likelihood of extreme deviations from the mean values, is required to estimate both the magnitude and frequency of occurrence of potentially hazardous spacecraft charging environments for a given ISS construction stage and flight configuration. This paper describes the statistical analysis of historical ionospheric low Earth orbit plasma measurements used to estimate Ne and Te variability in the ISS flight environment. The statistical variability analysis of Ne and Te enables calculation of the expected frequency of occurrence of any particular values of Ne and Te, especially those that correspond to possibly hazardous spacecraft charging environments. The database used in the original analysis included measurements from the AE-C, AE-D, and DE-2 satellites. Recent work has added additional satellites to the database, as well as ground-based incoherent scatter radar observations. Deviations of the data values from the IRI-estimated Ne and Te parameters for each data point provide a statistical basis for modeling the deviations of the plasma environment from the IRI model output.

  11. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor for GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.

  12. Rainfall threshold definition using an entropy decision approach and radar data

    NASA Astrophysics Data System (ADS)

    Montesarchio, V.; Ridolfi, E.; Russo, F.; Napolitano, F.

    2011-07-01

    Flash flood events are floods characterised by a very rapid response of basins to storms, often resulting in loss of life and property damage. Due to the specific space-time scale of this type of flood, the lead time available for triggering civil protection measures is typically short. Rainfall threshold values specify the amount of precipitation for a given duration that generates a critical discharge in a given river cross section. If the threshold values are exceeded, a critical situation can arise at river sites exposed to alluvial risk. It is therefore possible to directly compare the observed or forecasted precipitation with critical reference values, without running online real-time forecasting systems. The focus of this study is the Mignone River basin, located in Central Italy. The critical rainfall threshold values are evaluated by minimising a utility function based on the informative entropy concept and by using a simulation approach based on radar data. The study concludes with a system performance analysis, in terms of correctly issued warnings, false alarms and missed alarms.
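Once threshold values are fixed, the system-performance bookkeeping reduces to a contingency table of issued warnings against observed critical events. A minimal sketch with made-up event data (not the Mignone basin's):

```python
def warning_performance(rain_mm, flood_occurred, threshold_mm):
    """Count correctly issued warnings (hits), false alarms, and missed alarms
    for a rainfall threshold compared against observed critical events."""
    hits = false_alarms = missed = 0
    for rain, flood in zip(rain_mm, flood_occurred):
        warned = rain >= threshold_mm
        if warned and flood:
            hits += 1
        elif warned:
            false_alarms += 1
        elif flood:
            missed += 1
    return hits, false_alarms, missed

# Made-up storm events: (rainfall total in mm, was the critical discharge exceeded?)
events = [(55, True), (30, False), (70, True), (48, False),
          (62, False), (20, False), (80, True)]
rain = [r for r, _ in events]
flood = [f for _, f in events]
print(warning_performance(rain, flood, threshold_mm=50))  # → (3, 1, 0)
```

Raising the threshold trades false alarms for missed alarms, which is exactly the trade-off the entropy-based utility function is minimising over.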

  13. Radioactivity level in Chinese building ceramic tile.

    PubMed

    Xinwei, L

    2004-01-01

    The activity concentrations of (226)Ra, (232)Th and (40)K in Chinese building ceramic tile have been determined by gamma-ray spectrometry. The concentrations of (226)Ra, (232)Th and (40)K range from 158.3 to 1087.6, 91.7 to 1218.4, and 473.8 to 1031.3 Bq kg(-1) for glaze, and from 63.5 to 131.4, 55.4 to 106.5, and 386.7 to 866.8 Bq kg(-1) for ceramic tile, respectively. The measured activity concentrations for these radionuclides were compared with the reported data of other countries and with the typical world values. The radium equivalent activities (Ra(eq)), external hazard index (H(ex)) and internal hazard index (H(in)) associated with the radionuclides were calculated. The Ra(eq) values of all ceramic tiles are lower than the limit of 370 Bq kg(-1). The values of H(ex) and H(in) calculated according to the Chinese criterion for ceramic tiles are less than unity. The Ra(eq) values for the glaze of glazed tiles collected from some areas are >370 Bq kg(-1).
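The derived quantities above follow standard definitions: Ra(eq) = C_Ra + 1.43 C_Th + 0.077 C_K, H(ex) = C_Ra/370 + C_Th/259 + C_K/4810, and H(in) replaces the radium term with the tighter C_Ra/185. A sketch with illustrative mid-range concentrations (not the paper's data):

```python
def ra_eq(c_ra, c_th, c_k):
    """Radium equivalent activity (Bq/kg)."""
    return c_ra + 1.43 * c_th + 0.077 * c_k

def h_ex(c_ra, c_th, c_k):
    """External hazard index; should be below unity."""
    return c_ra / 370.0 + c_th / 259.0 + c_k / 4810.0

def h_in(c_ra, c_th, c_k):
    """Internal hazard index; should be below unity."""
    return c_ra / 185.0 + c_th / 259.0 + c_k / 4810.0

# Illustrative tile concentrations (Bq/kg) of 226Ra, 232Th, 40K respectively.
print(round(ra_eq(100, 80, 600), 1))  # → 260.6, below the 370 Bq/kg limit
```

H(in) is always at least as large as H(ex) for the same sample, since only the radium denominator changes.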

  14. Molecular weight between entanglements for κ- and ι-carrageenans in an ionic liquid.

    PubMed

    Horinaka, Jun-ichi; Urabayashi, Yuhei; Wang, Xiaochen; Takigawa, Toshikazu

    2014-08-01

    The molecular weight between entanglements (Me) for κ- and ι-carrageenans, sulfated galactans, was examined in concentrated solutions using the ionic liquid 1-butyl-3-methylimidazolium acetate as a solvent. The dynamic viscoelasticity data for the solutions measured at different temperatures were superposed according to the time-temperature superposition principle, and the resulting master curves exhibited the flow and rubbery plateau zones typical of concentrated polymer solutions with entanglement coupling. The values of Me for κ- and ι-carrageenans in the solutions were determined from the plateau moduli. The values of Me in the molten state (Me,melt), estimated as a material constant, were 6.6×10(3) and 7.2×10(3), respectively. The close values of Me,melt for κ- and ι-carrageenans indicate that the 4-sulfate group of ι-carrageenan has little influence on the entanglement network. Compared with agarose, a non-sulfated galactan, the carrageenans have larger values of average spacing between entanglements. Copyright © 2014 Elsevier B.V. All rights reserved.
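Extracting Me from a plateau modulus uses the rubber-elasticity relation Me ≈ cRT/G_N0 for a solution of polymer concentration c (some conventions include a 4/5 prefactor). The numbers below are hypothetical, not the carrageenan data:

```python
R = 8.314  # gas constant, J/(mol K)

def me_from_plateau(c_kg_per_m3, temperature_K, g_n0_Pa):
    """Molecular weight between entanglements (kg/mol) from the
    rubbery plateau modulus G_N0, via Me = c R T / G_N0."""
    return c_kg_per_m3 * R * temperature_K / g_n0_Pa

# Hypothetical: 100 kg/m^3 polymer solution at 298 K with a 20 kPa plateau modulus.
me_kg_per_mol = me_from_plateau(100.0, 298.0, 2.0e4)  # ~12.4 kg/mol
```

A stiffer entanglement network (larger G_N0) at the same concentration gives a smaller Me, i.e., closer spacing between entanglements.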

  15. Bayesian model for fate and transport of polychlorinated biphenyl in upper Hudson River

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinberg, L.J.; Reckhow, K.H.; Wolpert, R.L.

    1996-05-01

    Modelers of contaminant fate and transport in surface waters typically rely on literature values when selecting parameter values for mechanistic models. While the expert judgment with which these selections are made is valuable, the information contained in contaminant concentration measurements should not be ignored. In this full-scale Bayesian analysis of polychlorinated biphenyl (PCB) contamination in the upper Hudson River, these two sources of information are combined using Bayes' theorem. A simulation model for the fate and transport of the PCBs in the upper Hudson River forms the basis of the likelihood function, while the prior density is developed from literature values. The method provides estimates for the anaerobic biodegradation half-life, aerobic biodegradation plus volatilization half-life, contaminated sediment depth, and resuspension velocity of 4,400 d, 3.2 d, 0.32 m, and 0.02 m/yr, respectively. These are significantly different from values obtained with more traditional methods, and are shown to produce better predictions than those methods when used in a cross-validation study.
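
    Combining a literature-based prior with a simulation-based likelihood via Bayes' theorem is usually done by sampling the posterior numerically. A minimal random-walk Metropolis sketch; the Gaussian toy posterior here is a stand-in assumption, not the PCB transport model:

```python
import math
import random

def metropolis(log_post, x0, n_steps=20000, step=0.5, seed=1):
    """Random-walk Metropolis sampler for an unnormalized log-posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        # Accept with probability min(1, exp(lp_cand - lp))
        if math.log(rng.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Toy: N(0,1) prior times N(2,1) likelihood -> posterior is N(1, 1/2)
samples = metropolis(lambda t: -0.5 * t**2 - 0.5 * (t - 2.0)**2, x0=0.0)
posterior_mean = sum(samples) / len(samples)
```

    The posterior mean lands between the prior mean (0) and the data (2), mirroring how the study's estimates blend literature values with the concentration measurements.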

  16. Relationships between rating-of-perceived-exertion- and heart-rate-derived internal training load in professional soccer players: a comparison of on-field integrated training sessions.

    PubMed

    Campos-Vazquez, Miguel Angel; Mendez-Villanueva, Alberto; Gonzalez-Jurado, Jose Antonio; León-Prados, Juan Antonio; Santalla, Alfredo; Suarez-Arrones, Luis

    2015-07-01

    To describe the internal training load (ITL) of common training sessions performed during a typical week and to determine the relationships between different indicators of ITL commonly employed in professional football (soccer). Session-rating-of-perceived-exertion TL (sRPE-TL) and heart-rate (HR)-derived measurements of ITL, such as Edwards TL and Stagno training impulses (TRIMP(MOD)), were used in 9 players during 3 periods of the season. The relationships between them were analyzed in different training sessions during a typical week: skill drills/circuit training + small-sided games (SCT+SSGs), ball-possession games + technical-tactical exercises (BPG+TTE), tactical training (TT), and prematch activation (PMa). HR values obtained during SCT+SSGs and BPG+TTE were substantially greater than those in the other 2 sessions, all the ITL markers and session duration were substantially greater in SCT+SSGs than in any other session, and all ITL measures in BPG+TTE were substantially greater than in TT and PMa sessions. Large relationships were found between HR>80% HRmax and HR>90% HRmax vs sRPE-TL during BPG+TTE and TT sessions (r=.61-.68). Very large relationships were found between Edwards TL and sRPE-TL and between TRIMP(MOD) and sRPE-TL in sessions with BPG+TTE and TT (r=.73-.87). Correlations between the different HR-based methods were always extremely large (r=.92-.98), and unclear correlations were observed for other relationships between variables. sRPE-TL provided variable-magnitude within-individual correlations with HR-derived measures of training intensity and load during different types of training sessions typically performed during a week in professional soccer. Caution should be applied when using RPE- or HR-derived measures of exercise intensity/load in soccer training interchangeably.
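
    The two ITL families compared above are straightforward to compute. A minimal sketch of sRPE-TL (Foster's session RPE times duration) and the Edwards five-zone HR method, assuming the usual zone weights 1-5 for 50-60% up to 90-100% HRmax:

```python
def edwards_tl(minutes_in_zone):
    """Edwards TL: minutes in each of the five HR zones
    (50-60% ... 90-100% HRmax), weighted 1 through 5."""
    return sum(w * m for w, m in zip((1, 2, 3, 4, 5), minutes_in_zone))

def srpe_tl(rpe, duration_min):
    """Session-RPE training load: CR-10 RPE times session duration (min)."""
    return rpe * duration_min

# Example session: 90 min, RPE 7, with time spread across the five zones
edwards = edwards_tl([10, 20, 30, 20, 10])  # arbitrary-unit load
srpe = srpe_tl(7, 90)
```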

  17. Sensitivity of Beam Parameters to a Station C Solenoid Scan on Axis II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulze, Martin E.

    Magnet scans are a standard technique for determining beam parameters in accelerators. Beam parameters are inferred from spot size measurements using a model of the beam optics. The sensitivity of the measured beam spot size to the beam parameters is investigated for typical DARHT Axis II beam energies and currents. In a typical S4 solenoid scan, the downstream transport is tuned to achieve a round beam at Station C with an envelope radius of about 1.5 cm with a very small divergence with S4 off. The typical beam energy and current are 16.0 MeV and 1.625 kA. Figures 1-3 show the sensitivity of the beam size at Station C to the emittance, initial radius, and initial angle, respectively. To better understand the relative sensitivity of the beam size to the emittance, initial radius, and initial angle, linear regressions were performed for each parameter as a function of the S4 setting. The results are shown in Figure 4. The measured slope was scaled to have a maximum value of 1 in order to present the relative sensitivities in a single plot. Figure 4 clearly shows the beam size at the minimum of the S4 scan is most sensitive to emittance and relatively insensitive to initial radius and angle, as expected. The beam emittance is also very sensitive to the beam size of the converging beam and becomes insensitive to the beam size of the diverging beam. Measurements of the beam size of the diverging beam provide the greatest sensitivity to the initial beam radius and, to a lesser extent, the initial beam angle. The converging beam size is initially very sensitive to the emittance and initial angle at low S4 currents. As the S4 current is increased, the sensitivity to the emittance remains strong while the sensitivity to the initial angle diminishes.

  18. Effective mitigation of debris flows at Lemon Dam, La Plata County, Colorado

    NASA Astrophysics Data System (ADS)

    deWolfe, Victor G.; Santi, Paul M.; Ey, J.; Gartner, Joseph E.

    2008-04-01

    To reduce the hazards from debris flows in drainage basins burned by wildfire, erosion control measures such as construction of check dams, installation of log erosion barriers (LEBs), and spreading of straw mulch and seed are common practice. After the 2002 Missionary Ridge Fire in southwest Colorado, these measures were implemented at Knight Canyon above Lemon Dam to protect the intake structures of the dam from being filled with sediment. Hillslope erosion protection measures included LEBs at concentrations of 220-620/ha (200-600% of typical densities), straw mulch hand spread at concentrations up to 5.6 metric tons/hectare (125% of typical densities), and seeds hand spread at 67-84 kg/ha (150% of typical values). The mulch was carefully crimped into the soil to keep it in place. In addition, 13 check dams and 3 debris racks were installed in the main drainage channel of the basin. The technical literature shows that each mitigation method working alone, or improperly constructed or applied, was inconsistent in its ability to reduce erosion and sedimentation. At Lemon Dam, however, these methods were effective in virtually eliminating sedimentation into the reservoir, which can be attributed to a number of factors: the density of application of each mitigation method, the enhancement of methods working in concert, the quality of installation, and rehabilitation of mitigation features to extend their useful life. The check dams effectively trapped the sediment mobilized during rainstorms, and only a few cubic meters of debris traveled downchannel, where it was intercepted by debris racks. Using a debris volume-prediction model developed for use in burned basins in the Western U.S., recorded rainfall events following the Missionary Ridge Fire should have produced a debris flow of approximately 10,000 m(3) at Knight Canyon. The mitigation measures, therefore, reduced the debris volume by several orders of magnitude. For comparison, rainstorm-induced debris flows occurred in two adjacent canyons at volumes within the range predicted by the model.

  19. Using, Seeing, Feeling, and Doing Absolute Value for Deeper Understanding

    ERIC Educational Resources Information Center

    Ponce, Gregorio A.

    2008-01-01

    Using sticky notes and number lines, a hands-on activity is shared that anchors initial student thinking about absolute value. The initial point of reference should help students successfully evaluate numeric problems involving absolute value. They should also be able to solve absolute value equations and inequalities that are typically found in…

  20. Probing for the Multiplicative Term in Modern Expectancy-Value Theory: A Latent Interaction Modeling Study

    ERIC Educational Resources Information Center

    Trautwein, Ulrich; Marsh, Herbert W.; Nagengast, Benjamin; Ludtke, Oliver; Nagy, Gabriel; Jonkmann, Kathrin

    2012-01-01

    In modern expectancy-value theory (EVT) in educational psychology, expectancy and value beliefs additively predict performance, persistence, and task choice. In contrast to earlier formulations of EVT, the multiplicative term Expectancy x Value in regression-type models typically plays no major role in educational psychology. The present study…

  1. Highly Doped Polycrystalline Silicon Microelectrodes Reduce Noise in Neuronal Recordings In Vivo

    PubMed Central

    Saha, Rajarshi; Jackson, Nathan; Patel, Chetan; Muthuswamy, Jit

    2013-01-01

    The aims of this study are to 1) experimentally validate for the first time the nonlinear current-potential characteristics of bulk doped polycrystalline silicon in the small amplitude voltage regimes (0–200 μV) and 2) test if noise amplitudes (0–15 μV) from single neuronal electrical recordings get selectively attenuated in doped polycrystalline silicon microelectrodes due to the above property. In highly doped polycrystalline silicon, bulk resistances of several hundred kilo-ohms were experimentally measured for voltages typical of noise amplitudes and 9–10 kΩ for voltages typical of neural signal amplitudes (>150–200 μV). Acute multiunit measurements and noise measurements were made in n = 6 and n = 8 anesthetized adult rats, respectively, using polycrystalline silicon and tungsten microelectrodes. There was no significant difference in the peak-to-peak amplitudes of action potentials recorded from either microelectrode (p > 0.10). However, noise power in the recordings from tungsten microelectrodes (26.36 ± 10.13 pW) was significantly higher (p < 0.001) than the corresponding value in polycrystalline silicon microelectrodes (7.49 ± 2.66 pW). We conclude that polycrystalline silicon microelectrodes result in selective attenuation of noise power in electrical recordings compared to tungsten microelectrodes. This reduction in noise compared to tungsten microelectrodes is likely due to the exponentially higher bulk resistances offered by highly doped bulk polycrystalline silicon in the range of voltages corresponding to noise in multiunit measurements. PMID:20667815

  2. Studies on the Effects of Certain Soil Properties on the Biodegradation of Oils Determined by the Manometric Respirometric Method

    PubMed Central

    Kaakinen, Juhani; Vähäoja, Pekka; Kuokkanen, Toivo; Roppola, Katri

    2007-01-01

    The biodegradability of certain biofuels in forest soils was studied using the manometric respirometric technique, which proved to be very suitable for untreated, fertilized, and pH-adjusted soils. Experiments carried out in infertile sandy forest soil gave a BOD/ThOD value of 45.1% for a typical model substance, sodium benzoate, after a period of 30 days, and mineral addition improved the BOD/ThOD value to 76.2%. Rapeseed oil-based chain oil showed almost no biodegradation in 30 days in nonprocessed soil, and when pH was adjusted to 8.0, the BOD/ThOD value increased slightly to 7.4%. Mineral addition improved the BOD/ThOD value on average to 43.2% after 30 days. Combined mineral addition and pH adjustment together increased the BOD/ThOD value to 75.8% in 30 days. The observations were similar with a rapeseed oil-based lubricating oil: after 30 days, the BOD/ThOD value increased from 5.9% to an average of 51.9% when the pH and mineral concentrations of the soil were optimized. The mineral addition and pH adjustment also improved the precision of the measurements significantly. PMID:18273392

  3. Determination of spin polarization using an unconventional iron superconductor

    DOE PAGES

    Gifford, J. A.; Chen, B. B.; Zhang, J.; ...

    2016-11-21

    Here, an unconventional iron superconductor, SmO(0.7)F(0.3)FeAs, has been utilized to determine the spin polarization and its temperature dependence for a highly spin-polarized material, La(0.67)Sr(0.33)MnO(3), with Andreev reflection spectroscopy. The polarization value obtained is the same as that determined using the conventional superconductor Pb, but the temperature dependence of the spin polarization can be measured up to 52 K, a temperature range several times wider than that accessible with a typical conventional superconductor. The result excludes spin-parallel triplet pairing in the iron superconductor.

  4. Aging in the three-dimensional random-field Ising model

    NASA Astrophysics Data System (ADS)

    von Ohr, Sebastian; Manssen, Markus; Hartmann, Alexander K.

    2017-07-01

    We studied the nonequilibrium aging behavior of the random-field Ising model in three dimensions for various values of the disorder strength. This allowed us to investigate how the aging behavior changes across the ferromagnetic-paramagnetic phase transition. We investigated a large system size of N = 256(3) spins and up to 10(8) Monte Carlo sweeps. To reach these necessary long simulation times, we employed an implementation running on Intel Xeon Phi coprocessors, reaching single-spin-flip times as short as 6 ps. We measured typical correlation functions in space and time to extract a growing length scale and corresponding exponents.

  5. Applications of DC-Self Bias in CCP Deposition Systems

    NASA Astrophysics Data System (ADS)

    Keil, D. L.; Augustyniak, E.; Sakiyama, Y.

    2013-09-01

    In many commercial CCP plasma process systems the DC-self bias is available as a reported process parameter. Since commercial systems typically limit the number of onboard diagnostics, there is great incentive to understand how DC-self bias can be expected to respond to various system perturbations. This work reviews and examines DC self bias changes in response to tool aging, chamber film accumulation and wafer processing. The diagnostic value of the DC self bias response to transient and various steady state current draw schemes are examined. Theoretical models and measured experimental results are compared and contrasted.

  6. Glaucoma diagnostic performance of GDxVCC and spectralis OCT on eyes with atypical retardation pattern.

    PubMed

    Hoesl, Laura Maria; Tornow, Ralf P; Schrems, Wolfgang A; Horn, Folkert K; Mardin, Christian Y; Kruse, Friedrich E; Juenemann, Anselm G M; Laemmer, Robert

    2013-01-01

    To investigate the impact of typical scan score (TSS) on discriminating glaucomatous and healthy eyes by scanning laser polarimetry and spectral domain optical coherence tomography (SD-OCT) in 32 peripapillary sectors. One hundred two glaucoma patients and 32 healthy controls underwent standard automated perimetry, 24-hour intraocular pressure profile, optic disc photography, GDxVCC, and SD-OCT measurements. For controls, only very typical scans (TSS=100) were accepted. Glaucoma patients were divided into 3 subgroups (very typical: TSS=100; typical: 99≥TSS≥80, atypical: TSS<80). Receiver operating characteristic curves were constructed for mean retinal nerve fiber layer values, sector data, and nerve fiber indicator (NFI). Sensitivity was estimated at ≥90% specificity to compare the discriminating ability of each imaging modality. For discrimination between healthy and glaucomatous eyes with very typical scans, the NFI and inferior sector analyses 26 to 27 demonstrated the highest sensitivity at ≥90% specificity in GDxVCC and SD-OCT, respectively. For the typical and atypical groups, sensitivity at ≥90% specificity decreased for all 32 peripapillary sectors on an average by 10.9% and 17.9% for GDxVCC and by 4.9% and 0.8% for SD-OCT. For GDxVCC, diagnostic performance of peripapillary sectors decreased with lower TSS, especially in temporosuperior and inferotemporal sectors (sensitivity at ≥90% specificity decreased by 55.3% and by 37.8% in the atypical group). Diagnostic accuracy is comparable for SD-OCT and GDxVCC if typical scans (TSS=100) are investigated. Decreasing TSS is associated with a decrease in diagnostic accuracy for discriminating healthy and glaucomatous eyes by scanning laser polarimetry. NFI is less influenced than the global or sector retinal nerve fiber layer thickness. The TSS score should be included in the standard printout. Diagnostic accuracy of SD-OCT is barely influenced by low TSS.

  7. Computation of the anharmonic orbits in two piecewise monotonic maps with a single discontinuity

    NASA Astrophysics Data System (ADS)

    Li, Yurong; Du, Zhengdong

    2017-02-01

    In this paper, the bifurcation values for two typical piecewise monotonic maps with a single discontinuity are computed. The variation of the parameter of those maps leads to a sequence of border-collision and period-doubling bifurcations, generating a sequence of anharmonic orbits on the boundary of chaos. The border-collision and period-doubling bifurcation values are computed by the word-lifting technique and the Maple fsolve function or the Newton-Raphson method, respectively. The scaling factors, which measure the convergence rates of the bifurcation values and the width of the stable periodic windows, respectively, are investigated. We found that these scaling factors depend on the parameters of the maps, implying that they are not universal. Moreover, if one side of the maps is linear, our numerical results suggest that those quantities increase monotonically as they converge. In particular, for the linear-quadratic case, they converge to one of the Feigenbaum constants, δ(F) = 4.66920160…
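
    A convergence-rate scaling factor of this kind is estimated from successive bifurcation values as δ(n) = (b(n) − b(n−1)) / (b(n+1) − b(n)). A minimal sketch; the logistic map period-doubling values used below are a standard illustration, not the discontinuous maps of the paper:

```python
def scaling_factors(b):
    """delta_n = (b[n] - b[n-1]) / (b[n+1] - b[n]) for a list of
    successive bifurcation parameter values b."""
    return [(b[i] - b[i - 1]) / (b[i + 1] - b[i]) for i in range(1, len(b) - 1)]

# Logistic map x -> r*x*(1-x): first few period-doubling bifurcation points
logistic_b = [3.0, 3.449490, 3.544090, 3.564407, 3.568759]
deltas = scaling_factors(logistic_b)  # approaches the Feigenbaum constant 4.6692...
```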

  8. Ag-graphene hybrid conductive ink for writing electronics.

    PubMed

    Xu, L Y; Yang, G Y; Jing, H Y; Wei, J; Han, Y D

    2014-02-07

    With the aim of preparing a method for the writing of electronics on paper by the use of common commercial rollerball pens loaded with conductive ink, a hybrid conductive ink composed of Ag nanoparticles (15 wt%) and graphene-Ag composite nanosheets (0.15 wt%), formed by depositing Ag nanoparticles (∼10 nm) onto graphene sheets, was prepared for the first time. Owing to the electrical pathway effect of graphene and the decreased contact resistance of graphene junctions achieved by depositing Ag nanoparticles (NPs) onto graphene sheets, the concentration of Ag NPs was significantly reduced while maintaining high conductivity at a curing temperature of 100 °C. A typical measured resistivity value was 1.9 × 10(-7) Ω m, which is 12 times the value for bulk silver. Even over thousands of bending cycles or rolling, the resistance values of the written tracks increase only slightly. The stability and flexibility of the written circuits are good, demonstrating the promising future of this hybrid ink and direct writing method.

  9. Fluorescence imaging to quantify crop residue cover

    NASA Technical Reports Server (NTRS)

    Daughtry, C. S. T.; Mcmurtrey, J. E., III; Chappelle, E. W.

    1994-01-01

    Crop residues, the portion of the crop left in the field after harvest, can be an important management factor in controlling soil erosion. Methods to quantify residue cover are needed that are rapid, accurate, and objective. Scenes with known amounts of crop residue were illuminated with longwave ultraviolet (UV) radiation, and fluorescence images were recorded with an intensified video camera fitted with a 453 to 488 nm bandpass filter. A light colored soil and a dark colored soil were used as background for the weathered soybean stems. Residue cover was determined by counting the proportion of the pixels in the image with fluorescence values greater than a threshold. Soil pixels had the lowest gray levels in the images. The values of the soybean residue pixels spanned nearly the full range of the 8-bit video data. Classification accuracies typically were within 3 absolute units of measured cover values. Video imaging can provide an intuitive understanding of the fraction of the soil covered by residue.
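
    The cover estimate described above reduces to counting pixels whose fluorescence exceeds a threshold. A minimal sketch on an 8-bit gray-level image stored as nested lists (the threshold of 128 is an illustrative assumption):

```python
def residue_cover(image, threshold):
    """Fraction of pixels whose fluorescence gray level exceeds the threshold;
    pixels above the threshold are classified as crop residue."""
    pixels = [p for row in image for p in row]
    return sum(p > threshold for p in pixels) / len(pixels)

# 2x2 toy image: two bright (residue) and two dark (soil) pixels
cover = residue_cover([[10, 200], [220, 30]], threshold=128)
```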

  10. Heat flow and geothermal potential of the East Mesa KGRA, Imperial Valley, California

    NASA Technical Reports Server (NTRS)

    Swanberg, C. A.

    1974-01-01

    The East Mesa KGRA (Known Geothermal Resource Area) is located in the southeast part of the Imperial Valley, California, and is roughly 150 square kilometers in areal extent. A new heat flow technique which utilizes temperature gradient measurements across best clays is presented and shown to be as accurate as conventional methods for the present study area. Utilizing the best clay gradient technique, over 70 heat flow determinations have been completed within and around the East Mesa KGRA. Background heat flow values range from 1.4 to 2.4 hfu (1 hfu = 10(-6) cal cm(-2) s(-1)) and are typical of those throughout the Basin and Range province. Heat flow values for the northwest lobe of the KGRA (Mesa anomaly) are as high as 7.9 hfu, with the highest values located near gravity and seismic noise maxima and electrical resistivity minima. An excellent correlation exists between heat flow contours and faults defined by remote sensing and microearthquake monitoring.

  11. [Changes in phytoperiphyton community during seasonal succession: influence of plankton sedimentation and grazing by phytophages--Chironomid larvae].

    PubMed

    Lukin, V B

    2002-01-01

    The investigation of seasonal changes in the spatial structure of phytoperiphyton during succession was conducted at the lower reaches of the Akulovsky water channel from April to August 2000. At the beginning of succession, from April to June, the dominant forms were chain-forming diatoms and filamentous green algae sedimented from the plankton. Later, in mid-June, under increasing grazing pressure from phytophages, they were replaced by elongated unicellular diatoms and colonial cyanobacteria. In late June-August, when grazing was most intensive, the relative abundance of typical periphytic forms decreased while that of settled planktonic forms increased. The effect of planktonic algae sedimentation on periphyton composition was evaluated as the similarity between the phytoperiphyton and phytoplankton communities, measured with the Chekanovski-Sorensen index. The value of this index tends to decrease as the periphyton develops, while showing some relation to the intensity of grazing pressure. Minimal values of the Chekanovski-Sorensen index occurred under moderate grazer density, whereas maximal values were observed in periods of extremely high or low grazer density.
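
    The Chekanovski-Sorensen index on presence/absence data is QS = 2|A∩B| / (|A| + |B|), where A and B are the species sets of the two communities. A minimal sketch:

```python
def sorensen(community_a, community_b):
    """Chekanovski-Sorensen similarity: 2|A & B| / (|A| + |B|) on species sets.
    Ranges from 0 (no shared species) to 1 (identical species lists)."""
    a, b = set(community_a), set(community_b)
    return 2 * len(a & b) / (len(a) + len(b))

# Toy communities sharing two of four species
qs = sorensen({"diatom_1", "diatom_2", "green_1"},
              {"diatom_1", "diatom_2", "cyano_1"})
```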

  12. Quartz crystal resonator g sensitivity measurement methods and recent results.

    PubMed

    Driscoll, M M

    1990-01-01

    A technique for accurate measurements of quartz crystal resonator vibration sensitivity is described. The technique utilizes a crystal oscillator circuit in which a prescribed length of coaxial cable is used to connect the resonator to the oscillator sustaining stage. A method is provided for determination and removal of measurement errors normally introduced as a result of cable vibration. In addition to oscillator-type measurements, it is also possible to perform similar vibration sensitivity measurements using a synthesized signal generator with the resonator installed in a passive phase bridge. Test results are reported for 40 and 50 MHz, fifth overtone AT-cut, and third overtone SC-cut crystals. Acceleration sensitivity (gamma vector) values for the SC-cut resonators were typically four times smaller (5x10(-10) per g) than for the AT-cut units. However, smaller unit-to-unit gamma vector magnitude variation was exhibited by the AT-cut resonators. Oscillator sustaining stage vibration sensitivity was characterized by an equivalent open-loop phase modulation of 10(-6) rad/g.

  13. Determination of the force constant of a single-beam gradient trap by measurement of backscattered light

    NASA Astrophysics Data System (ADS)

    Friese, M. E. J.; Rubinsztein-Dunlop, H.; Heckenberg, N. R.; Dearden, E. W.

    1996-12-01

    A single-beam gradient trap could potentially be used to hold a stylus for scanning force microscopy. With a view to development of this technique, we modeled the optical trap as a harmonic oscillator and therefore characterized it by its force constant. We measured force constants and resonant frequencies for 1-4-μm-diameter polystyrene spheres in a single-beam gradient trap using measurements of backscattered light. Force constants were determined with both Gaussian and doughnut laser modes, with powers of 3 and 1 mW, respectively. Typical values for spring constants were measured to be between 10(-6) and 4 × 10(-6) N m(-1). The resonant frequencies of trapped particles were measured to be between 1 and 10 kHz, and the rms amplitudes of oscillations were estimated to be around 40 nm. Our results confirm that the use of the doughnut mode for single-beam trapping is more efficient in the axial direction.
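
    Treating the trapped sphere as a harmonic oscillator, the force constant follows from the resonant frequency as k = m(2πf)². A minimal sketch; the 2-μm diameter, polystyrene density of ~1050 kg m(-3), and f = 5 kHz are illustrative assumptions:

```python
import math

def sphere_mass(radius_m, density_kg_m3=1050.0):
    """Mass of a solid sphere; default density is roughly polystyrene."""
    return density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3

def force_constant(mass_kg, resonant_freq_hz):
    """Harmonic oscillator: k = m * (2*pi*f)^2, in N/m."""
    return mass_kg * (2.0 * math.pi * resonant_freq_hz) ** 2

# Assumed 2-um-diameter sphere resonating at 5 kHz
k = force_constant(sphere_mass(1.0e-6), 5.0e3)
```

    With these assumed inputs k comes out at a few times 10(-6) N/m, consistent with the range of spring constants reported above.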

  14. [Chromophoric dissolved organic matter absorption characteristics with relation to fluorescence in typical macrophyte, algae lake zones of Lake Taihu].

    PubMed

    Zhang, Yun-lin; Qin, Bo-qiang; Ma, Rong-hua; Zhu, Guang-wei; Zhang, Lu; Chen, Wei-min

    2005-03-01

    Chromophoric dissolved organic matter (CDOM) represents one of the primary light-absorbing species in natural waters and plays a critical role in determining the aquatic light field. CDOM shows a featureless absorption spectrum that increases exponentially with decreasing wavelength, which limits the penetration of biologically damaging UV-B radiation (wavelength from 280 to 320 nm) in the water column, thus shielding aquatic organisms. CDOM absorption measurements and their relationships with dissolved organic carbon (DOC) and fluorescence are presented for typical macrophyte and algae lake zones of Lake Taihu, based on a field investigation in April 2004 and lab analysis. Absorption spectra of CDOM were measured from 240 to 800 nm using a Shimadzu UV-2401PC UV-Vis recording spectrophotometer. Fluorescence, with an excitation wavelength of 355 nm and an emission wavelength of 450 nm, was measured using a Shimadzu 5301 spectrofluorometer. Concentrations of DOC ranged from 6.3 to 17.2 mg/L with an average of 9.08 +/- 2.66 mg/L. CDOM absorption coefficients at 280 nm and 355 nm were in the ranges of 11.2 - 32.6 m(-1) (average 17.46 +/- 5.75 m(-1)) and 2.4 - 8.3 m(-1) (average 4.17 +/- 1.47 m(-1)), respectively. The values of the DOC-specific absorption coefficient at 355 nm ranged from 0.31 to 0.64 L x (mg x m)(-1). Fluorescence emission at 450 nm, excited at 355 nm, had a mean value of 1.32 +/- 0.84 nm(-1). A significant difference between lake zones was found in DOC concentration, CDOM absorption coefficient, and fluorescence, but not in the DOC-specific absorption coefficient or the spectral slope coefficient. This regional distribution pattern is in agreement with the location of sources of yellow substance: highest concentrations close to river mouths under the influence of river inflow, lower values in East Lake Taihu. The values of the algae lake zone are clearly larger than those of the macrophyte lake zone. In Meiliang Bay, CDOM absorption, DOC concentration, and fluorescence tend to decrease from the inside to the mouth of the bay. The results show a good correlation between CDOM absorption coefficients and DOC concentration over the 280 - 500 nm short-wavelength interval. The R-square coefficient between CDOM absorption and DOC concentration decreases with increasing wavelength from 280 to 500 nm. Significant linear regression correlations between fluorescence, DOC concentration, and absorption coefficients were found at 355 nm. The exponential slope coefficients ranged from 13.0 to 16.4 microm(-1) with a mean value of 14.37 +/- 0.73 microm(-1), from 17.3 to 20.3 microm(-1) with a mean value of 19.17 +/- 0.84 microm(-1), and from 12.0 to 15.8 microm(-1) with a mean value of 13.38 +/- 0.82 microm(-1) over the 280 - 500 nm, 280 - 360 nm, and 360 - 440 nm intervals, respectively.
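
    A spectral slope coefficient of this kind comes from fitting a(λ) = a(λ0) exp(−S(λ − λ0)), i.e. a linear least-squares fit of ln a against wavelength. A minimal sketch on synthetic data; the inputs a(355) = 4.17 m(-1) and S = 14.4 μm(-1) are picked from the ranges above purely for illustration:

```python
import math

def spectral_slope(wavelengths_nm, absorption_m1):
    """Least-squares slope of ln(a) versus wavelength; returns S in nm^-1."""
    ys = [math.log(a) for a in absorption_m1]
    n = len(wavelengths_nm)
    mx = sum(wavelengths_nm) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(wavelengths_nm, ys))
    den = sum((x - mx) ** 2 for x in wavelengths_nm)
    return -num / den

# Synthetic spectrum: S = 0.0144 nm^-1 (= 14.4 um^-1), a(355) = 4.17 m^-1
lams = list(range(280, 501, 20))
a = [4.17 * math.exp(-0.0144 * (lam - 355)) for lam in lams]
s = spectral_slope(lams, a)  # recovers the slope used to generate the data
```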

  15. Evaluation of electrical conductivity of Cu and Al through sub microsecond underwater electrical wire explosion

    NASA Astrophysics Data System (ADS)

    Sheftman, D.; Shafer, D.; Efimov, S.; Krasik, Ya. E.

    2012-03-01

    Sub-microsecond timescale underwater electrical explosions of Cu and Al wires have been conducted. Current and voltage waveforms and time-resolved streak images of the discharge channel, coupled to 1D magneto-hydrodynamic simulations, have been used to determine the electrical conductivity of the metals for the range of conditions between hot liquid metal and strongly coupled non-ideal plasma, in the temperature range of 10-60 kK. The results of these studies showed that the conductivity values obtained are typically lower than those given by modern theoretical electrical conductivity models and provide a transition between the conductivity values obtained in microsecond timescale explosions and those obtained in nanosecond timescale wire explosions. In addition, the measured wire expansion shows good agreement with equation of state tables.

  16. Re-Evaluation of the AASHTO-Flexible Pavement Design Equation with Neural Network Modeling

    PubMed Central

    Tiğdemir, Mesut

    2014-01-01

    Here we establish that equivalent single-axle load values can be estimated using artificial neural networks without the complex design equation of the American Association of State Highway and Transportation Officials (AASHTO). More importantly, we find that the neural network model gives the coefficients needed to obtain the actual load values using the AASHTO design values. Thus, those design traffic values that might result in deterioration can be better calculated using the neural network model than with the AASHTO design equation. The artificial neural network method is used for this purpose. The existing AASHTO flexible pavement design equation does not currently predict the pavement performance of the strategic highway research program (Long Term Pavement Performance studies) test sections very accurately, and typically over-estimates the number of equivalent single axle loads needed to cause a measured loss of the present serviceability index. Here we aimed to demonstrate that the proposed neural network model can more accurately represent the load values data, compared against the performance of the AASHTO formula. It is concluded that the neural network may be an appropriate tool for the development of data-based nonparametric models of pavement performance. PMID:25397962

  17. Re-evaluation of the AASHTO-flexible pavement design equation with neural network modeling.

    PubMed

    Tiğdemir, Mesut

    2014-01-01

    Here we establish that equivalent single-axle load values can be estimated using artificial neural networks without the complex design equation of the American Association of State Highway and Transportation Officials (AASHTO). More importantly, we find that the neural network model gives the coefficients needed to obtain the actual load values using the AASHTO design values. Thus, those design traffic values that might result in deterioration can be better calculated using the neural network model than with the AASHTO design equation. The artificial neural network method is used for this purpose. The existing AASHTO flexible pavement design equation does not currently predict the pavement performance of the strategic highway research program (Long Term Pavement Performance studies) test sections very accurately, and typically over-estimates the number of equivalent single axle loads needed to cause a measured loss of the present serviceability index. Here we aimed to demonstrate that the proposed neural network model can more accurately represent the load values data, compared against the performance of the AASHTO formula. It is concluded that the neural network may be an appropriate tool for the development of data-based nonparametric models of pavement performance.

  18. Unlocking echocardiogram measurements for heart disease research through natural language processing.

    PubMed

    Patterson, Olga V; Freiberg, Matthew S; Skanderson, Melissa; J Fodeh, Samah; Brandt, Cynthia A; DuVall, Scott L

    2017-06-12

In order to investigate the mechanisms of cardiovascular disease in HIV-infected and uninfected patients, an analysis of echocardiogram reports is required for a large longitudinal multi-center study. A natural language processing system using a dictionary lookup, rules, and patterns was developed to extract heart function measurements that are typically recorded in echocardiogram reports as measurement-value pairs. Curated semantic bootstrapping was used to create a custom dictionary that extends existing terminologies based on terms that actually appear in the medical record. A novel disambiguation method based on semantic constraints was created to identify and discard erroneous alternative definitions of the measurement terms. The system was built utilizing a scalable framework, making it available for processing large datasets. The system was developed for and validated on notes from three sources: general clinic notes, echocardiogram reports, and radiology reports. The system achieved F-scores of 0.872, 0.844, and 0.877, with precision of 0.936, 0.982, and 0.969 for each dataset respectively, averaged across all extracted values. Left ventricular ejection fraction (LVEF) is the most frequently extracted measurement. The precision of extraction of the LVEF measure ranged from 0.968 to 1.0 across different document types. This system illustrates the feasibility and effectiveness of large-scale information extraction from clinical data. New clinical questions can be addressed in the domain of heart failure using retrospective clinical data analysis because key heart function measurements can be successfully extracted using natural language processing.
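The record reports F-scores and precision per dataset; recall can be recovered from the standard relation F = 2PR/(P + R). A small sketch, using the general-clinic-notes numbers from the record:

```python
def f_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def recall_from_f(f, precision):
    """Invert F = 2PR/(P + R) for recall, given F and precision."""
    return f * precision / (2 * precision - f)

# General clinic notes in the record: F = 0.872, precision = 0.936
r = recall_from_f(0.872, 0.936)
print(round(r, 3))  # → 0.816
```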

  19. Sampling factors influencing accuracy of sperm kinematic analysis.

    PubMed

    Owen, D H; Katz, D F

    1993-01-01

    Sampling conditions that influence the accuracy of experimental measurement of sperm head kinematics were studied by computer simulation methods. Several archetypal sperm trajectories were studied. First, mathematical models of typical flagellar beats were input to hydrodynamic equations of sperm motion. The instantaneous swimming velocities of such sperm were computed over sequences of flagellar beat cycles, from which the resulting trajectories were determined. In a second, idealized approach, direct mathematical models of trajectories were utilized, based upon similarities to the previous hydrodynamic constructs. In general, it was found that analyses of sampling factors produced similar results for the hydrodynamic and idealized trajectories. A number of experimental sampling factors were studied, including the number of sperm head positions measured per flagellar beat, and the time interval over which these measurements are taken. It was found that when one flagellar beat is sampled, values of amplitude of lateral head displacement (ALH) and linearity (LIN) approached their actual values when five or more sample points per beat were taken. Mean angular displacement (MAD) values, however, remained sensitive to sampling rate even when large sampling rates were used. Values of MAD were also much more sensitive to the initial starting point of the sampling procedure than were ALH or LIN. On the basis of these analyses of measurement accuracy for individual sperm, simulations were then performed of cumulative effects when studying entire populations of motile cells. It was found that substantial (double digit) errors occurred in the mean values of curvilinear velocity (VCL), LIN, and MAD under the conditions of 30 video frames per second and 0.5 seconds of analysis time. Increasing the analysis interval to 1 second did not appreciably improve the results. However, increasing the analysis rate to 60 frames per second significantly reduced the errors. 
These findings thus suggest that computer-aided sperm analysis (CASA) application at 60 frames per second will significantly improve the accuracy of kinematic analysis in most applications to human and other mammalian sperm.
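The sampling-rate effect described above can be illustrated with an idealized sinusoidal trajectory (a simplification of the paper's hydrodynamic models, with an invented amplitude): with few samples per beat, the peak-to-peak lateral displacement (ALH) is systematically underestimated, and it approaches the true value as the sampling rate grows.

```python
import math

def measured_alh(amplitude, samples_per_beat, phase=0.0):
    """Peak-to-peak lateral displacement recovered from discrete samples of an
    idealized trajectory y(t) = A*sin(2*pi*t + phase) over one beat."""
    ys = [amplitude * math.sin(2 * math.pi * k / samples_per_beat + phase)
          for k in range(samples_per_beat)]
    return max(ys) - min(ys)

true_alh = 2 * 5.0  # invented amplitude of 5 um -> 10 um peak to peak
for n in (3, 5, 10, 30):
    print(n, round(measured_alh(5.0, n) / true_alh, 3))
```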

  20. Near-Infrared (0.67-4.7 microns) Optical Constants Estimated for Montmorillonite

    NASA Technical Reports Server (NTRS)

    Roush, T. L.

    2005-01-01

Various models of the reflectance from particulate surfaces are used for interpretation of remote sensing data of solar system objects. These models rely upon the real (n) and imaginary (k) refractive indices of the materials. Such values are limited for commonly encountered silicates at visual and near-infrared wavelengths (lambda, 0.4-5 microns). Availability of optical constants for candidate materials allows more thorough modeling of the observations obtained by Earth-based telescopes and spacecraft. Two approaches for determining the absorption coefficient (alpha = 2 pi k/lambda) from reflectance measurements of particulates have been described; one relies upon Kubelka-Munk theory and the other upon Hapke theory. Both have been applied to estimate alpha and k for various materials. Neither enables determination of the wavelength dependence of n, n = f(lambda). Thus, a mechanism providing this ability is desirable. Using Hapke theory to estimate k from reflectance measurements requires two additional quantities to be known or assumed: 1) n = f(lambda) and 2) d, the sample particle diameter. Typically n is assumed constant (c) or modestly varying with lambda, referred to here as n(sub 0). Assuming n(sub 0), at each lambda an estimate of k is used to calculate the reflectance and is iteratively adjusted until the difference between the modeled and measured reflectance is minimized. The estimated k's (k(sub 1)) are the final results, and this concludes the typical analysis.
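The adjust-until-match loop described above can be sketched with a toy model: a Beer-Lambert attenuation through one particle diameter stands in for the full Hapke reflectance equations (which are not reproduced here), and bisection plays the role of the iterative adjustment of k.

```python
import math

def model_reflectance(k, wavelength_um, d_um):
    """Toy stand-in for a Hapke reflectance model: Beer-Lambert attenuation
    through one particle diameter, with alpha = 2*pi*k/lambda as defined in
    the record. Illustrative only -- not the actual Hapke equations."""
    alpha = 2.0 * math.pi * k / wavelength_um
    return math.exp(-alpha * d_um)

def solve_k(r_measured, wavelength_um, d_um, lo=0.0, hi=1.0, tol=1e-12):
    """Bisection: iteratively adjust k until the modeled reflectance
    matches the measured one, mirroring the procedure described above."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if model_reflectance(mid, wavelength_um, d_um) > r_measured:
            lo = mid  # model too bright -> needs more absorption
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check with a hypothetical k = 1e-3 at 1 um and 50-um grains
r = model_reflectance(1e-3, 1.0, 50.0)
print(abs(solve_k(r, 1.0, 50.0) - 1e-3) < 1e-9)  # → True
```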

  1. Accuracy of the HumaSensplus point-of-care uric acid meter using capillary blood obtained by fingertip puncture.

    PubMed

    Fabre, Stéphanie; Clerson, Pierre; Launay, Jean-Marie; Gautier, Jean-François; Vidal-Trecan, Tiphaine; Riveline, Jean-Pierre; Platt, Adam; Abrahamsson, Anna; Miner, Jeffrey N; Hughes, Glen; Richette, Pascal; Bardin, Thomas

    2018-05-02

The uric acid (UA) level in patients with gout is a key factor in disease management and is typically measured in the laboratory using plasma samples obtained after venous puncture. This study aimed to assess the reliability of immediate UA measurement with capillary blood samples obtained by fingertip puncture with the HumaSens plus point-of-care meter. UA levels of 238 consenting diabetic patients were measured using both the HumaSens plus meter in the clinic and the routine plasma UA method in the biochemistry laboratory. HumaSens plus capillary and routine plasma UA measurements were compared by linear regression, Bland-Altman plots, intraclass correlation coefficient (ICC), and Lin's concordance coefficient. Values outside the dynamic range of the meter, low (LO) or high (HI), were analyzed separately. The best capillary UA thresholds for detecting hyperuricemia were determined by receiver operating characteristic (ROC) curves. The impact of potential confounding factors (demographic and biological parameters/treatments) was assessed. Capillary and routine plasma UA levels were compared to reference plasma UA measurements by liquid chromatography-mass spectrometry (LC-MS) for a subgroup of 67 patients. In total, 205 patients had capillary and routine plasma UA measurements available. ICC was 0.90 (95% confidence interval (CI) 0.87-0.92), Lin's coefficient was 0.91 (0.88-0.93), and the Bland-Altman plot showed good agreement over all tested values. Overall, 17 patients showed values outside the dynamic range. LO values were concordant with plasma values, but HI values were considered uninterpretable. Capillary UA thresholds of 299 and 340 μmol/l gave the best results for detecting hyperuricemia (corresponding to routine plasma UA thresholds of 300 and 360 μmol/l, respectively). No significant confounding factor was found among those tested, except for hematocrit; however, this had a negligible influence on the assay reliability.
When capillary and routine plasma results were discordant, comparison with LC-MS measurements showed that plasma measurements had better concordance: capillary UA, ICC 0.84 (95% CI 0.75-0.90), Lin's coefficient 0.84 (0.77-0.91); plasma UA, ICC 0.96 (0.94-0.98), Lin's coefficient 0.96 (0.94-0.98). UA measurements with the HumaSens plus meter were reasonably comparable with those of the laboratory assay. The meter is easy to use and may be useful in the clinic and in epidemiologic studies.
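Lin's concordance coefficient used above combines precision (correlation) and accuracy (systematic offset) in one agreement statistic. A minimal implementation from its standard definition, with invented UA-like values:

```python
from statistics import mean

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)."""
    mx, my = mean(x), mean(y)
    n = len(x)
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Perfect agreement scores 1; a constant offset is penalized
print(lins_ccc([300, 340, 420], [300, 340, 420]))  # → 1.0
print(lins_ccc([300, 340, 420], [320, 360, 440]) < 1.0)  # → True
```

Unlike Pearson correlation, a perfectly correlated but offset series does not score 1, which is why the statistic suits method-comparison studies like this one.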

  2. Development of a direct procedure for the measurement of sulfur isotope variability in beers by MC-ICP-MS.

    PubMed

    Giner Martínez-Sierra, J; Santamaria-Fernandez, R; Hearn, R; Marchante Gayón, J M; García Alonso, J I

    2010-04-14

In this work, a multi-collector inductively coupled plasma mass spectrometer (MC-ICP-MS) was evaluated for the direct measurement of sulfur stable isotope ratios in beers as a first step toward a general study of the natural isotope variability of sulfur in foods and beverages. Sample preparation consisted of a simple dilution of the beers with 1% (v/v) HNO(3). It was observed that different sulfur isotope ratios were obtained for different dilutions of the same sample, indicating that matrix effects affected the transmission of the sulfur ions at masses 32, 33, and 34 in the mass spectrometer to different extents. Correction for mass-bias-related matrix effects was evaluated using silicon internal standardization. For that purpose, silicon isotopes at masses 29 and 30 were included in the sulfur cup configuration and the natural silicon content in beers was used for internal mass bias correction. It was observed that matrix effects on differential ion transmission could be corrected adequately using silicon internal standardization. The natural isotope variability of sulfur has been evaluated by measuring 26 different beer brands. Measured delta(34)S values ranged from -0.2 to 13.8 per thousand. Typical combined standard uncertainties of the measured delta(34)S values were ≤ 2 per thousand. The method therefore has great potential to study sulfur isotope variability in foods and beverages.
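The delta(34)S values above follow standard delta notation relative to the V-CDT standard. A minimal sketch; the V-CDT ratio below is the commonly cited value and should be verified against an authoritative source before use:

```python
R_VCDT = 0.0441626  # commonly cited 34S/32S ratio of the V-CDT standard (assumption)

def delta34s(r_sample, r_standard=R_VCDT):
    """delta34S in per mil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample enriched by a factor of 1.0138 gives about +13.8 per mil,
# the top of the range reported for the beers above.
print(round(delta34s(R_VCDT * 1.0138), 1))  # → 13.8
```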

  3. Automated simultaneous measurement of the δ(13) C and δ(2) H values of methane and the δ(13) C and δ(18) O values of carbon dioxide in flask air samples using a new multi cryo-trap/gas chromatography/isotope ratio mass spectrometry system.

    PubMed

    Brand, Willi A; Rothe, Michael; Sperlich, Peter; Strube, Martin; Wendeberg, Magnus

    2016-07-15

The isotopic composition of greenhouse gases helps to constrain global budgets and to study sink and source processes. We present a new system for high-precision stable isotope measurements of carbon, hydrogen and oxygen in atmospheric methane and carbon dioxide. The design is intended for analyzing flask air samples from existing sampling programs without the need for extra sample air for methane analysis. CO2 and CH4 isotopes are measured simultaneously using two isotope ratio mass spectrometers, one for the analysis of δ(13) C and δ(18) O values and the second one for δ(2) H values. The inlet carousel delivers air from 16 sample positions (glass flasks 1-5 L and high-pressure cylinders). Three 10-port valves take aliquots from the sample stream. CH4 from 100-mL air aliquots is preconcentrated in 0.8-mL sample loops using a new cryo-trap system. A precisely calibrated working reference air is used in parallel with the sample according to the Principle of Identical Treatment. It takes about 36 hours for a fully calibrated analysis of a complete carousel including extractions of four working reference and one quality control reference air. Long-term precision values, as obtained from the quality control reference gas since 2012, account for 0.04 ‰ (δ(13) C values of CO2 ), 0.07 ‰ (δ(18) O values of CO2 ), 0.11 ‰ (δ(13) C values of CH4 ) and 1.0 ‰ (δ(2) H values of CH4 ). Within a single day, the system exhibits a typical methane δ(13) C standard deviation (1σ) of 0.06 ‰ for 10 repeated measurements. The system has been in routine operation at the MPI-BGC since 2012. Consistency of the data and compatibility with results from other laboratories at a high precision level are of utmost importance. A high sample throughput and reliability of operation are important achievements of the presented system to cope with the large number of air samples to be analyzed. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Characterization of particle emission from laser printers.

    PubMed

    Scungio, Mauro; Vitanza, Tania; Stabile, Luca; Buonanno, Giorgio; Morawska, Lidia

    2017-05-15

Emission of particles from laser printers in office environments is claimed to have an impact on human health due to the likelihood of exposure to high particle concentrations in such indoor environments. In the present paper, particle emission characteristics of 110 laser printers from different manufacturers were analyzed, and estimations of their emission rates were made on the basis of measurements of total concentrations of particles emitted by the printers placed in a chamber, as well as particle size distributions. The emission rates in terms of number, surface area and mass were found to be within the ranges from 3.39 × 10^8 part min^-1 to 1.61 × 10^12 part min^-1, 1.06 × 10^0 mm^2 min^-1 to 1.46 × 10^3 mm^2 min^-1, and 1.32 × 10^-1 μg min^-1 to 1.23 × 10^2 μg min^-1, respectively, while the median mode value of the emitted particles was found equal to 34 nm. In addition, the effect of laser printing emissions in terms of employees' exposure in offices was evaluated on the basis of the emission rates, by calculating the daily surface area doses (as the sum of alveolar and tracheobronchial deposition fractions) received assuming a typical printing scenario. In such typical printing conditions, a relatively low total surface area dose (2.7 mm^2) was estimated for office employees with respect to other indoor microenvironments including both workplaces and homes. Nonetheless, for severe exposure conditions, characterized by operating parameters falling beyond the typical values (i.e. smaller office, lower ventilation, printer located on the desk closer to the person, higher printing frequency, etc.), significantly higher doses are expected. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Power spectrum analysis with least-squares fitting: amplitude bias and its elimination, with application to optical tweezers and atomic force microscope cantilevers.

    PubMed

    Nørrelykke, Simon F; Flyvbjerg, Henrik

    2010-07-01

    Optical tweezers and atomic force microscope (AFM) cantilevers are often calibrated by fitting their experimental power spectra of Brownian motion. We demonstrate here that if this is done with typical weighted least-squares methods, the result is a bias of relative size between -2/n and +1/n on the value of the fitted diffusion coefficient. Here, n is the number of power spectra averaged over, so typical calibrations contain 10%-20% bias. Both the sign and the size of the bias depend on the weighting scheme applied. Hence, so do length-scale calibrations based on the diffusion coefficient. The fitted value for the characteristic frequency is not affected by this bias. For the AFM then, force measurements are not affected provided an independent length-scale calibration is available. For optical tweezers there is no such luck, since the spring constant is found as the ratio of the characteristic frequency and the diffusion coefficient. We give analytical results for the weight-dependent bias for the wide class of systems whose dynamics is described by a linear (integro)differential equation with additive noise, white or colored. Examples are optical tweezers with hydrodynamic self-interaction and aliasing, calibration of Ornstein-Uhlenbeck models in finance, models for cell migration in biology, etc. Because the bias takes the form of a simple multiplicative factor on the fitted amplitude (e.g. the diffusion coefficient), it is straightforward to remove and the user will need minimal modifications to his or her favorite least-squares fitting programs. Results are demonstrated and illustrated using synthetic data, so we can compare fits with known true values. We also fit some commonly occurring power spectra once-and-for-all in the sense that we give their parameter values and associated error bars as explicit functions of experimental power-spectral values.
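Because the bias described above is a simple multiplicative factor on the fitted amplitude, removing it once the weighting scheme's bias is known is a one-line correction. A sketch of the bookkeeping only (the analytical bias for a given weighting scheme must come from the paper itself):

```python
def bias_bounds(n):
    """Range of the relative bias on the fitted amplitude when n power
    spectra are averaged: between -2/n and +1/n, depending on the
    weighting scheme, as stated in the record."""
    return (-2.0 / n, 1.0 / n)

def unbias_amplitude(fitted, relative_bias):
    """Remove a known multiplicative bias: fitted = true * (1 + bias)."""
    return fitted / (1.0 + relative_bias)

lo, hi = bias_bounds(10)
print(lo, hi)  # → -0.2 0.1
# A diffusion coefficient fitted 20% low by its weighting scheme:
print(unbias_amplitude(0.8, -0.2))  # → 1.0
```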

  6. Differentiation of wood-derived vanillin from synthetic vanillin in distillates using gas chromatography/combustion/isotope ratio mass spectrometry for δ13 C analysis.

    PubMed

    van Leeuwen, Katryna A; Prenzler, Paul D; Ryan, Danielle; Paolini, Mauro; Camin, Federica

    2018-02-28

During typical storage in oak barrels, degradation products such as vanillin, which play an important role in flavour and aroma, are released into distillates. The addition of vanillin of other origin, as well as of other aroma compounds, is prohibited by European law. As vanillin samples from different sources have different δ13C values, the δ13C value could be used to determine whether the vanillin is authentic (lignin-derived), or if it has been added from another source (e.g. synthetic). The δ13C values for vanillin derived from different sources, including natural, synthetic and tannins, were measured by gas chromatography/combustion/isotope ratio mass spectrometry (GC/C/IRMS), after diethyl ether addition and/or ethanol dilution. A method for analysing vanillin in distillates after dichloromethane extraction was developed. Tests were undertaken to prove the reliability, reproducibility and accuracy of the method with standards and samples. Distillate samples were run to measure the δ13C values of vanillin and to compare them with values for other sources of vanillin. δ13C values were determined for: natural vanillin extracts (-21.0 to -19.3‰, 16 samples); vanillin ex-lignin (-28.2‰, 1 sample); and synthetic vanillin (-32.6 to -29.3‰, 7 samples). Seventeen tannin samples were found to have δ13C values of -29.5 to -26.7‰, which were significantly different (p < 0.05) from those of the natural and synthetic vanillins. The vanillin δ13C values measured in distillates (-28.9 to -25.7‰) were mainly in the tannin range, although one spirit (-32.5‰) was found to contain synthetic vanillin. The results show that synthetic vanillin added to a distillate could be differentiated from vanillin derived from oak barrels by their respective δ13C values. The GC/C/IRMS method could be a useful tool in the determination of adulteration of distillates. Copyright © 2017 John Wiley & Sons, Ltd.

  7. Forestland social values and open space preservation.

    Treesearch

    Jeffrey D. Kline; Ralph J. Alig; Brian Garber-Yonts

    2004-01-01

    Concerns have grown about the loss of forestland to development, leading to both public and private efforts to preserve forestland as open space. These lands comprise social values-ecological, scenic, recreation, and resource protection values-not typically reflected in market prices for land. When these values are present, it is up to public and private agencies to...

  8. Higher Education and the Transmission of Educational Values in Today's Society.

    ERIC Educational Resources Information Center

    Escobar-Ortloff, Luz; Ortloff, Warren G.

    Education has traditionally been the primary method of passing on a society's culture and the values it considers to be important. Higher education institutions have not been immune to the crises in the transmission of values. Typically, in higher education basic intellectual values and virtues are mostly left for students to pick up through…

  9. Commissioning of intensity modulated neutron radiotherapy (IMNRT).

    PubMed

    Burmeister, Jay; Spink, Robyn; Liang, Liang; Bossenberger, Todd; Halford, Robert; Brandon, John; Delauter, Jonathan; Snyder, Michael

    2013-02-01

Intensity modulated neutron radiotherapy (IMNRT) has been developed using in-house treatment planning and delivery systems at the Karmanos Cancer Center/Wayne State University Fast Neutron Therapy facility. The process of commissioning IMNRT for clinical use is presented here. Results of commissioning tests are provided, including validation measurements using representative patient plans as well as those from the TG-119 test suite. IMNRT plans were created using the Varian Eclipse optimization algorithm and an in-house planning system for calculation of neutron dose distributions. Tissue equivalent ionization chambers and an ionization chamber array were used for point dose and planar dose distribution comparisons with calculated values. Validation plans were delivered to water and virtual water phantoms using TG-119 measurement points and evaluation techniques. Photon and neutron doses were evaluated both inside and outside the target volume for a typical IMNRT plan to determine effects of intensity modulation on the photon dose component. Monitor unit linearity and effects of beam current and gantry angle on output were investigated, and an independent validation of neutron dosimetry was obtained. While IMNRT plan quality is superior to conventional fast neutron therapy plans for clinical sites such as prostate and head and neck, it is inferior to photon IMRT for most TG-119 planning goals, particularly for complex cases. This is largely due to current limitations on the number of segments. Measured and calculated doses for 11 representative plans (six prostate/five head and neck) agreed to within -0.8 ± 1.4% and 5.0 ± 6.0% within and outside the target, respectively. Nearly all (22/24) ion chamber point measurements in the two phantom arrangements were within the respective confidence intervals for the quantity [(measured-planned)/prescription dose] derived in TG-119. Mean differences for all measurements were 0.5% (max = 7.0%) and 1.4% (max = 4.1%) in water and virtual water, respectively. The mean gamma pass rate for all cases was 92.8% (min = 88.6%). These pass rates are lower than typically achieved with photon IMRT, warranting development of a planar dosimetry system designed specifically for IMNRT and/or the improvement of neutron beam modeling in the penumbral region. The fractional photon dose component did not change significantly in a typical IMNRT plan versus a conventional fast neutron therapy plan, and IMNRT delivery is not expected to significantly alter the RBE. All other commissioning results were considered satisfactory for clinical implementation of IMNRT, including the external neutron dose validation, which agreed with the predicted neutron dose to within 1%. IMNRT has been successfully commissioned for clinical use. While current plan quality is inferior to photon IMRT, it is superior to conventional fast neutron therapy. Ion chamber validation results for IMNRT commissioning are also comparable to those typically achieved with photon IMRT. Gamma pass rates for planar dose distributions are lower than typically observed for photon IMRT but may be improved with improved planar dosimetry equipment and beam modeling techniques. In the meantime, patient-specific quality assurance measurements should rely more heavily on point dose measurements with tissue equivalent ionization chambers. No significant technical impediments are anticipated in the clinical implementation of IMNRT as described here.

  10. The Applicability of Standard Error of Measurement and Minimal Detectable Change to Motor Learning Research-A Behavioral Study.

    PubMed

    Furlan, Leonardo; Sterr, Annette

    2018-01-01

Motor learning studies face the challenge of differentiating between real changes in performance and random measurement error. While traditional p-value-based analyses of difference (e.g., t-tests, ANOVAs) provide information on the statistical significance of a reported change in performance scores, they do not inform as to the likely cause or origin of that change, that is, the contribution of both real modifications in performance and random measurement error to the reported change. One way of differentiating between real change and random measurement error is through the utilization of the statistics of standard error of measurement (SEM) and minimal detectable change (MDC). SEM is estimated from the standard deviation of a sample of scores at baseline and a test-retest reliability index of the measurement instrument or test employed. MDC, in turn, is estimated from SEM and a degree of confidence, usually 95%. The MDC value might be regarded as the minimum amount of change that needs to be observed for it to be considered a real change, or a change to which the contribution of real modifications in performance is likely to be greater than that of random measurement error. A computer-based motor task was designed to illustrate the applicability of SEM and MDC to motor learning research. Two studies were conducted with healthy participants. Study 1 assessed the test-retest reliability of the task, and Study 2 consisted of a typical motor learning study, where participants practiced the task for five consecutive days. In Study 2, the data were analyzed with a traditional p-value-based analysis of difference (ANOVA) and also with SEM and MDC. The findings showed good test-retest reliability for the task and that the p-value-based analysis alone identified statistically significant improvements in performance over time even when the observed changes could in fact have been smaller than the MDC and thereby caused mostly by random measurement error, as opposed to by learning. We suggest therefore that motor learning studies could complement their p-value-based analyses of difference with statistics such as SEM and MDC in order to inform as to the likely cause or origin of any reported changes in performance.
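The SEM and MDC statistics described above have standard closed forms (SEM = SD × sqrt(1 − ICC); MDC95 = 1.96 × sqrt(2) × SEM). A minimal sketch; the baseline SD of 120 ms and ICC of 0.91 below are invented for illustration:

```python
import math

def sem(baseline_sd, icc):
    """Standard error of measurement from the baseline SD and a
    test-retest reliability index (e.g. ICC): SD * sqrt(1 - ICC)."""
    return baseline_sd * math.sqrt(1.0 - icc)

def mdc95(sem_value):
    """Minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem_value

s = sem(120.0, 0.91)  # hypothetical baseline SD (ms) and ICC
print(round(s, 1), round(mdc95(s), 1))  # → 36.0 99.8
```

Any observed improvement smaller than the MDC95 value would, per the record's argument, be attributable mostly to measurement error rather than learning.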

  11. The 500-year temperature and precipitation fluctuations in the Czech Lands derived from documentary evidence and instrumental measurements

    NASA Astrophysics Data System (ADS)

    Dobrovolný, Petr; Brázdil, Rudolf; Kotyza, Oldřich; Valášek, Hubert

    2010-05-01

Series of temperature and precipitation indices (on an ordinal scale) based on interpretation of various sources of documentary evidence (e.g. narrative written reports, visual daily weather records, personal correspondence, special prints, official economic records, etc.) are used as predictors in the reconstruction of mean seasonal temperatures and seasonal precipitation totals for the Czech Lands from A.D. 1500. Long instrumental measurements from 1771 (temperatures) and 1805 (precipitation) are used as target values to calibrate and verify the documentary-based index series. The reconstruction is based on linear regression with variance and mean adjustments. The reconstructed series were compared with similar European documentary-based reconstructions as well as with reconstructions based on different natural proxies, and were analyzed with respect to trends on different time scales and the occurrence of extreme values. We discuss uncertainties typical of documentary evidence from historical archives. Besides the fact that reports on weather and climate in documentary archives cover all seasons, our reconstructions provide the best results for winter temperatures and summer precipitation. However, the explained variance for these seasons is comparable to other existing reconstructions for Central Europe.
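The "variance and mean adjustments" mentioned above amount to linearly rescaling the proxy series so its mean and standard deviation match those of the instrumental calibration target. A minimal sketch with invented numbers:

```python
from statistics import mean, pstdev

def scale_to_target(proxy, target_mean, target_sd):
    """Mean-and-variance adjustment: linearly rescale a proxy series so its
    mean and SD match those of the instrumental calibration target."""
    m, s = mean(proxy), pstdev(proxy)
    return [target_mean + (x - m) * (target_sd / s) for x in proxy]

# Invented ordinal index series rescaled to a target mean of 9.5 degC, SD 1.2 degC
adjusted = scale_to_target([-1, 0, 2, 1, -2], 9.5, 1.2)
print(round(mean(adjusted), 6), round(pstdev(adjusted), 6))  # → 9.5 1.2
```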

  12. Reducible or irreducible? Mathematical reasoning and the ontological method.

    PubMed

    Fisher, William P

    2010-01-01

Science is often described as nothing but the practice of measurement. This perspective follows from longstanding respect for the roles mathematics and quantification have played as media through which alternative hypotheses are evaluated and experience becomes better managed. Many figures in the history of science and psychology have contributed to what has been called the "quantitative imperative," the demand that fields of study employ number and mathematics even when they do not constitute the language in which investigators think together. But what makes an area of study scientific is, of course, not the mere use of number, but communities of investigators who share common mathematical languages for exchanging qualitative and quantitative values. Such languages require rigorous theoretical underpinning, a basis in data sufficient to the task, and instruments traceable to reference standard quantitative metrics. The values shared and exchanged by such communities typically involve the application of mathematical models that specify the sufficient and invariant relationships necessary for rigorous theorizing and instrument equating. The mathematical metaphysics of science are explored with the aim of connecting principles of quantitative measurement with the structures of sufficient reason.

  13. Flight measured and calculated exhaust jet conditions for an F100 engine in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Hernandez, Francisco J.; Burcham, Frank W., Jr.

    1988-01-01

The exhaust jet conditions, in terms of temperature and Mach number, were determined for a nozzle-aft end acoustic study flown on an F-15 aircraft. Jet properties for the F100 EMD engines were calculated using the engine manufacturer's specification deck. The effects of atmospheric temperature on jet Mach number, M10, were calculated. Values of turbine discharge pressure, PT6M, jet Mach number, and jet temperature were calculated as a function of aircraft Mach number, altitude, and power lever angle for the test day conditions. At a typical test point with a Mach number of 0.9, intermediate power setting, and an altitude of 20,000 ft, M10 was equal to 1.63. Flight-measured and calculated values of PT6M were compared for intermediate power at altitudes of 15,500, 20,500, and 31,000 ft. It was found that at 31,000 ft there was excellent agreement between the two, but at lower altitudes the specification deck overpredicted the flight data. The calculated jet Mach numbers were believed to be accurate to within 2 percent.

  14. Hydrologic response to valley-scale structure in alpine headwaters

    USGS Publications Warehouse

    Weekes, Anne A.; Torgersen, Christian E.; Montgomery, David R.; Woodward, Andrea; Bolton, Susan M.

    2015-01-01

    To better evaluate potential differences in streamflow response among basins with extensive coarse depositional features and those without, we examined the relationships between streamflow discharge, stable isotopes, water temperature and the amplitude of the diurnal signal at five basin outlets. We also quantified the percentages of colluvial channel length measured along the stepped longitudinal profile. Colluvial channels, characterized by the presence of surficial, coarse-grained depositional features, presented sediment-rich, transport-limited morphologies that appeared to have a cumulative effect on the timing and volume of flow downstream. Measurements taken from colluvial channels flowing through depositional landforms showed median recession constants (Kr) of 0.9-0.95, δ18O values of ≥−14.5 and summer diurnal amplitudes ≤0.8 as compared with more typical surface water recession constant values of 0.7, δ18O ≤ −13.5 and diurnal amplitudes >2.0. Our results demonstrated strong associations between the percentage of colluvial channel length within a catchment and moderated streamflow regimes, water temperatures, diurnal signals and depleted δ18O related to groundwater influx.
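The recession constants (Kr) above can be read against a common definition of daily streamflow recession, Q(t+1) = Kr × Q(t). A short sketch contrasting a colluvial basin (Kr ≈ 0.95) with a more typical one (Kr ≈ 0.7), using an invented starting discharge:

```python
def recession(q0, kr, days):
    """Daily discharge under a simple exponential recession Q(t+1) = Kr*Q(t)
    (a common definition of the recession constant; used here illustratively)."""
    series = [q0]
    for _ in range(days):
        series.append(series[-1] * kr)
    return series

# Invented starting discharge of 100 L/s: after 10 dry days, a Kr = 0.95
# basin retains far more flow than a Kr = 0.7 basin.
print(round(recession(100.0, 0.95, 10)[-1], 1))  # → 59.9
print(round(recession(100.0, 0.70, 10)[-1], 1))  # → 2.8
```

This is the moderated streamflow regime the record associates with coarse depositional features.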

  15. Performance characterization and transient investigation of multipropellant resistojets

    NASA Technical Reports Server (NTRS)

    Braunscheidel, Edward P.

    1989-01-01

    The multipropellant resistojet thruster design was initially characterized for performance in a vacuum tank using argon, carbon dioxide, nitrogen, and hydrogen, with gas inlet pressures ranging from 13.7 to 310 kPa (2 to 45 psia) over heat exchanger temperatures from ambient to 1200 C (2200 F). Specific impulse, the measure of performance, ranged from 120 seconds for argon to 600 seconds for hydrogen at a constant heat exchanger temperature of 1200 C (2200 F). When operated under ambient conditions, typical specific impulse values were 55 seconds for argon and 290 seconds for hydrogen. Performance measured with several mixtures of argon and nitrogen showed no significant deviation from predictions obtained by directly weighting the individual argon and nitrogen performance results. Another aspect of the program, an investigation of transient behavior, showed that responses depended heavily on the start-up scenario used. Steady-state heater temperatures were achieved in 20 to 75 minutes for argon and in 10 to 90 minutes for hydrogen; steady-state specific impulse was achieved in 25 to 60 and 20 to 60 minutes, respectively.
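    Specific impulse as used here is the standard figure of merit Isp = F / (mdot * g0), thrust divided by propellant mass flow rate times standard gravity. The sketch below uses hypothetical thrust and mass-flow numbers sized only to fall in the reported hydrogen range; they are not measurements from the test program.

```python
G0 = 9.80665  # standard gravity, m/s^2

def specific_impulse(thrust_n, mdot_kg_s):
    """Specific impulse in seconds from measured thrust (N) and
    propellant mass flow rate (kg/s): Isp = F / (mdot * g0)."""
    return thrust_n / (mdot_kg_s * G0)

# Hypothetical operating point: 0.30 N of thrust at 51 mg/s of hydrogen,
# chosen to land near the reported 600 s upper bound.
isp = specific_impulse(0.30, 0.000051)
```

    The light molecular weight of hydrogen is what pushes its Isp far above that of argon at the same heater temperature: higher exhaust velocity for the same thermal energy per unit mass.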

  16. SAGE aerosol measurements. Volume 1: February 21, 1979 to December 31, 1979

    NASA Technical Reports Server (NTRS)

    Mccormick, M. P.

    1985-01-01

    The Stratospheric Aerosol and Gas Experiment (SAGE) satellite system, launched on February 18, 1979, provides profiles of aerosol extinction, ozone concentration, and nitrogen dioxide concentration between about 80 N and 80 S. Zonal averages, separated into sunrise and sunset events, and seasonal averages are given for the aerosol extinction at 1.00 micron and 0.45 micron, the ratio of aerosol extinction to molecular extinction at 1.00 micron, and the ratio of aerosol extinction at 0.45 micron to that at 1.00 micron. The averages for 1979 are shown in tables and in profile and contour plots (as a function of altitude and latitude). In addition, temperature data provided by the National Oceanic and Atmospheric Administration (NOAA) for the time and location of each SAGE measurement are averaged and shown in a similar format. Typical peak aerosol extinction values were 0.0001 to 0.0002 km^-1 at 1.00 micron, and optical depth values for the 1.00 micron channel varied between 0.001 and 0.002 over all latitudes.
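    The optical depth figures quoted are the vertical integral of the extinction profile over altitude. A minimal sketch of that integration, using a hypothetical stratospheric layer with extinction values near the reported peak (the altitudes and profile shape are invented for illustration, not SAGE data):

```python
def optical_depth(z_km, ext_per_km):
    """Column optical depth by trapezoidal integration of an
    extinction profile (units of km^-1) over altitude (km)."""
    tau = 0.0
    for i in range(len(z_km) - 1):
        tau += 0.5 * (ext_per_km[i] + ext_per_km[i + 1]) * (z_km[i + 1] - z_km[i])
    return tau

# Hypothetical aerosol layer, 15-25 km, peaking near the reported
# 1e-4 to 2e-4 km^-1 extinction values.
z = [15.0, 17.0, 19.0, 21.0, 23.0, 25.0]
k = [1.0e-4, 1.5e-4, 2.0e-4, 1.8e-4, 1.2e-4, 0.8e-4]
tau = optical_depth(z, k)
```

    A roughly 10 km layer at the reported peak extinctions integrates to an optical depth of order 0.001-0.002, consistent with the quoted range.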

  17. Heat flow and heat generation in greenstone belts

    NASA Technical Reports Server (NTRS)

    Drury, M. J.

    1986-01-01

    Heat flow has been measured in Precambrian shields in both greenstone belts and crystalline terrains. Values are generally low, reflecting the great age and tectonic stability of the shields; they range typically between 30 and 50 mW/sq m, although extreme values of 18 and 79 mW/sq m have been reported. For large areas of the Earth's surface that are assumed to have been subjected to a common thermotectonic event, plots of heat flow against heat generation appear to be linear, although there may be considerable scatter in the data. The relationship is expressed as Q = Q0 + D A0, in which Q is the observed heat flow, A0 is the measured heat generation at the surface, Q0 is the reduced heat flow from the lower crust and mantle, and D, which has the dimension of length, represents a scale depth for the distribution of radiogenic elements. Most authors have not used data from greenstone belts in attempting to define the relationship within shields, considering them unrepresentative and preferring to use data from relatively homogeneous crystalline rocks. A discussion follows.
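    The parameters Q0 and D are conventionally recovered by an ordinary least-squares fit of observed heat flow against surface heat generation. A minimal sketch with hypothetical shield data (the five points below are invented to illustrate the fit, not measurements):

```python
def fit_heat_flow(a0, q):
    """Ordinary least-squares fit of Q = Q0 + D*A0.
    Returns (Q0, D): reduced heat flow and scale depth."""
    n = len(a0)
    mean_a = sum(a0) / n
    mean_q = sum(q) / n
    d = (sum((x - mean_a) * (y - mean_q) for x, y in zip(a0, q))
         / sum((x - mean_a) ** 2 for x in a0))
    q0 = mean_q - d * mean_a
    return q0, d

# Hypothetical data: heat generation A0 in microW/m^3, heat flow Q in
# mW/m^2; with these units the slope D comes out directly in km.
a0 = [0.5, 1.0, 1.5, 2.0, 2.5]
q = [30.0, 35.0, 40.0, 45.0, 50.0]
q0, d = fit_heat_flow(a0, q)
```

    Note the unit bookkeeping: (mW/m^2) divided by (microW/m^3) is 10^3 m, so D emerges in kilometres, the natural scale for crustal radiogenic layering.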

  18. Convection measurement package for space processing sounding rocket flights. [low gravity manufacturing - fluid dynamics

    NASA Technical Reports Server (NTRS)

    Spradley, L. W.

    1975-01-01

    The effects of nonconstant accelerations, rocket vibrations, and spin rates on heated fluids were studied. A system is discussed which can determine the influence of convective effects on fluid experiments, and the general suitability of sounding rockets for performing these experiments is treated. An analytical investigation of convection in an enclosure heated in low gravity is presented. The gravitational body force was taken as a time-varying function using anticipated sounding rocket accelerations, since accelerometer flight data were not available. A computer program was used to calculate the flow rates and heat transfer in fluids with geometries and boundary conditions typical of space processing configurations. Results of the analytical investigation identify the configurations, fluids, and boundary values that are most suitable for measuring the convective environment of sounding rockets. A short description of the fabricated fluid cells and the convection measurement package is given, and photographs are included.
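    Whether a residual acceleration level drives significant convection in such an enclosure is typically judged by the Rayleigh number, Ra = g*beta*dT*L^3/(nu*alpha). The sketch below is a generic order-of-magnitude estimate, not the flight analysis: the water-like property values, 1 cm cell, 10 K temperature difference, and 10^-4 g acceleration level are all assumptions for illustration.

```python
def rayleigh_number(g, beta, delta_t, length, nu, alpha):
    """Rayleigh number Ra = g*beta*dT*L^3 / (nu*alpha).
    Buoyant convection in an enclosure typically becomes
    significant only above Ra on the order of 10^3."""
    return g * beta * delta_t * length ** 3 / (nu * alpha)

# Hypothetical water-filled 1 cm cell with a 10 K difference across it,
# at a residual acceleration of 1e-4 g (9.81e-4 m/s^2).
ra = rayleigh_number(g=9.81e-4, beta=2.1e-4, delta_t=10.0,
                     length=0.01, nu=1.0e-6, alpha=1.4e-7)
```

    At these assumed values Ra comes out around 15, orders of magnitude below the convective threshold, which is the qualitative reason sounding-rocket low gravity is attractive for space processing experiments.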

  19. Study of the possibilities of more rational use of energy in the sector of trade and commerce, part 2

    NASA Astrophysics Data System (ADS)

    Ebersbach, K. F.; Fischer, A.; Layer, G.; Steinberger, W.; Wegner, M.; Wiesner, B.

    1982-07-01

    The energy demand in the sector of trade and commerce was registered and analyzed, and measures to improve the energy demand structure are presented. In several typical firms, such as hotels, office buildings, locksmith's shops, motor vehicle repair shops, butcher's shops, laundries, and bakeries, detailed surveys of energy consumption were carried out and included in a statistical evaluation. Subjects analyzed were: the development of the energy supply; the technology of energy application; the final energy demand broken down into demand for light, power, space heating, and process heat, as well as the demand for cooling; daily and annual load curves of energy consumption and their dependence on various parameters; and measures to improve the structure of energy demand. Detailed measurements reveal inefficiencies in the surveyed firms and indicate likely opportunities for energy savings. In addition, standard values for specific energy consumption are obtained.

  20. Study of the possibilities of more rational use of energy in the sector of trade and commerce, part 1

    NASA Astrophysics Data System (ADS)

    Ebersbach, K. F.; Fischer, A.; Layer, G.; Steinberger, W.; Wegner, M.; Wiesner, B.

    1982-06-01

    The energy demand in trade and commerce was analyzed, and measures to improve the energy demand structure are presented. In several typical firms, such as hotels, office buildings, locksmith's shops, motor vehicle repair shops, butcher's shops, laundries, and bakeries, energy consumption was surveyed and statistically evaluated. Subjects analyzed are: the development of the energy supply; the technology of energy application; the final energy demand broken down into demand for light, power, space heating, and process heat, as well as the demand for cooling; the daily and annual load curves of energy consumption and their dependence on various parameters; and measures to improve the structure of energy demand. The detailed measurements point out inefficiencies in the surveyed firms and show possibilities for likely energy savings. In addition, standard values for specific energy consumption are obtained.
