On the synchronizability and detectability of random PPM sequences
NASA Technical Reports Server (NTRS)
Georghiades, Costas N.; Lin, Shu
1987-01-01
The problem of synchronization and detection of random pulse-position-modulation (PPM) sequences is investigated under the assumption of perfect slot synchronization. Maximum-likelihood PPM symbol synchronization and receiver algorithms are derived that make decisions based on both soft and hard data; these algorithms are seen to be easily implementable. Bounds derived on the symbol error probability as well as the probability of false synchronization indicate the existence of a rather severe performance floor, which can easily be the limiting factor in the overall system performance. The performance floor is inherent in the PPM format and random data and becomes more serious as the PPM alphabet size Q is increased. A way to eliminate the performance floor is suggested by inserting special PPM symbols in the random data stream.
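A toy simulation can make the floor concrete (a sketch with assumed parameters, not the paper's algorithm): even with noiseless hard slot decisions, a short observation window of random Q-ary PPM data sometimes parses as a valid PPM sequence at a wrong symbol offset, and the chance of such ambiguity grows with Q.

```python
import random

def false_sync_fraction(Q=4, n_symbols=10, trials=5000, seed=0):
    """Fraction of random PPM slot streams that also parse as a valid
    one-pulse-per-word sequence at some wrong symbol offset."""
    rng = random.Random(seed)
    ambiguous = 0
    for _ in range(trials):
        slots = []
        for _ in range(n_symbols):          # one pulse per Q-slot word
            word = [0] * Q
            word[rng.randrange(Q)] = 1
            slots.extend(word)
        for offset in range(1, Q):          # every wrong framing
            shifted = slots[offset:]
            nwin = len(shifted) // Q
            if all(sum(shifted[i * Q:(i + 1) * Q]) == 1 for i in range(nwin)):
                ambiguous += 1
                break
    return ambiguous / trials

for Q in (2, 4, 8, 16):
    print(Q, false_sync_fraction(Q=Q))      # ambiguity fraction grows with Q
```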
NASA Technical Reports Server (NTRS)
Sun, Xiaoli; Davidson, Frederic; Field, Christopher
1990-01-01
A 50 Mbps direct detection optical communication system for use in an intersatellite link was constructed with an AlGaAs laser diode transmitter and a silicon avalanche photodiode photodetector. The system used a Q = 4 PPM format. The receiver consisted of a maximum likelihood PPM detector and a timing recovery subsystem. The PPM slot clock was recovered at the receiver by using a transition detector followed by a PLL. The PPM word clock was recovered by using a second PLL whose input was derived from the presence of back-to-back PPM pulses contained in the received random PPM pulse sequences. The system achieved a bit error rate of 0.000001 at less than 50 detected signal photons/information bit. The receiver was capable of acquiring and maintaining slot and word synchronization for received signal levels greater than 20 photons/information bit, at which the receiver bit error rate was about 0.01.
Consistent evaluation of GOSAT, SCIAMACHY, carbontracker, and MACC through comparisons to TCCON
Kulawik, S. S.; Wunch, D.; O'Dell, C.; ...
2015-06-22
Consistent validation of satellite CO2 estimates is a prerequisite for using multiple satellite CO2 measurements for joint flux inversion, and for establishing an accurate long-term atmospheric CO2 data record. We focus on validating model and satellite observation attributes that impact flux estimates and CO2 assimilation, including accurate error estimates, correlated and random errors, overall biases, biases by season and latitude, the impact of coincidence criteria, validation of seasonal cycle phase and amplitude, yearly growth, and daily variability. We evaluate the column-averaged dry-air mole fraction (XCO2) for GOSAT (ACOS b3.5) and SCIAMACHY (BESD v2.00.08) as well as the CarbonTracker (CT2013b) simulated CO2 mole fraction fields and the MACC CO2 inversion system (v13.1), and compare these to TCCON observations (GGG2014). We find standard deviations of 0.9, 0.9, 1.7, and 2.1 ppm versus TCCON for CT2013b, MACC, GOSAT, and SCIAMACHY, respectively, with the single-target errors 1.9 and 0.9 times the predicted errors for GOSAT and SCIAMACHY, respectively. When satellite data are averaged and interpreted according to error² = a² + b²/n (where n is the number of observations averaged, a is the systematic (correlated) error, and b is the random (uncorrelated) error), we find that the correlated error term a = 0.6 ppm and the uncorrelated error term b = 1.7 ppm for GOSAT, and a = 1.0 ppm, b = 1.4 ppm for SCIAMACHY regional averages. Biases at individual stations have year-to-year variability of ~0.3 ppm, with biases larger than the TCCON predicted bias uncertainty of 0.4 ppm at many stations. Using fitting software, we find that GOSAT underpredicts the seasonal cycle amplitude in the Northern Hemisphere (NH) between 46 and 53° N. In the Southern Hemisphere (SH), CT2013b underestimates the seasonal cycle amplitude. Biases are calculated for 3-month intervals and indicate the months that contribute to the observed amplitude differences. The seasonal cycle phase indicates whether a dataset or model lags another dataset in time. We calculate this at a subset of stations where there is adequate satellite data, and find that the GOSAT retrieved phase improves substantially over the prior, and the SCIAMACHY retrieved phase improves substantially for 2 of 7 sites. The models reproduce the measured seasonal cycle phase well except at Lauder 125HR (CT2013b), Darwin (MACC), and Izana (+10 days, CT2013b), and at Bremen and Four Corners, which are highly influenced by local effects. We compare the variability within one day between TCCON and models in JJA; there is correlation between 0.2 and 0.8 in the NH, with models showing 10-100% of the variability of TCCON at different stations (except Bremen and Four Corners, which show no variability compared to TCCON) and CT2013b showing more variability than MACC. This paper highlights findings that provide inputs to estimate flux errors in model assimilations, and places where models and satellites need further investigation, e.g. the SH for models and 45-67° N for GOSAT.
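The averaging model in the abstract, error² = a² + b²/n, is linear in the unknowns a² and b², so the two terms can be separated with an ordinary least-squares fit of mean squared errors against 1/n. A minimal sketch with synthetic numbers (the noise level is illustrative; only the a and b values quoted for GOSAT are taken from the abstract):

```python
import numpy as np

# err2[i]: mean squared error of averages built from n[i] soundings
# model: err2 = a**2 + b**2 / n  (a: correlated, b: uncorrelated error)
n = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)
rng = np.random.default_rng(0)
a_true, b_true = 0.6, 1.7                  # ppm, as reported for GOSAT
err2 = a_true**2 + b_true**2 / n + rng.normal(0, 0.02, n.size)

# linear in (a**2, b**2): err2 = [1, 1/n] @ [a2, b2]
A = np.column_stack([np.ones_like(n), 1.0 / n])
a2, b2 = np.linalg.lstsq(A, err2, rcond=None)[0]
print(f"a = {np.sqrt(a2):.2f} ppm, b = {np.sqrt(b2):.2f} ppm")
```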
Pulsed Airborne Lidar Measurements of CO2 Column Absorption
NASA Technical Reports Server (NTRS)
Abshire, James B.; Riris, Haris; Allan, Graham R.; Weaver, Clark J.; Mao, Jianping; Sun, Xiaoli; Hasselbrack, William E.; Rodriquez, Michael; Browell, Edward V.
2011-01-01
We report on airborne lidar measurements of atmospheric CO2 column density for an approach being developed as a candidate for NASA's ASCENDS mission. It uses a pulsed dual-wavelength lidar measurement based on the integrated path differential absorption (IPDA) technique. We demonstrated the approach using the CO2 measurement from aircraft in July and August 2009 over four locations. The results show clear CO2 line shape and absorption signals, which follow the expected changes with aircraft altitude from 3 to 13 km. The 2009 measurements have been analyzed in detail and the results show approx. 1 ppm random errors for 8-10 km altitudes and approx. 30 sec averaging times. Airborne measurements were also made in 2010 with stronger signals and initial analysis shows approx. 0.3 ppm random errors for 80 sec averaging times for measurements at altitudes > 6 km.
Mitigating Photon Jitter in Optical PPM Communication
NASA Technical Reports Server (NTRS)
Moision, Bruce
2008-01-01
A theoretical analysis of photon-arrival jitter in an optical pulse-position-modulation (PPM) communication channel has been performed, and now constitutes the basis of a methodology for designing receivers to compensate so that errors attributable to photon-arrival jitter would be minimized or nearly minimized. Photon-arrival jitter is an uncertainty in the estimated time of arrival of a photon relative to the boundaries of a PPM time slot. Photon-arrival jitter is attributable to two main causes: (1) receiver synchronization error [error in the receiver operation of partitioning time into PPM slots] and (2) random delay between the time of arrival of a photon at a detector and the generation, by the detector circuitry, of a pulse in response to the photon. For channels with sufficiently long time slots, photon-arrival jitter is negligible. However, as durations of PPM time slots are reduced in efforts to increase throughputs of optical PPM communication channels, photon-arrival jitter becomes a significant source of error, leading to significant degradation of performance if not taken into account in design. For the purpose of the analysis, a receiver was assumed to operate in a photon-starved regime, in which photon counts follow a Poisson distribution. The analysis included derivation of exact equations for symbol likelihoods in the presence of photon-arrival jitter. These equations describe what is well known in the art as a matched filter for a channel containing Gaussian noise. These equations would yield an optimum receiver if they could be implemented in practice. Because the exact equations may be too complex to implement in practice, approximations that would yield suboptimal receivers were also derived.
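A sketch of Poisson symbol likelihoods of the kind described above, with photon-arrival jitter modeled crudely as a fixed fraction of the pulse energy spilling into the next slot; the rates and spill fraction are assumptions for illustration, not values from the analysis:

```python
import numpy as np

def ppm_ml_decision(counts, lam_s=20.0, lam_b=1.0, spill=0.1):
    """Return the ML PPM symbol from per-slot Poisson photon counts.

    Under hypothesis q, slot means are lam_b plus the signal energy
    lam_s split (1 - spill, spill) between slot q and slot q + 1.
    Log-likelihood (dropping terms common to all q):
        sum_i [ k_i * log(lam_i(q)) - lam_i(q) ]
    """
    M = len(counts)
    best_q, best_ll = 0, -np.inf
    for q in range(M):
        lam = np.full(M, lam_b)
        lam[q] += (1.0 - spill) * lam_s
        lam[(q + 1) % M] += spill * lam_s   # jitter leakage (wraps for demo)
        ll = np.sum(counts * np.log(lam) - lam)
        if ll > best_ll:
            best_q, best_ll = q, ll
    return best_q

counts = np.array([2, 18, 3, 1])            # example slot counts, Q = 4
print(ppm_ml_decision(counts))               # -> 1
```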
Kuhn, Stefan; Egert, Björn; Neumann, Steffen; Steinbeck, Christoph
2008-09-25
Current efforts in Metabolomics, such as the Human Metabolome Project, collect structures of biological metabolites as well as data for their characterisation, such as spectra for identification of substances and measurements of their concentration. Still, only a fraction of existing metabolites and their spectral fingerprints are known. Computer-Assisted Structure Elucidation (CASE) of biological metabolites will be an important tool to leverage this lack of knowledge. Indispensable for CASE are modules to predict spectra for hypothetical structures. This paper evaluates different statistical and machine learning methods to predict proton NMR spectra based on data from our open database NMRShiftDB. A mean absolute error of 0.18 ppm was achieved for the prediction of proton NMR shifts ranging from 0 to 11 ppm. Random forest, J48 decision tree and support vector machines achieved similar overall errors. HOSE codes, a notably simple method, achieved a comparatively good result of 0.17 ppm mean absolute error. The NMR prediction methods applied in the course of this work delivered precise predictions which can serve as a building block for Computer-Assisted Structure Elucidation for biological metabolites.
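The HOSE-code baseline is essentially a table lookup: predict a proton's shift as the mean of database shifts whose atoms share the same spherically encoded environment string. A minimal sketch with placeholder codes and shifts (not actual NMRShiftDB entries):

```python
from collections import defaultdict
from statistics import mean

# training data: (HOSE-like environment code, observed 1H shift in ppm)
train = [("C-3;*C(//)", 7.26), ("C-3;*C(//)", 7.31),
         ("C-4;CCH(//)", 1.25), ("C-4;CCH(//)", 1.31),
         ("C-4;OCH(//)", 3.66)]

table = defaultdict(list)
for code, shift in train:
    table[code].append(shift)

def predict(code):
    """Mean shift of atoms with a matching code; global mean as fallback."""
    hits = table.get(code)
    all_shifts = [s for shifts in table.values() for s in shifts]
    return mean(hits) if hits else mean(all_shifts)

print(predict("C-4;OCH(//)"))   # 3.66 (exact match)
print(predict("C-4;NCH(//)"))   # unseen code -> falls back to global mean
```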
Word and frame synchronization with verification for PPM optical communications
NASA Technical Reports Server (NTRS)
Marshall, William K.
1986-01-01
A method for obtaining word and frame synchronization in pulse position modulated optical communication systems is described. The method uses a short sync sequence inserted at the beginning of each data frame and a verification procedure to distinguish between inserted and randomly occurring sequences at the receiver. This results in an easy to implement sync system which provides reliable synchronization even at high symbol error rates. Results are given for the application of this approach to a highly energy efficient 256-ary PPM test system.
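A small simulation of the idea, under assumed parameters (a 4-symbol marker, 64-symbol frames, noiseless reception): a frame phase is accepted only when the marker recurs at that phase over several consecutive frames, which distinguishes the inserted sequence from randomly occurring ones.

```python
import random

Q, FRAME, MARKER = 256, 64, (0, 255, 0, 255)   # hypothetical sync word

def frames(n, seed=1):
    """Random PPM symbol stream with MARKER inserted at each frame start."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        out += list(MARKER)
        out += [rng.randrange(Q) for _ in range(FRAME - len(MARKER))]
    return out

def find_sync(stream, verify=3):
    """Return the frame phase verified over `verify` consecutive frames."""
    for phase in range(FRAME):
        if all(tuple(stream[phase + k * FRAME: phase + k * FRAME + len(MARKER)])
               == MARKER for k in range(verify)):
            return phase
    return None

print(find_sync(frames(8)))   # -> 0, the true frame phase
```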
NASA Astrophysics Data System (ADS)
Singh, Upendra N.; Refaat, Tamer F.; Ismail, Syed; Petros, Mulugeta; Davis, Kenneth J.; Kawa, Stephan R.; Menzies, Robert T.
2018-04-01
Modeling of a space-based high-energy 2-μm triple-pulse Integrated Path Differential Absorption (IPDA) lidar was conducted to demonstrate carbon dioxide (CO2) measurement capability and to evaluate random and systematic errors. A high pulse energy laser and an advanced MCT e-APD detector were incorporated in this model. Projected performance shows 0.5 ppm precision and 0.3 ppm bias in low-tropospheric column CO2 mixing ratio measurements from space for 10 second signal averaging over Railroad Valley (RRV) reference surface.
NASA Astrophysics Data System (ADS)
Hensley, Winston; Giovanetti, Kevin
2008-10-01
A 1 ppm precision measurement of the muon lifetime is being conducted by the MULAN collaboration. The reason for this new measurement lies in recent advances in theory that have reduced the uncertainty in calculating the Fermi coupling constant from the measured lifetime to a few tenths of a ppm. The largest uncertainty is now experimental. To achieve a 1 ppm level of precision it is necessary to control all sources of systematic error and to understand their influence on the lifetime measurement. James Madison University is contributing by examining the response of the timing system to uncorrelated events (randoms). A radioactive source was placed in front of paired detectors similar to those in the main experiment. These detectors were integrated in an identical fashion into the data acquisition and measurement system, and data from them were recorded during the entire experiment. The pair was placed in a shielded enclosure away from the main experiment to minimize interference. The data from these detectors should have a flat time spectrum, as the decay of a radioactive source is a random event and has no time correlation. Thus the spectrum can be used as an important diagnostic in studying the method of determining event times and timing system performance.
NASA Technical Reports Server (NTRS)
Sun, Xiaoli; Davidson, Frederic M.
1990-01-01
A technique for word timing recovery in a direct-detection optical PPM communication system is described. It tracks on back-to-back pulse pairs in the received random PPM data sequences with the use of a phase-locked loop. The experimental system consisted of an 833-nm AlGaAs laser diode transmitter and a silicon avalanche photodiode photodetector, and it used Q = 4 PPM signaling at a source data rate of 25 Mb/s. The mathematical model developed to describe system performance is shown to be in good agreement with the experimental measurements. Use of this recovered PPM word clock with a slot clock recovery system caused no measurable penalty in receiver sensitivity. The completely self-synchronized receiver was capable of acquiring and maintaining both slot and word synchronization for input optical signal levels as low as 20 average detected photons per information bit. The receiver achieved a bit error probability of 10 to the -6th at less than 60 average detected photons per information bit.
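The back-to-back-pulse principle is easy to sketch in the slot domain: within a PPM word there is exactly one pulse, so two pulses in adjacent slots can only straddle a word boundary (last slot of one word, first slot of the next), and each such pair therefore reveals the word phase. A noiseless toy version with assumed parameters:

```python
import random

def word_phase_from_pairs(slots, Q=4):
    """Infer the PPM word phase: a pulse pair in adjacent slots can only
    be (last slot of word k, first slot of word k+1), so the boundary
    between the pair marks a word boundary."""
    for i in range(len(slots) - 1):
        if slots[i] == 1 and slots[i + 1] == 1:
            return (i + 1) % Q
    return None

rng = random.Random(2)
Q, phase = 4, 1                     # true phase, unknown to the receiver
slots = [0] * phase
for _ in range(200):                # 200 random PPM words
    word = [0] * Q
    word[rng.randrange(Q)] = 1
    slots += word
print(word_phase_from_pairs(slots, Q))   # recovers 1
```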
NASA Technical Reports Server (NTRS)
Chen, Chien-Chung; Gardner, Chester S.
1989-01-01
Given the rms transmitter pointing error and the desired probability of bit error (PBE), it can be shown that an optimal transmitter antenna gain exists which minimizes the required transmitter power. Given the rms local oscillator tracking error, an optimum receiver antenna gain can be found which optimizes the receiver performance. The impact of pointing and tracking errors on the design of direct-detection pulse-position modulation (PPM) and heterodyne noncoherent frequency-shift keying (NCFSK) systems is then analyzed in terms of constraints on the antenna size and the power penalty incurred. It is shown that in the limit of large spatial tracking errors, the advantage in receiver sensitivity for the heterodyne system is quickly offset by the smaller antenna gain and the higher power penalty due to tracking errors. In contrast, for systems with small spatial tracking errors, the heterodyne system is superior because of the higher receiver sensitivity.
NASA Technical Reports Server (NTRS)
Natarajan, Suresh; Gardner, C. S.
1987-01-01
Receiver timing synchronization of an optical Pulse-Position Modulation (PPM) communication system can be achieved using a phase-locked loop (PLL), provided the photodetector output is suitably processed. The magnitude of the PLL phase error is a good indicator of the timing error at the receiver decoder. The statistics of the phase error are investigated while varying several key system parameters such as PPM order, signal and background strengths, and PLL bandwidth. A practical optical communication system utilizing a laser diode transmitter and an avalanche photodiode in the receiver is described, and the sampled phase error data are presented. A linear regression analysis is applied to the data to obtain estimates of the relational constants involving the phase error variance and incident signal power.
Error Rates and Channel Capacities in Multipulse PPM
NASA Technical Reports Server (NTRS)
Hamkins, Jon; Moision, Bruce
2007-01-01
A method of computing channel capacities and error rates in multipulse pulse-position modulation (multipulse PPM) has been developed. The method makes it possible, when designing an optical PPM communication system, to determine whether and under what conditions a given multipulse PPM scheme would be more or less advantageous, relative to other candidate modulation schemes. In conventional M-ary PPM, each symbol is transmitted in a time frame that is divided into M time slots (where M is an integer > 1), defining an M-symbol alphabet. A symbol is represented by transmitting a pulse (representing 1) during one of the time slots and no pulse (representing 0) during the other M - 1 time slots. Multipulse PPM is a generalization of PPM in which pulses are transmitted during two or more of the M time slots.
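The alphabet arithmetic is easy to check: k pulses in M slots give C(M, k) distinguishable symbols, hence log2 C(M, k) bits per M-slot frame versus log2 M for conventional PPM. A quick illustration:

```python
from math import comb, log2

M = 16
for k in (1, 2, 3):
    symbols = comb(M, k)                     # k-pulse multipulse PPM alphabet
    print(f"k={k}: {symbols} symbols, "
          f"{log2(symbols):.2f} bits/frame, "
          f"{log2(symbols) / M:.3f} bits/slot")
```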
NASA Astrophysics Data System (ADS)
Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua
2018-06-01
The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of un-coded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a log-normal random variable. The analytical and simulation results corroborate that increasing the correlation coefficients among sub-channels degrades system performance. Moreover, receiver diversity performs better in resisting the channel fading caused by spatial correlation.
Error-Rate Bounds for Coded PPM on a Poisson Channel
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon
2009-01-01
Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
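One standard ingredient of such bounds, given here as a hedged sketch rather than the paper's exact expressions, is the pairwise error probability on a Poisson channel. The Bhattacharyya coefficient between Poisson(λb) and Poisson(λs + λb) slot statistics is z = exp(-(√(λs+λb) - √λb)²/2), and two PPM codewords that differ in d symbols differ in 2d slots, giving PEP ≤ z^(2d):

```python
from math import exp, sqrt

def bhattacharyya_z(lam_s, lam_b):
    """Per-slot Bhattacharyya parameter between Poisson(lam_b) and
    Poisson(lam_s + lam_b)."""
    return exp(-(sqrt(lam_s + lam_b) - sqrt(lam_b)) ** 2 / 2)

def pep_bound(lam_s, lam_b, d):
    """Bhattacharyya bound on the pairwise error probability of two PPM
    codewords differing in d symbols (each differing symbol moves one
    pulse between two slots, so 2*d slot statistics differ)."""
    return bhattacharyya_z(lam_s, lam_b) ** (2 * d)

print(pep_bound(lam_s=5.0, lam_b=0.2, d=4))   # ~1e-6 for these rates
```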
Free space optical ultra-wideband communications over atmospheric turbulence channels.
Davaslioğlu, Kemal; Cağiral, Erman; Koca, Mutlu
2010-08-02
A hybrid impulse radio ultra-wideband (IR-UWB) communication system in which UWB pulses are transmitted over long distances through free space optical (FSO) links is proposed. FSO channels are characterized by random fluctuations in the received light intensity mainly due to the atmospheric turbulence. For this reason, theoretical detection error probability analysis is presented for the proposed system for a time-hopping pulse-position modulated (TH-PPM) UWB signal model under weak, moderate and strong turbulence conditions. For the optical system output distributed over radio frequency UWB channels, composite error analysis is also presented. The theoretical derivations are verified via simulation results, which indicate a computationally and spectrally efficient UWB-over-FSO system.
NASA Technical Reports Server (NTRS)
Sun, Xiaoli; Skillman, David R.; Hoffman, Evan D.; Mao, Dandan; McGarry, Jan F.; Neumann, Gregory A.; McIntire, Leva; Zellar, Ronald S.; Davidson, Frederic M.; Fong, Wai H.;
2013-01-01
We report a free space laser communication experiment from the satellite laser ranging (SLR) station at NASA Goddard Space Flight Center (GSFC) to the Lunar Reconnaissance Orbiter (LRO) in lunar orbit through the onboard one-way Laser Ranging (LR) receiver. Pseudo-random data and sample image files were transmitted to LRO using a 4096-ary pulse position modulation (PPM) signal format. Reed-Solomon forward error correction codes were used to achieve error-free data transmission at a moderate coding overhead rate. The signal fading due to atmospheric effects was measured and the coding gain could be estimated.
The Orbiting Carbon Observatory Mission: Watching the Earth Breathe Mapping CO2 from Space
NASA Technical Reports Server (NTRS)
Boain, Ron
2007-01-01
Approach: Collect spatially resolved, high resolution spectroscopic observations of CO2 and O2 absorption in reflected sunlight. Use these data to resolve spatial and temporal variations in the column averaged CO2 dry air mole fraction, X(sub CO2) over the sunlit hemisphere. Employ independent calibration and validation approaches to produce X(sub CO2) estimates with random errors and biases no larger than 1-2 ppm (0.3-0.5%) on regional scales at monthly intervals.
Kreitzer, J.F.
1980-01-01
Adult male bobwhite quail Colinus virginianus were fed toxaphene (chlorinated camphene, 67-69% chlorine) at 10 and 50 ppm or endrin (1,2,3,4,10,10-hexachloro-6,7-epoxy-1,4,4a,5,6,7,8,8a-octahydro-1,4-endo-endo-5,8-dimethanonaphthalene) at 0.1 and 1.0 ppm and their performance on non-spatial discrimination reversal tasks was measured. The birds were on dosage for 138 days (beginning at the age of 3 days) prior to testing. Two tests (with different pairs of patterns) were conducted with toxaphene-treated birds and five with endrin-treated birds. The toxaphene-treated birds made 50% more errors than did controls (p < 0.02). There was no difference between the effects of the two treatment levels. The performance of the treated birds on a second test equalled that of the controls, indicating that the birds were able to adjust to the pesticide whilst on treatment. Endrin-treated birds made from 36 to 139% more errors than did controls (p < 0.025). The difference between the number of errors made by the controls and the number made by the treated birds on the acquisition, or initial problem of each test, increased exponentially over the first four tests. The 0.1 ppm birds made significantly more errors than the 1.0 ppm birds after reversal 3 or 4 in the first three tests. The endrin effects were reversed after 50 days of untreated feed. The principal effect of endrin was to impair the birds' ability to solve a novel problem. The effects of toxaphene in birds treated as adults appeared after about 30 days of treatment and those of endrin after about 40 days of treatment. Mean brain residues in endrin-treated birds were 0.075 ppm (wet weight basis) for the 0.1 ppm level birds and 0.35 ppm for the 1.0 ppm level birds.
Validation of YCAR algorithm over East Asia TCCON sites
NASA Astrophysics Data System (ADS)
Kim, W.; Kim, J.; Jung, Y.; Lee, H.; Goo, T. Y.; Cho, C. H.; Lee, S.
2016-12-01
In order to reduce the retrieval error of TANSO-FTS column-averaged CO2 concentration (XCO2) induced by aerosol, we develop the Yonsei university CArbon Retrieval (YCAR) algorithm using aerosol information from the TANSO Cloud and Aerosol Imager (TANSO-CAI), which provides simultaneous aerosol optical depth properties for the same geometry and optical path as the FTS. We validate the retrieved results against ground-based TCCON measurements. In particular, this study is the first to utilize the measurements at Anmyeondo, the only TCCON site located in South Korea, which improves the quality of validation in East Asia. After the post-screening process, the YCAR algorithm has 33-85% higher data availability than other operational algorithms (NIES, ACOS, UoL). Despite the higher data availability, its regression statistics against TCCON measurements are similar to or better than those of the other algorithms; the regression line of the YCAR algorithm is close to the identity line, with an RMSE of 2.05 and a bias of -0.86 ppm. According to the error analysis, the retrieval error of the YCAR algorithm is 1.394-1.478 ppm over East Asia. In addition, a spatio-temporal sampling error of 0.324-0.358 ppm for each single-sounding retrieval is analyzed using CarbonTracker-Asia data. These error analysis results demonstrate the reliability and accuracy of the latest version of our YCAR algorithm. XCO2 values retrieved with the YCAR algorithm from TANSO-FTS and from TCCON measurements both show a consistent increasing trend of about 2.3-2.6 ppm per year. Compared to the growth rate of the global background CO2 amount measured at Mauna Loa, Hawaii (2 ppm per year), the increasing trend in East Asia is about 30% higher due to the rapid increase of CO2 emissions from this source region.
NASA Technical Reports Server (NTRS)
Li, Jing; Hylton, Alan; Budinger, James; Nappier, Jennifer; Downey, Joseph; Raible, Daniel
2012-01-01
Due to its simplicity and robustness against wavefront distortion, pulse position modulation (PPM) with a photon-counting detector has been seriously considered for long-haul optical wireless systems. This paper evaluates the dual-pulse case and compares it with the conventional single-pulse case. Analytical expressions for symbol error rate and bit error rate are first derived and numerically evaluated for the strong, negative-exponential turbulent atmosphere; bandwidth efficiency and throughput are subsequently assessed. It is shown that, under a set of practical constraints including pulse width and pulse repetition frequency (PRF), dual-pulse PPM enables better channel utilization and hence a higher throughput than its single-pulse counterpart. This result is new and differs from previous idealistic studies, which showed that multi-pulse PPM provides no essential information-theoretic gains over single-pulse PPM.
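The channel-utilization argument can be illustrated with simple arithmetic under assumed constraints (slot width Ts and laser PRF; the numbers are hypothetical): a k-pulse symbol needs k pulses per M·Ts seconds, so the symbol rate is either slot-limited or PRF-limited, and dual-pulse PPM carries log2 C(M, 2) bits per symbol:

```python
from math import comb, log2

def throughput(M, k, Ts=1e-9, prf=100e6):
    """Bits/s for k-pulse, M-slot PPM under slot-width and PRF limits."""
    bits = log2(comb(M, k))                  # bits carried per symbol
    symbol_rate = min(1.0 / (M * Ts), prf / k)   # slot- or PRF-limited
    return bits * symbol_rate

for k in (1, 2):
    print(k, f"{throughput(M=16, k=k) / 1e6:.1f} Mb/s")
# with these numbers the single-pulse case is PRF-limited while the
# dual-pulse case packs more bits per symbol, so k = 2 yields more Mb/s
```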
LDPC-PPM Coding Scheme for Optical Communication
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael
2009-01-01
In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.
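For the Poisson channel model named above, the demodulator's soft output has a simple form: terms common to all symbols cancel from the log-likelihood, leaving L(q) = k_q · ln(1 + λs/λb) for each candidate symbol q. A minimal sketch of the symbol posteriors an iterative decoder could consume (the rates are assumptions):

```python
import numpy as np

def ppm_symbol_llh(counts, lam_s=10.0, lam_b=0.5):
    """Unnormalized log-likelihoods of each PPM symbol given Poisson slot
    counts; common terms cancel, leaving L(q) = counts[q] * ln(1 + lam_s/lam_b)."""
    return counts * np.log1p(lam_s / lam_b)

counts = np.array([0, 1, 7, 0])              # photon counts in the 4 slots
llh = ppm_symbol_llh(counts)
post = np.exp(llh - llh.max())
post /= post.sum()                           # symbol posteriors, uniform prior
print(post.round(4))                         # mass concentrates on symbol 2
```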
XCO2 Retrieval Errors from a PCA-based Approach to Fast Radiative Transfer
NASA Astrophysics Data System (ADS)
Somkuti, Peter; Boesch, Hartmut; Natraj, Vijay; Kopparla, Pushkar
2017-04-01
Multiple-scattering radiative transfer (RT) calculations are an integral part of forward models used to infer greenhouse gas concentrations in the shortwave-infrared spectral range from satellite missions such as GOSAT or OCO-2. Such calculations are, however, computationally expensive and, combined with the recent growth in data volume, necessitate the use of acceleration methods in order to make retrievals feasible on an operational level. The principal component analysis (PCA)-based approach to fast radiative transfer introduced by Natraj et al. (2005) is a spectral binning method, in which the many line-by-line monochromatic calculations are replaced by a small set of representative ones. From the PCA performed on the optical layer properties for a scene-dependent atmosphere, the results of the representative calculations are mapped onto all spectral points in the given band. Since this RT scheme is an approximation, the computed top-of-atmosphere radiances exhibit errors compared to the "full" line-by-line calculation. These errors ultimately propagate into the final retrieved greenhouse gas concentrations, and their magnitude depends on scene-dependent parameters such as aerosol loadings or viewing geometry. An advantage of this method is the ability to choose the degree of accuracy by increasing or decreasing the number of empirical orthogonal functions used for the reconstruction of the radiances. We have performed a large set of global simulations based on real GOSAT scenes and assess the retrieval errors induced by the fast RT approximation through linear error analysis. We find that across a wide range of geophysical parameters, the errors are for the most part smaller than ±0.2 and ±0.06 ppm (out of roughly 400 ppm) for ocean and land scenes, respectively. A fast RT scheme that produces low errors is important, since regional biases in XCO2 even in the low sub-ppm range can cause significant changes in carbon fluxes obtained from inversions (Chevallier et al. 2007).
A comparison of OCO-2 XCO2 Observations to GOSAT and Models
NASA Astrophysics Data System (ADS)
O'Dell, C.; Eldering, A.; Crisp, D.; Gunson, M. R.; Fisher, B.; Mandrake, L.; McDuffie, J. L.; Baker, D. F.; Wennberg, P. O.
2016-12-01
With their high spatial resolution and dense sampling density, observations of atmospheric carbon dioxide (CO2) from space-based sensors such as the Orbiting Carbon Observatory-2 (OCO-2) have the potential to revolutionize our understanding of carbon sources and sinks. To achieve this goal, however, requires the observations to have sub-ppm systematic errors; the large data density of OCO-2 generally reduces the importance of random errors in the retrieval of regional-scale fluxes. In this work, the Atmospheric Carbon Observations from Space (ACOS) algorithm has been applied to both OCO-2 and GOSAT observations, which overlap for the period spanning Sept 2014 to present (2+ years). Previous activities utilizing TCCON and aircraft data have shown the ACOS/GOSAT B3.5 product to be quite accurate (1-2 ppm) over both land and ocean. In this work, we apply nearly identical versions of the ACOS retrieval algorithm to both OCO-2 and GOSAT to enable comparisons during the period of overlap, and to minimize algorithm-induced differences. GOSAT/OCO-2 comparisons are used to explore potential biases in the OCO-2 data, and to better understand the nature of the bias correction required for each product. Finally, each product is compared to an ensemble of models in order to evaluate their relative consistency, a critical activity before both can be used simultaneously in carbon flux inversions with confidence.
Synchronization using pulsed edge tracking in optical PPM communication system
NASA Technical Reports Server (NTRS)
Gagliardi, R.
1972-01-01
A pulse position modulated (PPM) optical communication system using narrow pulses of light for data transmission requires accurate time synchronization between transmitter and receiver. The presence of signal energy in the form of optical pulses suggests the use of a pulse edge tracking method of maintaining the necessary timing. The edge tracking operation in a binary PPM system is examined, taking into account the quantum nature of the optical transmissions. Consideration is given first to pure synchronization using a periodic pulsed intensity, then extended to the case where position modulation is present and auxiliary bit decisioning is needed to aid the tracking operation. Performance analysis is made in terms of timing error and its associated statistics. Timing error variances are shown as a function of system signal to noise ratio.
NASA Technical Reports Server (NTRS)
Sun, Xiaoli; Skillman, David R.; Hoffman, Evan D.; Mao, Dandan; McGarry, Jan F.; Zellar, Ronald S.; Fong, Wai H; Krainak, Michael A.; Neumann, Gregory A.; Smith, David E.
2013-01-01
Laser communication and ranging experiments were successfully conducted from the satellite laser ranging (SLR) station at NASA Goddard Space Flight Center (GSFC) to the Lunar Reconnaissance Orbiter (LRO) in lunar orbit. The experiments used 4096-ary pulse position modulation (PPM) for the laser pulses during one-way LRO Laser Ranging (LR) operations. Reed-Solomon forward error correction codes were used to correct the PPM symbol errors due to atmosphere turbulence and pointing jitter. The signal fading was measured and the results were compared to the model.
Parallel Processing of Broad-Band PPM Signals
NASA Technical Reports Server (NTRS)
Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement
2010-01-01
A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for time-slot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively low-speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).
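The sub-sampling-and-filtering step can be sketched as a polyphase demultiplex: a stream sampled at rate fs is split into N parallel streams at fs/N, each slow enough for a low-rate datapath to filter independently. A toy version (the stream count and filter taps are assumptions):

```python
import numpy as np

def polyphase_split(x, n_streams=8):
    """Split a broadband sample stream into n_streams parallel
    sub-streams, each at 1/n_streams of the input rate."""
    x = x[: len(x) - len(x) % n_streams]
    return x.reshape(-1, n_streams).T    # stream k holds x[k::n_streams]

x = np.arange(32, dtype=float)
streams = polyphase_split(x)
# each slow stream can now be filtered by low-rate hardware
filtered = np.array([np.convolve(s, [0.5, 0.5], mode="same") for s in streams])
print(streams.shape, filtered.shape)     # (8, 4) (8, 4)
```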
Exploring the initial steps of the testing process: frequency and nature of pre-preanalytic errors.
Carraro, Paolo; Zago, Tatiana; Plebani, Mario
2012-03-01
Few data are available on the nature of errors in the so-called pre-preanalytic phase, the initial steps of the testing process. We therefore sought to evaluate pre-preanalytic errors using a study design that enabled us to observe the initial procedures performed in the ward, from the physician's test request to the delivery of specimens in the clinical laboratory. After a 1-week direct observational phase designed to identify the operating procedures followed in 3 clinical wards, we recorded all nonconformities and errors occurring over a 6-month period. Overall, the study considered 8547 test requests, for which 15 917 blood sample tubes were collected and 52 982 tests undertaken. No significant differences in error rates were found between the observational phase and the overall study period, but underfilling of coagulation tubes was found to occur more frequently in the direct observational phase (P = 0.043). In the overall study period, the frequency of errors was found to be particularly high regarding order transmission [29 916 parts per million (ppm)] and hemolysed samples (2537 ppm). The frequency of patient misidentification was 352 ppm, and the most frequent nonconformities were test requests recorded in the diary without the patient's name and failure to check the patient's identity at the time of blood draw. The data collected in our study confirm the relative frequency of pre-preanalytic errors and underline the need to consensually prepare and adopt effective standard operating procedures in the initial steps of laboratory testing and to monitor compliance with these procedures over time.
Modulation and coding for throughput-efficient optical free-space links
NASA Technical Reports Server (NTRS)
Georghiades, Costas N.
1993-01-01
Optical direct-detection systems are currently being considered for some high-speed inter-satellite links, where data rates of a few hundred megabits per second are envisioned under power and pulsewidth constraints. In this paper we investigate the capacity, cutoff-rate and error-probability performance of uncoded and trellis-coded systems for various modulation schemes and under various throughput and power constraints. Modulation schemes considered are on-off keying (OOK), pulse-position modulation (PPM), overlapping PPM (OPPM) and multi-pulse (combinatorial) PPM (MPPM).
Evaluation and attribution of OCO-2 XCO2 uncertainties
NASA Astrophysics Data System (ADS)
Worden, John R.; Doran, Gary; Kulawik, Susan; Eldering, Annmarie; Crisp, David; Frankenberg, Christian; O'Dell, Chris; Bowman, Kevin
2017-07-01
Evaluating and attributing uncertainties in total column atmospheric CO2 measurements (XCO2) from the OCO-2 instrument is critical for testing hypotheses related to the underlying processes controlling XCO2 and for developing quality flags needed to choose those measurements that are usable for carbon cycle science. Here we test the reported uncertainties of version 7 OCO-2 XCO2 measurements by examining variations of the XCO2 measurements and their calculated uncertainties within small regions (~100 km × 10.5 km) in which natural CO2 variability is expected to be small relative to variations imparted by noise or interferences. Over 39 000 of these small neighborhoods, comprising approximately 190 observations per neighborhood, are used for this analysis. We find that a typical ocean measurement has a precision and accuracy of 0.35 and 0.24 ppm respectively for calculated precisions larger than ~0.25 ppm. These values are approximately consistent with the calculated errors of 0.33 and 0.14 ppm for the noise and interference error, assuming that the accuracy is bounded by the calculated interference error. The actual precision for ocean data becomes worse as the signal-to-noise ratio increases or the calculated precision decreases below 0.25 ppm, for reasons that are not well understood. A typical land measurement, both nadir and glint, is found to have a precision and accuracy of approximately 0.75 and 0.65 ppm respectively, as compared to the calculated precision and accuracy of approximately 0.36 and 0.2 ppm. The differences in accuracy between ocean and land suggest that the accuracy of XCO2 data is likely related to interferences such as aerosols or surface albedo, as these vary less over ocean than land. The accuracy as derived here is also likely a lower bound, as it does not account for possible systematic biases between the regions used in this analysis.
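The small-neighborhood test is easy to reproduce in miniature: if XCO2 is effectively constant within a neighborhood, the within-neighborhood standard deviation estimates single-sounding precision, while the scatter of neighborhood means reflects the correlated, accuracy-limiting error. A synthetic sketch using the ocean numbers quoted above as inputs:

```python
import numpy as np

rng = np.random.default_rng(3)
n_nbhd, n_obs = 500, 190           # neighborhoods x soundings, as in the study
truth = 400.0                       # ppm, taken constant within a neighborhood
precision, accuracy = 0.35, 0.24    # ppm, values quoted for ocean

bias = rng.normal(0, accuracy, (n_nbhd, 1))         # systematic, per neighborhood
noise = rng.normal(0, precision, (n_nbhd, n_obs))   # single-sounding noise
xco2 = truth + bias + noise

est_precision = xco2.std(axis=1, ddof=1).mean()     # within-neighborhood scatter
est_accuracy = xco2.mean(axis=1).std(ddof=1)        # scatter of neighborhood means
print(f"precision ~ {est_precision:.2f} ppm, accuracy ~ {est_accuracy:.2f} ppm")
```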
A new method research of monitoring low concentration NO and SO2 mixed gas
NASA Astrophysics Data System (ADS)
Bo, Peng; Gao, Chao; Guo, Yongcai; Chen, Fang
2018-01-01
In order to reduce environmental pollution, China has implemented new ultra-low emission regulations for polluting gases, requiring new coal-fired power plants to emit less than 30 ppm SO2, 75 ppm NO, and 50 ppm NO2. In monitoring low concentrations of mixed NO and SO2 gases, DOAS technology faces new challenges: SO2 absorption weakens significantly at the original absorption peak and the SNR is very low, so the characteristic signal is difficult to extract and the SO2 concentration cannot be obtained. Nor can the NO signal be separated from the mixed-gas spectrum at wavelengths of 200-230 nm through the law of spectral superposition, so the NO concentration cannot be calculated. Classical DOAS technology therefore cannot meet these monitoring needs. In this paper, we identify another absorption band of SO2 whose SNR is 10 times higher than before and which is not affected by NO, allowing the SO2 concentration to be calculated accurately. A new method based on segmentation and separation of the spectral signals is proposed, which achieves accurate monitoring of the low-concentration mixed gas, a function that classical DOAS cannot provide. The detection limit of the method is 0.1 ppm per meter, better than before. The relative error is below 5% for concentrations between 0 and 5 ppm; for NO concentrations between 6 and 75 ppm and SO2 between 6 and 30 ppm, the relative error is below 1.5%, a substantial advance in monitoring low concentrations of NO and SO2. This work has scientific significance and reference value for the development of coal-fired power plant emission control, atmospheric environmental monitoring, and high-precision on-line instrumentation.
Mello, Vinicius M; Oliveira, Flavia C C; Fraga, William G; do Nascimento, Claudia J; Suarez, Paulo A Z
2008-11-01
Three different calibration curves based on (1)H-NMR spectroscopy (300 MHz) were used for quantifying the reaction yield during biodiesel synthesis by esterification of fatty acid mixtures with methanol. For this purpose, the integrated intensities of the hydrogens of the ester methoxy group (3.67 ppm) were correlated with the areas of the various protons of the alkyl chain (olefinic hydrogens: 5.30-5.46 ppm; aliphatic: 2.67-2.78 ppm, 2.30 ppm, 1.96-2.12 ppm, 1.56-1.68 ppm, 1.22-1.42 ppm, 0.98 ppm, and 0.84-0.92 ppm). The first curve was obtained using the peaks of the olefinic hydrogens, the second using the paraffinic protons, and the third using the integrated intensities of all the hydrogens. A total of 35 samples were examined: 25 samples to build the three calibration curves and ten samples to serve as external validation samples. The results showed no statistical differences among the three methods, and all presented prediction errors less than 2.45% with a coefficient of variation (CV) of 4.66%. 2008 John Wiley & Sons, Ltd.
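The calibration approach reduces to a linear fit between known yields and an integral ratio; a sketch using the methoxy (3.67 ppm) and olefinic (5.30-5.46 ppm) integrals with made-up areas (illustrative values, not the paper's data):

```python
import numpy as np

# known esterification yields (%) and corresponding integral ratios
# A(OCH3, 3.67 ppm) / A(olefinic, 5.30-5.46 ppm) -- illustrative numbers
yields = np.array([10, 25, 50, 75, 90, 100], dtype=float)
ratio = np.array([0.21, 0.52, 1.05, 1.57, 1.88, 2.10])

slope, intercept = np.polyfit(ratio, yields, 1)   # the calibration curve

def predict_yield(r):
    """Yield (%) predicted from a sample's methoxy/olefinic ratio."""
    return slope * r + intercept

print(f"{predict_yield(1.30):.1f} %")   # yield for an unseen sample
```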
NASA Astrophysics Data System (ADS)
Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel
2017-04-01
The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors with parameters that can be estimated from the OCO-2 data and possibly in situ sources from flasks, towers, and the Total Column Carbon Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach is different from others, as it simultaneously makes inference (and quantifies uncertainty) on both the error components' parameters and the CO2 fluxes. We compare it to more classical approaches through an Observing System Simulation Experiment (OSSE) on a global scale. By changing the size of the random and systematic errors in the OSSE, we can determine the corresponding spatial and temporal resolutions at which useful flux signals could be detected from the OCO-2 data.
NASA Astrophysics Data System (ADS)
Viswanath, Anjitha; Kumar Jain, Virander; Kar, Subrat
2017-12-01
We investigate the error performance of an earth-to-satellite free space optical uplink using transmitter spatial diversity in the presence of turbulence and weather conditions, modeled by the gamma-gamma distribution and the Beer-Lambert law, respectively, for on-off keying (OOK), M-ary pulse position modulation (M-PPM) and M-ary differential PPM (M-DPPM) schemes. Weather conditions such as moderate, light and thin fog cause additional degradation, while dense or thick fog and clouds may lead to link failure. The bit error rate reduces with an increase in the number of transmitters for all the schemes. However, beyond a certain number of transmitters, the reduction becomes marginal. Diversity gain remains almost constant across weather conditions but increases with ground-level turbulence or zenith angle. Further, the number of transmitters required to improve the performance to a desired level is smaller for the M-PPM scheme than for M-DPPM and OOK.
Phase locked loop synchronization for direct detection optical PPM communication systems
NASA Technical Reports Server (NTRS)
Chen, C. C.; Gardner, C. S.
1985-01-01
Receiver timing synchronization of an optical pulse position modulation (PPM) communication system can be achieved using a phase locked loop (PLL) if the photodetector output is properly processed. The synchronization performance is shown to improve with increasing signal power and decreasing loop bandwidth. Bit error rate (BER) of the PLL synchronized PPM system is analyzed and compared to that for the perfectly synchronized system. It is shown that the increase in signal power needed to compensate for the imperfect synchronization is small (less than 0.1 dB) for loop bandwidths less than 0.1% of the slot frequency.
Parkinson, Charles R; Siddiqi, Muhammad; Mason, Stephen; Lippert, Frank; Hara, Anderson T; Zero, Domenick T
2017-01-01
Calcium sodium phosphosilicate (CSPS) is a bioactive glass material that alleviates dentin hypersensitivity and is postulated to confer remineralization of caries lesions. This single-centre, randomized, single (investigator)-blind, placebo-controlled, crossover, in situ study explored whether the addition of 5% CSPS to a nonaqueous fluoride (F) such as sodium monofluorophosphate (SMFP)-containing dentifrice affects its cariostatic ability. Seventy-seven subjects wore 4 gauze-covered enamel specimens with preformed lesions (2 surface-softened and 2 subsurface) placed buccally on their mandibular bilateral dentures for up to 4 weeks. Subjects brushed twice daily with 1 of the 5 study dentifrices: 927 ppm F/5% CSPS, 927 ppm F/0% CSPS, 250 ppm F/0% CSPS, 0 ppm F/5% CSPS, or 0 ppm F/0% CSPS. Specimens were retrieved after either 21 (surface-softened lesions; analyzed by Knoop surface microhardness [SMH]) or 28 days (subsurface lesions; analyzed by transverse microradiography). The enamel fluoride uptake was determined for all specimens using a microbiopsy technique. The concentrations of fluoride and calcium in gauze-retrieved plaque were also evaluated. Higher dentifrice fluoride concentrations led to greater remineralization and fluoridation of both lesion types and increased plaque fluoride concentrations. CSPS did not improve the cariostatic properties of SMFP; there were no statistically significant differences between 927 ppm F/5% CSPS and 927 ppm F/0% CSPS in percent SMH recovery (p = 0.6788), change in integrated mineral loss (p = 0.5908), or lesion depth (p = 0.6622). Likewise, 0 ppm F/5% CSPS did not provide any benefits in comparison to 0 ppm F/0% CSPS. In conclusion, CSPS does not negatively impact nor does it improve the ability of an SMFP dentifrice to affect remineralization of caries lesions. © 2017 S. Karger AG, Basel.
Trace impurities analysis of aluminum nanopowder and its air combustion product
NASA Astrophysics Data System (ADS)
Kabanov, Denis V.; Merkulov, Viktor G.; Mostovshchikov, Andrey V.; Ilyin, Alexander P.
2018-03-01
Neutron activation analysis (NAA) allows the estimation of trace concentrations of chemical elements, analyzing tens of elements in a single measurement. In this paper we have used NAA to examine metal impurities in electroexplosive aluminum nanopowder (ANP) and its air-combustion products, produced by burning in crucibles in electric and magnetic fields and without applied fields. It was found that the impurity content is reduced in the air-combustion products. The presence of impurities in the ANP is associated with the electric explosion technology (erosion of electrode and chamber materials) and with the previous production of various nanopowders in the same electric explosive device. NAA is characterized by high sensitivity, good reproducibility of element contents, and low measurement error. According to the obtained results, the NAA measurement error does not exceed 10% over a wide concentration range, from 0.01 to 2100 ppm. Moreover, the method showed high reproducibility for macro-elements such as Ca (>1000 ppm) and Fe (>2000 ppm) and for micro-elements such as Sm, U, Ce, Sb, and Th (<0.9 ppm). It is recommended to use a dedicated unit for the electric-explosion production of pure metal powders, which is feasible in the mass production of nanopowders.
NASA Technical Reports Server (NTRS)
Spence, Rodney L.
1993-01-01
The important principles of direct- and heterodyne-detection optical free-space communications are reviewed. Signal-to-noise-ratio (SNR) and bit-error-rate (BER) expressions are derived for both the direct-detection and heterodyne-detection optical receivers. For the heterodyne system, performance degradation resulting from received-signal and local-oscillator beam misalignment and laser phase noise is analyzed. Determination of interfering background power from local and extended background sources is discussed. The BER performance of direct- and heterodyne-detection optical links in the presence of Rayleigh-distributed random pointing and tracking errors is described. Finally, several optical systems employing Nd:YAG, GaAs, and CO2 laser sources are evaluated and compared to assess their feasibility in providing high-data-rate (10- to 1000-Mbps) Mars-to-Earth communications. It is shown that the root mean square (rms) pointing and tracking accuracy is a critical factor in defining the system transmitting laser-power requirements and telescope size and that, for a given rms error, there is an optimum telescope aperture size that minimizes the required power. The results of the analysis indicate that, barring the achievement of extremely small rms pointing and tracking errors (less than 0.2 microrad), the two most promising types of optical systems are those that use an Nd:YAG laser (lambda = 1.064 microns) with high-order pulse-position modulation (PPM) and direct detection, and those that use a CO2 laser (lambda = 10.6 microns) with phase-shift keying homodyne modulation and coherent detection. For example, for a PPM order of M = 64 and an rms pointing accuracy of 0.4 microrad, an Nd:YAG system can be used to implement a 100-Mbps Mars link with a 40-cm transmitting telescope, a 20-W laser, and a 10-m receiving photon bucket. Under the same conditions, a CO2 system would require 3-m transmitting and receiving telescopes and a 32-W laser to implement such a link. Other types of optical systems, such as semiconductor laser systems, are impractical in the presence of large rms pointing errors because of the high power requirements of the 100-Mbps Mars link, even when optimal-size telescopes are used.
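The optimum-aperture claim can be reproduced with a common simplified pointing-loss model (an assumption here, not the paper's exact analysis): with far-field gain G = (πD/λ)² and a requirement that the link close even at a pointing offset of k·σ, the required power scales as exp(G·(kσ)²)/G, which is minimized at G* = 1/(kσ)².

```python
from math import pi, sqrt

lam = 1.064e-6    # m, Nd:YAG wavelength
sigma = 0.4e-6    # rad, rms pointing error (example value from the text)
k = 2.0           # assumed margin: link must close at a k-sigma offset

G_opt = 1.0 / (k * sigma) ** 2        # minimizes exp(G*(k*sigma)**2) / G
D_opt = lam * sqrt(G_opt) / pi        # invert G = (pi * D / lam)**2
print(f"optimum telescope aperture ~ {D_opt * 100:.0f} cm")   # ~42 cm
```

With these assumed numbers the optimum lands near the 40-cm transmitting telescope quoted in the abstract for the 0.4-microrad case.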
Sanchez-Rodriguez, Estefania; Lima-Cabello, Elena; Biel-Glesson, Sara; Fernandez-Navarro, Jose R.; Calleja, Miguel A.; Roca, Maria; Espejo-Calvo, Juan A.; Gil-Extremera, Blas; de la Torre, Rafael; Fito, Montserrat; Covas, Maria-Isabel; Alche, Juan de Dios; Martinez de Victoria, Emilio; Mesa, Maria D.
2018-01-01
The aim of this study was to evaluate the effect of virgin olive oils (VOOs) enriched with phenolic compounds and triterpenes on metabolic syndrome and endothelial function biomarkers in healthy adults. The trial was a three-week randomized, crossover, controlled, double-blind, intervention study involving 58 subjects supplemented with a daily dose (30 mL) of three oils: (1) a VOO (124 ppm of phenolic compounds and 86 ppm of triterpenes); (2) an optimized VOO (OVOO) (490 ppm of phenolic compounds and 86 ppm of triterpenes); and (3) a functional olive oil (FOO) high in phenolic compounds (487 ppm) and enriched with triterpenes (389 ppm). Metabolic syndrome and endothelial function biomarkers were determined in vivo and ex vivo. Plasma high density lipoprotein cholesterol (HDLc) increased after the OVOO intake. Plasma endothelin-1 levels decreased after the intake of the three olive oils, and in challenged blood cell cultures. Daily intake of VOO enriched in phenolic compounds improved plasma HDLc, although no differences were found at the end of the three interventions, while VOO with at least 124 ppm of phenolic compounds, regardless of the triterpene content, improved systemic endothelin-1 levels in vivo and ex vivo. No effect of triterpenes was observed after three weeks of intervention. Results need to be confirmed in subjects with metabolic syndrome and impaired endothelial function (Clinical Trials number NCT02520739). PMID:29772657
Sakcali, M Serdal; Kekec, Guzin; Uzonur, Irem; Alpsoy, Lokman; Tombuloglu, Huseyin
2015-08-01
This study was carried out to investigate the genotoxic effect of boron (B) on maize using the randomly amplified polymorphic DNA (RAPD) method. The experiment was conducted under 0, 5, 10, 25, 50, 100, 125, and 150 ppm B exposures, and physiological measurements revealed a sharp decrease in root growth rates, from 28% at 25 ppm to 85% at 150 ppm. RAPD-polymerase chain reaction (PCR) analysis shows that DNA alterations are clearly observed from the lowest exposures up to 100 ppm. B-induced inhibition of root growth had a positive correlation with DNA alterations. Total soluble protein, root and stem lengths, and B content analysis in roots and leaves support these results. These preliminary findings reveal that B causes chromosomal aberrations and genotoxic effects in maize. Moreover, the RAPD-PCR technique is a suitable biomarker for detecting the genotoxic effect of B on maize and other crops in the future. © The Author(s) 2013.
SPECTROPHOTOMETRIC DETERMINATION OF TRACES OF BORON IN THORIUM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Onishi, H.; Ishiwatari, N.; Nagai, H.
1960-12-01
A procedure is described for the spectrophotometric determination of a few tenths of a part per million of boron in thorium oxide or thorium. The sample is dissolved in strong phosphoric acid. After diluting the solution with water, boron is separated by distillation as methyl borate and finally determined by the curcumin method. The error is not likely to exceed plus or minus 0.1 ppm for 0.2 to 1 ppm of boron. (auth)
Lower-tropospheric CO 2 from near-infrared ACOS-GOSAT observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulawik, Susan S.; O'Dell, Chris; Payne, Vivienne H.
We present two new products from near-infrared Greenhouse Gases Observing Satellite (GOSAT) observations: lowermost tropospheric (LMT, from 0 to 2.5 km) and upper tropospheric–stratospheric (U, above 2.5 km) carbon dioxide partial column mixing ratios. We compare these new products to aircraft profiles and remote surface flask measurements and find that the seasonal and year-to-year variations in the new partial column mixing ratios significantly improve upon the Atmospheric CO2 Observations from Space (ACOS) and GOSAT (ACOS-GOSAT) initial guess and/or a priori, with distinct patterns in the LMT and U seasonal cycles that match validation data. For land monthly averages, we find errors of 1.9, 0.7, and 0.8 ppm for retrieved GOSAT LMT, U, and XCO2; for ocean monthly averages, we find errors of 0.7, 0.5, and 0.5 ppm for retrieved GOSAT LMT, U, and XCO2. In the southern hemispheric biomass burning season, the new partial columns show similar patterns to MODIS fire maps and MOPITT multispectral CO for both vertical levels, despite a flat ACOS-GOSAT prior, and a CO–CO2 emission factor comparable to published values. The difference of LMT and U, useful for evaluation of model transport error, has also been validated with a monthly average error of 0.8 (1.4) ppm for ocean (land). LMT is more locally influenced than U, meaning that local fluxes can now be better separated from CO2 transported from far away.
Lower-tropospheric CO 2 from near-infrared ACOS-GOSAT observations
Kulawik, Susan S.; O'Dell, Chris; Payne, Vivienne H.; ...
2017-04-27
We present two new products from near-infrared Greenhouse Gases Observing Satellite (GOSAT) observations: lowermost tropospheric (LMT, from 0 to 2.5 km) and upper tropospheric–stratospheric (U, above 2.5 km) carbon dioxide partial column mixing ratios. We compare these new products to aircraft profiles and remote surface flask measurements and find that the seasonal and year-to-year variations in the new partial column mixing ratios significantly improve upon the Atmospheric CO2 Observations from Space (ACOS) and GOSAT (ACOS-GOSAT) initial guess and/or a priori, with distinct patterns in the LMT and U seasonal cycles that match validation data. For land monthly averages, we find errors of 1.9, 0.7, and 0.8 ppm for retrieved GOSAT LMT, U, and XCO2; for ocean monthly averages, we find errors of 0.7, 0.5, and 0.5 ppm for retrieved GOSAT LMT, U, and XCO2. In the southern hemispheric biomass burning season, the new partial columns show similar patterns to MODIS fire maps and MOPITT multispectral CO for both vertical levels, despite a flat ACOS-GOSAT prior, and a CO–CO2 emission factor comparable to published values. The difference of LMT and U, useful for evaluation of model transport error, has also been validated with a monthly average error of 0.8 (1.4) ppm for ocean (land). LMT is more locally influenced than U, meaning that local fluxes can now be better separated from CO2 transported from far away.
NASA Technical Reports Server (NTRS)
Sun, Xiaoli; Davidson, Frederic M.
1990-01-01
A newly developed 220 Mbps free-space 4-ary pulse position modulation (PPM) direct detection optical communication system is described. High speed GaAs integrated circuits were used to construct the PPM encoder and receiver electronic circuits. Both PPM slot and word timing recovery were provided in the PPM receiver. The optical transmitter consisted of an AlGaAs laser diode (Mitsubishi ML5702A, lambda=821nm) and a high speed driver unit. The photodetector consisted of a silicon avalanche photodiode (APD) (RCA30902S) preceded by an optical interference filter (delta lambda=10nm). Preliminary tests showed that the self-synchronized PPM receiver could achieve a receiver bit error rate of less than 10(exp -6) at 25 nW average received optical signal power or 360 photons per transmitted information bit. The relatively poor receiver sensitivity was believed to be caused by the insufficient electronic bandwidth of the APD preamplifier and the poor linearity of the preamplifier high frequency response.
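As a rough check on figures like these, the photon budget follows directly from the photon energy at the laser wavelength. A minimal sketch, assuming the detected count is simply the incident photon rate scaled by an APD quantum efficiency `eta` (an assumed free parameter, not a value from the report):

```python
# Sketch: photons per information bit from average received optical power.
# `eta` is an assumed quantum-efficiency parameter, not a reported value.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

def photons_per_bit(power_w, wavelength_m, bit_rate_bps, eta=1.0):
    """Average detected photons per information bit."""
    photon_energy = h * c / wavelength_m          # J per photon
    photon_rate = eta * power_w / photon_energy   # detected photons/s
    return photon_rate / bit_rate_bps

# 25 nW at 821 nm and a 220 Mbps information rate.
print(photons_per_bit(25e-9, 821e-9, 220e6, eta=0.8))
```

With eta near 0.8 this lands in the few-hundred-photons-per-bit range quoted above; the exact figure depends on how detected versus received photons are counted.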
Mason, Stephen; Karwal, Ritu; Bosma, Mary Lynn
2017-09-01
This study evaluated and compared plaque removal efficacy of commercially available dentifrices containing sodium bicarbonate (NaHCO3) to those without NaHCO3 in a single timed brushing clinical study model. Two randomized, examiner-blind, three-period, three-treatment, crossover studies were performed in adults with a mean Turesky modification of the Quigley-Hein Plaque Index (TPI) score of ≥ 2.00. In Study 1, 60 subjects were randomized to commercially available dentifrices containing: (i) 67% NaHCO3 plus 1425 ppm fluoride (F) as sodium fluoride (NaF); (ii) 45% NaHCO3 plus 1425 ppm F as NaF; or (iii) 0% NaHCO3 plus silica and 1450 ppm F as NaF. In Study 2, 55 subjects were randomized to commercially available dentifrices containing: (i) 67% NaHCO3 plus 1425 ppm F as NaF; (ii) 0% NaHCO3 plus silica and 1400 ppm F as amine F/stannous F; or (iii) 0% NaHCO3 plus chlorhexidine/aluminum lactate and silica with 1360 ppm F as aluminum F. In both studies, subjects brushed their teeth for one timed minute under supervised conditions. Plaque was assessed pre- and post-brushing according to a six-site modification of the TPI. Mean TPI score was analyzed using an analysis of covariance model with treatment and study period as fixed effects, subject as a random variable, and pre-brushing score as a covariate. In both studies, mean TPI score decreased in all groups post-brushing compared with pre-brushing. In Study 1, statistically significant improvements in mean TPI score were reported with the 67% and 45% NaHCO3 dentifrices compared with the 0% NaHCO3 dentifrice (p = 0.0003 and p = 0.0005, respectively). In Study 2, improvements in mean TPI score were statistically significantly greater with the 67% NaHCO3 dentifrice compared with both 0% NaHCO3 dentifrices (p < 0.0001 for both comparisons). All dentifrices were generally well tolerated. A single timed brushing with commercially available dentifrices containing 67% or 45% NaHCO3 exerted a significantly greater effect on plaque removal than commercially available dentifrices without NaHCO3.
Bell, Steven E J; Sirimuthu, Narayana M S
2004-11-01
Rapid, quantitative SERS analysis of nicotine at ppm/ppb levels has been carried out using stable and inexpensive polymer-encapsulated Ag nanoparticles (gel-colls). The strongest nicotine band (1030 cm(-1)) was measured against a d(5)-pyridine internal standard (974 cm(-1)) which was introduced during preparation of the stock gel-colls. Calibration plots of I(nic)/I(pyr) against the concentration of nicotine were non-linear, but plotting I(nic)/I(pyr) against [nicotine](x) (x = 0.6-0.75, depending on the exact experimental conditions) gave linear calibrations over the range 0.1-10 ppm with R(2) typically ca. 0.998. The RMS prediction error was found to be 0.10 ppm when the gel-colls were used for quantitative determination of unknown nicotine samples at the 1-5 ppm level. The main advantages of the method are that the gel-colls constitute a highly stable and reproducible SERS medium that allows high-throughput (50 samples h(-1)) measurements.
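The exponent scan behind that power-law calibration can be reproduced with a few lines of fitting code. A sketch with synthetic stand-in data (the concentrations, intensity ratios, exponent, and noise level below are placeholders, not the paper's measurements):

```python
import numpy as np

# Power-law calibration sketch: the intensity ratio I_nic/I_pyr is assumed
# linear in [nicotine]**x for some exponent x; we pick x by best linearity.
rng = np.random.default_rng(0)
conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])          # ppm (placeholders)
ratio = 0.8 * conc**0.7 + rng.normal(0, 0.02, conc.size)  # simulated I_nic/I_pyr

def r_squared(x_exp):
    """R^2 of a straight-line fit of ratio against conc**x_exp."""
    z = conc**x_exp
    slope, intercept = np.polyfit(z, ratio, 1)
    resid = ratio - (slope * z + intercept)
    return 1 - resid.var() / ratio.var()

# Scan candidate exponents and keep the most linear calibration.
exponents = np.arange(0.5, 0.9, 0.01)
best = max(exponents, key=r_squared)
print(f"best exponent ~ {best:.2f}, R^2 = {r_squared(best):.4f}")
```

Once the exponent is fixed, unknowns are read off the straight-line fit exactly as in an ordinary linear calibration.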
On the timing problem in optical PPM communications.
NASA Technical Reports Server (NTRS)
Gagliardi, R. M.
1971-01-01
Investigation of the effects of imperfect timing in a direct-detection (noncoherent) optical system using pulse-position-modulation bits. Special emphasis is placed on the specification of timing accuracy and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors, from which average error probabilities can be computed for specific synchronization methods. Of particular importance is the presence of a residual, or irreducible, error probability, due entirely to the timing system, that cannot be overcome by the data channel.
Qin, Heng; Zuo, Yong; Zhang, Dong; Li, Yinghui; Wu, Jian
2017-03-06
Through a slight modification of typical photomultiplier tube (PMT) receiver output statistics, a generalized received-response model considering both scattered propagation and random detection is presented to investigate the impact of inter-symbol interference (ISI) on the link data rate of short-range non-line-of-sight (NLOS) ultraviolet communication. Numerical simulation shows good agreement with experimental results. Based on the received-response characteristics, a heuristic check-matrix construction algorithm for low-density parity-check (LDPC) codes is further proposed to approach the data rate bound derived for a delayed-sampling (DS) binary pulse position modulation (PPM) system. Compared to conventional LDPC coding methods, a better bit error ratio (BER), below 1E-05, is achieved for short-range NLOS UVC systems operating at a data rate of 2 Mbps.
Spectral purity study for IPDA lidar measurement of CO2
NASA Astrophysics Data System (ADS)
Ma, Hui; Liu, Dong; Xie, Chen-Bo; Tan, Min; Deng, Qian; Xu, Ji-Wei; Tian, Xiao-Min; Wang, Zhen-Zhu; Wang, Bang-Xin; Wang, Ying-Jian
2018-02-01
High-sensitivity, globally covering observation of carbon dioxide (CO2) is expected from space-borne integrated path differential absorption (IPDA) lidar, which has been designed as a next-generation measurement. Stringent precision of space-borne CO2 data, for example 1 ppm or better, is required to address the largest number of carbon cycle science questions. Spectral purity, defined as the ratio of effectively absorbed energy to the total energy transmitted, is one of the most important system parameters of IPDA lidar and directly influences the precision of CO2. Because the column-averaged dry-air mixing ratio of CO2 is inferred from a comparison of the two echo pulse signals, a laser output accompanied by unexpected, spectrally broadband background radiation would pose a significant systematic error. In this study, the spectral energy density line shape and the spectral impurity line shape are both modeled as Lorentzians for the simulation, and the latter is assumed to be unabsorbed by CO2. An error equation is deduced from IPDA detection theory to calculate the systematic error caused by spectral impurity. For a spectral purity of 99%, the induced error can reach up to 8.97 ppm.
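The size of this effect can be sketched with a toy version of such an error equation. The assumptions here are mine, not the paper's model: the impure fraction of the on-line pulse is entirely unabsorbed, the off-line pulse is unaffected, and XCO2 scales linearly with the retrieved differential absorption optical depth (DAOD):

```python
import numpy as np

# Toy spectral-impurity bias: a fraction (1 - purity) of the on-line pulse
# energy is broadband and passes through unabsorbed, diluting the return.
def xco2_bias(purity, daod_true, xco2_true=400.0):
    # Measured on-line transmission: pure part absorbed, impure part not.
    t_on_meas = purity * np.exp(-daod_true) + (1.0 - purity)
    daod_meas = -np.log(t_on_meas)          # off-line assumed unaffected
    # Assume XCO2 scales linearly with the retrieved DAOD.
    return xco2_true * (daod_meas / daod_true - 1.0)

print(xco2_bias(purity=0.99, daod_true=1.0))   # bias in ppm, several ppm low
```

For 99% purity and an on-line DAOD near 1, this toy model gives a bias of several ppm, the same order as the 8.97 ppm quoted; the exact value depends on the assumed line shapes and the two-way geometry.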
Optical Phase Recovery and Locking in a PPM Laser Communication Link
NASA Technical Reports Server (NTRS)
Aveline, David C.; Yu, Nan; Farr, William H.
2012-01-01
Free-space optical communication holds great promise for future space missions requiring high data rates. For data communication in deep space, the current architecture employs pulse position modulation (PPM). In this scheme, the light is transmitted and detected as pulses within an array of time slots. While the PPM method is efficient for data transmission, the phase of the laser light is not utilized. The phase coherence of a PPM optical signal has been investigated with the goal of developing a new laser communication and ranging scheme that utilizes optical coherence within the established PPM architecture and photon-counting detection (PCD). Experimental measurements of a PPM modulated optical signal were conducted, and modeling code was developed to generate random PPM signals and simulate spectra via FFT (Fast Fourier Transform) analysis. The experimental results show very good agreement with the simulations and confirm that coherence is preserved despite modulation with high extinction ratios and very low duty cycles. A real-time technique has been developed to recover the phase information through the mixing of a PPM signal with a frequency-shifted local oscillator (LO). This mixed signal is amplified, filtered, and integrated to generate a voltage proportional to the phase of the modulated signal. By choosing an appropriate time constant for integration, one can maintain a phase lock despite long dark times between consecutive pulses with low duty cycle. A proof-of-principle demonstration was first achieved with an RF-based PPM signal and test setup. With the same principle method, an optical carrier within a PPM modulated laser beam could also be tracked and recovered. A reference laser was phase-locked to an independent pulsed laser signal with low-duty-cycle pseudo-random PPM codes. In this way, the drifting carrier frequency in the primary laser source is tracked via its phase change in the mixed beat note, while the corresponding voltage feedback maintains the phase lock between the two laser sources. The novelty and key significance of this work is that the carrier phase information can be harnessed within an optical communication link based on PPM-PCD architecture. This technology development could lead to quantum-limited efficient performance within the communication link itself, as well as enable high-resolution optical tracking capabilities for planetary science and spacecraft navigation.
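The FFT modeling step described above is easy to mimic. A minimal sketch under illustrative assumptions (rectangular pulses, perfect extinction, 16-ary PPM at 1/16 duty cycle; none of these parameters are from the experiment):

```python
import numpy as np

# Generate a random PPM on/off envelope, modulate a carrier, and check via
# FFT that a discrete carrier line survives despite the low duty cycle.
fs = 1.0e6            # sample rate (arbitrary units)
f_carrier = 1.0e5     # carrier frequency
n_words, slots, samples_per_slot = 400, 16, 8   # 16-ary PPM, 1/16 duty cycle

rng = np.random.default_rng(1)
envelope = np.zeros(n_words * slots * samples_per_slot)
for w in range(n_words):
    slot = rng.integers(slots)                      # random PPM symbol
    start = (w * slots + slot) * samples_per_slot
    envelope[start:start + samples_per_slot] = 1.0  # pulse in the chosen slot

t = np.arange(envelope.size) / fs
signal = envelope * np.cos(2 * np.pi * f_carrier * t)
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
print(f"spectral peak at {freqs[spectrum.argmax()]:.0f} Hz")  # ~ f_carrier
```

The discrete line that survives at the carrier frequency is exactly the coherence that the phase-recovery and locking scheme exploits.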
Effect of Raw Milk on Lactose Intolerance: A Randomized Controlled Pilot Study
Mummah, Sarah; Oelrich, Beibei; Hope, Jessica; Vu, Quyen; Gardner, Christopher D.
2014-01-01
PURPOSE This pilot study aimed to determine whether raw milk reduces lactose malabsorption and/or lactose intolerance symptoms relative to pasteurized milk. METHODS We performed a crossover trial involving 16 adults with self-reported lactose intolerance and lactose malabsorption confirmed by hydrogen (H2) breath testing. Participants underwent three 8-day milk phases (raw vs 2 controls: pasteurized, soy) in randomized order separated by 1-week washout periods. On days 1 and 8 of each phase, milk consumption was 473 mL (16 oz); on days 2 to 7, milk dosage increased daily by 118 mL (4 oz), beginning with 118 mL (4 oz) on day 2 and reaching 710 mL (24 oz) on day 7. Outcomes were area under the breath H2 curve (AUC ∆H2) and self-reported symptom severity (visual analog scales: flatulence/gas, audible bowel sounds, abdominal cramping, diarrhea). RESULTS AUC ∆H2 (mean ± standard error of the mean) was higher for raw vs pasteurized on day 1 (113 ± 21 vs 71 ± 12 ppm·min·10−2, respectively, P = .01) but not day 8 (72 ± 14 vs 74 ± 15 ppm·min·10−2, respectively, P = .9). Symptom severities were not different for raw vs pasteurized on day 7 with the highest dosage (P >.7). AUC ∆H2 and symptom severities were higher for both dairy milks compared with soy milk. CONCLUSIONS Raw milk failed to reduce lactose malabsorption or lactose intolerance symptoms compared with pasteurized milk among adults positive for lactose malabsorption. These results do not support widespread anecdotal claims that raw milk reduces the symptoms of lactose intolerance. PMID:24615309
Effect of raw milk on lactose intolerance: a randomized controlled pilot study.
Mummah, Sarah; Oelrich, Beibei; Hope, Jessica; Vu, Quyen; Gardner, Christopher D
2014-01-01
This pilot study aimed to determine whether raw milk reduces lactose malabsorption and/or lactose intolerance symptoms relative to pasteurized milk. We performed a crossover trial involving 16 adults with self-reported lactose intolerance and lactose malabsorption confirmed by hydrogen (H2) breath testing. Participants underwent three 8-day milk phases (raw vs 2 controls: pasteurized, soy) in randomized order separated by 1-week washout periods. On days 1 and 8 of each phase, milk consumption was 473 mL (16 oz); on days 2 to 7, milk dosage increased daily by 118 mL (4 oz), beginning with 118 mL (4 oz) on day 2 and reaching 710 mL (24 oz) on day 7. Outcomes were area under the breath H2 curve (AUC ΔH2) and self-reported symptom severity (visual analog scales: flatulence/gas, audible bowel sounds, abdominal cramping, diarrhea). AUC ΔH2 (mean ± standard error of the mean) was higher for raw vs pasteurized on day 1 (113 ± 21 vs 71 ± 12 ppm·min·10(-2), respectively, P = .01) but not day 8 (72 ± 14 vs 74 ± 15 ppm·min·10(-2), respectively, P = .9). Symptom severities were not different for raw vs pasteurized on day 7 with the highest dosage (P >.7). AUC ΔH2 and symptom severities were higher for both dairy milks compared with soy milk. Raw milk failed to reduce lactose malabsorption or lactose intolerance symptoms compared with pasteurized milk among adults positive for lactose malabsorption. These results do not support widespread anecdotal claims that raw milk reduces the symptoms of lactose intolerance.
Optically powered oil tank multichannel detection system with optical fiber link
NASA Astrophysics Data System (ADS)
Yu, Zhijing
1998-08-01
A novel integrative parameter measuring system for oil tanks with optical powering is presented. To realize optically powered, micro-power-consumption, multi-channel, multi-parameter detection, the system adopts PWM/PPM modulation, ratio measurement, time-division multiplexing, and pulse-width-division multiplexing techniques. Moreover, the system uses a special pulse-width discriminator and a single-chip microcomputer to accomplish signal pulse separation, PPM/PWM signal demodulation, error correction of overlapping pulses, and data processing. The new transducer provides high performance: the experimental transmission distance is 500 m; total consumption of the probes is less than 150 μW; measurement errors are ±0.5 °C and ±0.2% FS. The measurement accuracy of the liquid level and reserves is mainly determined by the pressure accuracy. Finally, some points of the experiment are given.
A Regional CO2 Observing System Simulation Experiment for the ASCENDS Satellite Mission
NASA Technical Reports Server (NTRS)
Wang, J. S.; Kawa, S. R.; Eluszkiewicz, J.; Baker, D. F.; Mountain, M.; Henderson, J.; Nehrkorn, T.; Zaccheo, T. S.
2014-01-01
Top-down estimates of the spatiotemporal variations in emissions and uptake of CO2 will benefit from the increasing measurement density brought by recent and future additions to the suite of in situ and remote CO2 measurement platforms. In particular, the planned NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) satellite mission will provide greater coverage in cloudy regions, at high latitudes, and at night than passive satellite systems, as well as high precision and accuracy. In a novel approach to quantifying the ability of satellite column measurements to constrain CO2 fluxes, we use a portable library of footprints (surface influence functions) generated by the WRF-STILT Lagrangian transport model in a regional Bayesian synthesis inversion. The regional Lagrangian framework is well suited to make use of ASCENDS observations to constrain fluxes at high resolution, in this case at 1 degree latitude x 1 degree longitude and weekly for North America. We consider random measurement errors only, modeled as a function of mission and instrument design specifications along with realistic atmospheric and surface conditions. We find that the ASCENDS observations could potentially reduce flux uncertainties substantially at biome and finer scales. At the 1 degree x 1 degree, weekly scale, the largest uncertainty reductions, on the order of 50 percent, occur where and when there is good coverage by observations with low measurement errors and the a priori uncertainties are large. Uncertainty reductions are smaller for a 1.57 micron candidate wavelength than for a 2.05 micron wavelength, and are smaller for the higher of the two measurement error levels that we consider (1.0 ppm vs. 0.5 ppm clear-sky error at Railroad Valley, Nevada). Uncertainty reductions at the annual, biome scale range from 40 percent to 75 percent across our four instrument design cases, and from 65 percent to 85 percent for the continent as a whole. Our uncertainty reductions at various scales are substantially smaller than those from a global ASCENDS inversion on a coarser grid, demonstrating how quantitative results can depend on inversion methodology. The a posteriori flux uncertainties we obtain, ranging from 0.01 to 0.06 Pg C yr-1 across the biomes, would meet requirements for improved understanding of long-term carbon sinks suggested by a previous study.
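The uncertainty-reduction bookkeeping in a Bayesian synthesis inversion can be illustrated in a few lines. Everything below (the footprint matrix, grid size, and prior variance) is a synthetic stand-in for the WRF-STILT setup, with the 0.5 ppm figure reused as the uncorrelated measurement error:

```python
import numpy as np

# Toy Bayesian synthesis inversion: posterior flux covariance from
# footprints H, measurement-error covariance R, and prior covariance.
rng = np.random.default_rng(2)
n_obs, n_flux = 200, 20
H = rng.gamma(2.0, 0.05, (n_obs, n_flux))    # surface influence (ppm per flux unit)
R_inv = np.eye(n_obs) / 0.5**2               # 0.5 ppm clear-sky error, uncorrelated
P_prior = np.diag(np.full(n_flux, 1.0**2))   # assumed prior flux variance

P_post = np.linalg.inv(H.T @ R_inv @ H + np.linalg.inv(P_prior))
reduction = 1.0 - np.sqrt(np.diag(P_post)) / np.sqrt(np.diag(P_prior))
print(f"mean uncertainty reduction: {100 * reduction.mean():.0f}%")
```

The pattern in the abstract follows directly from this algebra: reductions are largest where the observation coverage (rows of H) is dense, the measurement error is small, and the prior variance is large.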
Ouchi, Kentaro; Sugiyama, Kazuna
2016-01-01
Dexmedetomidine (DEX) dose dependently enhances the local anesthetic action of lidocaine in rats. We hypothesized that the effect might also be dose dependent in humans. We evaluated the effect of various concentrations of DEX with a local anesthetic in humans. Eighteen healthy volunteers were randomly assigned by a computer to receive 1.8 mL of 1 of 4 drug combinations: (1) 1% lidocaine with 2.5 ppm (parts per million) (4.5 μg) DEX, (2) lidocaine with 5.0 ppm (9.0 μg) DEX, (3) lidocaine with 7.5 ppm (13.5 μg) DEX, or (4) lidocaine with 1:80,000 (22.5 μg) adrenaline (AD), to produce inferior alveolar nerve block. Pulp latency and lower lip numbness (for assessing onset and duration of anesthesia) were tested, and sedation level, blood pressure, and heart rate were recorded every 5 minutes for 20 minutes, and every 10 minutes from 20 to 60 minutes. Pulp latency of each tooth increased compared with baseline, from 5 to 15 minutes until 60 minutes. There were no significant intergroup differences at any time point. Anesthesia onset was not different between groups. Anesthesia duration was different between groups (that with DEX 7.5 ppm was significantly longer than that with DEX 2.5 ppm and AD; there was no difference between DEX 2.5 ppm and AD). Blood pressure decreased from baseline in the 5.0 and 7.5 ppm DEX groups at 30 to 60 minutes, although there was no hypotension; moreover, heart rate did not change in any group. Sedation score did not indicate deep sedation in any of the groups. Dexmedetomidine dose dependently enhances the local anesthetic action of lidocaine in humans. Dexmedetomidine at 2.5 ppm produces similar enhancement of local anesthesia effect as addition of 1:80,000 AD.
Refaat, Tamer F; Singh, Upendra N; Yu, Jirong; Petros, Mulugeta; Remus, Ruben; Ismail, Syed
2016-05-20
Field experiments were conducted to test and evaluate the initial atmospheric carbon dioxide (CO2) measurement capability of airborne, high-energy, double-pulsed, 2-μm integrated path differential absorption (IPDA) lidar. This IPDA was designed, integrated, and operated at the NASA Langley Research Center on-board the NASA B-200 aircraft. The IPDA was tuned to the CO2 strong absorption line at 2050.9670 nm, which is the optimum for lower tropospheric weighted column measurements. Flights were conducted over land and ocean under different conditions. The first validation experiments of the IPDA for atmospheric CO2 remote sensing, focusing on low surface reflectivity oceanic surface returns during full day background conditions, are presented. In these experiments, the IPDA measurements were validated by comparison to airborne flask air-sampling measurements conducted by the NOAA Earth System Research Laboratory. IPDA performance modeling was conducted to evaluate measurement sensitivity and bias errors. The IPDA signals and their variation with altitude compare well with predicted model results. In addition, off-off-line testing was conducted, with fixed instrument settings, to evaluate the IPDA systematic and random errors. Analysis shows an altitude-independent differential optical depth offset of 0.0769. Optical depth measurement uncertainty of 0.0918 compares well with the predicted value of 0.0761. IPDA CO2 column measurement compares well with model-driven, near-simultaneous air-sampling measurements from the NOAA aircraft at different altitudes. With a 10-s shot average, CO2 differential optical depth measurement of 1.0054±0.0103 was retrieved from a 6-km altitude and a 4-GHz on-line operation. As compared to CO2 weighted-average column dry-air volume mixing ratio of 404.08 ppm, derived from air sampling, IPDA measurement resulted in a value of 405.22±4.15 ppm with 1.02% uncertainty and 0.28% additional bias. Sensitivity analysis of environmental systematic errors correlates the additional bias to water vapor. IPDA ranging resulted in a measurement uncertainty of <3 m.
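A simplified version of the column retrieval helps fix ideas. This sketch assumes energy-normalized returns, neglects all other atmospheric terms, and back-solves the integrated weighting function (IWF) from the quoted DAOD and mixing ratio rather than computing it from spectroscopy:

```python
import numpy as np

# Simplified IPDA retrieval: DAOD from the ratio of energy-normalized
# off- and on-line returns, then XCO2 = DAOD / IWF (IWF assumed known).
def retrieve_xco2(p_on, p_off, e_on, e_off, iwf):
    """p_*: received return energies; e_*: transmitted energies; iwf: 1/ppm."""
    daod = np.log((p_off / e_off) / (p_on / e_on))  # CO2 differential optical depth
    return daod / iwf

# Illustrative values only: the IWF is scaled so a DAOD of ~1.005 maps to
# ~404 ppm, matching the orders of magnitude quoted above.
iwf = 1.0054 / 404.08
print(retrieve_xco2(p_on=np.exp(-1.0054), p_off=1.0, e_on=1.0, e_off=1.0, iwf=iwf))
```

The off-/off-line test in the abstract probes exactly the failure mode this sketch ignores: any offset in the energy normalization shows up directly as a DAOD bias.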
Timing performance of phase-locked loops in optical pulse position modulation communication systems
NASA Technical Reports Server (NTRS)
Lafaw, D. A.; Gardner, C. S.
1984-01-01
An optical digital communication system requires that an accurate clock signal be available at the receiver for proper synchronization with the transmitted signal. Phase synchronization is especially critical in M-ary pulse position modulation (PPM) systems where the optimum decision scheme is an energy detector which compares the energy in each of M time slots to decide which of M possible words was sent. Timing errors cause energy spillover into adjacent time slots (a form of intersymbol interference) so that only a portion of the signal energy may be attributed to the correct time slot. This effect decreases the effective signal, increases the effective noise, and increases the probability of error. A timing subsystem for a satellite-to-satellite optical PPM communication link is simulated. The receiver employs direct photodetection, preprocessing of the detected signal, and a phase-locked loop for timing synchronization. The variance of the relative phase error is examined under varying signal strength conditions as an indication of loop performance, and simulation results are compared to theoretical calculations.
Timing performance of phase-locked loops in optical pulse position modulation communication systems
NASA Astrophysics Data System (ADS)
Lafaw, D. A.; Gardner, C. S.
1984-08-01
An optical digital communication system requires that an accurate clock signal be available at the receiver for proper synchronization with the transmitted signal. Phase synchronization is especially critical in M-ary pulse position modulation (PPM) systems where the optimum decision scheme is an energy detector which compares the energy in each of M time slots to decide which of M possible words was sent. Timing errors cause energy spillover into adjacent time slots (a form of intersymbol interference) so that only a portion of the signal energy may be attributed to the correct time slot. This effect decreases the effective signal, increases the effective noise, and increases the probability of error. A timing subsystem for a satellite-to-satellite optical PPM communication link is simulated. The receiver employs direct photodetection, preprocessing of the detected signal, and a phase-locked loop for timing synchronization. The variance of the relative phase error is examined under varying signal strength conditions as an indication of loop performance, and simulation results are compared to theoretical calculations.
Optical communication with semiconductor laser diodes
NASA Technical Reports Server (NTRS)
Davidson, F.
1988-01-01
Slot timing recovery in a direct detection optical PPM communication system can be achieved by processing the photodetector waveform with a nonlinear device whose output forms the input to a phase-locked loop. The choice of a simple transition detector as the nonlinearity is shown to give satisfactory synchronization performance. The rms phase error of the recovered slot clock and the effect of slot timing jitter on the bit error probability were directly measured. The experimental system consisted of an AlGaAs laser diode (lambda = 834 nm) and a silicon avalanche photodiode (APD) photodetector and used Q=4 PPM signaling operated at a source data rate of 25 megabits/second. The mathematical model developed to characterize system performance is shown to be in good agreement with actual performance measurements. The use of the recovered slot clock in the receiver resulted in no degradation in receiver sensitivity compared to a system with perfect slot timing. The system achieved a bit error probability of 10 to the minus 6 power at received signal energies corresponding to an average of less than 60 detected photons per information bit.
Tamburini, Elena; Tagliati, Chiara; Bonato, Tiziano; Costa, Stefania; Scapoli, Chiara; Pedrini, Paola
2016-01-01
Near-infrared spectroscopy (NIRS) has been widely used for quantitative and/or qualitative determination of a wide range of matrices. The objective of this study was to develop a NIRS method for the quantitative determination of fluorine content in polylactide (PLA)-talc blends. A blending profile was obtained by mixing different amounts of PLA granules and talc powder. The calibration model was built correlating wet chemical data (alkali digestion method) and NIR spectra. Using the FT (Fourier Transform)-NIR technique, a Partial Least Squares (PLS) regression model was set up over a concentration interval from 0 ppm (pure PLA) to 800 ppm (pure talc). Fluorine content prediction (R2cal = 0.9498; standard error of calibration, SEC = 34.77; standard error of cross-validation, SECV = 46.94) was then externally validated by means of a further 15 independent samples (R2EX.V = 0.8955; root mean standard error of prediction, RMSEP = 61.08). A positive relationship between an inorganic component such as fluorine and the NIR signal has been evidenced, and used to obtain quantitative analytical information from the spectra. PMID:27490548
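The calibration workflow maps onto a standard PLS regression. A sketch with synthetic stand-in spectra (the dimensions, noise, and spectral basis are assumptions; the real model was built on measured FT-NIR spectra against wet-chemistry fluorine values):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# PLS calibration sketch on synthetic "spectra" that encode a reference
# fluorine value plus noise; the SECV is the cross-validated RMS error.
rng = np.random.default_rng(3)
n_samples, n_wavelengths = 60, 200
fluorine_ppm = rng.uniform(0, 800, n_samples)            # reference values
basis = rng.normal(size=n_wavelengths)                   # assumed spectral signature
spectra = (np.outer(fluorine_ppm, basis) / 800
           + rng.normal(0, 0.05, (n_samples, n_wavelengths)))

pls = PLSRegression(n_components=5)
pred_cv = cross_val_predict(pls, spectra, fluorine_ppm, cv=10).ravel()
secv = np.sqrt(np.mean((pred_cv - fluorine_ppm) ** 2))   # cross-validation error
print(f"SECV ~ {secv:.1f} ppm")
```

External validation then repeats the error computation on samples held out entirely from calibration, as with the 15 independent samples above.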
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulawik, S. S.; Worden, J. R.; Wofsy, S. C.
2012-01-01
Comparisons are made between mid-tropospheric Tropospheric Emission Spectrometer (TES) carbon dioxide (CO2) satellite measurements and ocean profiles from three HIAPER Pole-to-Pole Observations (HIPPO) campaigns and land aircraft profiles from the United States Southern Great Plains (SGP) Atmospheric Radiation Measurement (ARM) site over a 4-yr period. These comparisons are used to characterize the bias in the TES CO2 estimates and to assess whether calculated and actual uncertainties and sensitivities are consistent. The HIPPO dataset is one of the few datasets spanning the altitude range where TES CO2 estimates are sensitive, which is especially important for characterization of biases. We find that TES CO2 estimates capture the seasonal and latitudinal gradients observed by HIPPO CO2 measurements; actual errors range from 0.8–1.2 ppm, depending on the campaign, and are approximately 1.4 times larger than the predicted errors. The bias of TES versus HIPPO is within 0.85 ppm for each of the 3 campaigns; however, several of the sub-tropical TES CO2 estimates are lower than expected based on the calculated errors. Comparisons of aircraft flask profiles, which are measured from the surface to 5 km, to TES CO2 at the SGP ARM site show good agreement with an overall bias of 0.1 ppm and rms of 1.0 ppm. We also find that the predicted sensitivity of the TES CO2 estimates is too high, which results from using a multi-step retrieval for CO2 and temperature. We find that the averaging kernel in the TES product corrected by a pressure-dependent factor accurately reflects the sensitivity of the TES CO2 product.
NASA Astrophysics Data System (ADS)
Zimmerman, Naomi; Presto, Albert A.; Kumar, Sriniwasa P. N.; Gu, Jason; Hauryliuk, Aliaksei; Robinson, Ellis S.; Robinson, Allen L.; Subramanian, R.
2018-01-01
Low-cost sensing strategies hold the promise of denser air quality monitoring networks, which could significantly improve our understanding of personal air pollution exposure. Additionally, low-cost air quality sensors could be deployed to areas where limited monitoring exists. However, low-cost sensors are frequently sensitive to environmental conditions and pollutant cross-sensitivities, which have historically been poorly addressed by laboratory calibrations, limiting their utility for monitoring. In this study, we investigated different calibration models for the Real-time Affordable Multi-Pollutant (RAMP) sensor package, which measures CO, NO2, O3, and CO2. We explored three methods: (1) laboratory univariate linear regression, (2) empirical multiple linear regression, and (3) machine-learning-based calibration models using random forests (RF). Calibration models were developed for 16-19 RAMP monitors (varied by pollutant) using training and testing windows spanning August 2016 through February 2017 in Pittsburgh, PA, US. The random forest models matched (CO) or significantly outperformed (NO2, CO2, O3) the other calibration models, and their accuracy and precision were robust over time for testing windows of up to 16 weeks. Following calibration, average mean absolute error on the testing data set from the random forest models was 38 ppb for CO (14 % relative error), 10 ppm for CO2 (2 % relative error), 3.5 ppb for NO2 (29 % relative error), and 3.4 ppb for O3 (15 % relative error), and Pearson r versus the reference monitors exceeded 0.8 for most units. Model performance is explored in detail, including a quantification of model variable importance, accuracy across different concentration ranges, and performance in a range of monitoring contexts including the National Ambient Air Quality Standards (NAAQS) and the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. A key strength of the RF approach is that it accounts for pollutant cross-sensitivities. This highlights the importance of developing multipollutant sensor packages (as opposed to single-pollutant monitors); we determined this is especially critical for NO2 and CO2. The evaluation reveals that only the RF-calibrated sensors meet the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. We also demonstrate that the RF-model-calibrated sensors could detect differences in NO2 concentrations between a near-road site and a suburban site less than 1.5 km away. From this study, we conclude that combining RF models with carefully controlled state-of-the-art multipollutant sensor packages as in the RAMP monitors appears to be a very promising approach to address the poor performance that has plagued low-cost air quality sensors.
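In outline, the RF calibration regresses the reference concentration on the raw multipollutant signals plus temperature and humidity, so cross-sensitivities are learned from co-located data rather than modeled in the lab. A sketch with synthetic data (the interference terms and noise level are assumed forms, not RAMP characteristics):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# RF calibration sketch: synthetic co-located "sensor" and "reference" CO2
# time series with assumed temperature/humidity interference and noise.
rng = np.random.default_rng(4)
n = 5000
T, RH = rng.uniform(0, 30, n), rng.uniform(20, 90, n)
co2_true = rng.uniform(390, 600, n)                        # reference monitor
raw_co2 = co2_true + 2.0 * (T - 15) - 0.5 * (RH - 50) + rng.normal(0, 5, n)
X = np.column_stack([raw_co2, T, RH])

train = slice(0, 4000)    # earlier data for training, later data for testing,
test = slice(4000, None)  # mirroring the training/testing windows above
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X[train], co2_true[train])
mae = np.abs(rf.predict(X[test]) - co2_true[test]).mean()
print(f"test MAE ~ {mae:.1f} ppm")
```

A multipollutant version simply widens X with the other raw channels, which is how the cross-sensitivity correction that the paper emphasizes for NO2 and CO2 enters the model.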
Du, Juan; Zhu, Yadan; Li, Shiguang; Zhang, Junxuan; Sun, Yanguang; Zang, Huaguo; Liu, Dan; Ma, Xiuhua; Bi, Decang; Liu, Jiqiao; Zhu, Xiaolei; Chen, Weibiao
2017-09-01
A ground-based double-pulse integrated path differential absorption (IPDA) instrument for carbon dioxide (CO2) concentration measurements at 1572 nm has been developed. A ground experiment was implemented under different conditions with a known wall located about 1.17 km away acting as the scattering hard target. Off-/offline testing of the laser transmitter was conducted to estimate the instrument's systematic and random errors. Results showed a differential absorption optical depth (DAOD) offset of 0.0046 existing in the instrument. On-/offline testing was done to obtain the actual DAOD resulting from CO2 absorption. With an 18 s pulse average, a CO2 concentration measurement of 432.71±2.42 ppm with 0.56% uncertainty was achieved. The IPDA ranging led to a measurement uncertainty of 1.5 m.
Composting Explosives/Organics Contaminated Soils
1986-05-01
Quantitation of 14C Trapped by Activated Carbon; Preliminary Extraction Trials; Tetryl Product ... ppm (standard deviation 1892 ppm). All samples of soil from Letterkenny AD were pooled to yield one composite sample. Pooled samples from Louisiana ... combustion efficiency, and counting efficiency. Random subsamples of carbon from the air intake ...
NASA Astrophysics Data System (ADS)
Du, Juan; Liu, Jiqiao; Bi, Decang; Ma, Xiuhua; Hou, Xia; Zhu, Xiaolei; Chen, Weibiao
2018-04-01
A ground-based double-pulse 1572 nm integrated path differential absorption (IPDA) lidar was developed for carbon dioxide (CO2) column concentration measurements. The lidar measured the CO2 concentration continuously by receiving the scattered echo signal from a building about 1300 m away. Two other instruments, a TDLAS and an in situ CO2 analyzer, measured the CO2 concentration at the same time. A CO2 concentration measurement of 430 ppm with 1.637 ppm standard error was achieved.
Adaptive Optics Communications Performance Analysis
NASA Technical Reports Server (NTRS)
Srinivasan, M.; Vilnrotter, V.; Troy, M.; Wilson, K.
2004-01-01
The performance improvement obtained through the use of adaptive optics for deep-space communications in the presence of atmospheric turbulence is analyzed. Using simulated focal-plane signal-intensity distributions, uncoded pulse-position modulation (PPM) bit-error probabilities are calculated assuming the use of an adaptive focal-plane detector array as well as an adaptively sized single detector. It is demonstrated that current practical adaptive optics systems can yield performance gains over an uncompensated system ranging from approximately 1 dB to 6 dB depending upon the PPM order and background radiation level.
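Uncoded PPM error probabilities of this kind can be estimated by Monte Carlo with Poisson photon counting. The signal and background count levels below are illustrative, not the simulated focal-plane distributions used in the study:

```python
import numpy as np

# Monte Carlo sketch of uncoded M-ary PPM symbol detection: one slot holds
# signal+background counts, the rest background only; the receiver picks
# the maximum-count slot. Ks/Kb are assumed mean photon counts per slot.
rng = np.random.default_rng(8)

def ppm_symbol_error(M, Ks, Kb, n_trials=200_000):
    counts = rng.poisson(Kb, (n_trials, M))
    counts[:, 0] += rng.poisson(Ks, n_trials)   # signal always in slot 0
    # argmax breaks ties in favour of slot 0, so this is slightly
    # optimistic; a real receiver would randomize ties.
    return np.mean(counts.argmax(axis=1) != 0)

print(ppm_symbol_error(M=16, Ks=10, Kb=1))
```

Adaptive optics enters such a calculation through the detector geometry: better wavefront correction concentrates Ks onto fewer pixels while admitting less background Kb.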
An effective temperature compensation approach for ultrasonic hydrogen sensors
NASA Astrophysics Data System (ADS)
Tan, Xiaolong; Li, Min; Arsad, Norhana; Wen, Xiaoyan; Lu, Haifei
2018-03-01
Hydrogen is a promising clean energy resource with wide application prospects; however, leaked hydrogen gas poses a serious safety issue, so measurement of its concentration is of great significance. In a traditional approach to ultrasonic hydrogen sensing, a temperature drift of 0.1 °C results in a concentration error of about 250 ppm, which is intolerable for trace-gas sensing. In order to eliminate the influence of temperature drift, we propose a feasible approach, termed the linear compensation algorithm, which utilizes the linear relationship between the pulse count and temperature to compensate for the pulse count error (ΔN) caused by temperature drift. Experimental results demonstrate that the proposed approach is capable of improving the measurement accuracy and can easily detect sub-100 ppm hydrogen concentrations under variable temperature conditions.
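A minimal sketch of the linear compensation idea, with hypothetical calibration numbers (the counts, temperatures, and the counts-per-ppm sensitivity are invented for illustration, not the paper's values):

```python
import numpy as np

# Linear compensation sketch: fit pulse count N against temperature in pure
# background gas, then subtract the temperature-predicted drift in operation.
cal_temp = np.array([20.0, 22.0, 24.0, 26.0, 28.0])        # deg C
cal_count = np.array([10050, 10075, 10101, 10124, 10150])  # counts, no H2
slope, intercept = np.polyfit(cal_temp, cal_count, 1)

def compensated_count(raw_count, temp_c):
    """Remove the temperature-induced pulse-count error (Delta N)."""
    return raw_count - (slope * temp_c + intercept)

COUNTS_PER_PPM = 0.5   # hypothetical sensitivity, counts per ppm of hydrogen
print(compensated_count(10140, 24.0) / COUNTS_PER_PPM)  # ppm estimate
```

The design choice is that temperature is measured anyway, so a one-time linear fit removes the dominant drift without extra hardware.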
Castillo, Andrés M; Bernal, Andrés; Dieden, Reiner; Patiny, Luc; Wist, Julien
2016-01-01
We present "Ask Ernö", a self-learning system for the automatic analysis of NMR spectra, consisting of integrated chemical shift assignment and prediction tools. The output of the automatic assignment component initializes and improves a database of assigned protons that is used by the chemical shift predictor. In turn, the predictions provided by the latter facilitate improvement of the assignment process. Iteration on these steps allows Ask Ernö to improve its ability to assign and predict spectra without any prior knowledge or assistance from human experts. This concept was tested by training such a system with a dataset of 2341 molecules and their (1)H-NMR spectra, and evaluating the accuracy of chemical shift predictions on a test set of 298 partially assigned molecules (2007 assigned protons). After 10 iterations, Ask Ernö was able to decrease its prediction error by 17 %, reaching an average error of 0.265 ppm. Over 60 % of the test chemical shifts were predicted within 0.2 ppm, while only 5 % still presented a prediction error of more than 1 ppm. Ask Ernö introduces an innovative approach to automatic NMR analysis that constantly learns and improves when provided with new data. Furthermore, it completely avoids the need for manually assigned spectra. This system has the potential to be turned into a fully autonomous tool able to compete with the best alternatives currently available.Graphical abstractSelf-learning loop. Any progress in the prediction (forward problem) will improve the assignment ability (reverse problem) and vice versa.
Cho, Yuichiro; Horiuchi, Misa; Shibasaki, Kazuo; Kameda, Shingo; Sugita, Seiji
2017-08-01
In situ radiogenic isotope measurements to obtain the absolute age of geologic events on planets are of great scientific value. In particular, K-Ar isochrons are useful because of their relatively high technical readiness and high accuracy. Because this isochron method involves spot-by-spot K measurements using laser-induced breakdown spectroscopy (LIBS) and simultaneous Ar measurements with mass spectrometry, LIBS measurements are conducted under a high-vacuum condition in which emission intensity decreases significantly. Furthermore, using a laser power comparable to those used in previous planetary missions is preferable for examining the technical feasibility of this approach. However, there have been few LIBS measurements of K under such conditions. In this study, we measured K contents in rock samples using 30 mJ and 15 mJ lasers under a vacuum condition (10^-3 Pa) to assess the feasibility of in situ K-Ar dating with lasers comparable to those used in NASA's Curiosity and Mars 2020 missions. We obtained various calibration curves for K using internal normalization with the oxygen line at 777 nm and continuum emission from the laser-induced plasma. Experimental results indicate that when K2O < 1.1 wt%, a calibration curve using the intensity of the K emission line at 769 nm normalized with that of the oxygen line yields the best results for the 30 mJ laser energy, with a detection limit of 88 ppm and 20% error at 2400 ppm K2O. Furthermore, the calibration curve based on the K 769 nm line intensity normalized with continuum emission yielded the best result for the 15 mJ laser, giving a detection limit of 140 ppm and 20% error at 3400 ppm K2O. Error assessments using the obtained calibration models indicate that a 4 Ga rock with 3000 ppm K2O could be dated with 8% (30 mJ) and 10% (15 mJ) precision in age when combined with mass spectrometry of 40Ar with 10% uncertainty. These results strongly suggest that high-precision in situ isochron K-Ar dating is feasible with a laser comparable to those used in previous and upcoming Mars rover missions.
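The calibration-curve arithmetic, including the detection limit, follows a standard pattern: regress the normalized K line intensity against K2O content and take the LOD as three times the blank noise over the slope. A sketch with synthetic placeholder intensities (the standards, noise, and blank deviation are assumptions, not the paper's data):

```python
import numpy as np

# Calibration-curve sketch: oxygen-normalized K 769 nm intensity vs K2O,
# with the detection limit taken as 3 * sigma_blank / slope.
k2o_wt = np.array([0.0, 0.1, 0.3, 0.5, 0.8, 1.1])   # wt% K2O standards
ratio = (0.02 + 1.5 * k2o_wt
         + np.random.default_rng(6).normal(0, 0.01, k2o_wt.size))

slope, intercept = np.polyfit(k2o_wt, ratio, 1)
sigma_blank = 0.005                                 # assumed blank std dev
lod_wt = 3 * sigma_blank / slope
print(f"detection limit ~ {1e4 * lod_wt:.0f} ppm K2O")   # wt% -> ppm
```

With these placeholder numbers the LOD lands near 100 ppm, the same order as the 88 and 140 ppm limits reported above.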
Errors in the Calculation of 27Al Nuclear Magnetic Resonance Chemical Shifts
Wang, Xianlong; Wang, Chengfei; Zhao, Hui
2012-01-01
Computational chemistry is an important tool for signal assignment of 27Al nuclear magnetic resonance spectra in order to elucidate the species of aluminum(III) in aqueous solutions. The accuracy of the popular theoretical models for computing the 27Al chemical shifts was evaluated by comparing the calculated and experimental chemical shifts in more than one hundred aluminum(III) complexes. In order to differentiate the error due to the chemical shielding tensor calculation from that due to the inadequacy of the molecular geometry prediction, single-crystal X-ray diffraction determined structures were used to build the isolated molecule models for calculating the chemical shifts. The results were compared with those obtained using the calculated geometries at the B3LYP/6-31G(d) level. The isotropic chemical shielding constants computed at different levels have strong linear correlations even though the absolute values differ in tens of ppm. The root-mean-square difference between the experimental chemical shifts and the calculated values is approximately 5 ppm for the calculations based on the X-ray structures, but more than 10 ppm for the calculations based on the computed geometries. The result indicates that the popular theoretical models are adequate in calculating the chemical shifts while an accurate molecular geometry is more critical. PMID:23203134
Low Power Operation of Temperature-Modulated Metal Oxide Semiconductor Gas Sensors.
Burgués, Javier; Marco, Santiago
2018-01-25
Mobile applications based on gas sensing present new opportunities for low-cost air quality monitoring, safety, and healthcare. Metal oxide semiconductor (MOX) gas sensors represent the most prominent technology for integration into portable devices, such as smartphones and wearables. Traditionally, MOX sensors have been continuously powered to increase the stability of the sensing layer. However, continuous power is not feasible in many battery-operated applications due to power consumption limitations or the intended intermittent device operation. This work benchmarks two low-power modes, duty-cycling and on-demand, against the continuous power one. The duty-cycling mode periodically turns the sensors on and off and represents a trade-off between power consumption and stability. On-demand operation achieves the lowest power consumption by powering the sensors only while taking a measurement. Twelve thermally modulated SB-500-12 (FIS Inc. Jacksonville, FL, USA) sensors were exposed to low concentrations of carbon monoxide (0-9 ppm) with environmental conditions, such as ambient humidity (15-75% relative humidity) and temperature (21-27 °C), varying within the indicated ranges. Partial Least Squares (PLS) models were built using calibration data, and the prediction error in external validation samples was evaluated during the two weeks following calibration. We found that on-demand operation produced a deformation of the sensor conductance patterns, which led to an increase in the prediction error by almost a factor of 5 as compared to continuous operation (2.2 versus 0.45 ppm). Applying a 10% duty-cycling operation of 10-min periods reduced this prediction error to a factor of 2 (0.9 versus 0.45 ppm). The proposed duty-cycling powering scheme saved up to 90% energy as compared to the continuous operating mode. This low-power mode may be advantageous for applications that do not require continuous and periodic measurements, and which can tolerate slightly higher prediction errors.
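The headline energy saving follows directly from the duty cycle. A back-of-the-envelope sketch (the heater power is an assumed value, not a FIS specification):

```python
# Duty-cycle power arithmetic: mean heater power scales with the on-fraction.
heater_power_mw = 35.0   # assumed MOX heater consumption when powered on

def average_power(duty_cycle):
    """Mean heater power for a periodic on/off schedule."""
    return heater_power_mw * duty_cycle

for mode, duty in [("continuous", 1.0), ("10% duty cycle", 0.1)]:
    print(f"{mode}: {average_power(duty):.1f} mW average "
          f"({100 * (1 - duty):.0f}% energy saved vs continuous)")
```

A 10% on-fraction saves 90% of the heater energy, matching the figure quoted above; the paper's contribution is quantifying what that saving costs in prediction error.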
Timing performance of phase-locked loops in optical pulse position modulation communication systems
NASA Astrophysics Data System (ADS)
Lafaw, D. A.
In an optical digital communication system, an accurate clock signal must be available at the receiver to provide proper synchronization with the transmitted signal. Phase synchronization is especially critical in M-ary pulse position modulation (PPM) systems where the optimum decision scheme is an energy detector which compares the energy in each of M time slots to decide which of M possible words was sent. A timing error causes energy spillover into adjacent time slots (a form of intersymbol interference) so that only a portion of the signal energy may be attributed to the correct time slot. This effect decreases the effective signal, increases the effective noise, and increases the probability of error. This report simulates a timing subsystem for a satellite-to-satellite optical PPM communication link. The receiver employs direct photodetection, preprocessing of the optical signal, and a phase-locked loop for timing synchronization. The photodetector output is modeled as a filtered, doubly stochastic Poisson shot noise process. The variance of the relative phase error is examined under varying signal strength conditions as an indication of loop performance, and simulation results are compared to theoretical relations.
Comparing three toothpastes in controlling plaque and gingivitis: A 6-month clinical study.
Triratana, Terdphong; Kraivaphan, Petcharat; Amornchat, Cholticha; Mateo, Luis R; Morrison, Boyce M; Dibart, Serge; Zhang, Yun-Po
2015-04-01
To investigate the clinical efficacy of three toothpastes in controlling established gingivitis and plaque over 6 months. 135 subjects were enrolled in a single-center, double-blind, parallel group, randomized clinical study. Subjects were randomly assigned to one of three treatments: triclosan/copolymer/fluoride dentifrice containing 0.3% triclosan, 2.0% copolymer and 1,450 ppm F as sodium fluoride in a silica base; herbal/bicarbonate dentifrice containing herbal extract and 1,400 ppm F as sodium fluoride in a sodium bicarbonate base; or fluoride dentifrice containing 450 ppm F as sodium fluoride, and 1,000 ppm F as sodium monofluorophosphate. Subjects were instructed to brush their teeth twice daily for 1 minute for 6 months. After 6 months, subjects assigned to the triclosan/copolymer/fluoride group exhibited statistically significant reductions in gingival index scores and plaque index scores as compared to subjects assigned to the herbal/bicarbonate group by 35.4% and 48.9%, respectively. There were no statistically significant differences in gingival index and plaque index between subjects in the herbal/bicarbonate group and those in the fluoride group. The triclosan/copolymer/fluoride dentifrice was statistically significantly more effective in reducing gingivitis and dental plaque than the herbal/bicarbonate dentifrice, and this difference in efficacy was clinically meaningful.
Benchmark fragment-based 1H, 13C, 15N and 17O chemical shift predictions in molecular crystals
Hartman, Joshua D.; Kudla, Ryan A.; Day, Graeme M.; Mueller, Leonard J.; Beran, Gregory J. O.
2016-01-01
The performance of fragment-based ab initio 1H, 13C, 15N and 17O chemical shift predictions is assessed against experimental NMR chemical shift data in four benchmark sets of molecular crystals. Employing a variety of commonly used density functionals (PBE0, B3LYP, TPSSh, OPBE, PBE, TPSS), we explore the relative performance of cluster, two-body fragment, and combined cluster/fragment models. The hybrid density functionals (PBE0, B3LYP and TPSSh) generally out-perform their generalized gradient approximation (GGA)-based counterparts. 1H, 13C, 15N, and 17O isotropic chemical shifts can be predicted with root-mean-square errors of 0.3, 1.5, 4.2, and 9.8 ppm, respectively, using a computationally inexpensive electrostatically embedded two-body PBE0 fragment model. Oxygen chemical shieldings prove particularly sensitive to local many-body effects, and using a combined cluster/fragment model instead of the simple two-body fragment model decreases the root-mean-square errors to 7.6 ppm. These fragment-based model errors compare favorably with GIPAW PBE ones of 0.4, 2.2, 5.4, and 7.2 ppm for the same 1H, 13C, 15N, and 17O test sets. Using these benchmark calculations, a set of recommended linear regression parameters for mapping between calculated chemical shieldings and observed chemical shifts are provided and their robustness assessed using statistical cross-validation. We demonstrate the utility of these approaches and the reported scaling parameters on applications to 9-tert-butyl anthracene, several histidine co-crystals, benzoic acid and the C-nitrosoarene SnCl2(CH3)2(NODMA)2. PMID:27431490
Hartman, Joshua D; Kudla, Ryan A; Day, Graeme M; Mueller, Leonard J; Beran, Gregory J O
2016-08-21
The performance of fragment-based ab initio(1)H, (13)C, (15)N and (17)O chemical shift predictions is assessed against experimental NMR chemical shift data in four benchmark sets of molecular crystals. Employing a variety of commonly used density functionals (PBE0, B3LYP, TPSSh, OPBE, PBE, TPSS), we explore the relative performance of cluster, two-body fragment, and combined cluster/fragment models. The hybrid density functionals (PBE0, B3LYP and TPSSh) generally out-perform their generalized gradient approximation (GGA)-based counterparts. (1)H, (13)C, (15)N, and (17)O isotropic chemical shifts can be predicted with root-mean-square errors of 0.3, 1.5, 4.2, and 9.8 ppm, respectively, using a computationally inexpensive electrostatically embedded two-body PBE0 fragment model. Oxygen chemical shieldings prove particularly sensitive to local many-body effects, and using a combined cluster/fragment model instead of the simple two-body fragment model decreases the root-mean-square errors to 7.6 ppm. These fragment-based model errors compare favorably with GIPAW PBE ones of 0.4, 2.2, 5.4, and 7.2 ppm for the same (1)H, (13)C, (15)N, and (17)O test sets. Using these benchmark calculations, a set of recommended linear regression parameters for mapping between calculated chemical shieldings and observed chemical shifts are provided and their robustness assessed using statistical cross-validation. We demonstrate the utility of these approaches and the reported scaling parameters on applications to 9-tert-butyl anthracene, several histidine co-crystals, benzoic acid and the C-nitrosoarene SnCl2(CH3)2(NODMA)2.
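The recommended mapping between calculated shieldings and observed shifts is a simple linear regression that can then be reused for prediction. A sketch with synthetic 13C-like values (the slope near -1 and the intercept stand in for the reported regression parameters, which this sketch does not reproduce):

```python
import numpy as np

# Shielding-to-shift mapping sketch: regress observed shifts delta against
# computed shieldings sigma as delta = a*sigma + b, then reuse (a, b).
rng = np.random.default_rng(7)
sigma_calc = rng.uniform(10, 180, 40)                  # computed shieldings (ppm)
delta_obs = -1.0 * (sigma_calc - 185.0) + rng.normal(0, 1.5, 40)  # observed shifts

a, b = np.polyfit(sigma_calc, delta_obs, 1)            # slope ~ -1, intercept ~ reference
rmse = np.sqrt(np.mean((a * sigma_calc + b - delta_obs) ** 2))
print(f"slope {a:.3f}, intercept {b:.1f} ppm, RMSE {rmse:.2f} ppm")

def predict_shift(sigma_new):
    """Convert a newly computed shielding to a predicted chemical shift."""
    return a * sigma_new + b
```

The cross-validation mentioned above amounts to refitting (a, b) on subsets of the benchmark data and checking that the RMSE on the held-out shifts stays stable.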
The Impact of Selected Fluoridated Toothpastes on Dental Erosion in Profilometric Measurement.
Fita, Katarzyna; Kaczmarek, Urszula
2016-01-01
Some commercially available fluoridated toothpastes are described as having a protective effect against dental erosion. The aim of this study was to evaluate the influence of selected marketed toothpastes on human enamel exposed to acidic beverages. Enamel specimens from extracted human teeth were prepared (n = 40). Specimens were randomly divided into 10 experimental groups, 4 specimens each, which were subjected to acid challenge for 10 min using orange juice (pH 3.79) or Pepsi Cola (pH 2.58) and then immersed for 2 min into a slurry of five marketed toothpastes with distilled water (1 : 3 w/w). The tested toothpastes contained 1450 or 5000 ppm fluoride, CPP-ACP with 900 ppm fluoride, 1450 ppm fluoride with potassium nitrate 5%, all of them as sodium fluoride, and 700 ppm fluoride as amine and sodium fluoride with 3500 ppm SnCl2. Enamel roughness (Ra parameter) was measured by contact profilometer at baseline and after exposure to the soft drinks and slurries. Exposure to both beverages caused a similar increase in enamel surface roughness. After the specimens' immersion into slurries of the toothpastes with 1450 or 5000 ppm fluoride, 1450 ppm fluoride with potassium nitrate 5%, and CPP-ACP with 900 ppm fluoride, a significant decrease in Ra values was found, reaching the baseline values. However, the toothpaste with 700 ppm fluoride and 3500 ppm SnCl2 did not cause any fall in Ra value, probably due to a different mechanism of action. Within the limitations of the study, we can conclude that the sodium fluoride toothpastes are able to restore the surface profile of enamel exposed briefly to acidic soft drinks.
Lycopene intake facilitates the increase of bone mineral density in growing female rats.
Iimura, Yuki; Agata, Umon; Takeda, Satoko; Kobayashi, Yuki; Yoshida, Shigeki; Ezawa, Ikuko; Omi, Naomi
2014-01-01
Intake of the antioxidant lycopene has been reported to decrease oxidative stress and have beneficial effects on bone health. However, few in vivo studies have addressed these beneficial effects in growing female rodents or young women. The aim of this study was to investigate the effect of lycopene intake on bone metabolism through circulating oxidative stress in growing female rats. Six-week-old Sprague-Dawley female rats were randomly divided into 3 groups according to the lycopene content in their diet: 0, 50, and 100 ppm. The bone mineral density (BMD) of the lumbar spine and the tibial proximal metaphysis increased with lycopene content in a dose-dependent manner; the BMD in the 100 ppm group was significantly higher than in the 0 ppm group. The urine deoxypyridinoline concentrations were significantly lower in the 50 and 100 ppm groups than in the 0 ppm group, and the serum bone-type alkaline phosphatase activity was significantly higher in the 100 ppm group than in the 0 ppm group. No difference in systemic oxidative stress level was observed; however, the oxidative stress level inversely correlated with the tibial BMD. Our findings suggested that lycopene intake facilitates bone formation and inhibits bone resorption, leading to an increase of BMD in growing female rats.
Parkinson, C R; Siddiqi, M; Mason, S; Lippert, F; Hara, A T; Zero, D T
2017-06-01
A randomized, investigator-blind, five-treatment, crossover, non-inferiority study was conducted to investigate the effect of adding calcium sodium phosphosilicate (CSPS), an agent known to relieve dentin hypersensitivity, to a sodium monofluorophosphate (SMFP)-containing dentifrice on the enamel remineralization potential of fluoride (F), as assessed by percentage surface microhardness recovery (%SMHR) and enamel fluoride uptake (EFU) using a standard in situ caries model. Seventy-seven subjects wearing bilateral mandibular partial dentures holding partially demineralized bovine enamel specimens 24 hours/day brushed their teeth twice daily for 21 days with their assigned randomized dentifrice, containing either 1500 or 0 ppm F with 5% CSPS, or 1500, 500, or 0 ppm F with 0% CSPS. The success criterion was a difference in %SMHR between the dentifrices containing 1500 ppm F of six units or less in the upper bound of the two-sided 95% confidence interval (CI). Following 21 days of treatment, the upper bound of the CI for the %SMHR difference between the dentifrices containing 1500 ppm F was 1.66, thus within the non-inferiority limit. No statistically significant differences in %SMHR (p = 0.2601) or EFU (p = 0.2984) were noted between these two dentifrices. The present in situ caries study provides evidence that the addition of the calcium-containing compound CSPS to a 1500 ppm F dentifrice does not interfere with the ability of fluoride to remineralize surface-softened enamel; i.e., CSPS neither impairs nor improves the potential cariostatic value of an SMFP dentifrice.
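The non-inferiority criterion above reduces to checking the upper bound of a two-sided 95% CI on the between-dentifrice %SMHR difference against the six-unit margin. A minimal sketch of that arithmetic follows, using invented group summaries (means, SDs, and group sizes are placeholders, not study data) and a normal approximation to the t-based bound.

```python
# Hedged sketch of the non-inferiority check: the test dentifrice is declared
# non-inferior if the upper bound of the two-sided 95% CI on the difference in
# mean %SMHR (reference minus test) is at most the 6-unit margin.
# Group summaries below are invented placeholders, not study data.
import math

margin = 6.0
mean_ref, sd_ref, n_ref = 20.0, 9.0, 70      # 1500 ppm F, 0% CSPS (placeholder)
mean_test, sd_test, n_test = 19.2, 9.5, 70   # 1500 ppm F + 5% CSPS (placeholder)

diff = mean_ref - mean_test
se = math.sqrt(sd_ref**2 / n_ref + sd_test**2 / n_test)
upper = diff + 1.96 * se                     # normal approximation to the t bound
print(f"difference={diff:.2f}, 95% CI upper bound={upper:.2f}")
print("non-inferior" if upper <= margin else "not demonstrated")
```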
Settling characteristics of nursery pig manure and nutrient estimation by the hydrometer method.
Zhu, Jun; Ndegwa, Pius M; Zhang, Zhijian
2003-05-01
The hydrometer method to measure manure specific gravity and subsequently relate it to manure nutrient contents was examined in this study. It was found that the estimation accuracy of this method might be improved if only manure from a single growth stage of pigs was used (e.g., the nursery pig manure used here). The total solids (TS) content of the test manure was well correlated with the total nitrogen (TN) and total phosphorus (TP) concentrations in the manure, with highly significant correlation coefficients of 0.9944 and 0.9873, respectively. Good linear correlations were also observed between the TN and TP contents and the manure specific gravity (correlation coefficients: 0.9836 and 0.9843, respectively). These correlations were much better than those reported by past researchers, in which lumped data for pigs at different growing stages were used. It may therefore be inferred that developing different linear equations for pigs at different ages should improve the accuracy of manure nutrient estimation using a hydrometer. Also, the error of using the hydrometer method to estimate manure TN and TP was found to increase, from ±10% to ±50%, with decreasing TN (from 700 ppm to 100 ppm) and TP (from 130 ppm to 30 ppm) concentrations in the manure. The estimation errors for TN and TP may be larger than 50% if the total solids content is below 0.5%. In addition, rapid settling of solids has long been considered characteristic of swine manure; however, in this study, the solids settling property appeared to be quite poor for nursery pig manure in that no conspicuous settling occurred after the manure was left static for 5 hours. This information has not been reported elsewhere in the literature and may need further research to verify.
Low PCB concentrations observed in American eel (Anguilla rostrata) in six Hudson River tributaries
Limburg, K.E.; Machut, L.S.; Jeffers, P.; Schmidt, R.E.
2008-01-01
We analyzed 73 eels, collected in 2004 and 2005 above the head of tide in six Hudson River tributaries, for total PCBs, length, weight, age, and nitrogen stable isotope ratios (δ15N). Mean total PCB concentration (wet weight basis) was 0.23 ± 0.08 ppm (standard error), with a range of 0.008 to 5.4 ppm. A majority of eels (84%) had concentrations below 0.25 ppm, and only seven eels (10%) had concentrations exceeding 0.5 ppm. Those eels with higher PCB concentrations were ≥12 yr; there was a weak correlation of PCB concentration with δ15N and also with weight. Compared to recent (2003) data from the mainstem of the Hudson River estuary, these results indicate that tributaries are generally much less contaminated with PCBs. We hypothesize that those tributary eels with high PCB concentrations were relatively recent immigrants from the mainstem. Given concern over the possible adverse effects of PCBs on eel reproduction, these tributaries may serve as refugia. Therefore, providing improved access to upland tributaries may be critically important to this species. © 2008 Northeastern Naturalist.
Semenov, Valentin A; Samultsev, Dmitry O; Krivdin, Leonid B
2018-02-09
15N NMR chemical shifts in a representative series of Schiff bases, together with their protonated forms, have been calculated at the density functional theory level and compared with available experiment. A number of functionals and basis sets have been tested in terms of agreement with experiment. Complementary to the gas-phase results, two solvation models were examined: a classical Tomasi polarizable continuum model (PCM), and the same model combined with explicit inclusion of one solvent molecule in the calculation space to form a 1:1 supermolecule (SM + PCM). The best results are achieved with the PCM and SM + PCM models, giving mean absolute errors of calculated 15N NMR chemical shifts over the whole series of neutral and protonated Schiff bases of 5.2 and 5.8 ppm, respectively, as compared with 15.2 ppm in the gas phase, for a shift range of about 200 ppm. Noticeable protonation effects (exceeding 100 ppm) in protonated Schiff bases are rationalized in terms of a general natural bond orbital approach. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Sidorin, D. I.
2015-12-01
The carbon dioxide (CO2) production intensity of a secondary school student is studied using a nondispersive infrared CO2 logger under different conditions: relaxation, mental stress, and physical stress. CO2 production measured under mental stress is 24% higher than during relaxation, while CO2 production under physical stress is more than 2.5 times the relaxation level. The dynamics of CO2 concentration in classroom air are measured for a typical school building. It is shown that even when the classroom is ventilated between classes, the CO2 concentration exceeds 2100 parts per million (ppm), which is significantly higher than the recommended limits defined in developed countries. The ability of seventh-grade school students to perform tasks requiring mental concentration is tested under different CO2 concentration conditions (below 1000 ppm and above 2000 ppm). Five-letter word anagrams are used as test tasks. Statistical analysis of the test results revealed a significant reduction in the number of correct answers and an increase in the number of errors when CO2 levels exceeded 2000 ppm.
Feasibility study of a space-based high pulse energy 2 μm CO2 IPDA lidar.
Singh, Upendra N; Refaat, Tamer F; Ismail, Syed; Davis, Kenneth J; Kawa, Stephan R; Menzies, Robert T; Petros, Mulugeta
2017-08-10
Sustained high-quality column carbon dioxide (CO2) atmospheric measurements from space are required to improve estimates of regional and continental-scale sources and sinks of CO2. Modeling of a space-based 2 μm, high pulse energy, triple-pulse, direct detection integrated path differential absorption (IPDA) lidar was conducted to demonstrate CO2 measurement capability and to evaluate random and systematic errors. Parameters based on recent technology developments in the 2 μm laser and a state-of-the-art HgCdTe (MCT) electron-initiated avalanche photodiode (e-APD) detection system were incorporated in this model. Strong absorption features of CO2 in the 2 μm region, which allow optimum lower tropospheric and near-surface measurements, were used to project simultaneous measurements using two independent altitude-dependent weighting functions with the triple-pulse IPDA. Analyses of measurements over a variety of atmospheric and aerosol models, using a variety of Earth surface targets and aerosol loading conditions, were conducted. Water vapor (H2O) influences on CO2 measurements were assessed, including molecular interference, the dry-air estimate, and line broadening. Projected performance shows a <0.35 ppm precision and a <0.3 ppm bias in low-tropospheric weighted measurements related to column CO2 optical depth for the space-based IPDA, using 10 s signal averaging over the Railroad Valley (RRV) reference surface under clear and thin cloud conditions.
Electrical conductivity and total dissolved solids in urine.
Fazil Marickar, Y M
2010-08-01
The objective of this paper is to study the relevance of electrical conductivity (EC) and total dissolved solids (TDS) in early morning and random urine samples of urinary stone patients; 2,000 urine samples were studied. The two parameters were correlated with the extent of various urinary concrements. The early morning urine (EMU) and random samples of the patients who attended the urinary stone clinic were analysed routinely. The pH, specific gravity, EC, TDS, redox potential, albumin, sugar, and microscopic findings in the urinary sediments, including red blood cells (RBC), pus cells (PC), and crystals, namely calcium oxalate monohydrate (COM), calcium oxalate dihydrate (COD), uric acid (UA), phosphates, and epithelial cells, were assessed. The extent of RBC, PC, COM, COD, UA, and phosphates was correlated with EC and TDS. The values of EC ranged from 1.1 to 33.9 mS, the mean value being 21.5 mS. TDS ranged from 3,028 to 18,480 ppm, the mean value being 7,012 ppm. The TDS levels corresponded with the EC of urine. Both values were significantly higher (P < 0.05) in the EMU samples than in the random samples. There was a statistically significant correlation between these parameters and the level of abnormality in the urinary deposits (r = +0.27, P < 0.05). Samples in which the TDS exceeded 12,000 ppm contained more crystals than samples with TDS below 12,000 ppm. However, certain urine samples with TDS over 12,000 ppm did not contain any urinary crystals. It is concluded that the value of TDS has relevance in the process of stone formation.
Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen
2005-10-01
Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for Σ = σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) dose received by 98% of the volume (D98), clinical target volume (CTV) D90, nodes D90, cord D2, and parotid D50 and parotid mean dose were evaluated with respect to the plan used for treatment, both for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm. Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% with a 5% dose error. Combined random and systematic setup errors with Σ = σ = 3.0 mm resulted in more than 50% of plans having at least a 3% dose error and 38% of plans having at least a 5% dose error. Evaluation with respect to a 3-mm expanded PTV reduced the observed dose deviations greater than 5% for the Σ = σ = 3.0 mm simulations to 5.4% of the plans simulated. Conclusions: Head-and-neck SIB-IMRT dosimetric accuracy would benefit from methods to reduce patient systematic setup errors. When GTV, CTV, or nodal volumes are used for dose evaluation, plans simulated including the effects of random and systematic errors deviate substantially from the nominal plan. The use of PTVs for dose evaluation in the nominal plan improves agreement with evaluated GTV, CTV, and nodal dose values under simulated setup errors. PTV concepts should be used for SIB-IMRT head-and-neck squamous cell carcinoma patients, although the size of the margins may be less than those used with three-dimensional conformal radiation therapy.
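As a rough illustration of the two error types, the toy Monte Carlo below blurs a 1-D dose profile with a Gaussian (random setup error) and shifts it by a normally distributed offset (systematic error), then tallies how often the dose at a point near the field edge deviates by more than 3%. The profile and all parameters are illustrative, not the study's patient plans.

```python
# Hedged toy sketch: random setup error blurs a 1-D dose profile (convolution
# with a Gaussian), systematic error shifts it. Profile and parameters are
# illustrative, not the study's patient plans.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-50.0, 50.0, 1001)                  # position (mm), dx = 0.1 mm
dose = np.clip((30.0 - np.abs(x)) / 5.0, 0.0, 1.0)  # flat target with penumbra

def blur(profile, sigma_mm, dx=0.1):
    """Gaussian convolution: the smearing effect of random setup error."""
    r = np.arange(-int(4 * sigma_mm / dx), int(4 * sigma_mm / dx) + 1) * dx
    k = np.exp(-0.5 * (r / sigma_mm) ** 2)
    return np.convolve(profile, k / k.sum(), mode="same")

big_sigma, small_sigma = 3.0, 3.0                   # systematic Σ and random σ (mm)
i_eval = np.argmin(np.abs(x - 25.0))                # point near the field edge
frac_bad = np.mean([
    abs(blur(np.interp(x, x - rng.normal(0, big_sigma), dose), small_sigma)[i_eval]
        - dose[i_eval]) > 0.03                      # >3% of the prescription dose
    for _ in range(500)])
print(f"fraction of simulated 'plans' with >3% dose error at the edge: {frac_bad:.2f}")
```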
Effect of nano silver and silver nitrate on seed yield of (Ocimum basilicum L.)
2014-01-01
Background: The aim of this study was to evaluate the effect of nano silver and silver nitrate on seed yield in basil. The study was carried out in a randomized block design with three replications. Results: Four levels of either silver nitrate (0, 100, 200, and 300 ppm) or nano silver (0, 20, 40, and 60 ppm) were sprayed on basil plants at the seed growth stage. The results showed no significant difference between 100 ppm silver nitrate and 60 ppm nano silver in shoot silver concentration. However, increasing the concentration of silver nitrate from 100 to 300 ppm caused a decrease in seed yield. In contrast, increasing the concentration of nano silver from 20 to 60 ppm led to an improvement in seed yield. Additionally, the lowest seed yield was found in the control plants. Conclusions: With increasing levels of silver nitrate, the polyphenol content rose, whereas increasing levels of nano silver reduced these compounds. In conclusion, nano silver can be used instead of other silver compounds. PMID:25383311
Lau, Billy T; Ji, Hanlee P
2017-09-21
RNA-Seq measures gene expression by counting sequence reads belonging to unique cDNA fragments. Molecular barcodes, commonly in the form of random nucleotides, were recently introduced to improve gene expression measures by detecting amplification duplicates, but they are susceptible to errors generated during PCR and sequencing. This results in false positive counts, leading to inaccurate transcriptome quantification, especially at low-input and single-cell RNA amounts where the total number of molecules present is minuscule. To address this issue, we demonstrated the systematic identification of molecular species using transposable error-correcting barcodes that are exponentially expanded to tens of billions of unique labels. We experimentally showed that random-mer molecular barcodes suffer from substantial and persistent errors that are difficult to resolve. To assess our method's performance, we applied it to the analysis of known reference RNA standards. By including an inline random-mer molecular barcode, we systematically characterized the presence of sequence errors in random-mer molecular barcodes. We observed that such errors are extensive and become more dominant at low input amounts. We describe the first study to use transposable molecular barcodes and their use for studying random-mer molecular barcode errors. The extensive errors found in random-mer molecular barcodes may warrant the use of error-correcting barcodes for transcriptome analysis as input amounts decrease.
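The false-positive barcode problem is easy to reproduce in a quick simulation: generate a pool of true random-mer barcodes, copy each with a per-base error rate, and count the "novel" barcodes that were never synthesized. All parameters below (barcode length, error rate, coverage) are illustrative placeholders.

```python
# Hedged sketch: sequencing errors in random-mer molecular barcodes (UMIs)
# create spurious "novel" barcodes that inflate molecule counts.
# Error rate, barcode length, and counts are illustrative placeholders.
import random

random.seed(0)
BASES, L, ERR = "ACGT", 10, 0.005         # 10-mer UMIs, 0.5% per-base error

true_umis = ["".join(random.choice(BASES) for _ in range(L)) for _ in range(1000)]

def read_with_errors(umi):
    """Copy a barcode, substituting each base with probability ERR."""
    return "".join(random.choice([b for b in BASES if b != c])
                   if random.random() < ERR else c for c in umi)

reads = [read_with_errors(u) for u in true_umis for _ in range(20)]  # 20x coverage
observed = set(reads)
spurious = observed - set(true_umis)
print(f"true barcodes: {len(set(true_umis))}, observed: {len(observed)}, "
      f"spurious: {len(spurious)}")
```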
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
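For a balanced one-factor random-effects model with p patients and n fractions each, the ANOVA estimators are simple: the within-patient mean square estimates the random variance σ², and (MSB − MSW)/n estimates the systematic variance Σ². A sketch on simulated data follows; the patient counts and true SDs are invented.

```python
# Hedged sketch: ANOVA-based variance component estimates for setup error
# under a balanced one-factor random-effects model (simulated data).
# sigma_sys: between-patient (systematic) SD; sigma_rand: within-patient (random) SD.
import numpy as np

rng = np.random.default_rng(1)
p, n = 30, 10                                    # patients, fractions per patient
sigma_sys, sigma_rand = 1.5, 3.0                 # true values (mm)
mu = rng.normal(0, sigma_sys, size=(p, 1))       # each patient's systematic offset
y = mu + rng.normal(0, sigma_rand, size=(p, n))  # daily setup errors (mm)

patient_means = y.mean(axis=1)
msb = n * patient_means.var(ddof=1)              # between-patient mean square
msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (p * (n - 1))

sigma_rand_hat = np.sqrt(msw)                    # E[MSW] = sigma^2
sigma_sys_hat = np.sqrt(max((msb - msw) / n, 0.0))  # E[MSB] = sigma^2 + n*Sigma^2
print(f"estimated random SD = {sigma_rand_hat:.2f} mm, "
      f"systematic SD = {sigma_sys_hat:.2f} mm")
```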
The effect of selenium on spoil suitability as root zone material at Navajo Mine, New Mexico
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lane, J.R.; Buchanan, B.A.; Ramsey, T.C.
1995-09-01
The root zone suitability limits for spoil Se at Navajo Mine in northwest New Mexico are currently 0.8 ppm total Se and 0.15 ppm hot-water soluble Se. These criteria were largely developed by the Office of Surface Mining using data from the Northern Great Plains. Applying these values, approximately 23% of the spoil volume and 47% of the spoil area sampled at Navajo Mine from 1985 to December 1993 were determined to be unsuitable as root zone material. Secondary Se accumulator plants (Atriplex canescens) growing in both undisturbed and reclaimed areas were randomly sampled for selenium from 1985 to December 1993. In most cases the undisturbed soil and reclaimed spoil at these plant sampling sites were sampled for both total and hot-water soluble Se. Selenium values for Atriplex canescens samples collected on the undisturbed sites averaged 0.64 ppm and ranged from 0.20 ppm to 2.5 ppm. Selenium values for the plants growing on spoil ranged from 0.02 ppm to 7.75 ppm and averaged 1.07 ppm. Total and hot-water Se values for spoil averaged 0.66 ppm and 0.06 ppm, respectively, and ranged from 0.0 to 14.2 ppm for total Se and 0.0 ppm to 0.72 ppm for hot-water soluble Se. The plant Se values were poorly correlated to both total and hot-water soluble Se values for both soil and spoil. Therefore, predicting suitable guidelines using normal regression techniques was ineffective. Based on background Se levels in native soils, and levels found on reclaimed areas with Atriplex canescens, it is suggested that a total Se level of 2.0 ppm and a hot-water soluble Se level of 0.25 ppm should be used to represent the suitability limits for Se at Navajo Mine. If these Se values are used, it is estimated that less than 1% of the spoil volume would be unsuitable. This volume of spoil seems to be a more accurate estimate of the amount of spoil with unsuitable levels of Se than the estimated 23% using the current guidelines.
Quantitation Error in 1H MRS Caused by B1 Inhomogeneity and Chemical Shift Displacement.
Watanabe, Hidehiro; Takaya, Nobuhiro
2017-11-08
The quantitation accuracy in proton magnetic resonance spectroscopy (1H MRS) improves at higher B0 field. However, a larger chemical shift displacement (CSD) and stronger B1 inhomogeneity exist. In this work, we evaluate the quantitation accuracy for the spectra of metabolite mixtures in phantom experiments at 4.7T. We demonstrate a position-dependent error in quantitation and propose a correction method based on measuring water signals. All experiments were conducted on a whole-body 4.7T magnetic resonance (MR) system with a quadrature volume coil for transmission and reception. We arranged three bottles filled with metabolite solutions of N-acetyl aspartate (NAA) and creatine (Cr) in a vertical row inside a cylindrical phantom filled with water. Peak areas of three singlets of NAA and Cr were measured on three 1H spectra at three volumes of interest (VOIs) inside the three bottles. We also measured a series of water spectra with a shifted carrier frequency and measured a reception sensitivity map. The peak-area ratios of NAA and Cr at 3.92 ppm to Cr at 3.01 ppm differed amongst the three VOIs, which leads to a position-dependent error. At every VOI, the slope relating peak areas to the shifted carrier frequencies resembled the slope relating reception sensitivities to displacement. CSD and inhomogeneity of the reception sensitivity cause amplitude modulation along the direction of chemical shift on the spectra, resulting in a quantitation error. This error may be more significant at higher B0 field, where CSD and B1 inhomogeneity are more severe. This error may also occur in reception using a surface coil with an inhomogeneous B1. Since this type of error is around a few percent, data should be analyzed with particular care when discussing small differences in 1H MRS studies.
NASA Astrophysics Data System (ADS)
Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan
2017-06-01
Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability in the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10^-1 to 2 (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the factual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; for hurricanes, however, geopotential height is most sensitive, followed by precipitable water.
Spatial Variation of Soil Lead in an Urban Community Garden: Implications for Risk-Based Sampling.
Bugdalski, Lauren; Lemke, Lawrence D; McElmurry, Shawn P
2014-01-01
Soil lead pollution is a recalcitrant problem in urban areas resulting from a combination of historical residential, industrial, and transportation practices. The emergence of urban gardening movements in postindustrial cities necessitates accurate assessment of soil lead levels to ensure safe gardening. In this study, we examined small-scale spatial variability of soil lead within a 15 × 30 m urban garden plot established on two adjacent residential lots located in Detroit, Michigan, USA. Eighty samples collected using a variably spaced sampling grid were analyzed for total, fine fraction (less than 250 μm), and bioaccessible soil lead. Measured concentrations varied at sampling scales of 1-10 m and a hot spot exceeding 400 ppm total soil lead was identified in the northwest portion of the site. An interpolated map of total lead was treated as an exhaustive data set, and random sampling was simulated to generate Monte Carlo distributions and evaluate alternative sampling strategies intended to estimate the average soil lead concentration or detect hot spots. Increasing the number of individual samples decreases the probability of overlooking the hot spot (type II error). However, the practice of compositing and averaging samples decreased the probability of overestimating the mean concentration (type I error) at the expense of increasing the chance for type II error. The results reported here suggest a need to reconsider U.S. Environmental Protection Agency sampling objectives and consequent guidelines for reclaimed city lots where soil lead distributions are expected to be nonuniform. © 2013 Society for Risk Analysis.
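The sampling trade-off the study describes can be explored with a small Monte Carlo on a synthetic map: discrete grab samples flag a hot spot whenever any single value exceeds the action level, while composited samples average it away. The map, the 400 ppm action level applied here, and the sample sizes are illustrative stand-ins for the interpolated site data.

```python
# Hedged sketch: discrete vs. composited sampling on a synthetic lead map with
# one hot spot. Map values, threshold, and sample sizes are illustrative.
import numpy as np

rng = np.random.default_rng(2)
grid = rng.lognormal(np.log(150), 0.4, size=(30, 60))      # background (ppm)
grid[2:8, 2:10] = rng.lognormal(np.log(600), 0.3, (6, 8))  # hot spot > 400 ppm

def detects(n_samples, composite):
    """True if a sampling round flags the 400 ppm action level."""
    vals = grid.ravel()[rng.integers(0, grid.size, n_samples)]
    return (vals.mean() if composite else vals.max()) > 400

for n in (5, 10, 20):
    p_disc = np.mean([detects(n, False) for _ in range(2000)])
    p_comp = np.mean([detects(n, True) for _ in range(2000)])
    print(f"n={n:2d}: detection probability {p_disc:.2f} discrete, "
          f"{p_comp:.2f} composited")
```

Compositing lowers the variance of the mean estimate (fewer type I overestimates) at the cost of diluting the hot spot, which is exactly the type II trade-off the abstract describes.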
Nordström, Anna; Birkhed, Dowen
2013-01-01
The aim was to investigate fluoride (F) retention in plaque and saliva, and the pH drop in plaque, using a high-F toothpaste (5000 ppm F) or a standard toothpaste (1450 ppm F) twice a day or 3 times a day. A method of using the toothpaste as a 'lotion', massaging the buccal surfaces with a fingertip, was also evaluated. The investigation had a randomized, single-blinded, crossover design, and 16 subjects participated in six brushing regimens: (1) 5000 ppm F, twice a day; (2) 5000 ppm, 3 times/day; (3) 5000 ppm, twice a day plus the 'massage' method once a day; (4) 1450 ppm F, twice a day; (5) 1450 ppm, 3 times/day; and (6) 1450 ppm, twice a day plus the 'massage' method once a day. The outcome measures were F retention in plaque and saliva and the plaque-pH change after a sucrose rinse. The highest F concentrations were found using the high-F toothpaste (Nos. 1-3) and differed significantly from those with 1450 ppm (Nos. 4-6). Brushing with high-F toothpaste 3 times a day (No. 2) resulted in a 3.6-times higher salivary F value compared with standard toothpaste twice a day (No. 4) (p < 0.001). Increasing the frequency of application from twice to 3 times a day increased the F retention in plaque significantly when the two methods for application 3 times a day were pooled (p < 0.05). Brushing with the 5000 and 1450 ppm toothpastes twice a day plus the 'massage' once a day resulted in the same F concentration in saliva and plaque as brushing 3 times a day with the same paste. A third application of toothpaste thus increases F retention, and using toothpaste as a 'lotion' by massaging the buccal surfaces with a fingertip may be a simple and inexpensive way of delivering F a third time during the day.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fauziah, Faiza, E-mail: faiza.fauziah@gmail.com; Choesin, Devi N., E-mail: faiza.fauziah@gmail.com
2014-03-24
Banten Bay in Indonesia is a coastal area which has been highly affected by human activity. Previous studies have reported the presence of lead (Pb) and copper (Cu) heavy metals in the seawater of this area. This study was conducted to measure the accumulation of Pb and Cu in seawater, sediment, leaf tissue, and root tissue of the seagrass species Enhalus sp. Sampling was conducted at two observation stations in Banten Bay: Station 1 (St.1) was located closer to the coastline and to industrial plants as a source of pollution, while Station 2 (St.2) was located farther away offshore. At each station, three sampling points were established by random sampling. Field sampling was conducted on two different dates, i.e., 29 May 2012 and 30 June 2012. Samples were processed by wet ashing using concentrated HNO3 acid and measured using Atomic Absorption Spectrometry (AAS). Accumulation of Pb was only detected in sediment samples at St.1, while Cu was detected in all samples. Average concentrations of Cu in May were as follows: sediment St.1 = 0.731 ppm, sediment St.2 = 0.383 ppm, seawater St.1 = 0.163 ppm, seawater St.2 = 0.174 ppm, leaf St.1 = 0.102 ppm, leaf St.2 = 0.132 ppm, root St.1 = 0.139 ppm, and root St.2 = 0.075 ppm. Average measurements of Cu in June were: sediment St.1 = 0.260 ppm, leaf St.1 = 0.335 ppm, leaf St.2 = 0.301 ppm, root St.1 = 0.047 ppm, and root St.2 = 0.060 ppm. In June, Cu was undetected in St.2 sediment and in seawater at both stations. In May, the Cu concentration in seawater exceeded the maximum allowable threshold for water as determined by the Ministry of the Environment. Spatial and temporal variation in Pb and Cu accumulation were most probably affected by distance from the source and physical conditions of the environment (e.g., water currents and mixing).
Optical Communication with Semiconductor Laser Diode. Interim Progress Report. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Davidson, Frederic; Sun, Xiaoli
1989-01-01
Theoretical and experimental performance limits of a free-space direct detection optical communication system were studied using a semiconductor laser diode as the optical transmitter and a silicon avalanche photodiode (APD) as the receiver photodetector. Optical systems using these components are under consideration as replacements for microwave satellite communication links. Optical pulse position modulation (PPM) was chosen as the signal format. An experimental system was constructed that used an aluminum gallium arsenide semiconductor laser diode as the transmitter and a silicon avalanche photodiode photodetector. The system used Q=4 PPM signaling at a source data rate of 25 megabits per second. The PPM signal format requires regeneration of PPM slot clock and word clock waveforms in the receiver. A nearly exact computational procedure was developed to compute receiver bit error rate without using the Gaussian approximation. A transition detector slot clock recovery system using a phase lock loop was developed and implemented. A novel word clock recovery system was also developed. It was found that the results of the nearly exact computational procedure agreed well with actual measurements of receiver performance. The receiver sensitivity achieved was the closest to the quantum limit yet reported for an optical communication system of this type.
Effects of drinking water monochloramine on lipid and thyroid metabolism in healthy men.
Wones, R G; Deck, C C; Stadler, B; Roark, S; Hogg, E; Frohman, L A
1993-01-01
The purpose of this study was to determine whether 4 weeks of consuming 1.5 L per day of drinking water containing monochloramine at a concentration of 2 ppm (ppm = mg/L) or 15 ppm under controlled conditions would alter parameters of lipid or thyroid metabolism in healthy men. Forty-eight men completed an 8-week protocol during which diet (600 mg cholesterol per day, 40% of calories as fat) and other factors known to affect lipid metabolism were controlled. During the first 4 weeks of the protocol, all subjects consumed distilled water. During the second 4 weeks, subjects were randomly assigned in equal thirds to drink 1.5 L per day of water containing 2 ppm of monochloramine, to drink 1.5 L per day of water containing 15 ppm monochloramine, or to continue drinking distilled water. Four blood samples were collected from each subject at the end of each 4-week study period. Subjects drinking monochloramine at a concentration of 2 ppm showed no significant changes in total cholesterol, triglycerides, HDL cholesterol, LDL cholesterol, or apolipoproteins A1, A2, or B when compared to the distilled water group. Parameters of thyroid function were also unchanged by exposure to monochloramine at this concentration. However, subjects drinking monochloramine at a concentration of 15 ppm experienced an increase in the level of apolipoprotein B. Other parameters of lipid and thyroid metabolism did not change. We conclude that consumption of drinking water containing 2 ppm of monochloramine does not alter parameters of lipid and thyroid metabolism in healthy men. Consumption of water containing 15 ppm monochloramine may be associated with increased levels of plasma apolipoprotein B. PMID:8319653
Characterization of Impulse Radio Intrabody Communication System for Wireless Body Area Networks.
Cai, Zibo; Seyedi, MirHojjat; Zhang, Weiwei; Rivet, Francois; Lai, Daniel T H
2017-01-01
Intrabody communication (IBC) is a promising data communication technique for body area networks. This short-distance communication approach uses human body tissue as the medium of signal propagation. IBC is defined as one of the physical layers for the new IEEE 802.15.6 or wireless body area network (WBAN) standard, and it can provide a suitable data rate for real-time physiological data communication while consuming less power than radio-frequency protocols such as Bluetooth. In this paper, impulse radio (IR) IBC (IR-IBC) is examined using a field-programmable gate array (FPGA) implementation of an IBC system. A carrier-free pulse position modulation (PPM) scheme is implemented using an IBC transmitter on an FPGA board. PPM is a modulation technique that uses time-based pulse characteristics to encode data based on IR concepts. The transmission performance of the scheme was evaluated through signal propagation measurements on the human arm using 4- and 8-PPM transmitters, where 4 and 8 denote the number of symbols used in the modulation. It was found that the received signal-to-noise ratio (SNR) decreases by approximately 8.0 dB over a range of arm distances (5-50 cm) between the transmitter and receiver electrodes, with constant noise power and varying signal amplitudes. The SNR for the 4-PPM scheme is approximately 2 dB higher than that for the 8-PPM one. In addition, the bit error rate (BER) is theoretically analyzed for the human body channel with additive white Gaussian noise. The 4- and 8-PPM IBC systems have average BER values of 10^-5 and 10^-10, respectively. The results indicate the superiority of the 8-PPM scheme compared to the 4-PPM one when implementing the IBC system. The performance evaluation of the proposed IBC system will inform further IBC transceiver design.
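For orientation, the sketch below computes textbook union-bound error rates for coherent M-ary orthogonal signaling, which M-PPM approximates on an AWGN channel; this is a generic approximation, not the paper's measured body-channel results, and the SNR values are arbitrary.

```python
# Hedged sketch: union-bound error rates for M-ary orthogonal signaling (e.g.,
# M-PPM) on an AWGN channel -- a textbook approximation, not the measured
# body-channel results above.
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ppm_union_bound(m, es_over_n0):
    """Symbol and bit error rates for coherent M-ary orthogonal signaling."""
    ps = (m - 1) * q_func(math.sqrt(es_over_n0))  # union bound on symbol errors
    pb = ps * (m / 2) / (m - 1)                   # symbol-to-bit error conversion
    return ps, pb

for m in (4, 8):
    for snr_db in (10, 13, 16):
        es_n0 = 10 ** (snr_db / 10)
        ps, pb = ppm_union_bound(m, es_n0)
        print(f"M={m}, Es/N0={snr_db} dB: Ps<={ps:.2e}, Pb~{pb:.2e}")
```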
Christantoni, Maria; Damigos, Dimitris
2018-05-15
In many instances, Contingent Valuation practitioners rely on voluntary monetary contributions, despite the fact that they are deemed to be neither incentive compatible in theory nor demand revealing in practice. The reason is that they are suitable for most field applications and offer benefits that may outweigh their drawbacks. This paper endeavors to contribute to the literature by exploring the effect of donation payments with differing incentive structures and information levels on contingent values and on respondents' uncertainty regarding the donations declared. To this end, a field survey was conducted using a sample of 332 respondents who were randomly assigned to one of three different mechanisms: (1) individual contribution (hereinafter CVM treatment); (2) individual contribution with provision point mechanism (PPM), where the total cost of the project is unknown (hereinafter PPM treatment); and (3) individual contribution with PPM, where the total cost of the project is known (hereinafter PPM-INF treatment). The results indicate that there are no statistically significant differences in willingness to pay (WTP) estimates between the CVM and PPM treatments nor between the PPM and the PPM-INF treatments. The results also indicate that the PPM has a positive effect on respondents' certainty level, but there is no evidence that the certainty level is affected by the project information cost. The results are mixed compared to previous research efforts. Thus, further tests are necessary in field comparisons and under different information environments before any definite recommendations can be made. Copyright © 2017 Elsevier B.V. All rights reserved.
Morphologic study of three collagen materials for body wall repair.
Soiderer, Emily E; Lantz, Gary C; Kazacos, Evelyn A; Hodde, Jason P; Wiegand, Ryan E
2004-05-15
The search for ideal prostheses for body wall repair continues. Synthetic materials such as polypropylene mesh (PPM) are associated with healing complications. A porcine-derived collagen-based material (CBM), small intestinal submucosa (SIS), has been studied for body wall repair. Renal capsule matrix (RCM) and urinary bladder submucosa (UBS) are CBMs not previously evaluated in this application; this is the first implant study using RCM. Full-thickness muscle/fascia ventral abdominal wall defects were repaired with SIS, RCM, UBS, and PPM in rats with omentum and with omentectomy. A randomized complete block design was used to allot implant type to each of 96 rats. Healing was evaluated at 4 and 8 weeks. Adhesion tenacity and surface area were scored. Implant site dimensions were measured at implantation and necropsy. Inflammation, vascularization, and fibrosis were scored histopathologically. Data were compared by analysis of variance (P < 0.05). PPM produced a granulomatous foreign body response, in contrast to the organized healing of CBM implants. CBM mean scores were lower than PPM scores for adhesion tenacity, surface area, and inflammation at each follow-up time for rats with omentums (P < 0.02). The CBMs had less tenacity and inflammation than PPM at each follow-up time in omentectomy groups (P < 0.008). Wound contraction was greater for PPM (P < 0.0001) for all rats. RCM and UBS were similar to SIS, eliciting reduced inflammation, adhesion, and contraction compared to PPM. The fibrotic response to PPM was unique and more intense compared to the CBMs. These CBM implants appear morphologically acceptable and warrant continued investigation.
Errors in causal inference: an organizational schema for systematic error and random error.
Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji
2016-11-01
The aim is to provide an organizational schema for systematic error and random error in estimating causal measures, clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
Hegde, Shashikanth; Rao, B H Sripathi; Kakar, Ravish Chander; Kakar, Ashish
2013-05-01
To evaluate the clinical relief from dentin hypersensitivity among subjects provided with a dentifrice formulated with 8% arginine, calcium carbonate, and 1,000 ppm fluoride [as sodium monofluorophosphate (MFP)] in comparison to those issued a commercially available dentifrice containing 1,000 ppm fluoride [as sodium monofluorophosphate (MFP)]. Clinical evaluations of hypersensitivity were performed with a novel tactile hypersensitivity measuring instrument, the Jay Sensitivity Sensor (Jay) Probe, in conjunction with evaporative triggers by air blast (Schiff scale) and Visual Analog Scores (VAS). Qualified adults from the Mangalore, India area who presented two teeth with dentin hypersensitivity were enrolled in this double-blind, randomized, parallel, controlled clinical trial conducted in an outpatient clinical setting. At baseline, dentin hypersensitivity was evaluated by the Jay Probe (tactile), air blast, and VAS methods. Subjects were randomly issued a study dentifrice and instructed to brush their teeth for 1 minute twice daily with the provided dentifrice. Clinical evaluations of hypersensitivity were repeated after 2, 4, and 8 weeks of product use. Eighty-six subjects (35 males and 51 females) complied with the study protocol and completed the entire study. At each recall visit, both treatment groups demonstrated significant reductions in dentin hypersensitivity from their corresponding baselines (P < 0.05). Subjects assigned the 8% arginine, calcium carbonate, and 1,000 ppm fluoride dentifrice demonstrated statistically significant reductions in responses to tactile stimuli, air blast, and VAS in comparison to those using the dentifrice containing 1,000 ppm fluoride after 2, 4, and 8 weeks, respectively.
1986-02-01
[Extraction residue of a vendor table and report contents: the table listed electronic distance-measuring instruments (e.g., Geodimeter models from Novato, CA) with accuracies quoted in the form ±(5 mm + 5 ppm) and list prices; the recoverable contents entries are Positioning Limitations for Dredged Material Disposal, Barge Maneuverability, Limitations of Positioning Methods, Accuracy and Error, and Site-Related considerations.]
The Anomalous Magnetic Moment of the Muon
NASA Astrophysics Data System (ADS)
Hughes, Vernon W.; Sichtermann, Ernst P.
2002-12-01
A precise measurement of the anomalous g value, a = (g - 2)/2, for the positive muon has been made at the Brookhaven Alternating Gradient Synchrotron. The result a
NASA Astrophysics Data System (ADS)
Molaro, P.; Centurión, M.; Whitmore, J. B.; Evans, T. M.; Murphy, M. T.; Agafonova, I. I.; Bonifacio, P.; D'Odorico, S.; Levshakov, S. A.; Lopez, S.; Martins, C. J. A. P.; Petitjean, P.; Rahmani, H.; Reimers, D.; Srianand, R.; Vladilo, G.; Wendt, M.
2013-07-01
Context. Absorption-line systems detected in quasar spectra can be used to compare the value of the fine-structure constant, α, measured today on Earth with its value in distant galaxies. In recent years, some evidence has emerged of small temporal and also spatial variations in α on cosmological scales. These variations may reach a fractional level of ≈ 10 ppm (parts per million). Aims: To test these claims we are conducting a Large Program of observations with the Very Large Telescope's Ultraviolet and Visual Echelle Spectrograph (UVES), and are obtaining high-resolution (R ≈ 60 000) and high signal-to-noise ratio (S/N ≈ 100) UVES spectra calibrated specifically for this purpose. Here we analyse the first complete quasar spectrum from this programme, that of HE 2217-2818. Methods: We applied the many multiplet method to measure α in five absorption systems towards this quasar: zabs = 0.7866, 0.9424, 1.5558, 1.6279, and 1.6919. Results: The most precise result is obtained for the absorber at zabs = 1.6919, where 3 Fe ii transitions and Al ii λ1670 have high S/N and provide a wide range of sensitivities to α. The absorption profile is complex, with several very narrow features, and it requires 32 velocity components to be fitted to the data. We also conducted a range of tests to estimate the systematic error budget. Our final result for the relative variation in α in this system is Δα/α = +1.3 ± 2.4stat ± 1.0sys ppm. This is one of the tightest current bounds on α-variation from an individual absorber. A second, separate approach to the data reduction, calibration, and analysis of this system yielded a slightly different result of -3.8 ppm, possibly suggesting a larger systematic error component than our tests indicated. This approach used an additional 3 Fe ii transitions, parts of which were masked due to contamination by telluric features. Restricting this analysis to the Fe ii transitions alone and using a modified absorption profile model gave a result that is consistent with the first approach, Δα/α = +1.1 ± 2.6stat ppm. The four other absorbers have simpler absorption profiles, with fewer and broader features, and offer transitions with a narrower range of sensitivities to α. They therefore provide looser bounds on Δα/α at the ≳10 ppm precision level. Conclusions: The absorbers towards quasar HE 2217-2818 reveal no evidence of any variation in α at the 3-ppm precision level (1σ confidence). If the recently reported 10-ppm dipolar variation in α across the sky is correct, the expectation at this sky position is (3.2-5.4) ± 1.7 ppm, depending on the dipole model used. Our constraint of Δα/α = +1.3 ± 2.4stat ± 1.0sys ppm is not inconsistent with this expectation. Based on observations taken at ESO Paranal Observatory under Large Program 185.A-0745. Tables 4-8 are available in electronic form at http://www.aanda.org
Reproduction and organochlorine contaminants in terns at San Diego Bay
Ohlendorf, H.M.; Schaffner, F.C.; Custer, T.W.; Stafford, C.J.
1985-01-01
In 1981, we studied Caspian Terns (Sterna caspia) and Elegant Terns (S. elegans) nesting at the south end of San Diego Bay, California. Randomly collected Caspian Tern eggs contained significantly (P < 0.05) higher mean concentrations of DDE (9.30 ppm) than did Elegant Tern eggs (3.79 ppm). DDE may have had an adverse effect on Caspian Tern reproduction, but the relationship between hatching success and DDE concentration was not clear. We found an unusually high incidence of chicks (4.6%) that died in hatching. Caspian Tern eggs that broke during incubation or contained chicks that died while hatching had shells that were significantly (P < 0.05) thinner than eggs collected before 1947, and DDE was associated with reductions in shell thickness index (i.e., lowered eggshell density). Fish brought to Caspian Tern chicks contained up to 3.0 ppm DDE and 1.1 ppm PCBs. Organochlorine concentrations in the brains of terns found dead were not high enough to suggest such poisoning as a cause of death.
NASA Technical Reports Server (NTRS)
Usry, J. W.; Witte, W. G.; Whitlock, C. H.; Gurganus, E. A.
1979-01-01
Experimental measurements were made of upwelled spectral signatures of various concentrations of industrial waste products mixed with water in a large water tank. Radiance and reflectance spectra for a biosolid waste product (sludge) mixed with conditioned tap water and natural river water are reported. Results of these experiments indicate that reflectance increases with increasing concentration of the sludge at practically all wavelengths, for concentrations of total suspended solids up to 117 ppm in conditioned tap water and 171 ppm in natural river water. Significant variations in the spectra were observed and may be useful in defining spectral characteristics for this waste product. No significant spectral differences were apparent between the reflectance spectra of the two experiments, especially for wavelengths greater than 540 nm. Reflectance values, however, were generally greater in natural river water for wavelengths greater than 540 nm. Reflectance may be considered to increase linearly with the concentration of total suspended solids from 5 to 171 ppm at all wavelengths without introducing errors larger than 10 percent.
NASA Astrophysics Data System (ADS)
Sioris, C. E.; Boone, C. D.; Nassar, R.; Sutton, K. J.; Gordon, I. E.; Walker, K. A.; Bernath, P. F.
2014-02-01
An algorithm is developed to retrieve the vertical profile of carbon dioxide in the 5 to 25 km altitude range using mid-infrared solar occultation spectra from the main instrument of the ACE (Atmospheric Chemistry Experiment) mission, namely the Fourier Transform Spectrometer (FTS). The main challenge is to find an atmospheric phenomenon which can be used for accurate tangent height determination in the lower atmosphere, where the tangent heights (THs) calculated from geometric and timing information are not sufficiently accurate. Error budgets for the retrieval of CO2 from ACE-FTS and the FTS on a potential follow-on mission named CASS (Chemical and Aerosol Sounding Satellite) are calculated and contrasted. Retrieved THs are typically within 60 m of those retrieved using the ACE version 3.x software after revisiting the temperature dependence of the N2 CIA (Collision-Induced Absorption) laboratory measurements and accounting for sulfate aerosol extinction. After correcting for the known residual high bias of ACE version 3.x THs expected from CO2 spectroscopic/isotopic inconsistencies, the remaining bias for tangent heights determined with the N2 CIA is -20 m. CO2 in the 2009-2011 time frame is validated against aircraft measurements from CARIBIC, CONTRAIL, and HIPPO, yielding typical biases of -1.7 ppm in the 5-13 km range. The standard error of these biases in this vertical range is 0.4 ppm. The multi-year ACE-FTS dataset is valuable in determining the seasonal variation of the latitudinal gradient, which arises from the strong seasonal cycle in the Northern Hemisphere troposphere. The annual growth of CO2 in this time frame is determined to be 2.5 ± 0.7 ppm yr-1, in agreement with the currently accepted global growth rate based on ground-based measurements.
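Growth rates like the 2.5 ppm yr-1 above are commonly obtained by fitting a linear trend plus seasonal harmonics to the time series. The sketch below does this on a synthetic XCO2 series; the data, harmonic amplitudes, and noise level are invented, not ACE-FTS retrievals.

```python
# Hedged sketch: estimate annual CO2 growth by fitting trend + seasonal
# harmonics to a synthetic XCO2 series (illustrative, not ACE-FTS data).
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0, 3, 1 / 24)                       # 3 years, semi-monthly (years)
xco2 = 386 + 2.5 * t + 3.0 * np.sin(2 * np.pi * t) \
       + 1.0 * np.sin(4 * np.pi * t) + rng.normal(0, 0.5, t.size)

# Design matrix: intercept, trend, annual and semi-annual harmonics.
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                     np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, xco2, rcond=None)
resid = xco2 - X @ coef
se = np.sqrt(np.sum(resid**2) / (t.size - X.shape[1])
             * np.linalg.inv(X.T @ X)[1, 1])      # standard error of the trend
print(f"growth rate = {coef[1]:.2f} +/- {se:.2f} ppm/yr")
```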
Munigunti, Ranjith; Nelson, Nicholas; Mulabagal, Vanisree; Gupta, Mahabir P; Brun, Reto; Calderón, Angela I
2011-10-01
Our current research on applications of mass spectrometry to natural product drug discovery against malaria aims to screen plant extracts for new ligands to Plasmodium falciparum thioredoxin reductase (PfTrxR) followed by their identification and structure elucidation. PfTrxR is involved in the antioxidant defense and redox regulation of the parasite and is validated as a promising target for therapeutic intervention against malaria. In the present study, detannified methanol extracts from Guatteria recurvisepala, Licania kallunkiae, and Topobea watsonii were screened for ligands to PfTrxR using ultrafiltration and liquid chromatography/mass spectrometry-based binding experiments. The PfTrxR ligand identified in the extract of Guatteria recurvisepala displayed a relative binding affinity of 3.5-fold when incubated with 1 μM PfTrxR. The ligand corresponding to the protonated molecule m/z 282.2792 [M + H]+ was eluted at a retention time of 17.95 min in a 20-min gradient of 95% B consisting of (A) 0.1% formic acid in 95% H₂O-5% ACN, and (B) 0.1% formic acid in 95% ACN-5% H₂O in an LC-QTOF-MS. Tandem MS of the protonated molecule m/z 282.2792 [M + H]+, C₁₈H₃₆NO (DBE: 2; error: 1.13 ppm), resulted in two daughter ions, m/z 265.2516 [M + H-NH₃]+ (DBE: 3; error: 0.35 ppm) and m/z 247.2405 [M + H-NH₃-H₂O]+ (DBE: 4; error: 2.26 ppm). The PfTrxR ligand was identified as oleamide and confirmed by comparison of the retention time, molecular formula, accurate mass, and double bond equivalence with the standard oleamide. This is the first report on the identification of oleamide as a PfTrxR ligand from Guatteria recurvisepala R. E. Fr. and the corresponding in vitro activity against P. falciparum strain K1 (IC₅₀ 4.29 μg/mL). © Georg Thieme Verlag KG Stuttgart · New York.
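The ppm errors quoted for each ion follow the standard mass-accuracy definition, error = (m_observed − m_theoretical) / m_theoretical × 10^6. A one-line sketch follows; the theoretical mass used is a placeholder chosen only to illustrate the arithmetic, not the exact mass of the assigned formula.

```python
# Hedged sketch: the quoted ppm errors follow the standard mass-accuracy
# definition, error_ppm = (m_observed - m_theoretical) / m_theoretical * 1e6.
# The theoretical mass below is a placeholder to illustrate the arithmetic.
def mass_error_ppm(m_obs, m_theo):
    return (m_obs - m_theo) / m_theo * 1e6

m_obs = 282.2792                 # observed [M + H]+ from the study
m_theo = 282.27895               # placeholder theoretical mass (illustrative)
print(f"{mass_error_ppm(m_obs, m_theo):+.2f} ppm")
```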
NASA Astrophysics Data System (ADS)
Jung, Yeonjin; Kim, Jhoon; Kim, Woogyung; Boesch, Hartmut; Goo, Tae-Young; Cho, Chunho
2017-04-01
Although several CO2 retrieval algorithms have been developed to improve our understanding of the carbon cycle, limited spatial coverage and uncertainties due to aerosols and thin cirrus clouds remain problems for monitoring CO2 concentrations globally. Based on an optimal estimation method, the Yonsei CArbon Retrieval (YCAR) algorithm was developed to retrieve the column-averaged dry-air mole fraction of carbon dioxide (XCO2) from Greenhouse Gases Observing SATellite (GOSAT) measurements with optimized a priori CO2 profiles and aerosol models over East Asia. In previous studies, aerosol optical properties (AOPs) were among the most important factors in CO2 retrievals, since AOPs were assumed to be fixed parameters during the retrieval process, resulting in significant XCO2 retrieval errors of up to 2.5 ppm. In this study, to reduce the errors caused by inaccurate aerosol optical information, the YCAR algorithm was improved to take into account aerosol optical properties as well as the aerosol vertical distribution simultaneously. CO2 retrievals with the two different aerosol approaches have been analyzed using GOSAT spectra and evaluated through comparison with collocated ground-based observations at several Total Carbon Column Observing Network (TCCON) sites. The improved YCAR algorithm has biases of 0.59 ± 0.48 ppm and 2.16 ± 0.87 ppm at the Saga and Tsukuba sites, respectively, with smaller biases and higher correlation coefficients compared to the GOSAT operational algorithm. In addition, the XCO2 retrievals will be validated at other TCCON sites and the error analysis will be extended. These results reveal that better aerosol information can improve the accuracy of a CO2 retrieval algorithm and provide more useful XCO2 information with reduced uncertainties. This study is expected to provide useful input for estimating carbon sources and sinks.
A ventilation intervention study in classrooms to improve indoor air quality: the FRESH study.
Rosbach, Jeannette T M; Vonk, Machiel; Duijm, Frans; van Ginkel, Jan T; Gehring, Ulrike; Brunekreef, Bert
2013-12-17
Classroom ventilation rates often do not meet building standards, although ventilation is considered important for improving indoor air quality. Poor indoor air quality is thought to influence both children's health and performance, and poor ventilation in The Netherlands most often occurs in the heating season. To improve classroom ventilation, a tailor-made mechanical ventilation device was developed to increase outdoor air supply. This paper studies the effect of this intervention. The FRESH study (Forced-ventilation Related Environmental School Health) was designed to investigate the effect of a CO2-controlled mechanical ventilation intervention on classroom CO2 levels using a longitudinal crossover design. Target CO2 concentrations were 800 and 1200 parts per million (ppm), respectively. The study included 18 classrooms from 17 schools in the north-eastern part of The Netherlands: 12 experimental classrooms and 6 control classrooms. Data on indoor levels of CO2, temperature, and relative humidity were collected during three consecutive weeks per school during the heating seasons of 2010-2012. Associations between the intervention and weekly average indoor CO2 levels, classroom temperature, and relative humidity were assessed by means of mixed models with random school effects. At baseline, the mean CO2 concentration for all schools was 1335 ppm (range: 763-2000 ppm). The intervention significantly decreased CO2 levels in the intervention classrooms (F(2,10) = 17.59, p < 0.001), with a mean decrease of 491 ppm. With the target set at 800 ppm, mean CO2 was 841 ppm (range: 743-925 ppm); with the target set at 1200 ppm, mean CO2 was 975 ppm (range: 887-1077 ppm). Although the device was not capable of precisely achieving the two predefined levels of CO2, our study showed that classroom CO2 levels can be reduced by intervening on classroom ventilation using a CO2-controlled mechanical ventilation system.
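Mixed models with random school effects of the kind used in this analysis can be fitted directly with statsmodels' MixedLM. The sketch below does so on a mock dataset whose column names, effect sizes, and noise levels are invented to echo the reported magnitudes, not the FRESH data.

```python
# Hedged sketch: mixed model with random school effects for weekly CO2 levels,
# as in the analysis above. Dataset and column names are invented placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
schools = np.repeat(np.arange(18), 6)                 # 18 schools, 6 weeks each
intervention = rng.integers(0, 2, size=schools.size)  # 0 = off, 1 = on
school_effect = rng.normal(0, 120, 18)[schools]       # random school intercepts
co2 = 1335 - 491 * intervention + school_effect + rng.normal(0, 80, schools.size)

df = pd.DataFrame({"co2": co2, "intervention": intervention, "school": schools})
model = smf.mixedlm("co2 ~ intervention", df, groups=df["school"]).fit()
print(model.summary())
```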
Some practical problems in implementing randomization.
Downs, Matt; Tucker, Kathryn; Christ-Schmidt, Heidi; Wittes, Janet
2010-06-01
While often theoretically simple, implementing randomization to treatment in a masked, but confirmable, fashion can prove difficult in practice. At least three categories of problems occur in randomization: (1) bad judgment in the choice of method, (2) design and programming errors in implementing the method, and (3) human error during the conduct of the trial. This article focuses on these latter two types of errors, dealing operationally with what can go wrong after trial designers have selected the allocation method. We offer several case studies and corresponding recommendations for lessening the frequency of problems in allocating treatment or for mitigating the consequences of errors. Recommendations include: (1) reviewing the randomization schedule before starting a trial, (2) being especially cautious of systems that use on-demand random number generators, (3) drafting unambiguous randomization specifications, (4) performing thorough testing before entering a randomization system into production, (5) maintaining a dataset that captures the values investigators used to randomize participants, thereby allowing the process of treatment allocation to be reproduced and verified, (6) resisting the urge to correct errors that occur in individual treatment assignments, (7) preventing inadvertent unmasking to treatment assignments in kit allocations, and (8) checking a sample of study drug kits to allow detection of errors in drug packaging and labeling. Although we performed a literature search of documented randomization errors, the examples that we provide and the resultant recommendations are based largely on our own experience in industry-sponsored clinical trials. We do not know how representative our experience is or how common errors of the type we have seen occur. Our experience underscores the importance of verifying the integrity of the treatment allocation process before and during a trial. Clinical Trials 2010; 7: 235-245. http://ctj.sagepub.com.
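Several of the recommendations above (pre-generated schedules, avoiding on-demand generators, keeping the values used to randomize) amount to one practice: build the entire allocation list up front from a recorded seed and archive it, so the allocation process can later be reproduced and verified. A minimal permuted-block sketch follows; the seed, block size, and arm labels are illustrative.

```python
# Hedged sketch: pre-generated, reproducible permuted-block randomization.
# The schedule is created once from a recorded seed and archived, so treatment
# allocation can be verified later (block size and arm labels are illustrative).
import csv
import random

SEED, ARMS, BLOCK, N_BLOCKS = 20100601, ["A", "B"], 4, 25

rng = random.Random(SEED)                 # recorded seed -> reproducible schedule
schedule = []
for _ in range(N_BLOCKS):
    block = ARMS * (BLOCK // len(ARMS))   # balanced arms within each block
    rng.shuffle(block)
    schedule.extend(block)

with open("randomization_schedule.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["allocation_number", "treatment"])
    w.writerows(enumerate(schedule, start=1))
print(f"seed={SEED}, first block: {schedule[:4]}")
```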
NASA Technical Reports Server (NTRS)
Blucker, T. J.; Ferry, W. W.
1971-01-01
An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.
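An error model of this kind typically combines the independent random components in quadrature and sums the bias components. A small illustrative sketch (the degree values below are placeholders, not the Apollo 15 field-test numbers):

```python
# Sketch of a simple heading-error budget of the kind the abstract describes:
# independent random errors combine in quadrature, while bias errors add
# linearly. All numeric values are illustrative assumptions.
import math

random_tilt_deg     = 0.30   # random error from tilt in leveling (assumed)
random_sighting_deg = 0.25   # random error from observer sighting (assumed)
bias_mean_tilt_deg  = 0.10   # bias from mean tilt in compass leveling (assumed)
bias_instrument_deg = 0.15   # bias in the sun compass itself (assumed)
bias_terrain_deg    = 0.20   # bias from leveling to local terrain slope (assumed)

sigma_random = math.hypot(random_tilt_deg, random_sighting_deg)
total_bias   = bias_mean_tilt_deg + bias_instrument_deg + bias_terrain_deg

print(f"random 1-sigma: {sigma_random:.2f} deg, systematic bias: {total_bias:.2f} deg")
```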
Ensemble-type numerical uncertainty information from single model integrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter
2015-07-01
We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are comparable in size to those of a stochastic physics ensemble.
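A heavily simplified sketch of the idea, with synthetic stand-ins for both the calibrated local-error scale and the automatic-differentiation sensitivities (both assumed here, not taken from the paper):

```python
# Simplified sketch: treat local discretization errors as draws from a random
# process calibrated from a short coarse-vs-fine comparison, weight them by
# goal sensitivities, and accumulate an ensemble of goal-error realizations.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_members = 200, 50

# Local-error scale, e.g. from |coarse - fine| statistics over a short run
# at two resolutions (assumed value):
local_error_scale = 1e-4

# Goal sensitivities dG/dq_t, e.g. from automatic differentiation (assumed):
sensitivity = np.linspace(1.0, 0.2, n_steps)

# Each ensemble member integrates one realization of the local-error process.
local_errors = rng.normal(0.0, local_error_scale, size=(n_members, n_steps))
goal_error_ensemble = np.cumsum(local_errors * sensitivity, axis=1)

# Posterior uncertainty of the goal's discretization error over time:
spread = goal_error_ensemble.std(axis=0)
print(f"estimated goal-error std at final time: {spread[-1]:.2e}")
```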
Takeshita, Eliana M; Danelon, Marcelle; Castro, Luciene P; Cunha, Robson F; Delbem, Alberto C B
2016-01-01
To evaluate the effect of a low-fluoride (F) toothpaste supplemented with sodium trimetaphosphate (TMP) on enamel remineralization in situ. Bovine enamel blocks were selected on the basis of their surface hardness (SH) after caries-like lesions had been induced, and randomly divided into 4 treatment groups according to the toothpaste used: without F or TMP (placebo); 500 ppm F; 500 ppm F plus 1% TMP; and 1,100 ppm F. The study had a blinded, crossover design and was performed in 4 phases of 3 days each. Eleven subjects wore palatal appliances containing 4 bovine enamel blocks, which were treated 3 times per day, for 1 min each time, with natural slurries of saliva and toothpaste formed in the oral cavity during toothbrushing. After each phase, the percentages of surface (%SHR) and subsurface hardness recovery (%ΔKHNR) were calculated. F, calcium (Ca), and phosphorus (Pi) contents in enamel were also determined. Data were analyzed by 1-way, repeated-measures ANOVA, followed by the Student-Newman-Keuls test (p < 0.05). The toothpastes with 500 ppm F + TMP and 1,100 ppm F showed similar %SHR and %ΔKHNR, as well as similar enamel F, Ca, and Pi concentrations. The addition of TMP to a low-fluoride toothpaste promoted a remineralizing capacity similar to that of a standard (1,100 ppm F) toothpaste in situ. © 2016 S. Karger AG, Basel.
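For reference, %SHR in such designs is conventionally computed from the sound, demineralized, and post-treatment hardness values; a minimal sketch with illustrative Knoop numbers (variable names ours, not the authors'):

```python
# Minimal sketch of the hardness-recovery percentage used in such in situ
# designs; the formula below is the conventional definition.
def pct_surface_hardness_recovery(sh_sound, sh_demin, sh_post):
    """%SHR: fraction of the hardness lost to the caries-like lesion
    that was recovered after the remineralization phase, in percent."""
    return 100.0 * (sh_post - sh_demin) / (sh_sound - sh_demin)

# Example with illustrative Knoop-hardness numbers:
print(pct_surface_hardness_recovery(sh_sound=330.0, sh_demin=120.0, sh_post=250.0))
# -> 61.9... percent recovery
```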
Effectiveness of two nitrous oxide scavenging nasal hoods during routine pediatric dental treatment.
Chrysikopoulou, Aikaterini; Matheson, Pamela; Milles, Maano; Shey, Zia; Houpt, Milton
2006-01-01
This study compared the effectiveness of 2 nasal hoods (Porter/Brown and Accutron) in reducing waste nitrous oxide gas during conscious sedation for routine pediatric dental treatment. Thirty children, ages 3 to 8 years (mean=5.4 +/- 1.2 years), participated in this study. Fifteen randomly selected children started with the Porter/Brown mask, which was then switched to the Accutron mask, and the other 15 children used the reverse order of masks. Four measurements of ambient nitrous oxide were recorded with a Miran 205B Portable Ambient Air Analyzer 5 minutes after each of the following: (1) administration of nitrous oxide; (2) placement of the rubber dam; (3) change of the nasal hood; and (4) reduction of the vacuum. Samples were taken 8 inches above the nose of the patient and in the room 5 feet away from the patient. Nitrous oxide levels were significantly lower (P<.05) with the Porter/Brown system (31 +/- 40 ppm for the patient and 8 +/- 10 ppm for the room) compared with the Accutron system (375 +/- 94 ppm for the patient and 101 +/- 37 ppm for the room). When the suction was reduced, there was an increase in nitrous oxide levels with the Porter/Brown nasal hood (169 +/- 112 ppm for the patient and 28 +/- 18 ppm for the room), whereas the levels with the Accutron nasal hood remained high (368 +/- 107 ppm for the patient and 121 +/- 50 ppm for the room). This study demonstrated that removal of waste nitrous oxide was greater with the Porter/Brown device and that recommended suction levels must be used for optimum effectiveness.
Bondu, Joseph Dian; Selvakumar, R; Fleming, Jude Joseph
2018-01-01
A variety of methods, including the Ion Selective Electrode (ISE), have been used for estimation of fluoride levels in drinking water. As these methods suffer from several drawbacks, the newer ion chromatography (IC) method has replaced many of them. The study aimed at (1) validating IC for estimation of fluoride levels in drinking water and (2) assessing the drinking water fluoride levels of villages in and around Vellore district using IC. Forty-nine paired drinking water samples were measured using the ISE and IC methods (Metrohm). Water samples from 165 randomly selected villages in and around Vellore district were collected for fluoride estimation over 1 year. Standardization of the IC method showed good within-run precision, linearity and coefficient of variation, with correlation coefficient R² = 0.998. The limit of detection was 0.027 ppm and the limit of quantification was 0.083 ppm. Among the 165 villages, 46.1% recorded water fluoride levels >1.00 ppm, of which 19.4% had levels ranging from 1 to 1.5 ppm, 10.9% had levels of 1.5-2 ppm and about 12.7% had levels of 2.0-3.0 ppm. Three percent of villages had more than 3.0 ppm fluoride in the water tested. Most (44.42%) of these villages belonged to Jolarpet taluk, with moderate to high (0.86-3.56 ppm) water fluoride levels. The ion chromatography method has been validated and is therefore a reliable method for assessment of fluoride levels in drinking water. The residents of Jolarpet taluk (Vellore district) are thus at high risk of developing dental and skeletal fluorosis.
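Limits of detection and quantification like those quoted above are commonly derived from the calibration line as 3.3 and 10 times the residual standard deviation divided by the slope; a sketch with made-up standards (not the study's raw data):

```python
# Sketch of how an IC method's LOD and LOQ are commonly derived from a
# calibration line: LOD = 3.3*sigma/slope, LOQ = 10*sigma/slope.
# The standards and responses below are illustrative assumptions.
import numpy as np

conc = np.array([0.1, 0.25, 0.5, 1.0, 2.0])     # fluoride standards, ppm
area = np.array([0.82, 2.1, 4.0, 8.1, 16.3])    # detector response (assumed)

slope, intercept = np.polyfit(conc, area, 1)
residual_sd = np.std(area - (slope * conc + intercept), ddof=2)

lod = 3.3 * residual_sd / slope
loq = 10.0 * residual_sd / slope
print(f"LOD ~ {lod:.3f} ppm, LOQ ~ {loq:.3f} ppm")
```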
Kakar, A; Kakar, K; Sreenivasan, P K; DeVizio, W; Kohli, R
2012-01-01
This clinical study compared relief from dentin hypersensitivity among subjects who brushed their teeth with a new dentifrice containing 8.0% arginine, calcium carbonate, and 1000 ppm fluoride as sodium monofluorophosphate (MFP) with that among subjects who brushed with a commercially available dentifrice containing 1000 ppm MFP, over an eight-week period. Adult subjects from the New Delhi, India area, with two teeth that exhibited dentin hypersensitivity both to tactile stimulation using the Yeaple Probe and to stimulation using an air blast delivered by a standard dental unit syringe, were screened for study enrollment. Qualifying subjects were randomly assigned one of the study dentifrices and instructed to brush their teeth for one minute, twice daily (morning and evening), with the provided dentifrice. Follow-up examinations for dentin hypersensitivity were conducted after two, four, and eight weeks of product use. Subjects provided with the new dentifrice containing 8.0% arginine, calcium carbonate, and 1000 ppm MFP exhibited statistically significantly (p < 0.05) greater reductions in dentin hypersensitivity in response to tactile (81.9%, 90.5%, and 116.7%) and air blast (39.5%, 56.7%, and 76.7%) stimuli than subjects assigned the 1000 ppm MFP dentifrice after two, four, and eight weeks, respectively. The use of a new dentifrice containing 8.0% arginine, calcium carbonate, and 1000 ppm MFP provides superior efficacy in reducing dentin hypersensitivity (p < 0.05) compared with a control dentifrice containing 1000 ppm MFP alone after two, four, and eight weeks of use.
Random errors in interferometry with the least-squares method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noise is present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source, and between random error and the amplitude of the interference fringe.
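The intensity-noise case can be reproduced numerically in a few lines: simulate phase-shifted intensities with additive noise, apply the least-squares phase estimator, and convert the phase scatter to height. A Monte Carlo sketch under assumed parameters (not the paper's configuration):

```python
# Monte Carlo sketch of how intensity noise propagates into surface-height
# error for least-squares phase-shifting interferometry (4 equal phase steps).
import numpy as np

rng = np.random.default_rng(1)
lam, A, B, sigma_I = 633e-9, 1.0, 0.8, 0.01  # wavelength (m), offset,
                                             # fringe amplitude, noise (assumed)
deltas = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])
phi_true = 0.7

trials = 10_000
I = A + B * np.cos(phi_true + deltas) + rng.normal(0, sigma_I, (trials, deltas.size))

# Least-squares phase estimate for equally spaced phase steps:
phi_hat = np.arctan2(-(I * np.sin(deltas)).sum(axis=1),
                     (I * np.cos(deltas)).sum(axis=1))

height = phi_hat * lam / (4 * np.pi)   # phase-to-height conversion
print(f"height std from intensity noise: {height.std() * 1e9:.3f} nm")
```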
Methods for recalibration of mass spectrometry data
Tolmachev, Aleksey V [Richland, WA]; Smith, Richard D [Richland, WA]
2009-03-03
Disclosed are methods for recalibrating mass spectrometry data that provide improvement in both mass accuracy and precision by adjusting for experimental variance in parameters that have a substantial impact on mass measurement accuracy. Optimal coefficients are determined using correlated pairs of mass values compiled by matching sets of measured and putative mass values that minimize overall effective mass error and mass error spread. Coefficients are subsequently used to correct mass values for peaks detected in the measured dataset, providing recalibration thereof. Sub-ppm mass measurement accuracy has been demonstrated on a complex fungal proteome after recalibration, providing improved confidence for peptide identifications.
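The core of the recalibration idea can be sketched with a simple linear gain/offset model fitted to matched measured/putative pairs (the masses below are invented calibrants, and the patent's actual coefficient model may differ):

```python
# Sketch: fit correction coefficients that minimize the relative (ppm) mass
# error over matched calibrant pairs, then apply them to all detected peaks.
# A linear gain/offset model is assumed here for illustration.
import numpy as np

measured = np.array([500.2612, 842.5109, 1296.6853, 1570.6774])   # assumed
putative = np.array([500.2602, 842.5094, 1296.6830, 1570.6745])   # assumed

# Fit m_true ~ a * m_meas + b on the matched pairs:
a, b = np.polyfit(measured, putative, 1)

def recalibrate(m):
    return a * m + b

ppm_before = 1e6 * (measured - putative) / putative
ppm_after  = 1e6 * (recalibrate(measured) - putative) / putative
print(f"mean |error| before: {abs(ppm_before).mean():.2f} ppm, "
      f"after: {abs(ppm_after).mean():.2f} ppm")
```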
Determination of fluoride in water - A modified zirconium-alizarin method
Lamar, W.L.
1945-01-01
A convenient, rapid colorimetric procedure using the zirconium-alizarin indicator acidified with sulfuric acid for the determination of fluoride in water is described. Since this acid indicator is stable indefinitely, it is more useful than other zirconium-alizarin reagents previously reported. The use of sulfuric acid alone in acidifying the zirconium-alizarin reagent makes possible the maximum suppression of the interference of sulfate. Control of the pH of the samples eliminates errors due to the alkalinity of the samples. The fluoride content of waters containing less than 500 parts per million of sulfate and less than 1000 p.p.m. of chloride may be determined within a limit of 0.1 p.p.m. when a 100-ml. sample is used.
NASA Technical Reports Server (NTRS)
Wang, James S.; Kawa, S. Randolph; Eluszkiewicz, Janusz; Collatz, G. J.; Mountain, Marikate; Henderson, John; Nehrkorn, Thomas; Aschbrenner, Ryan; Zaccheo, T. Scott
2012-01-01
Knowledge of the spatiotemporal variations in emissions and uptake of CO2 is hampered by sparse measurements. The recent advent of satellite measurements of CO2 concentrations is increasing the density of measurements, and the future mission ASCENDS (Active Sensing of CO2 Emissions over Nights, Days and Seasons) will provide even greater coverage and precision. Lagrangian atmospheric transport models run backward in time can quantify surface influences ("footprints") of diverse measurement platforms and are particularly well suited for inverse estimation of regional surface CO2 fluxes at high resolution based on satellite observations. We utilize the STILT Lagrangian particle dispersion model, driven by WRF meteorological fields at 40-km resolution, in a Bayesian synthesis inversion approach to quantify the ability of ASCENDS column CO2 observations to constrain fluxes at high resolution. This study focuses on land-based biospheric fluxes, whose uncertainties are especially large, in a domain encompassing North America. We present results based on realistic input fields for 2007. Pseudo-observation random errors are estimated from backscatter and optical depth measured by the CALIPSO satellite. We estimate a priori flux uncertainties based on output from the CASA-GFED (v.3) biosphere model and make simple assumptions about spatial and temporal error correlations. WRF-STILT footprints are convolved with candidate vertical weighting functions for ASCENDS. We find that at a horizontal flux resolution of 1 degree x 1 degree, ASCENDS observations are potentially able to reduce average weekly flux uncertainties by 0-8% in July, and 0-0.5% in January (assuming an error of 0.5 ppm at the Railroad Valley reference site). Aggregated to coarser resolutions, e.g. 5 degrees x 5 degrees, the uncertainty reductions are larger and more similar to those estimated in previous satellite data observing system simulation experiments.
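A toy version of the Bayesian synthesis inversion shows where such uncertainty reductions come from: the posterior covariance shrinks relative to the prior according to the footprints and observation errors. Dimensions and error levels below are illustrative, not the study's configuration:

```python
# Toy sketch of a Bayesian synthesis inversion: posterior flux covariance
# A = (H^T R^-1 H + B^-1)^-1, and uncertainty reduction relative to the prior.
import numpy as np

rng = np.random.default_rng(2)
n_flux, n_obs = 20, 200

H = rng.uniform(0, 0.05, (n_obs, n_flux))   # footprints (Jacobian), assumed
B = np.diag(np.full(n_flux, 1.0**2))        # prior flux error covariance (assumed)
R = np.diag(np.full(n_obs, 0.5**2))         # obs error covariance, e.g. 0.5 ppm

A = np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(B))

reduction = 1.0 - np.sqrt(np.diag(A)) / np.sqrt(np.diag(B))
print(f"uncertainty reduction: {100 * reduction.min():.1f}% "
      f"to {100 * reduction.max():.1f}%")
```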
Improving the power efficiency of SOA-based UWB over fiber systems via pulse shape randomization
NASA Astrophysics Data System (ADS)
Taki, H.; Azou, S.; Hamie, A.; Al Housseini, A.; Alaeddine, A.; Sharaiha, A.
2016-09-01
A simple pulse shape randomization scheme is considered in this paper for improving the performance of ultra-wideband (UWB) communication systems using on-off keying (OOK) or pulse position modulation (PPM) formats. The advantage of the proposed scheme, which can be employed either for impulse radio (IR) or for carrier-based systems, is first theoretically studied based on closed-form derivations of power spectral densities. Then, we investigate an application to an IR-UWB over optical fiber system, utilizing the 4th and 5th orders of Gaussian derivatives. Our approach proves to be effective for 1 Gbps-PPM and 2 Gbps-OOK transmissions, with an advantage in terms of power efficiency for short distances. We also examine the performance of a system employing an in-line Semiconductor Optical Amplifier (SOA) with a view to achieving a reach extension while limiting cost and system complexity.
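The paper's closed-form PSD analysis is not reproduced here, but the qualitative effect of shape randomization can be demonstrated numerically: randomly alternating between 4th- and 5th-order Gaussian-derivative pulses weakens the discrete spectral lines of a fixed-pulse OOK train. A sketch under assumed pulse parameters:

```python
# Sketch: with a fixed pulse, the spectrum of a random OOK train contains
# strong discrete lines at the slot rate; randomly alternating between two
# pulse shapes (4th- and 5th-order Gaussian derivatives) reduces them.
# Pulse width and slot counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
sps, n_bits = 64, 4096                     # samples per slot, number of bits
t = (np.arange(sps) - sps / 2) / sps

def gauss_deriv(order, tau=0.12):
    g = np.exp(-t**2 / (2 * tau**2))
    for _ in range(order):
        g = np.gradient(g)                 # numerical derivative
    return g / np.max(np.abs(g))

pulses = [gauss_deriv(4), gauss_deriv(5)]
bits = rng.integers(0, 2, n_bits)          # OOK data

def waveform(randomize):
    x = np.zeros(sps * n_bits)
    for i, b in enumerate(bits):
        if b:
            p = pulses[rng.integers(0, 2)] if randomize else pulses[0]
            x[i * sps:(i + 1) * sps] = p
    return x

for label, randomize in [("fixed pulse", False), ("randomized", True)]:
    psd = np.abs(np.fft.rfft(waveform(randomize)))**2
    print(label, "peak-to-mean PSD ratio:", f"{psd.max() / psd.mean():.1f}")
```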
Ehrlich, Shelley; Smith, Kristen; Williams, Paige L.; Chavarro, Jorge E.; Batsis, Maria; Toth, Thomas L.; Hauser, Russ
2015-01-01
Total hair mercury (Hg) was measured among 205 women undergoing in vitro fertilization (IVF) treatment and the association with prospectively collected IVF outcomes (229 IVF cycles) was evaluated. Hair Hg levels (median=0.62 ppm, range: 0.03-5.66 ppm) correlated with fish intake (r=0.59), and exceeded the recommended EPA reference of 1 ppm in 33% of women. Generalized linear mixed models with random intercepts accounting for within-woman correlations across treatment cycles were used to evaluate the association of hair Hg with IVF outcomes adjusted for age, body mass index, race, smoking status, infertility diagnosis, and protocol type. Hair Hg levels were not related to ovarian stimulation outcomes (peak estradiol levels, total and mature oocyte yields) or to fertilization rate, embryo quality, clinical pregnancy rate or live birth rate. PMID:25601638
Arcisauskaite, Vaida; Melo, Juan I; Hemmingsen, Lars; Sauer, Stephan P A
2011-07-28
We investigate the importance of relativistic effects on NMR shielding constants and chemical shifts of linear HgL2 (L = Cl, Br, I, CH3) compounds using three different relativistic methods: the fully relativistic four-component approach and the two-component approximations, linear response elimination of small component (LR-ESC) and zeroth-order regular approximation (ZORA). LR-ESC successfully reproduces the four-component results for the C shielding constant in Hg(CH3)2 within 6 ppm, but fails to reproduce the Hg shielding constants and chemical shifts. The latter is mainly due to an underestimation of the change in the spin-orbit contribution. Even though ZORA underestimates the absolute Hg NMR shielding constants by ∼2100 ppm, the differences between Hg chemical shift values obtained using ZORA and the four-component approach without the spin-density contribution to the exchange-correlation (XC) kernel are less than 60 ppm for all compounds using three different functionals, BP86, B3LYP, and PBE0. However, larger deviations (up to 366 ppm) occur for Hg chemical shifts in HgBr2 and HgI2 when ZORA results are compared with four-component calculations with a non-collinear spin-density contribution to the XC kernel. For the ZORA calculations it is necessary to use large basis sets (QZ4P), and the TZ2P basis set may give errors of ∼500 ppm for the Hg chemical shifts, despite deceivingly good agreement with experimental data. A Gaussian nucleus model for the Coulomb potential reduces the Hg shielding constants by ∼100-500 ppm and the Hg chemical shifts by 1-143 ppm compared to the point nucleus model, depending on the atomic number Z of the coordinating atom and the level of theory. The effect on the shielding constants of the lighter nuclei (C, Cl, Br, I) is, however, negligible. © 2011 American Institute of Physics
Comparison of Oral Reading Errors between Contextual Sentences and Random Words among Schoolchildren
ERIC Educational Resources Information Center
Khalid, Nursyairah Mohd; Buari, Noor Halilah; Chen, Ai-Hong
2017-01-01
This paper compares oral reading errors between contextual sentences and random words among schoolchildren. Two sets of reading materials were developed to test oral reading errors in 30 schoolchildren (10.00±1.44 years). Set A comprised contextual sentences while Set B encompassed random words. The schoolchildren were asked to…
Farràs, Marta; Fernández-Castillejo, Sara; Rubió, Laura; Arranz, Sara; Catalán, Úrsula; Subirana, Isaac; Romero, Mari-Paz; Castañer, Olga; Pedret, Anna; Blanchart, Gemma; Muñoz-Aguayo, Daniel; Schröder, Helmut; Covas, Maria-Isabel; de la Torre, Rafael; Motilva, Maria-José; Solà, Rosa; Fitó, Montserrat
2018-01-01
At present, high-density lipoprotein (HDL) function is thought to be more relevant than HDL cholesterol quantity. Consumption of olive oil phenolic compounds (PCs) has beneficial effects on HDL-related markers. Enriched food with complementary antioxidants could be a suitable option to obtain additional protective effects. Our aim was to ascertain whether virgin olive oils (VOOs) enriched with (a) their own PC (FVOO) and (b) their own PC plus complementary ones from thyme (FVOOT) could improve HDL status and function. Thirty-three hypercholesterolemic individuals ingested (25 ml/day, 3 weeks) (a) VOO (80 ppm), (b) FVOO (500 ppm) and (c) FVOOT (500 ppm) in a randomized, double-blind, controlled, crossover trial. A rise in HDL antioxidant compounds was observed after both functional olive oil interventions. Nevertheless, α-tocopherol, the main HDL antioxidant, was only augmented after FVOOT versus its baseline. In conclusion, long-term consumption of phenol-enriched olive oils induced a better HDL antioxidant content, the complementary phenol-enriched olive oil being the one which increased the main HDL antioxidant, α-tocopherol. Complementary phenol-enriched olive oil could be a useful dietary tool for improving HDL richness in antioxidants. Copyright © 2017. Published by Elsevier Inc.
Random measurement error: Why worry? An example of cardiovascular risk factors.
Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H
2018-01-01
With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
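The paper's point is easy to reproduce in simulation: with a correlated confounder, classical error in the exposure attenuates the estimate, while the same error in the confounder inflates it. A sketch with an assumed data-generating model (not the paper's example studies):

```python
# Simulation sketch: classical measurement error in a confounder can bias the
# exposure effect either way. True model (assumed): y = 1.0*x + 1.0*c + noise,
# with exposure x positively correlated with confounder c.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
c = rng.normal(size=n)
x = 0.6 * c + rng.normal(size=n)        # exposure correlated with confounder
y = 1.0 * x + 1.0 * c + rng.normal(size=n)

def exposure_coef(x_obs, c_obs):
    X = np.column_stack([np.ones(n), x_obs, c_obs])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta[1]                       # coefficient on the exposure

print("no error:              ", round(exposure_coef(x, c), 3))   # ~1.0
print("error in exposure only:", round(exposure_coef(x + rng.normal(0, 1, n), c), 3))  # < 1
print("error in confounder:   ", round(exposure_coef(x, c + rng.normal(0, 1, n)), 3))  # > 1
```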
APF and dentifrice effect on root dentin demineralization and biofilm.
Vale, G C; Tabchoury, C P M; Del Bel Cury, A A; Tenuta, L M A; ten Cate, J M; Cury, J A
2011-01-01
Because dentin is more caries-susceptible than enamel, its demineralization may be more influenced by additional fluoride (F). We hypothesized that a combination of professional F, applied as acidulated phosphate F (APF), and use of 1100-ppm-F dentifrice would provide additional protection for dentin compared with 1100-ppm-F alone. Twelve adult volunteers wore palatal appliances containing root dentin slabs, which were subjected, during 4 experimental phases of 7 days each, to biofilm accumulation and sucrose exposure 8x/day. The volunteers were randomly assigned to the following treatments: placebo dentifrice (PD), 1100-ppm-F dentifrice (FD), APF + PD, and APF+FD. APF gel (1.23% F) was applied to the slabs once at the beginning of the experimental phase, and the dentifrices were used 3x/day. APF and FD increased F concentration in biofilm fluid and reduced root dentin demineralization, presenting an additive effect. Analysis of the data suggests that the combination of APF gel application and daily regular use of 1100-ppm-F dentifrice may provide additional protection against root caries compared with the dentifrice alone.
An eMERGE Clinical Center at Partners Personalized Medicine
Smoller, Jordan W.; Karlson, Elizabeth W.; Green, Robert C.; Kathiresan, Sekar; MacArthur, Daniel G.; Talkowski, Michael E.; Murphy, Shawn N.; Weiss, Scott T.
2016-01-01
The integration of electronic medical records (EMRs) and genomic research has become a major component of efforts to advance personalized and precision medicine. The Electronic Medical Records and Genomics (eMERGE) network, initiated in 2007, is an NIH-funded consortium devoted to genomic discovery and implementation research by leveraging biorepositories linked to EMRs. In its most recent phase, eMERGE III, the network is focused on facilitating implementation of genomic medicine by detecting and disclosing rare pathogenic variants in clinically relevant genes. Partners Personalized Medicine (PPM) is a center dedicated to translating personalized medicine into clinical practice within Partners HealthCare. One component of the PPM is the Partners Healthcare Biobank, a biorepository comprising broadly consented DNA samples linked to the Partners longitudinal EMR. In 2015, PPM joined the eMERGE Phase III network. Here we describe the elements of the eMERGE clinical center at PPM, including plans for genomic discovery using EMR phenotypes, evaluation of rare variant penetrance and pleiotropy, and a novel randomized trial of the impact of returning genetic results to patients and clinicians. PMID:26805891
An eMERGE Clinical Center at Partners Personalized Medicine.
Smoller, Jordan W; Karlson, Elizabeth W; Green, Robert C; Kathiresan, Sekar; MacArthur, Daniel G; Talkowski, Michael E; Murphy, Shawn N; Weiss, Scott T
2016-01-20
The integration of electronic medical records (EMRs) and genomic research has become a major component of efforts to advance personalized and precision medicine. The Electronic Medical Records and Genomics (eMERGE) network, initiated in 2007, is an NIH-funded consortium devoted to genomic discovery and implementation research by leveraging biorepositories linked to EMRs. In its most recent phase, eMERGE III, the network is focused on facilitating implementation of genomic medicine by detecting and disclosing rare pathogenic variants in clinically relevant genes. Partners Personalized Medicine (PPM) is a center dedicated to translating personalized medicine into clinical practice within Partners HealthCare. One component of the PPM is the Partners Healthcare Biobank, a biorepository comprising broadly consented DNA samples linked to the Partners longitudinal EMR. In 2015, PPM joined the eMERGE Phase III network. Here we describe the elements of the eMERGE clinical center at PPM, including plans for genomic discovery using EMR phenotypes, evaluation of rare variant penetrance and pleiotropy, and a novel randomized trial of the impact of returning genetic results to patients and clinicians.
Nanosilver effects on growth parameters in experimental aflatoxicosis in broiler chickens.
Gholami-Ahangaran, Majid; Zia-Jahromi, Noosha
2013-03-01
Aflatoxicosis is a cause of economic losses in broiler production. In this study, the effect of a commercial nanocompound, Nanocid (Nano Nasb Pars Co., Iran), was evaluated for reducing the effects of aflatoxin on growth and performance indices in broiler chickens suffering from experimental aflatoxicosis. For this, a total of 300 one-day-old broiler chicks (Ross strain) were randomly divided into 4 groups with 3 replicates of 15 chicks in each separated pen during the 28-day experiment. The treatment groups were group A: chickens fed the basal diet; group B: chickens fed 3 ppm productive aflatoxin in the basal diet; group C: chickens fed the basal diet plus 2500 ppm Nanocid; and group D: chickens fed 3 ppm productive aflatoxin and 2500 ppm Nanocid in the basal diet. Data on body weight, body weight gain (BWG), feed intake, and feed conversion ratio (FCR) were recorded at weekly intervals, and cumulative data were also assessed. The results showed that, although supplementing the conventional diet with Nanocid had no effect on performance, adding Nanocid to the diet containing 3 ppm aflatoxin significantly increased cumulative BWG and cumulative feed consumption and decreased FCR in the last 2 weeks of the experimental period. The improvement in these performance indices when Nanocid was added to the aflatoxin-containing diet demonstrates the ability of Nanocid to diminish the inhibitory effects of aflatoxin.
The Calibration System of the E989 Experiment at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anastasi, Antonio
The muon anomaly aµ is one of the most precisely known quantities in physics, both experimentally and theoretically. This high level of accuracy permits the measurement of aµ to be used as a test of the Standard Model by comparison with the theoretical calculation. After the impressive result obtained at Brookhaven National Laboratory in 2001, with a total accuracy of 0.54 ppm, a new experiment, E989, is under construction at Fermilab, motivated by the ~3σ difference between aµ(exp) and aµ(SM). The purpose of the E989 experiment is a fourfold reduction of the error, with a goal of 0.14 ppm, improving both the systematic and statistical uncertainty. With the use of the Fermilab beam complex, a statistics sample 21 times that of BNL will be collected in almost 2 years of data taking, improving the statistical uncertainty to 0.1 ppm. Improvement on the systematic error involves the measurement techniques for ωa and ωp, the anomalous precession frequency of the muon and the Larmor precession frequency of the proton, respectively. The measurement of ωp involves the magnetic field measurement, and improvements in this sector related to the uniformity of the field should reduce the systematic uncertainty with respect to BNL from 170 ppb to 70 ppb. A reduction from 180 ppb to 70 ppb is also required for the measurement of ωa; a new DAQ, faster electronics, and new detectors and a new calibration system will be implemented with respect to E821 to reach this goal. In particular, the laser calibration system will reduce the systematic error due to gain fluctuations of the photodetectors from 0.12 to 0.02 ppm. The 0.02 ppm limit on the systematics requires a system with a stability of 10^-4 on a short time scale (700 µs), while on longer time scales the stability requirement is at the percent level. The 10^-4 stability level required is almost an order of magnitude better than existing laser calibration systems in particle physics, making the calibration system a very challenging item. In addition to the high level of stability, a particular environment, due to the presence of a 14 m diameter storage ring, a highly uniform magnetic field, and the distribution of the detectors around the storage ring, sets specific guidelines and constraints. This thesis focuses on the final design of the laser calibration system developed for the E989 experiment. Chapter 1 introduces the subject of the anomalous magnetic moment of the muon; chapter 2 presents previous measurements of g-2, while chapter 3 discusses the Standard Model prediction and possible new physics scenarios. Chapter 4 describes the E989 experiment, covering the experimental technique and the experimental apparatus, and focusing on the improvements necessary to reduce the statistical and systematic errors. The main subject of the thesis is discussed in the last two chapters: chapter 5 focuses on the laser calibration system, while chapter 6 describes the test beam performed at the Beam Test Facility of Laboratori Nazionali di Frascati from 29 February to 7 March as a final test of the full calibration system. An introduction explains the physics motivation of the system and the different devices implemented. In the final chapter the setup used is described and some of the results obtained are presented.
NASA Technical Reports Server (NTRS)
Deloach, Richard; Obara, Clifford J.; Goodman, Wesley L.
2012-01-01
This paper documents a check standard wind tunnel test conducted in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3M TCT) that was designed and analyzed using the Modern Design of Experiments (MDOE). The test was designed to partition the unexplained variance of typical wind tunnel data samples into two constituent components, one attributable to ordinary random error, and one attributable to systematic error induced by covariate effects. Covariate effects in wind tunnel testing are discussed, with examples. The impact of systematic (non-random) unexplained variance on the statistical independence of sequential measurements is reviewed. The corresponding correlation among experimental errors is discussed, as is the impact of such correlation on experimental results generally. The specific experiment documented herein was organized as a formal test for the presence of unexplained variance in representative samples of wind tunnel data, in order to quantify the frequency with which such systematic error was detected, and its magnitude relative to ordinary random error. Levels of systematic and random error reported here are representative of those quantified in other facilities, as cited in the references.
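The partition the test performs can be illustrated with a standard one-way variance-components decomposition: replicate a set point in blocks over time, estimate the random component from within-block scatter and the systematic component from the drift of the block means. Synthetic data stand in for tunnel measurements here:

```python
# Sketch of variance partitioning: within-block scatter estimates ordinary
# random error, while drift of block means exposes systematic
# (covariate-induced) error. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(5)
blocks, reps = 8, 6
drift = np.linspace(0, 0.5, blocks)                 # slow covariate effect
data = drift[:, None] + rng.normal(0, 0.2, (blocks, reps))

within = data.var(axis=1, ddof=1).mean()            # random component
between = data.mean(axis=1).var(ddof=1)             # systematic + random/reps
systematic = max(between - within / reps, 0.0)

print(f"random variance: {within:.4f}, systematic variance: {systematic:.4f}")
```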
40 CFR Appendix B to Part 75 - Quality Assurance and Quality Control Procedures
Code of Federal Regulations, 2014 CFR
2014-07-01
... transmitters of an orifice-, nozzle-, or venturi-type fuel flowmeter under section 2.1.6 of appendix D to this... nozzle) of an orifice-, venturi-, or nozzle-type fuel flowmeter. Examples of the types of information to..., but ≤200 ppm). The out-of-control period begins upon failure of the calibration error test and ends...
Oguz, Ensar; Ersoy, Muhammed
2014-01-01
The effects of inlet cobalt(II) concentration (20-60 ppm), feed flow rate (8-19 ml/min) and bed height (5-15 cm), initial solution pH (3-5) and particle size (0.25
2016-01-01
Background: It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results: A Monte Carlo simulation was conducted of anthropometric assessment of children with malnutrition. Random errors of increasing magnitude were imposed upon the populations and showed that there was an increase in the standard deviation with each of the errors that became exponentially greater with the magnitude of the error. The potential magnitudes of the resulting errors in reported prevalence of malnutrition were compared with published international data and found to be large enough to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions: The effect of random error in public health surveys, and on the data upon which diagnostic cut-off points are derived to define "health", has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments, measurer selection, training and supervision, routine estimation of the likely magnitude of errors using standardization tests, use of statistical likelihood of error to exclude data from analysis, and full reporting of these procedures in order to judge the reliability of survey reports. PMID:28030627
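The core Monte Carlo result is easy to reproduce: adding independent random error to a z-score distribution inflates the fraction falling below a -2 SD cutoff. A sketch with an assumed true population (the mean shift of -0.4 is illustrative, not the paper's data):

```python
# Monte Carlo sketch: random measurement error widens the observed z-score
# distribution and inflates the prevalence of values below the -2 SD cutoff.
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
true_z = rng.normal(-0.4, 1.0, n)        # assumed true z-score distribution

for sd_err in [0.0, 0.2, 0.4, 0.6]:
    observed = true_z + rng.normal(0, sd_err, n)
    prev = 100 * np.mean(observed < -2.0)
    print(f"error SD {sd_err:.1f}: prevalence {prev:.2f}%")
```

Note that the inflation grows with the error variance even though the error has zero mean, which is exactly why larger sample sizes cannot correct it.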
High-zinc rice as a breakthrough for high nutritional rice breeding program
NASA Astrophysics Data System (ADS)
Barokah, U.; Susanto, U.; Swamy, M.; Djoar, D. W.; Parjanto
2018-03-01
The WHO reported that climate change already causes 150,000 deaths annually, owing to the emergence of various diseases and to malnutrition caused by food shortages and disasters. Rice is the staple food for almost all Indonesian citizens; therefore, Zn biofortification of rice is expected to be an effective, efficient, massive, and sustainable way to overcome Zn nutritional deficiency. This study aims to identify rice with high Zn content and high yield as a step toward releasing it as a variety. Ten lines, along with two varieties as comparisons (Ciherang and Inpari 5 Merawu), were tested in Plumbon Village, Mojolaban Subdistrict, Sukoharjo Regency during February-May 2017. The experiment used a randomized complete block design with four replications on 4 m x 5 m plots, with 25 cm x 25 cm plant spacing, transplanting 21-day-old seedlings. The results showed that the genotypes differed in yield, heading date, harvest age, panicle number, filled and unfilled grain per panicle, seed set, 1000-grain weight, and Zn and iron (Fe) content in the grain. The B13884-MR-29-1-1 line (30.94 ppm Zn, 15.84 ppm Fe, 4.11 ton/ha yield) and IR 97477-115-1-CRB-0-SKI-1-SKI-0-2 (29.61 ppm Zn, 13.49 ppm Fe, 4.4 ton/ha yield) are prospective varieties to be released. Ciherang had a Zn content of 23.04 ppm, an Fe content of 11.93 ppm, and a yield of 4.07 t/ha.
Jerez-Timaure, Nancy; Rivero, Janeth Colina; Araque, Humberto; Jiménez, Paola; Velazco, Mariela; Colmenares, Ciolys
2011-03-01
Two experiments were conducted to evaluate the proximal composition and the lipid and cholesterol content of meat from pigs fed diets with peach-palm meal (PPM), with or without addition of synthetic lysine (LYS). In experiment I, 24 pigs were randomly allotted to six treatments with three levels of PPM (0, 16 and 32%) and two levels of LYS (0 and 0.27%). In experiment II, 16 finishing pigs were fed two levels of PPM (0 and 17.50%) and two levels of LYS (0 and 0.27%). At the end of each experiment (42 and 35 d, respectively), pigs were slaughtered and loin samples were obtained to determine crude protein, dry matter, moisture, ash, total lipids, and cholesterol content. In experiment I, pork loin from the 16% PPM treatment had more dry matter (26.45 g/100 g) and less moisture (73.49 g/100 g) than pork loin from the 32% PPM treatment (25.11 and 75.03 g/100 g, respectively). Meat samples from pigs without LYS had a higher (p < 0.05) lipid content (2.11 g/100 g) than meat from pigs that consumed LYS (1.72 g/100 g). In experiment II, the proximal composition and the lipid and cholesterol contents were similar among treatments. The addition of PPM to pig diets did not affect the proximal composition of pork, while the LYS addition reduced total lipids, which could serve as an alternative for obtaining leaner meat.
Effect of nanosilver on blood parameters in chickens having aflatoxicosis.
Gholami-Ahangaran, Majid; Zia-Jahromi, Noosha
2014-03-01
This experiment was designed to investigate the positive effects of a commercial nanosilver compound on blood parameters in experimental aflatoxicosis in broiler chickens. For this, 270 one-day-old broiler chickens were randomly divided into six treatment groups with three replicates. The experimental groups were group A: chickens fed the basal diet; group B: chickens fed 3 ppm productive aflatoxin in the basal diet; and groups C, D, E and F, which received Mycoad (2.5 g/kg diet), Mycoad (2.5 g/kg diet) + productive aflatoxin (3 ppm), Nanocid (2500 ppm), and Nanocid (2500 ppm) + productive aflatoxin (3 ppm) in the basal diet, respectively. Results revealed that some blood parameters, such as mean corpuscular volume, mean corpuscular hemoglobin, mean corpuscular hemoglobin concentration, and the percentages of lymphocytes, neutrophils, basophils, monocytes, and eosinophils, were not affected in this experiment, whereas hemoglobin percentage and white blood cell (WBC) count decreased significantly (p < 0.05) in all the groups fed 3 ppm aflatoxin except the Nanocid + aflatoxin group. There were no significant differences in hemoglobin percentage and WBC count between the groups that received Nanocid + aflatoxin and Mycoad + aflatoxin. The red blood cell count and hematocrit in chickens that received aflatoxin were significantly lower than in the other groups (p < 0.05). Therefore, this study suggests that Nanocid, like Mycoad, can be useful in reducing the adverse effects of aflatoxin on blood parameters in chickens affected with aflatoxicosis.
A ventilation intervention study in classrooms to improve indoor air quality: the FRESH study
2013-01-01
Background Classroom ventilation rates often do not meet building standards, although it is considered to be important to improve indoor air quality. Poor indoor air quality is thought to influence both children’s health and performance. Poor ventilation in The Netherlands most often occurs in the heating season. To improve classroom ventilation a tailor-made mechanical ventilation device was developed to improve outdoor air supply. This paper studies the effect of this intervention. Methods The FRESH study (Forced-ventilation Related Environmental School Health) was designed to investigate the effect of a CO2 controlled mechanical ventilation intervention on classroom CO2 levels using a longitudinal cross-over design. Target CO2 concentrations were 800 and 1200 parts per million (ppm), respectively. The study included 18 classrooms from 17 schools from the north-eastern part of The Netherlands, 12 experimental classrooms and 6 control classrooms. Data on indoor levels of CO2, temperature and relative humidity were collected during three consecutive weeks per school during the heating seasons of 2010–2012. Associations between the intervention and weekly average indoor CO2 levels, classroom temperature and relative humidity were assessed by means of mixed models with random school-effects. Results At baseline, mean CO2 concentration for all schools was 1335 ppm (range: 763–2000 ppm). The intervention was able to significantly decrease CO2 levels in the intervention classrooms (F (2,10) = 17.59, p < 0.001), with a mean decrease of 491 ppm. With the target set at 800 ppm, mean CO2 was 841 ppm (range: 743–925 ppm); with the target set at 1200 ppm, mean CO2 was 975 ppm (range: 887–1077 ppm). Conclusions Although the device was not capable of precisely achieving the two predefined levels of CO2, our study showed that classroom CO2 levels can be reduced by intervening on classroom ventilation using a CO2 controlled mechanical ventilation system. PMID:24345039
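A sketch of the mixed-model analysis described in the Methods, using statsmodels on a synthetic dataset laid out as one row per classroom-week (the column names and data-generating values are ours, not the study's):

```python
# Sketch: linear mixed model with random school effects, of the kind used to
# relate the ventilation intervention to weekly mean classroom CO2.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
schools = np.repeat(np.arange(17), 6)              # 17 schools, 6 weeks each
intervention = rng.integers(0, 2, schools.size)    # simplified on/off indicator
school_effect = rng.normal(0, 150, 17)[schools]    # random school intercepts
co2 = 1335 - 491 * intervention + school_effect + rng.normal(0, 100, schools.size)

df = pd.DataFrame({"co2": co2, "intervention": intervention, "school": schools})

model = smf.mixedlm("co2 ~ intervention", df, groups=df["school"])
result = model.fit()
print(result.params["intervention"])   # should recover roughly -491
```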
Ahmed, Ali M.; Hamed, Dalia M.; Elsharawy, Nagwa T.
2017-01-01
Aim: The main objectives of this study were to compare the effects of battery and deep litter rearing systems for domesticated Japanese quail, Coturnix coturnix japonica, on the concentration levels of cadmium, copper, lead, and zinc in quail meat and offal in Ismailia, Egypt. Materials and Methods: A total of 40 quail meat and offal samples were randomly collected from the two main quail rearing systems: battery (Group I) and deep litter (Group II), for determination of the concentration levels of cadmium, copper, lead, and zinc. In addition, 80 water and feed samples were randomly collected from the water and feeders of both systems in the Food Hygiene Laboratory, Faculty of Veterinary Medicine, Suez Canal University for heavy-metal determination. Results: The mean concentration levels of cadmium, copper, lead, and zinc in Group I were 0.010, 0.027, 1.137, and 0.516 ppm and for Group II were 0.093, 0.832, 0.601, and 1.651 ppm, respectively. The mean concentration levels of cadmium, copper, lead, and zinc in quail feed in Group I were 1.114, 1.606, 5.822, and 35.11 ppm and for Group II were 3.010, 2.576, 5.852, and 23.616 ppm, respectively. The mean concentration levels of cadmium, copper, lead, and zinc in quail meat for Group I were 0.058, 5.902, 10.244, and 290 ppm and for Group II were 0.086, 6.092, 0.136, and 1.280 ppm, respectively. The mean concentration levels of cadmium, copper, lead, and zinc for liver samples in Group I were 0.15, 8.32, 1.05, and 3.41 ppm and for Group II were 0.13, 8.88, 0.95, and 4.21 ppm, respectively. The mean concentration levels of cadmium, copper, lead, and zinc in kidney samples for Group I were 0.24, 4.21, 1.96, and 4.03 ppm and for Group II were 0.20, 5.00, 1.56, and 3.78 ppm, respectively. Kidney had the highest concentration levels of heavy metals, followed by liver, then muscles. The highest concentration levels of copper were observed in liver samples. The order of the levels of these trace elements obtained from the four different quail organs is Ca > Pb > Zn > Cu. Lead and cadmium concentration levels in quail meat samples exceeded the Egyptian standardization limits, suggesting a health threat from lead and cadmium to quail consumers. Conclusion: The battery rearing system is more hygienic than the deep litter system from the standpoint of heavy-metal pollution of quail water and feed. Feed samples from the battery system had mean concentration levels of lead not significantly higher (p>0.05) than those from the deep litter system. Meanwhile, water samples from the battery system had mean concentration levels of cadmium, copper, and zinc significantly higher (p>0.05) than those from the deep litter system. Quail may carry health risks to consumers. PMID:28344413
Ahmed, Ali M; Hamed, Dalia M; Elsharawy, Nagwa T
2017-02-01
The main objectives of this study were to compare the effects of battery and deep litter rearing systems for domesticated Japanese quail, Coturnix coturnix japonica, on the concentration levels of cadmium, copper, lead, and zinc in quail meat and offal in Ismailia, Egypt. A total of 40 quail meat and offal samples were randomly collected from the two main quail rearing systems: battery (Group I) and deep litter (Group II), for determination of the concentration levels of cadmium, copper, lead, and zinc. In addition, 80 water and feed samples were randomly collected from the water and feeders of both systems in the Food Hygiene Laboratory, Faculty of Veterinary Medicine, Suez Canal University for heavy-metal determination. The mean concentration levels of cadmium, copper, lead, and zinc in Group I were 0.010, 0.027, 1.137, and 0.516 ppm and for Group II were 0.093, 0.832, 0.601, and 1.651 ppm, respectively. The mean concentration levels of cadmium, copper, lead, and zinc in quail feed in Group I were 1.114, 1.606, 5.822, and 35.11 ppm and for Group II were 3.010, 2.576, 5.852, and 23.616 ppm, respectively. The mean concentration levels of cadmium, copper, lead, and zinc in quail meat for Group I were 0.058, 5.902, 10.244, and 290 ppm and for Group II were 0.086, 6.092, 0.136, and 1.280 ppm, respectively. The mean concentration levels of cadmium, copper, lead, and zinc for liver samples in Group I were 0.15, 8.32, 1.05, and 3.41 ppm and for Group II were 0.13, 8.88, 0.95, and 4.21 ppm, respectively. The mean concentration levels of cadmium, copper, lead, and zinc in kidney samples for Group I were 0.24, 4.21, 1.96, and 4.03 ppm and for Group II were 0.20, 5.00, 1.56, and 3.78 ppm, respectively. Kidney had the highest concentration levels of heavy metals, followed by liver, then muscles. The highest concentration levels of copper were observed in liver samples. The order of the levels of these trace elements obtained from the four different quail organs is Ca > Pb > Zn > Cu. Lead and cadmium concentration levels in quail meat samples exceeded the Egyptian standardization limits, suggesting a health threat from lead and cadmium to quail consumers. The battery rearing system is more hygienic than the deep litter system from the standpoint of heavy-metal pollution of quail water and feed. Feed samples from the battery system had mean concentration levels of lead not significantly higher (p>0.05) than those from the deep litter system. Meanwhile, water samples from the battery system had mean concentration levels of cadmium, copper, and zinc significantly higher (p>0.05) than those from the deep litter system. Quail may carry health risks to consumers.
NASA Technical Reports Server (NTRS)
Moore, J. T.
1985-01-01
Data input for the AVE-SESAME I experiment are utilized to describe the effects of random errors in rawinsonde data on the computation of ageostrophic winds. Computer-generated random errors for wind direction and speed and temperature are introduced into the station soundings at 25 mb intervals from which isentropic data sets are created. Except for the isallobaric and the local wind tendency, all winds are computed for Apr. 10, 1979 at 2000 GMT. Divergence fields reveal that the isallobaric and inertial-geostrophic-advective divergences are less affected by rawinsonde random errors than the divergence of the local wind tendency or inertial-advective winds.
Aedes aegypti larvicide from the ethanolic extract of Piper nigrum black peppercorns.
Santiago, Viviene S; Alvero, Rita Grace; Villaseñor, Irene M
2015-01-01
Due to the unavailability of a vaccine and a specific cure for dengue, the focus nowadays is to develop an effective vector control method against the female Aedes aegypti mosquito. This study aims to determine the larvicidal fractions from Piper nigrum ethanolic extracts (PnPcmE) and to elucidate the identity of the bioactive compounds that comprise these larvicidal fractions. The larvicidal assay was performed by subjecting 3rd to 4th instar A. aegypti larvae to PnPcmE of P. nigrum. The PnPcmE exhibited potential larvicidal activity, having an LC50 of 7.1246 ± 0.1304 ppm (mean ± standard error). Normal-phase vacuum liquid chromatography of the PnPcmE was employed, which resulted in five fractions, two of which showed larvicidal activity. The most active of the PnPcmE fractions is PnPcmE-1A, with an LC50 and LC90 of 1.7101 ± 0.0491 ppm and 3.7078 ppm, respectively. Subsequent purification of PnPcmE-1A allowed the identification of the larvicidal compound as oleic acid.
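LC50 and LC90 values like these are typically obtained by fitting a logistic (or probit) curve to dose-mortality data; a sketch with invented bioassay numbers (not the study's measurements):

```python
# Sketch: estimate LC50/LC90 from dose-mortality data with a logistic model
# on log10(dose). All data points below are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0])            # ppm (assumed)
mortality = np.array([0.08, 0.24, 0.60, 0.88, 0.99])  # fraction dead (assumed)

def logistic(logd, slope, log_lc50):
    return 1.0 / (1.0 + np.exp(-slope * (logd - log_lc50)))

popt, _ = curve_fit(logistic, np.log10(dose), mortality, p0=[2.0, 0.3])
slope, log_lc50 = popt
lc50 = 10 ** log_lc50
lc90 = 10 ** (log_lc50 + np.log(9) / slope)   # dose where logistic = 0.9
print(f"LC50 ~ {lc50:.2f} ppm, LC90 ~ {lc90:.2f} ppm")
```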
Active optics null test system based on a liquid crystal programmable spatial light modulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ares, Miguel; Royo, Santiago; Sergievskaya, Irina
2010-11-10
We present an active null test system adapted to test lenses and wavefronts with complex shapes and strong local deformations. This system provides greater flexibility than conventional static null tests that match only a precisely positioned, individual wavefront. The system is based on a cylindrical Shack-Hartmann wavefront sensor, a commercial liquid crystal programmable phase modulator (PPM), which acts as the active null corrector, enabling the compensation of large strokes with high fidelity in a single iteration, and a spatial filter to remove unmodulated light when steep phase changes are compensated. We have evaluated the PPM's phase response at 635 nm and checked its performance by measuring its capability to generate different amounts of defocus aberration, finding root-mean-square errors below λ/18 for spherical wavefronts with peak-to-valley heights of up to 78.7λ, which stands as the limit beyond which diffractive artifacts created by the PPM have been found to be critical under no spatial filtering. Results of a null test for a complex lens (an ophthalmic customized progressive addition lens) are presented and discussed.
NASA Astrophysics Data System (ADS)
Wu, Hao; Wang, Xianhua; Ye, Hanhan; Jiang, Yun; Duan, Fenghua
2018-01-01
We developed an algorithm (named GMI_XCO2) to retrieve the global column-averaged dry air mole fraction of atmospheric carbon dioxide (XCO2) for the Greenhouse-gases Monitoring Instrument (GMI) and directional polarized camera (DPC) on the GF-5 satellite. This algorithm is designed to work in cloudless atmospheric conditions with aerosol optical thickness (AOT) < 0.3. To quantify the uncertainty in the retrieved XCO2 when aerosols and cirrus clouds are present in retrievals from the GMI short wave infrared (SWIR) data, we analyzed the errors caused by six types of aerosols and by cirrus clouds. The results indicated that in the AOT range of 0.05 to 0.3 (550 nm), the uncertainties of aerosols could lead to errors of -0.27% to 0.59%, -0.32% to 1.43%, -0.10% to 0.49%, -0.12% to 1.17%, -0.35% to 0.49%, and -0.02% to -0.24% for rural, dust, clean continental, maritime, urban, and soot aerosols, respectively. The retrieval results presented a large error due to cirrus clouds: in the cirrus optical thickness range of 0.05 to 0.8 (500 nm), the largest underestimation reaches 26.25% when the surface albedo is 0.05, and the largest overestimation is 8.1% when the surface albedo is 0.65. The retrieval results for GMI simulation data demonstrated that the accuracy of our algorithm is within 4 ppm (~1%) when using the simultaneous measurements of aerosols and clouds from DPC. Moreover, our algorithm is faster than full-physics (FP) methods. We verified our algorithm with Greenhouse-gases Observing Satellite (GOSAT) data in the Beijing area during 2016. The retrieval errors of most observations are within 4 ppm, except in summer. Compared with the results of GOSAT, the correlation coefficient is 0.55 for the whole year's data, increasing to 0.62 after excluding the summer data.
Barlow, A P; Sufi, F; Mason, S C
2009-01-01
The objective of these three clinical in situ studies was to investigate the relative performance of commercially available and experimental dentifrice formulations, having different fluoride sources and excipient ingredients, at remineralizing a bovine enamel surface previously softened by a dietary acid challenge. Each study utilized the same randomized, placebo-controlled, single-blind, crossover design. Subjects undertook single brushings of their natural teeth, with an in situ appliance in place, using different dentifrices in a randomly assigned order. Study A involved 58 subjects with the following dentifrices: Sensodyne Pronamel (1450 ppm F as NaF/5% KNO3); Blend-a-Med Classic (1450 ppm F as NaF); and a matched (Pronamel) placebo control (0 ppm F). Study B involved 56 subjects with the following dentifrices: Sensodyne Pronamel (1150 ppm F as NaF/5% KNO3); Crest Cavity Protection (1100 ppm F as NaF); Crest Pro-Health (0.454% SnF2 [1100 ppm F]/sodium hexametaphosphate); and a matched (Pronamel) placebo control (0 ppm F). Study C involved 56 subjects with the following dentifrices: Sensodyne Pronamel (1150 ppm F as NaF/5% KNO3); Sensodyne Pronamel Gentle Whitening (1150 ppm F as NaF/5% KNO3); Colgate Sensitive Multi Protection (1000 ppm F as NaMFP/5.53% potassium citrate/2% zinc citrate); and a matched (Pronamel) placebo control (0 ppm F). Subjects wore their palatal appliances holding eight bovine enamel blocks, previously exposed for 25 minutes to an in vitro erosive challenge with grapefruit juice, for the duration of the experiment. Five minutes after appliance insertion, subjects undertook a supervised, 90-second brush/rinse regimen with their assigned dentifrice. Surface microhardness (SMH) of the specimens was determined prior to the erosive challenge (baseline), after the in vitro erosive challenge, and again after four hours of in situ remineralization following the tooth-brushing event. Finally, SMH values were determined after a second in vitro erosive challenge at the end of the in situ remineralization period. Statistical analyses included ANOVA and pair-wise comparisons between treatments, testing at a 5% significance level. All three studies demonstrated significantly greater percent surface microhardness recovery (% SMHr) and percent relative erosion resistance (% RER) for dentifrices containing sodium fluoride compared to placebo controls. Overall, significantly greater % SMHr (p < 0.0001) was observed for Sensodyne Pronamel compared to Blend-a-Med Classic, Crest Pro-Health, and Colgate Sensitive Multi Protection dentifrices. Similarly, Sensodyne Pronamel delivered directionally better % RER vs. Blend-a-Med Classic (p = 0.0731), and significantly higher % RER vs. Crest Pro-Health (p = 0.0074) and Colgate Sensitive Multi Protection (p < 0.0001). Crest Cavity Protection demonstrated significantly better % RER (p = 0.031) than Crest Pro-Health, which in turn demonstrated significantly better % RER than the placebo control (p < 0.0001). No other statistically significant between-product comparisons were observed. The results of these in situ studies support the effectiveness of dentifrices containing sodium fluoride to reharden enamel previously softened with an erosive challenge. Furthermore, these studies demonstrate the protective effects conferred to enamel from erosion, following the remineralization process, in the presence of "ionic" fluoride.
Under clinically relevant conditions, Sensodyne Pronamel and Sensodyne Pronamel Gentle Whitening offered superior anti-erosion performance compared to currently marketed dentifrice controls. These studies reinforce previous research indicating the importance of formulation effects on the relative remineralization performance of dentifrices under erosive conditions.
Boomhower, Steven R.; Newland, M. Christopher
2016-01-01
Adolescence is associated with the continued maturation of dopamine neurotransmission and is implicated in the etiology of many psychiatric illnesses. Adolescent exposure to neurotoxicants that distort dopamine neurotransmission, such as methylmercury (MeHg), may modify the effects of chronic d-amphetamine (d-AMP) administration on reversal learning and attentional-set shifting. Male C57Bl/6n mice were randomly assigned to two MeHg-exposure groups (0 ppm and 3 ppm) and two d-AMP-exposure groups (saline and 1 mg/kg/day), producing four treatment groups (n = 10–12/group): Control, MeHg, d-AMP, and MeHg + d-AMP. MeHg exposure (via drinking water) spanned postnatal day 21–59 (the murine adolescent period), and once daily i.p. injections of d-AMP or saline spanned postnatal day 28–42. As adults, mice were trained on a spatial-discrimination-reversal (SDR) task in which the spatial location of a lever press predicted reinforcement. Following two SDRs, a visual-discrimination task (extradimensional shift) was instated in which the presence of a stimulus light above a lever predicted reinforcement. Responding was modeled using a logistic function, which estimated the rate (slope) of a behavioral transition and trials required to complete half a transition (half-max). MeHg, d-AMP, and MeHg + d-AMP exposure increased estimates of half-max on the second reversal. MeHg exposure increased half-max and decreased the slope term following the extradimensional shift, but these effects did not occur following MeHg + d-AMP exposure. MeHg + d-AMP exposure produced more perseverative errors and omissions following a reversal. Adolescent exposure to MeHg can modify the behavioral effects of chronic d-AMP administration. PMID:28287789
Boomhower, Steven R; Newland, M Christopher
2017-04-01
Adolescence is associated with the continued maturation of dopamine neurotransmission and is implicated in the etiology of many psychiatric illnesses. Adolescent exposure to neurotoxicants that distort dopamine neurotransmission, such as methylmercury (MeHg), may modify the effects of chronic d-amphetamine (d-AMP) administration on reversal learning and attentional-set shifting. Male C57Bl/6n mice were randomly assigned to two MeHg-exposure groups (0 ppm and 3 ppm) and two d-AMP-exposure groups (saline and 1 mg/kg/day), producing four treatment groups (n = 10-12/group): control, MeHg, d-AMP, and MeHg + d-AMP. MeHg exposure (via drinking water) spanned postnatal days 21-59 (the murine adolescent period), and once daily intraperitoneal injections of d-AMP or saline spanned postnatal days 28-42. As adults, mice were trained on a spatial-discrimination-reversal (SDR) task in which the spatial location of a lever press predicted reinforcement. Following 2 SDRs, a visual-discrimination task (extradimensional shift) was instated in which the presence of a stimulus light above a lever predicted reinforcement. Responding was modeled using a logistic function, which estimated the rate (slope) of a behavioral transition and trials required to complete half a transition (half-max). MeHg, d-AMP, and MeHg + d-AMP exposure increased estimates of half-max on the second reversal. MeHg exposure increased half-max and decreased the slope term following the extradimensional shift, but these effects did not occur following MeHg + d-AMP exposure. MeHg + d-AMP exposure produced more perseverative errors and omissions following a reversal. Adolescent exposure to MeHg can modify the behavioral effects of chronic d-AMP administration.
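As a minimal sketch of the transition analysis both records describe, the snippet below fits a logistic function to synthetic block-accuracy data to recover the slope and half-max parameters; the parameter values and the particular logistic parameterization (asymptotes at chance and perfect accuracy) are assumptions, not taken from the study.

```python
# Fit a logistic transition curve (accuracy vs. trial number) with scipy;
# data are synthetic stand-ins for post-reversal performance.
import numpy as np
from scipy.optimize import curve_fit

def logistic(trial, half_max, slope, lower=0.5, upper=1.0):
    """Proportion correct as a function of trial number."""
    return lower + (upper - lower) / (1.0 + np.exp(-slope * (trial - half_max)))

rng = np.random.default_rng(0)
trials = np.arange(1, 201)
p_true = logistic(trials, half_max=60.0, slope=0.08)
correct = rng.binomial(20, p_true) / 20.0        # accuracy in 20-trial blocks

(half_max, slope), _ = curve_fit(logistic, trials, correct, p0=(100.0, 0.05))
print(f"half-max ~ {half_max:.1f} trials, slope ~ {slope:.3f}")
```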
Wright, Diane L; Afeiche, Myriam C; Ehrlich, Shelley; Smith, Kristen; Williams, Paige L; Chavarro, Jorge E; Batsis, Maria; Toth, Thomas L; Hauser, Russ
2015-01-01
Total hair mercury (Hg) was measured among 205 women undergoing in vitro fertilization (IVF) treatment and the association with prospectively collected IVF outcomes (229 IVF cycles) was evaluated. Hair Hg levels (median = 0.62 ppm, range: 0.03-5.66 ppm) correlated with fish intake (r = 0.59), and exceeded the recommended EPA reference of 1 ppm in 33% of women. Generalized linear mixed models with random intercepts accounting for within-woman correlations across treatment cycles were used to evaluate the association of hair Hg with IVF outcomes adjusted for age, body mass index, race, smoking status, infertility diagnosis, and protocol type. Hair Hg levels were not related to ovarian stimulation outcomes (peak estradiol levels, total and mature oocyte yields) or to fertilization rate, embryo quality, clinical pregnancy rate or live birth rate.
Thermal-Error Regime in High-Accuracy Gigahertz Single-Electron Pumping
NASA Astrophysics Data System (ADS)
Zhao, R.; Rossi, A.; Giblin, S. P.; Fletcher, J. D.; Hudson, F. E.; Möttönen, M.; Kataoka, M.; Dzurak, A. S.
2017-10-01
Single-electron pumps based on semiconductor quantum dots are promising candidates for the emerging quantum standard of electrical current. They can transfer discrete charges with part-per-million (ppm) precision on nanosecond time scales. Here, we employ a metal-oxide-semiconductor silicon quantum dot to experimentally demonstrate high-accuracy gigahertz single-electron pumping in the regime where the number of electrons trapped in the dot is determined by the thermal distribution in the reservoir leads. In a measurement with traceability to primary voltage and resistance standards, the averaged pump current over the quantized plateau, driven by a 1-GHz sinusoidal wave in the absence of a magnetic field, is equal to the ideal value of ef within a measurement uncertainty as low as 0.27 ppm.
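To make the quoted numbers concrete, the quantized current is I = ef; a quick check of its magnitude and of what a 0.27 ppm relative uncertainty means in absolute terms:

```python
# The quantized current generated by the pump is I = e * f.
e = 1.602176634e-19      # elementary charge, C (exact SI value)
f = 1.0e9                # pumping frequency, Hz
current = e * f
print(f"I = e*f = {current*1e12:.4f} pA")                # ~160.2 pA
print(f"0.27 ppm of I = {current*0.27e-6*1e15:.3f} fA")  # ~0.043 fA
```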
NASA Technical Reports Server (NTRS)
Gejji, Raghvendra R.
1992-01-01
Network transmission errors such as collisions, CRC errors, misalignment, etc. are statistical in nature. Although errors can vary randomly, a high level of errors does indicate specific network problems, e.g. equipment failure. In this project, we have studied the random nature of collisions theoretically as well as by gathering statistics, and established a numerical threshold above which a network problem is indicated with high probability.
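A sketch of the thresholding idea above, assuming collisions on a healthy network arrive roughly as a Poisson process; the abstract does not state the statistical model, and the rate and false-alarm level below are hypothetical.

```python
# Flag a network problem when the collision count exceeds a quantile of the
# assumed healthy-network (Poisson) distribution.
from scipy.stats import poisson

baseline_rate = 40.0        # expected collisions per monitoring interval
false_alarm_prob = 1e-4     # tolerated chance of flagging a healthy network

# Smallest count that a healthy network exceeds with probability < 1e-4:
threshold = poisson.ppf(1.0 - false_alarm_prob, baseline_rate)
print(f"flag a network problem if collisions per interval exceed {threshold:.0f}")
```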
Reference-free error estimation for multiple measurement methods.
Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga
2018-01-01
We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in true values of measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in good agreement with the corresponding least squares regression estimates against a reference.
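The error model can be made concrete with a toy simulation: four methods measure the same latent quantity with linear systematic bias and correlated Gaussian random errors. The full framework infers the posterior over these parameters by MCMC; the consensus-based least-squares estimate below is only a crude illustrative stand-in, and every number is invented.

```python
# Generative model: y_k = offset_k + slope_k * t + correlated Gaussian noise.
import numpy as np

rng = np.random.default_rng(1)
t = rng.uniform(0.0, 10.0, size=500)                # unknown true values
offsets = np.array([0.5, -0.3, 0.0, 1.0])           # systematic biases
slopes = np.array([1.00, 1.05, 0.95, 1.10])
cov = 0.2 * np.eye(4) + 0.1                         # correlated random errors
noise = rng.multivariate_normal(np.zeros(4), cov, size=t.size)
y = offsets + slopes * t[:, None] + noise           # (500, 4) measurements

consensus = y.mean(axis=1)                          # crude proxy for t
for k in range(4):
    b1, b0 = np.polyfit(consensus, y[:, k], 1)
    print(f"method {k}: slope~{b1:.2f} offset~{b0:.2f} "
          f"(true {slopes[k]:.2f}, {offsets[k]:.2f})")
```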
NASA Astrophysics Data System (ADS)
Shobin, L. R.; Manivannan, S.
2014-10-01
Carbon nanotube (CNT) networks have been identified as potential substitutes that may surpass conventional indium tin oxide (ITO) in transparent conducting electrodes, thin-film transistors, solar cells, and chemical sensors. Among these applications, CNT-based gas sensors have gained particular interest because of the need for environmental monitoring, industrial control, and detection of gases in warfare or for averting security threats. The unique properties of CNT networks, such as high surface area, low density, high thermal conductivity and chemical sensitivity, make them a potential candidate for gas sensing applications. Commercial unsorted single-walled carbon nanotubes (SWCNTs) were purified by thermal oxidation and acid treatment and dispersed in the organic solvent N-methyl-2-pyrrolidone by sonication, in the absence of polymer or surfactant. Optically transparent SWCNT networks were realized on glass substrates by dynamic spray coating of the dispersed SWCNTs at 200 °C. The SWCNT random network was characterized by scanning electron microscopy and UV-vis-NIR spectroscopy. The gas sensing behaviour of the transparent film towards ammonia vapor was studied at room temperature by measuring the resistance change as a function of concentration in the range 0-1000 ppm. The sensor response increases logarithmically over the concentration range 0 to 1000 ppm, with a detection limit of 0.007 ppm. The random networks detect ammonia vapor selectively because of the strongly electron-donating nature of the ammonia molecule towards the SWCNTs. The sensor is reversible and selective to ammonia vapor, with a response time of 70 seconds and a recovery time of 423 seconds at 62.5 ppm, and 90% optical transparency at 550 nm.
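A sketch of the calibration such a logarithmic response implies: fit R = a + b·log10(C) and take as detection limit the concentration whose predicted response equals three times the baseline noise. The response values and noise figure below are invented, so the resulting LOD is illustrative only.

```python
# Logarithmic calibration of a chemiresistive sensor and a 3-sigma LOD.
import numpy as np

conc = np.array([10.0, 62.5, 125.0, 250.0, 500.0, 1000.0])   # ppm NH3
resp = np.array([2.1, 4.0, 4.8, 5.5, 6.3, 7.0])              # % dR/R (made up)
b, a = np.polyfit(np.log10(conc), resp, 1)

sigma_baseline = 0.01      # % dR/R noise on clean air (assumed)
lod = 10 ** ((3 * sigma_baseline - a) / b)
print(f"fit: R = {a:.2f} + {b:.2f} log10(C);  LOD ~ {lod:.3f} ppm")
```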
NASA Astrophysics Data System (ADS)
Moradi kor, Nasroallah; Akbari, Mohsen; Olfati, Ali
2016-05-01
This study was conducted to investigate the effect of different supplementation levels of Chlorella microalgae on serum metabolites and the plasma content of minerals in laying hens reared under heat stress conditions (27.5-36.7 °C, variable). A total of 378 laying hens (40 weeks of age, mean body weight 1390 ± 120 g) were randomly assigned to six treatments (C, T1, T2, T3, T4, and T5), each with seven replicate cages of nine birds. C. microalgae at the rates of 100, 200, 300, 400, and 500 ppm in water was offered to groups T1, T2, T3, T4, and T5, respectively, while group C served as a control. At 71 days of the trial, blood samples (14 per treatment) were taken for measuring serum metabolites, and at 72 days for plasma mineral analysis. The results showed that supplementation with 200-500 ppm C. microalgae decreased the serum content of cholesterol, triglycerides, and LDL (P < 0.05), whereas HDL content increased (P < 0.05) in the hens supplemented with C. microalgae at 300, 400 or 500 ppm. C. microalgae at rates of 300-500 ppm caused a marked (P < 0.05) increase in the plasma content of manganese, iodine, and selenium, but other minerals were not statistically different among treatments. Overall, it can be concluded that supplementation with C. microalgae at high rates was beneficial for the blood parameters of laying hens reared under heat stress.
Wierichs, Richard J; Lausch, Julian; Meyer-Lueckel, Hendrik; Esteves-Oliveira, Marcella
2016-01-01
The aim of this double-blinded, randomized, cross-over in situ study was to evaluate the re- and demineralization characteristics of sound enamel as well as lowly and highly demineralized caries-like enamel lesions after the application of different fluoride compounds. In each of three experimental legs of 4 weeks, 21 participants wore intraoral mandibular appliances containing 4 bovine enamel specimens (2 lowly and 2 highly demineralized). Each specimen included one sound enamel area and either one lowly demineralized (7 days, pH 4.95) or one highly demineralized (21 days, pH 4.95) lesion, and was positioned 1 mm below the acrylic under a plastic mesh. The three randomly allocated treatments (application only) included the following dentifrices: (1) 1,100 ppm F as NaF, (2) 1,100 ppm F as SnF2 and (3) 0 ppm F (fluoride-free) as negative control. Differences in integrated mineral loss (ΔΔZ) and lesion depth (ΔLD) were calculated between values before and after the in situ period using transversal microradiography. Of the 21 participants, 6 did not complete the study and 2 were excluded due to protocol violation. Irrespective of the treatment, higher baseline mineral loss and lesion depth led to a less pronounced change in mineral loss and lesion depth. Except for ΔΔZ of the dentifrice with 0 ppm F, sound surfaces showed significantly higher ΔΔZ and ΔLD values compared with lowly and highly demineralized lesions (p < 0.05, t test). Re- and demineralization characteristics of enamel depended directly on baseline mineral loss and lesion depth. Treatment groups should therefore be well balanced with respect to baseline mineral loss and lesion depth.
Electro-Optic Time-to-Space Converter for Optical Detector Jitter Mitigation
NASA Technical Reports Server (NTRS)
Birnbaum, Kevin; Farr, William
2013-01-01
A common problem in optical detection is determining the arrival time of a weak optical pulse that may comprise only one to a few photons. Currently, this problem is solved by using a photodetector to convert the optical signal to an electronic signal. The timing of the electrical signal is used to infer the timing of the optical pulse, but error is introduced by random delay between the absorption of the optical pulse and the creation of the electrical one. To eliminate this error, a time-to-space converter separates a sequence of optical pulses and sends them to different photodetectors, depending on their arrival time. The random delay, called jitter, is at least 20 picoseconds for the best detectors capable of detecting the weakest optical pulses, a single photon, and can be as great as 500 picoseconds. This limits the resolution with which the timing of the optical pulse can be measured. The time-to-space converter overcomes this limitation. Generally, the time-to-space converter imparts a time-dependent momentum shift to the incoming optical pulses, followed by an optical system that separates photons of different momenta. As an example, an electro-optic phase modulator can be used to apply longitudinal momentum changes (frequency changes) that vary in time, followed by an optical spectrometer (such as a diffraction grating), which separates photons with different momenta into different paths and directs them to impinge upon an array of photodetectors. The pulse arrival time is then inferred by measuring which photodetector receives the pulse. The use of a time-to-space converter mitigates detector jitter and improves the resolution with which the timing of an optical pulse is determined. Also, the application of the converter enables the demodulation of a pulse position modulated signal (PPM) at higher bandwidths than using previous photodetector technology. This allows the creation of a receiver for a communication system with high bandwidth and high bits/photon efficiency.
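A Monte Carlo sketch of why this helps: with direct detection the timestamp inherits the detector's jitter, whereas with time-to-space conversion the arrival time is read back as the center of the bin that routed the photon, so the error is set by the bin width instead. The numbers below (50 ps jitter, 10 ps bins) are illustrative assumptions.

```python
# Compare timing error of direct detection vs. time-to-space conversion.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
true_t = rng.uniform(0.0, 1000.0, n)          # pulse arrival times, ps

# Conventional detection: timing smeared by ~50 ps rms detector jitter.
jitter = 50.0
t_direct = true_t + rng.normal(0.0, jitter, n)

# Time-to-space conversion: 10 ps bins, each mapped to its own detector.
bin_width = 10.0
detector_index = np.floor(true_t / bin_width)
t_converted = (detector_index + 0.5) * bin_width

print(f"rms error, direct detection : {np.std(t_direct - true_t):6.2f} ps")
print(f"rms error, time-to-space    : {np.std(t_converted - true_t):6.2f} ps")
```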
Errors in radial velocity variance from Doppler wind lidar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, H.; Barthelmie, R. J.; Doubrawa, P.
A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.
Errors in radial velocity variance from Doppler wind lidar
Wang, H.; Barthelmie, R. J.; Doubrawa, P.; ...
2016-08-29
A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.
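The random-error scaling can be illustrated with a toy time series: model the radial velocity as an AR(1) process with a prescribed integral time scale and look at how the spread of sample-variance estimates shrinks as the sampling duration grows. The time scale and variance below are assumptions chosen only to mimic the qualitative behavior.

```python
# Spread of sample-variance estimates of an AR(1) "radial velocity" series
# as a function of averaging duration.
import numpy as np

rng = np.random.default_rng(3)
dt, tau, sigma2 = 1.0, 20.0, 1.0           # step (s), integral scale, variance
phi = np.exp(-dt / tau)

def ar1_series(n):
    x = np.zeros(n)
    innovation = rng.normal(0.0, np.sqrt(sigma2 * (1 - phi**2)), n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + innovation[i]
    return x

for duration in (600, 1800, 3600):          # 10, 30, 60 min of samples
    samples = [ar1_series(duration).var(ddof=1) for _ in range(200)]
    rel_err = np.std(samples) / sigma2
    print(f"T = {duration:5d} s: relative random error of variance ~ {rel_err:.2f}")
```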
Exposure of man to mercury: a review. II. Contamination of food and analytical methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hugunin, A.G.; Bradley, R.L. Jr.
Man is exposed to mercury through every facet of his life; however, for the average citizen the most probable source of toxic levels of mercury is his food supply. Although most foods contain less than 0.02 ppm mercury, considerable variation occurs depending on the type of food, production techniques, and location. Mercury is concentrated at higher trophic levels of food chains, particularly in aquatic food chains, in which concentration factors of hundreds and thousands have been observed. The concentration of mercury in some large fish has been found to exceed the 0.5 ppm tolerance limit of the FDA and the 1.0 ppm limit of the Swedish government. Fifty-seven grams of fish containing 0.5 ppm mercury in the methyl form could be consumed daily without exceeding the joint FAO/WHO recommended weekly tolerable intake of 0.2 mg. In the US, Sweden, and Japan the per capita daily fish consumptions are 18, 56, and 88 g, respectively. Determination of mercury concentrations generally involves colorimetric, atomic absorption or emission spectrometry, neutron activation, or gas chromatography techniques. The sample preparations are often time consuming, subject to numerous sources of error, and complicated by the low concentrations of mercury. Differentiation of mercury compounds usually necessitates selective extraction followed by gas chromatographic analysis. 256 references, 5 tables.
NASA Astrophysics Data System (ADS)
Mueller, K.; Yadav, V.; Lopez-Coto, I.; Karion, A.; Gourdji, S.; Martin, C.; Whetstone, J.
2018-03-01
There is increased interest in understanding urban greenhouse gas (GHG) emissions. To accurately estimate city emissions, the influence of extraurban fluxes must first be removed from urban GHG observations. This is especially true for regions, such as the U.S. Northeastern Corridor-Baltimore/Washington, DC (NEC-B/W), downwind of large fluxes. To help site background towers for the NEC-B/W, we use a coupled Bayesian information criterion (BIC) and geostatistical regression approach to site the four background locations that best explain CO2 variability due to extraurban fluxes modeled at 12 urban towers. The synthetic experiment uses an atmospheric transport and dispersion model coupled with two different flux inventories to create modeled observations and evaluate 15 candidate towers located along the urban domain for February and July 2013. The analysis shows that the average ratios of extraurban inflow to total modeled enhancements at urban towers are 21% to 36% in February and 31% to 43% in July. In July, the incoming air dominates the total variability of synthetic enhancements at the urban towers (R2 = 0.58). Modeled observations from the selected background towers generally capture the variability in the synthetic CO2 enhancements at urban towers (R2 = 0.75, root-mean-square error (RMSE) = 3.64 ppm; R2 = 0.43, RMSE = 4.96 ppm for February and July). However, errors associated with representing background air can be up to 10 ppm for any given observation even with an optimal background tower configuration. More sophisticated methods may be necessary to represent background air to accurately estimate urban GHG emissions.
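The selection step can be illustrated with a greedy BIC search: candidate tower signals are added to a linear model of the inflow at an urban tower as long as the BIC keeps improving. This is a simplified stand-in for the coupled BIC/geostatistical-regression machinery, and all data below are synthetic.

```python
# Greedy forward selection of background towers by BIC, on synthetic data.
import numpy as np

rng = np.random.default_rng(4)
n_obs, n_candidates = 500, 15
X = rng.normal(size=(n_obs, n_candidates))            # candidate tower signals
beta = np.zeros(n_candidates); beta[[2, 7, 9, 13]] = [1.0, 0.8, 0.6, 0.5]
y = X @ beta + rng.normal(0.0, 0.5, n_obs)            # inflow at an urban tower

def bic(y, X):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ coef) ** 2)
    return n_obs * np.log(rss / n_obs) + X.shape[1] * np.log(n_obs)

selected = []
while True:
    scores = {j: bic(y, X[:, selected + [j]])
              for j in range(n_candidates) if j not in selected}
    best = min(scores, key=scores.get)
    if selected and scores[best] >= bic(y, X[:, selected]):
        break                                         # BIC stopped improving
    selected.append(best)
print("selected background towers:", sorted(selected))
```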
Changes in Atmospheric CO2 Influence the Allergenicity of Aspergillus fumigatus fungal spores
NASA Astrophysics Data System (ADS)
Lang-Yona, N.; Levin, Y.; Dannemoller, K. C.; Yarden, O.; Peccia, J.; Rudich, Y.
2013-12-01
Increased allergic susceptibility has been documented without a comprehensive understanding of its causes; understanding the trends and mechanisms of allergy-inducing agents is therefore essential. In this study we investigated whether elevated atmospheric CO2 levels can affect the allergenicity of Aspergillus fumigatus, a common allergenic fungal species. Both direct exposure to changing CO2 levels during fungal growth and indirect exposure through changes in the C:N ratios of the growth media were inspected. We determined the allergenicity of the spores through two types of immunoassays, accompanied by gene expression analysis and relative protein quantification. We show that fungi grown under present-day CO2 levels (392 ppm) exhibit 8.5- and 3.5-fold higher allergenicity compared to fungi grown at preindustrial (280 ppm) and doubled (560 ppm) CO2 levels, respectively. A corresponding trend is observed in the expression of genes encoding known allergenic proteins and in the concentration of the major allergen Asp f1, possibly due to physiological changes, such as respiration rates and the nitrogen content of the fungus, influenced by the CO2 concentrations. Increased carbon and nitrogen levels in the growth medium also lead to a significant increase in allergenicity, for which we propose two different biological mechanisms. We suggest that climatic changes such as increasing atmospheric CO2 levels, and changes in the fungal growth medium, may impact the ability of allergenic fungi such as Aspergillus fumigatus to induce allergies. [Figure: the effect of changing CO2 concentrations on the total allergenicity per 10^7 spores of A. fumigatus (A), the major allergen Asp f1 concentration in ng per 10^7 spores (B), and gene expression by RT-PCR (C); error bars represent the standard error of the mean.]
Simulation of wave propagation in three-dimensional random media
NASA Astrophysics Data System (ADS)
Coles, Wm. A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1995-04-01
Quantitative error analyses for the simulation of wave propagation in three-dimensional random media, when narrow angular scattering is assumed, are presented for plane-wave and spherical-wave geometry. This includes the errors that result from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive indices of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared with the spatial spectra of
NASA Astrophysics Data System (ADS)
Sioris, C. E.; Boone, C. D.; Nassar, R.; Sutton, K. J.; Gordon, I. E.; Walker, K. A.; Bernath, P. F.
2014-07-01
An algorithm is developed to retrieve the vertical profile of carbon dioxide in the 5 to 25 km altitude range using mid-infrared solar occultation spectra from the main instrument of the ACE (Atmospheric Chemistry Experiment) mission, namely the Fourier transform spectrometer (FTS). The main challenge is to find an atmospheric phenomenon which can be used for accurate tangent height determination in the lower atmosphere, where the tangent heights (THs) calculated from geometric and timing information are not of sufficient accuracy. Error budgets for the retrieval of CO2 from ACE-FTS and the FTS on a potential follow-on mission named CASS (Chemical and Aerosol Sounding Satellite) are calculated and contrasted. Retrieved THs have typical biases of 60 m relative to those retrieved using the ACE version 3.x software after revisiting the temperature dependence of the N2 CIA (collision-induced absorption) laboratory measurements and accounting for sulfate aerosol extinction. After correcting for the known residual high bias of ACE version 3.x THs expected from CO2 spectroscopic/isotopic inconsistencies, the remaining bias for tangent heights determined with the N2 CIA is -20 m. CO2 in the 5-13 km range in the 2009-2011 time frame is validated against aircraft measurements from CARIBIC (Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container), CONTRAIL (Comprehensive Observation Network for Trace gases by Airline), and HIPPO (HIAPER Pole-to-Pole Observations), yielding typical biases of -1.7 ppm; the standard error of these biases in this vertical range is 0.4 ppm. The multi-year ACE-FTS dataset is valuable in determining the seasonal variation of the latitudinal gradient, which arises from the strong seasonal cycle in the Northern Hemisphere troposphere. The annual growth of CO2 in this time frame is determined to be 2.6 ± 0.4 ppm year⁻¹, in agreement with the currently accepted global growth rate based on ground-based measurements.
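A growth rate like this is typically extracted by regressing the time series on a linear trend plus annual harmonics. A minimal sketch with synthetic stand-in data (the trend, seasonal amplitude, and noise level are assumptions, not ACE-FTS values):

```python
# Least-squares fit of trend + one annual harmonic to a synthetic CO2 series.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(0.0, 3.0, 1.0 / 24)                     # years, semimonthly
co2 = 388.0 + 2.6 * t + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.8, t.size)

A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, co2, rcond=None)
print(f"estimated growth rate: {coef[1]:.2f} ppm/yr (true 2.60)")
```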
High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS
NASA Astrophysics Data System (ADS)
Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu
2017-05-01
The inertial navigation system (INS) is the core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on all three axes, improving system accuracy, but the errors caused by misalignment angles and scale-factor error cannot be eliminated this way, and discrete calibration methods cannot meet the requirements for high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error over one modulation period and presents a new systematic self-calibration method for a dual-axis rotation-modulating RLG-INS, together with a procedure for carrying it out. Self-calibration simulation experiments show that the scheme can estimate all the errors in the calibration error model: the inertial sensors' scale-factor error is calibrated to better than 1 ppm and the misalignment to better than 5″. These results validate the systematic self-calibration method and demonstrate its importance for improving the accuracy of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.
NASA Technical Reports Server (NTRS)
Ricks, Douglas W.
1993-01-01
There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations.
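The Fourier-optics connection between random surface errors and scattered light can be sketched numerically: propagate a unit-amplitude pupil with a random phase error to the far field with an FFT and compare the energy outside the main diffraction order with the small-error prediction sigma_phi^2. The grid size and error level below are arbitrary choices, not values from the paper.

```python
# Scattered fraction from a random phase error (e.g., etch-depth errors).
import numpy as np

rng = np.random.default_rng(6)
n = 512
sigma_phi = 0.1                                   # rms phase error, radians
phase_error = rng.normal(0.0, sigma_phi, (n, n))
aperture = np.exp(1j * phase_error)               # unit-amplitude pupil

far_field = np.fft.fft2(aperture) / n**2
total = np.sum(np.abs(far_field) ** 2)
specular = np.abs(far_field[0, 0]) ** 2           # energy in the main order
print(f"scattered fraction: {1 - specular / total:.4f} "
      f"(small-error estimate sigma^2 = {sigma_phi**2:.4f})")
```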
Burgués, Javier; Marco, Santiago
2018-08-17
Metal oxide semiconductor (MOX) sensors are usually temperature-modulated and calibrated with multivariate models such as partial least squares (PLS) to increase the inherent low selectivity of this technology. The multivariate sensor response patterns exhibit heteroscedastic and correlated noise, which suggests that maximum likelihood methods should outperform PLS. One contribution of this paper is the comparison between PLS and maximum likelihood principal components regression (MLPCR) in MOX sensors. PLS is often criticized for a lack of interpretability when the model complexity increases beyond the chemical rank of the problem. This happens in MOX sensors due to cross-sensitivities to interferences, such as temperature or humidity, and to non-linearity. Additionally, the estimation of fundamental figures of merit, such as the limit of detection (LOD), is still not standardized in multivariate models. Orthogonalization methods, such as orthogonal projection to latent structures (O-PLS), have been successfully applied in other fields to reduce the complexity of PLS models. In this work, we propose a LOD estimation method based on applying the well-accepted univariate LOD formulas to the scores of the first component of an orthogonal PLS model. The resulting LOD is compared to the multivariate LOD range derived from error propagation. The methodology is applied to data extracted from temperature-modulated MOX sensors (FIS SB-500-12 and Figaro TGS 3870-A04), aiming at the detection of low concentrations of carbon monoxide in the presence of uncontrolled humidity (chemical noise). We found that PLS models were simpler and more accurate than MLPCR models. Average LOD values of 0.79 ppm (FIS) and 1.06 ppm (Figaro) were found using the approach described in this paper. These values were contained within the LOD ranges obtained with the error-propagation approach. The mean LOD increased to 1.13 ppm (FIS) and 1.59 ppm (Figaro) when considering validation samples collected two weeks after calibration, which represents a 43% and 46% degradation, respectively. The orthogonal score plot was a very convenient tool to visualize MOX sensor data and to validate the LOD estimates.
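A sketch of the score-based LOD idea: fit a PLS model, take the scores of the first latent variable as a pseudo-univariate signal, regress them on concentration, and apply the standard univariate formula LOD = 3.3·s0/slope. The data are synthetic, and plain PLS scores are used as a rough stand-in for the O-PLS scores of the paper.

```python
# LOD from first-component PLS scores, ICH-style univariate formula.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
conc = np.repeat(np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0]), 10)  # ppm CO
X = np.outer(conc, np.linspace(1.0, 0.2, 50))                   # sensor pattern
X += rng.normal(0.0, 0.3, X.shape)                              # chemical noise

pls = PLSRegression(n_components=2).fit(X, conc)
score1 = pls.transform(X)[:, 0]                   # first-component scores

slope, intercept = np.polyfit(conc, score1, 1)
s0 = np.std(score1[conc == 0.0], ddof=1)          # score noise at zero ppm
lod = 3.3 * s0 / abs(slope)
print(f"estimated LOD ~ {lod:.2f} ppm")
```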
Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth
2006-07-01
This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte-Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation in the risk of brain cancer associated with mobile phone use. Random errors were found to have larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
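The attenuation mechanism is easy to reproduce in a small Monte Carlo experiment: dichotomizing an exposure that is recalled with multiplicative random error drags the estimated odds ratio toward the null. The error magnitude, baseline risk, and true odds ratio below are invented for illustration and are not INTERPHONE values.

```python
# Random recall error attenuates an odds ratio toward 1.
import numpy as np

rng = np.random.default_rng(8)
n = 200_000
true_use = rng.lognormal(mean=0.0, sigma=1.0, size=n)      # hours/week
heavy = true_use > np.median(true_use)
p_case = 0.01 * np.where(heavy, 1.5, 1.0)                  # true OR ~ 1.5
case = rng.random(n) < p_case

def odds_ratio(exposed, case):
    a = np.sum(exposed & case);  b = np.sum(exposed & ~case)
    c = np.sum(~exposed & case); d = np.sum(~exposed & ~case)
    return (a * d) / (b * c)

recalled = true_use * rng.lognormal(0.0, 0.8, n)           # random recall error
print(f"OR from true exposure    : {odds_ratio(heavy, case):.2f}")
print(f"OR from recalled exposure: "
      f"{odds_ratio(recalled > np.median(recalled), case):.2f}")
```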
Application of gibberellins on flowering and yield of two varieties of shallot in lowland
NASA Astrophysics Data System (ADS)
Triharyanto, E.; Nyoto, S.; Yusrifani, I.
2018-03-01
Shallot is a horticultural commodity that has difficulty flowering and producing seeds. Flowering of shallot generally occurs in the highlands at 9-12 °C. In the lowlands, flowering can be promoted by vernalization or by replacing the cold-temperature requirement with gibberellin (GA3). This research aimed to determine the effects of variety, applied GA3 concentration and their interaction on the flowering and yield of shallots grown in the lowlands (98 m altitude, Vertisol-type soil). The research was conducted in a randomized complete block design (RCBD) with two factors, variety (V: Bima and Mentes) and GA3 concentration (C: 0, 50, 100, 150 and 200 ppm), with three replications. Variety had a significant effect (P < 0.05) on plant height, leaf number, number of bulbs per clump, fresh bulb weight per clump, and the percentages of small and large bulbs produced. The Bima variety flowered and produced seeds, whereas the Mentes variety produced neither flowers nor seeds. GA3 concentration had no significant effect on any of the observed variables; GA3 could not replace cold temperature to induce flowering in the Mentes variety planted in the lowlands. An interaction between variety and GA3 concentration occurred for the percentages of small and large bulbs.
Hess, Sonja Y; Ouédraogo, Césaire T; Young, Rebecca R; Bamba, Ibrahim F; Stinca, Sara; Zimmermann, Michael B; Wessells, K Ryan
2017-05-01
To assess iodine status among pregnant women in rural Zinder, Niger, and to compare their status with the iodine status of school-aged children from the same households, seventy-three villages in the catchment area of sixteen health centres were randomly selected to participate in the cross-sectional survey. Salt iodization is mandatory in Niger, requiring 20-60 ppm iodine at the retail level. A spot urine sample was collected from randomly selected pregnant women (n 662) and one school-aged child from the same household (n 373). Urinary iodine concentration (UIC) was assessed as an indicator of iodine status in both groups. Dried blood spots (DBS) were collected from venous blood samples of pregnant women, and thyroglobulin (Tg), thyroid-stimulating hormone and total thyroxine were measured. The iodine content of household salt samples (n 108) was assessed by titration; the median was 5.5 ppm (range 0-41 ppm), and 98 % of samples were inadequately iodized. In this region of Niger, most salt is inadequately iodized. UIC in pregnant women indicated iodine deficiency, whereas UIC of school-aged children indicated marginally adequate iodine status. Thus, estimating population iodine status based solely on monitoring of UIC among school-aged children may underestimate the risk of iodine deficiency in pregnant women.
Davies, G M; Worthington, H V; Ellwood, R P; Bentley, E M; Blinkhorn, A S; Taylor, G O; Davies, R M
2002-09-01
To assess the impact of regularly supplying free fluoride toothpaste to children, initially aged 12 months and living in deprived areas of the north west of England, on the level of caries in the deciduous dentition at 5-6 years of age. A further aim was to compare the effectiveness of a programme using a toothpaste containing 440 ppmF (Colgate 0-6 Gel) with one containing 1,450 ppmF (Colgate Great Regular Flavour) in reducing caries. Randomised controlled parallel group clinical trial. Clinical data were collected from test and control groups when the children were 5-6 years old. A programme of posting toothpaste with dental health messages to the homes of children initially aged 12 months. Clinical examinations took place in primary schools. 7,422 children born in 3-month birth cohorts living in high caries areas in nine health districts in north west England. Within each district children were randomly assigned to test or control groups. Toothpaste, containing either 440 ppmF or 1,450 ppmF, and dental health literature were posted at three-monthly intervals to children in test groups until they were aged 5-6 years. The dmft index, missing teeth and the prevalence of caries experience. An analysis of 3,731 children who were examined and remained in the programme showed the mean dmft to be 2.15 for the group who had received 1,450 ppmF toothpaste and 2.49 for the 440 ppmF group. The mean dmft for the control group was 2.57. This 16% reduction between the 1,450 ppmF and control groups was statistically significant (P<0.05). The difference between the 440 ppmF group and control was not significant. Further analyses to estimate the population effect of the programme also confirmed this relationship. This study demonstrates that a programme distributing free toothpaste containing 1,450 ppmF provides a significant clinical benefit for high-caries-risk children living in deprived, non-fluoridated districts.
Fottrell, Edward; Byass, Peter; Berhane, Yemane
2008-03-25
As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of randomly introduced errors indicates a high level of robustness of the dataset. This apparent inertia of population parameter estimates to simulated errors is largely due to the size of the dataset. Tolerable margins of random error in DSS data may exceed 20%. While this is not an argument in favour of poor quality data, reducing the time and valuable resources spent on detecting and correcting random errors in routine DSS operations may be justifiable as the returns from such procedures diminish with increasing overall accuracy. The money and effort currently spent on endlessly correcting DSS datasets would perhaps be better spent on increasing the surveillance population size and geographic spread of DSSs and analysing and disseminating research findings.
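The perturbation experiment is straightforward to reproduce in miniature: corrupt a fraction of a binary covariate at random, refit a Poisson regression of deaths with person-years as exposure, and compare rate ratios. The dataset below is synthetic (the real analysis used the full Butajira surveillance data), and statsmodels is an implementation choice, not the project's software.

```python
# Effect of 20% random errors in a covariate on a Poisson rate ratio.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 50_000
literate = rng.random(n) < 0.4
pyears = rng.uniform(0.5, 10.0, n)
rate = 0.02 * np.where(literate, 0.6, 1.0)          # true rate ratio 0.6
deaths = rng.poisson(rate * pyears)

def rate_ratio(flag):
    X = sm.add_constant(flag.astype(float))
    fit = sm.GLM(deaths, X, family=sm.families.Poisson(),
                 exposure=pyears).fit()
    return np.exp(fit.params[1])

corrupt = rng.random(n) < 0.20                      # 20% of records corrupted
literate_err = np.where(corrupt, rng.random(n) < 0.4, literate)

print(f"rate ratio, clean data    : {rate_ratio(literate):.3f}")
print(f"rate ratio, 20% corrupted : {rate_ratio(literate_err):.3f}")
```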
Geographical mapping of fluoride levels in drinking water sources in Nigeria.
Akpata, Enosakhare S; Danfillo, I S; Otoh, E C; Mafeni, J O
2009-12-01
Knowledge of fluoride levels in drinking water is of importance in dental public health, yet this information is lacking at the national level in Nigeria. To map fluoride levels in drinking water sources in Nigeria, fluoride levels in drinking water sources from 109 randomly selected Local Government Areas (LGAs) in the 6 Nigerian geopolitical zones were determined. From the results, maps showing LGAs with fluoride concentrations exceeding 0.3 ppm were drawn. ANOVA and the t-test were used to determine the significance of the differences between the fluoride levels in the drinking water sources. Fluoride levels were low in most parts of the country, being 0.3 ppm or less in 62% of the LGAs. Fluoride concentrations were generally higher in the North Central geopolitical zone than in the other zones of the country (p<0.05). In a few drinking water sources, fluoride concentrations exceeded 1.5 ppm, reaching 6.7 ppm in one well. Only 9% of the water sources were from waterworks. Most of the water sources in Nigeria contained low fluoride levels, but a few had excessive concentrations and need to be partially defluoridated, or alternative sources of drinking water provided for the community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@nao.ac.jp, E-mail: tof@astr.tohoku.ac.jp
This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise makes additional moments and centroid shift error, and those first-order effects are canceled in averaging, but the second-order effects are not canceled. We derive the formulae that correct this systematic error due to the random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.
NASA Astrophysics Data System (ADS)
Sutherland, Frederick L.; Graham, Ian T.; Harris, Stephen J.; Coldham, Terry; Powell, William; Belousova, Elena A.; Martin, Laure
2017-05-01
Rare ruby crystals appear among the prevailing sapphire crystals mined from placers within basaltic areas in the New England gem field, New South Wales, Australia. New England ruby (NER) has distinctive trace element features compared to ruby from elsewhere in Australia and indeed most ruby from across the world. The NER suite includes ruby (up to 3370 ppm Cr), pink sapphire (up to 1520 ppm Cr), white sapphire (up to 910 ppm) and violet, mauve, purple, or bluish sapphire (up to 1410 ppm Cr). Some crystals show outward growth banding in this respective colour sequence. All four colour zones are notably high in Ga (up to 310 ppm) and Si (up to 1820 ppm). High Ga contents and Ga/Mg values are unusual in ruby; together with the trace element plots (laser ablation-inductively coupled plasma-mass spectrometry), they suggest that magmatic-metasomatic inputs were involved in the genesis of the NER suite. In situ oxygen isotope analyses (secondary ion mass spectrometry) across the NER suite colour range showed little variation (n = 22; δ18O = 4.4 ± 0.4, 2σ error), with values typical for corundum associated with ultramafic/mafic rocks. The isolated NER xenocryst suite, corroded by basalt transport and with few internal inclusions, presents a challenge in deciphering its exact origin. Detailed consideration of its high-Ga chemistry in relation to the known geology of the surrounding region was used to narrow down potential sources. These include Late Palaeozoic-Triassic fractionated I-type granitoid magmas or Mesozoic-Cenozoic felsic fractionates from basaltic magmas that interacted with early Palaeozoic Cr-bearing ophiolite bodies in the New England Orogen. Other potential sources may lie deeper within lower crust-mantle metamorphic assemblages, but these need to match the anomalously high-Ga geochemistry of the New England ruby suite.
NASA Astrophysics Data System (ADS)
Zou, Guang'an; Wang, Qiang; Mu, Mu
2016-09-01
Sensitive areas for prediction of the Kuroshio large meander using a 1.5-layer, shallow-water ocean model were investigated using the conditional nonlinear optimal perturbation (CNOP) and first singular vector (FSV) methods. A series of sensitivity experiments were designed to test the sensitivity of sensitive areas within the numerical model. The following results were obtained: (1) the effect of initial CNOP and FSV patterns in their sensitive areas is greater than that of the same patterns in randomly selected areas, with the effect of the initial CNOP patterns in CNOP sensitive areas being the greatest; (2) both CNOP- and FSV-type initial errors grow more quickly than random errors; (3) the effect of random errors superimposed on the sensitive areas is greater than that of random errors introduced into randomly selected areas, and initial errors in the CNOP sensitive areas have greater effects on final forecasts. These results reveal that the sensitive areas determined using the CNOP are more sensitive than those of FSV and other randomly selected areas. In addition, ideal hindcasting experiments were conducted to examine the validity of the sensitive areas. The results indicate that reduction (or elimination) of CNOP-type errors in CNOP sensitive areas at the initial time has a greater forecast benefit than the reduction (or elimination) of FSV-type errors in FSV sensitive areas. These results suggest that the CNOP method is suitable for determining sensitive areas in the prediction of the Kuroshio large-meander path.
Portable and Error-Free DNA-Based Data Storage.
Yazdi, S M Hossein Tabatabaei; Gabrys, Ryan; Milenkovic, Olgica
2017-07-10
DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.
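One ingredient of such a pipeline can be sketched in a few lines: a constrained encoding that converts bytes to base-3 digits and maps each digit to a nucleotide different from its predecessor, so the strand never contains a homopolymer run (a major source of synthesis and nanopore sequencing errors). This mirrors the spirit of the paper's constrained codes, not its actual construction.

```python
# Homopolymer-free DNA encoding via a base-3 rotation code.

def encode(data: bytes) -> str:
    # bytes -> base-3 digits (3^6 = 729 > 256, so 6 trits per byte)
    trits = []
    for byte in data:
        for _ in range(6):
            trits.append(byte % 3)
            byte //= 3
    # each trit selects one of the 3 bases that differ from the previous base
    bases, prev, out = "ACGT", "A", []
    for t in trits:
        choices = [b for b in bases if b != prev]
        prev = choices[t]
        out.append(prev)
    return "".join(out)

strand = encode(b"Hi")
print(strand)                               # no two adjacent bases are equal
assert all(a != b for a, b in zip(strand, strand[1:]))
```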
Simulation of the Effects of Random Measurement Errors
ERIC Educational Resources Information Center
Kinsella, I. A.; Hannaidh, P. B. O.
1978-01-01
Describes a simulation method for studying random measurement errors that requires only calculators and tables of random digits. Each student simulates the random behaviour of the component variables in the function, and by combining the results of all students, the outline of the sampling distribution of the function can be obtained. (GA)
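The same exercise translates directly into code: each simulated "student" draws the component variables with their random errors and evaluates the function, and the pooled results trace out its sampling distribution. The function R = V/I and the error magnitudes below are hypothetical.

```python
# Monte Carlo propagation of measurement errors through R = V / I.
import numpy as np

rng = np.random.default_rng(10)
n_students = 2000
V = rng.normal(12.0, 0.2, n_students)      # volts, each with random error
I = rng.normal(2.0, 0.05, n_students)      # amps
R = V / I

# First-order error-propagation formula for comparison:
sigma_R = (12.0 / 2.0) * np.sqrt((0.2 / 12.0) ** 2 + (0.05 / 2.0) ** 2)
print(f"simulated: mean={R.mean():.3f}, sd={R.std(ddof=1):.3f} ohm")
print(f"propagation formula: sd~{sigma_R:.3f} ohm")
```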
NASA Astrophysics Data System (ADS)
Hyung, E.; Jacobsen, S. B.
2017-12-01
The decay of 146Sm to 142Nd is an excellent tracer for early silicate differentiation events in the terrestrial planets, as the Sm/Nd ratio is usually fractionated during mantle partial melting and magma ocean crystallization. The short half-life (103 or 68 Ma) renders the system extinct within the first 500 Ma of Solar System formation. Samples with 142Nd/144Nd ratios that are substantially different from the bulk silicate Earth value provide clear evidence for mantle differentiation in the Hadean. Published data for the 3.4 to 3.8 Ga old Isua supracrustal rocks and dykes have demonstrated both positive and negative 142Nd/144Nd anomalies (30 ppm range), providing clear evidence for Hadean enriched and depleted mantle reservoirs. In contrast, no 142Nd/144Nd anomalies have been found in modern terrestrial samples with data that have 2σ uncertainties of about 5 ppm or more. Last year we reported improvements in 142Nd/144Nd measurements, using our IsotopX thermal ionization mass spectrometer, and obtained reproducibility of 142Nd/144Nd ratios to better than 2 ppm at the 2σ level. With this external reproducibility we found that all but one of the modern mantle-derived basalts had identical 142Nd/144Nd ratios within error. One sample is about 3.4 ppm lower than the rest of the modern basalt samples, providing evidence for some limited Hadean mantle differentiation signatures preserved up to the present. We have also measured 142Nd/144Nd ratios for Proterozoic and Phanerozoic samples, whose ages range from 300 Ma to 2 Ga, to better than 2 ppm external reproducibility (2σ). Most of these samples also have 142Nd/144Nd ratios that cluster around the modern value, but some are either marginally high by 2 ppm or low by 2 ppm. Thus, while a 20 to 30 ppm range in 142Nd/144Nd is well resolved in the Archean, such large variability is not present in the Proterozoic and Phanerozoic. The relatively rapid changeover at the end of the Archean has important implications for understanding the mixing rate of the mantle through time.
Time averaging of NMR chemical shifts in the MLF peptide in the solid state.
De Gortari, Itzam; Portella, Guillem; Salvatella, Xavier; Bajaj, Vikram S; van der Wel, Patrick C A; Yates, Jonathan R; Segall, Matthew D; Pickard, Chris J; Payne, Mike C; Vendruscolo, Michele
2010-05-05
Since experimental measurements of NMR chemical shifts provide time- and ensemble-averaged values, we investigated how these effects should be included when chemical shifts are computed using density functional theory (DFT). We measured the chemical shifts of the N-formyl-L-methionyl-L-leucyl-L-phenylalanine-OMe (MLF) peptide in the solid state, and then used the X-ray structure to calculate the (13)C chemical shifts using the gauge including projector augmented wave (GIPAW) method, which accounts for the periodic nature of the crystal structure, obtaining an overall accuracy of 4.2 ppm. In order to understand the origin of the difference between experimental and calculated chemical shifts, we carried out first-principles molecular dynamics simulations to characterize the molecular motion of the MLF peptide on the picosecond time scale. We found that (13)C chemical shifts experience very rapid fluctuations of more than 20 ppm that are averaged out over less than 200 fs. Taking account of these fluctuations in the calculation of the chemical shifts resulted in an accuracy of 3.3 ppm. To investigate the effects of averaging over longer time scales we sampled the rotameric states populated by the MLF peptide in the solid state by performing a total of 5 μs of classical molecular dynamics simulations. By averaging the chemical shifts over these rotameric states, we increased the accuracy of the chemical shift calculations to 3.0 ppm, with less than 1 ppm error in 10 out of 22 cases. These results suggest that better DFT-based predictions of chemical shifts of peptides and proteins will be achieved by developing improved computational strategies capable of taking into account the averaging process up to the millisecond time scale on which the chemical shift measurements report.
Evaluation of UT/LS hygrometer accuracy by intercomparison during the NASA MACPEX mission.
Rollins, A W; Thornberry, T D; Gao, R S; Smith, J B; Sayres, D S; Sargent, M R; Schiller, C; Krämer, M; Spelten, N; Hurst, D F; Jordan, A F; Hall, E G; Vömel, H; Diskin, G S; Podolske, J R; Christensen, L E; Rosenlof, K H; Jensen, E J; Fahey, D W
2014-02-27
Acquiring accurate measurements of water vapor at the low mixing ratios (< 10 ppm) encountered in the upper troposphere and lower stratosphere (UT/LS) has proven to be a significant analytical challenge evidenced by persistent disagreements between high-precision hygrometers. These disagreements have caused uncertainties in the description of the physical processes controlling dehydration of air in the tropical tropopause layer and entry of water into the stratosphere and have hindered validation of satellite water vapor retrievals. A 2011 airborne intercomparison of a large group of in situ hygrometers onboard the NASA WB-57F high-altitude research aircraft and balloons has provided an excellent opportunity to evaluate progress in the scientific community toward improved measurement agreement. In this work we intercompare the measurements from the Midlatitude Airborne Cirrus Properties Experiment (MACPEX) and discuss the quality of agreement. Differences between values reported by the instruments were reduced in comparison to some prior campaigns but were nonnegligible and on the order of 20% (0.8 ppm). Our analysis suggests that unrecognized errors in the quantification of instrumental background for some or all of the hygrometers are a likely cause. Until these errors are understood, differences at this level will continue to somewhat limit our understanding of cirrus microphysical processes and dehydration in the tropical tropopause layer.
Evaluation of UT/LS hygrometer accuracy by intercomparison during the NASA MACPEX mission
Rollins, A. W.; Thornberry, T. D.; Gao, R. S.; Smith, J. B.; Sayres, D. S.; Sargent, M. R.; Schiller, C.; Krämer, M.; Spelten, N.; Hurst, D. F.; Jordan, A. F.; Hall, E. G.; Vömel, H.; Diskin, G. S.; Podolske, J. R.; Christensen, L. E.; Rosenlof, K. H.; Jensen, E. J.; Fahey, D. W.
2017-01-01
Acquiring accurate measurements of water vapor at the low mixing ratios (< 10 ppm) encountered in the upper troposphere and lower stratosphere (UT/LS) has proven to be a significant analytical challenge evidenced by persistent disagreements between high-precision hygrometers. These disagreements have caused uncertainties in the description of the physical processes controlling dehydration of air in the tropical tropopause layer and entry of water into the stratosphere and have hindered validation of satellite water vapor retrievals. A 2011 airborne intercomparison of a large group of in situ hygrometers onboard the NASA WB-57F high-altitude research aircraft and balloons has provided an excellent opportunity to evaluate progress in the scientific community toward improved measurement agreement. In this work we intercompare the measurements from the Midlatitude Airborne Cirrus Properties Experiment (MACPEX) and discuss the quality of agreement. Differences between values reported by the instruments were reduced in comparison to some prior campaigns but were nonnegligible and on the order of 20% (0.8 ppm). Our analysis suggests that unrecognized errors in the quantification of instrumental background for some or all of the hygrometers are a likely cause. Until these errors are understood, differences at this level will continue to somewhat limit our understanding of cirrus microphysical processes and dehydration in the tropical tropopause layer. PMID:28845379
Development of multiple-eye PIV using mirror array
NASA Astrophysics Data System (ADS)
Maekawa, Akiyoshi; Sakakibara, Jun
2018-06-01
In order to reduce particle image velocimetry measurement error, we manufactured an ellipsoidal polyhedral mirror and placed it between a camera and the flow target to capture n images of identical particles from n (= 80 maximum) different directions. The 3D particle positions were determined from the ensemble average of the nC2 intersecting points of the pairs of back-projected lines of sight from a particle found in any combination of two of the n images. The method was then applied to a rigid-body rotating flow and a turbulent pipe flow. In the former measurement, the bias error and random error fell in ranges of ±0.02 pixels and 0.02-0.05 pixels, respectively, and the random error decreased as n increased. In the latter measurement, in which the measured values were compared to direct numerical simulation, the bias error was reduced and the random error again decreased with increasing n.
Smith, Kirk R; McCracken, John P; Thompson, Lisa; Edwards, Rufus; Shields, Kyra N; Canuz, Eduardo; Bruce, Nigel
2010-07-01
During the first randomized intervention trial (RESPIRE: Randomized Exposure Study of Pollution Indoors and Respiratory Effects) in air pollution epidemiology, we pioneered application of passive carbon monoxide (CO) diffusion tubes to measure long-term personal exposures to woodsmoke. Here we report on the protocols and validations of the method, trends in personal exposure for mothers and their young children, and the efficacy of the introduced improved chimney stove in reducing personal exposures and kitchen concentrations. Passive diffusion tubes originally developed for industrial hygiene applications were deployed on a quarterly basis to measure 48-hour integrated personal carbon monoxide exposures among 515 children 0-18 months of age and 532 mothers aged 15-55 years and area samples in a subsample of 77 kitchens, in households randomized into control and intervention groups. Instrument comparisons among types of passive diffusion tubes and against a continuous electrochemical CO monitor indicated that tubes responded nonlinearly to CO, and regression calibration was used to reduce this bias. Before stove introduction, the baseline arithmetic (geometric) mean 48-h child (n=270), mother (n=529) and kitchen (n=65) levels were, respectively, 3.4 (2.8), 3.4 (2.8) and 10.2 (8.4) p.p.m. The between-group analysis of the 3355 post-baseline measurements found CO levels to be significantly lower among the intervention group during the trial period: kitchen levels: -90%; mothers: -61%; and children: -52% in geometric means. No significant deterioration in stove effect was observed over the 18 months of surveillance. The reliability of these findings is strengthened by the large sample size made feasible by these unobtrusive and inexpensive tubes, measurement error reduction through instrument calibration, and a randomized, longitudinal study design. These results, from the first randomized trial of improved household energy technology in a developing country, demonstrate that a simple chimney stove can substantially reduce chronic exposures to harmful indoor air pollutants among women and infants.
SMITH, KIRK R.; McCRACKEN, JOHN P.; THOMPSON, LISA; EDWARDS, RUFUS; SHIELDS, KYRA N.; CANUZ, EDUARDO; BRUCE, NIGEL
2015-01-01
During the first randomized intervention trial (RESPIRE: Randomized Exposure Study of Pollution Indoors and Respiratory Effects) in air pollution epidemiology, we pioneered application of passive carbon monoxide (CO) diffusion tubes to measure long-term personal exposures to woodsmoke. Here we report on the protocols and validations of the method, trends in personal exposure for mothers and their young children, and the efficacy of the introduced improved chimney stove in reducing personal exposures and kitchen concentrations. Passive diffusion tubes originally developed for industrial hygiene applications were deployed on a quarterly basis to measure 48-hour integrated personal carbon monoxide exposures among 515 children 0–18 months of age and 532 mothers aged 15–55 years and area samples in a subsample of 77 kitchens, in households randomized into control and intervention groups. Instrument comparisons among types of passive diffusion tubes and against a continuous electrochemical CO monitor indicated that tubes responded nonlinearly to CO, and regression calibration was used to reduce this bias. Before stove introduction, the baseline arithmetic (geometric) mean 48-h child (n=270), mother (n=529) and kitchen (n=65) levels were, respectively, 3.4 (2.8), 3.4 (2.8) and 10.2 (8.4) p.p.m. The between-group analysis of the 3355 post-baseline measurements found CO levels to be significantly lower among the intervention group during the trial period: kitchen levels: −90%; mothers: −61%; and children: −52% in geometric means. No significant deterioration in stove effect was observed over the 18 months of surveillance. The reliability of these findings is strengthened by the large sample size made feasible by these unobtrusive and inexpensive tubes, measurement error reduction through instrument calibration, and a randomized, longitudinal study design. These results, from the first randomized trial of improved household energy technology in a developing country, demonstrate that a simple chimney stove can substantially reduce chronic exposures to harmful indoor air pollutants among women and infants. PMID:19536077
NASA Astrophysics Data System (ADS)
Rittersdorf, I. M.; Antonsen, T. M., Jr.; Chernin, D.; Lau, Y. Y.
2011-10-01
Random fabrication errors may have detrimental effects on the performance of traveling-wave tubes (TWTs) of all types. A new scaling law for the modification in the average small-signal gain and in the output phase is derived from the third-order ordinary differential equation that governs the forward-wave interaction in a TWT in the presence of random error that is distributed along the axis of the tube. Analytical results compare favorably with numerical results, in both gain and phase modifications, as a result of random error in the phase velocity of the slow-wave circuit. Results on the effect of the reverse-propagating circuit mode will be reported. This work was supported by AFOSR, ONR, L-3 Communications Electron Devices, and Northrop Grumman Corporation.
The Bnl Muon Anomalous Magnetic Moment Measurement
NASA Astrophysics Data System (ADS)
Hertzog, David W.
2003-09-01
The E821 experiment at Brookhaven National Laboratory is designed to measure the muon magnetic anomaly, aμ, to an ultimate precision of 0.4 parts per million (ppm). Because theory can predict aμ to 0.6 ppm, and ongoing efforts aim to reduce this uncertainty, the comparison represents an important and sensitive test of new physics. At the time of this Workshop, the reported experimental result from the 1999 running period achieved a
At least some errors are randomly generated (Freud was wrong)
NASA Technical Reports Server (NTRS)
Sellen, A. J.; Senders, J. W.
1986-01-01
An experiment was carried out to investigate the mechanisms by which humans generate errors. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three digital numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error-generating mechanisms. Goodness-of-fit tests for a Poisson distribution for the number of errors per 50-trial interval and for an exponential distribution of the length of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task-driven factors producing exogenous errors. Some errors, at least, are the result of constant-probability generating mechanisms with error rate idiosyncratically determined for each subject.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2016-01-01
This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bounds formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.
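A minimal sketch of the hierarchical-quadrature idea summarized above may help: two quadrature levels are evaluated and their difference serves as a computable estimate of the quadrature error in a moment statistic. The rule used here is plain Gauss-Legendre rather than the paper's node-nested quadratures, and the output function q and point counts are toy assumptions:

```python
import numpy as np

# Toy stand-in for one CFD output quantity of interest q(xi); the real
# realizations would come from a flow solver.
def q(xi):
    return np.exp(0.3 * xi) * np.sin(xi + 1.0)

def mean_estimate(n_points):
    # Gauss-Legendre rule on [-1, 1]; weights sum to 2, so divide by 2
    # to integrate against a uniform density for xi ~ U(-1, 1).
    nodes, weights = np.polynomial.legendre.leggauss(n_points)
    return 0.5 * np.sum(weights * q(nodes))

coarse = mean_estimate(4)      # lower quadrature level
fine = mean_estimate(8)        # higher quadrature level
quad_err = abs(fine - coarse)  # computable surrogate for quadrature error
print(f"E[q] ~ {fine:.6f}, quadrature error estimate {quad_err:.2e}")
```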
Cut flowers: a potential pesticide hazard.
Morse, D L; Baker, E L; Landrigan, P J
1979-01-01
Following reports of ten cases of possible organophosphate pesticide poisoning in florists exposed to pesticide residues on cut flowers, we conducted a prospective random-sample survey to determine residual pesticide levels on flowers imported into the United States via Miami, Florida. A sample of all flowers imported into Miami on three days in January 1977 showed that 18 (17.7 per cent) of 105 lots contained pesticide residue levels greater than 5 ppm, and that three lots had levels greater than 400 ppm. Azodrin (monocrotophos) was the most important contaminant, with levels of 7.7–4,750 ppm detected in nine lots. We examined 20 quarantine workers in Miami and 12 commercial florists exposed to contaminated flowers. Occasional nonspecific symptoms compatible with possible organophosphate exposure were noted, but we found no abnormalities in plasma or red blood cell cholinesterase levels. This study documents a previously unrecognized potential source of occupational pesticide exposure and suggests that safety standards should be set for residue levels on cut flowers. PMID:420356
Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials
Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.
2013-01-01
Background Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data is generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates, and demonstrated that audit data could be used to eliminate bias. Purpose In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest. Methods We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of estimators is studied in simulations. Results When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors and in settings where treatment is not randomized, standard treatment effect estimates will be biased. And in all settings, parameter estimates for the original, error-prone covariates will be biased. Treatment and covariate effect estimates can be corrected by incorporating audit data using either the multiple imputation or moment-based approaches. Bias, precision, and coverage of confidence intervals improve as the audit size increases. Limitations The extent of bias and the performance of methods depend on the extent and nature of the error as well as the size of the audit. This work only considers methods for the linear model. Settings much different than those considered here need further study. Conclusions In randomized trials with continuous outcomes and treatment assignment independent of data errors, standard analyses of treatment effects will be unbiased and are recommended. However, if treatment assignment is correlated with data errors or other covariates, naive analyses may be biased. In these settings, and when covariate effects are of interest, approaches for incorporating audit results should be considered. PMID:22848072
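The key claim here, that randomization independent of data errors leaves the naive treatment-effect estimate unbiased while error-prone covariate coefficients are attenuated, is easy to check in a small simulation. The coefficients and error scales below are arbitrary illustrations, not the authors' estimators:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
treat = rng.integers(0, 2, size=n)        # randomized, recorded correctly
x_true = rng.normal(size=n)               # true covariate value
y = 1.0 + 2.0 * treat + 0.5 * x_true + rng.normal(size=n)

# Classical measurement error on the covariate, independent of treatment:
x_obs = x_true + rng.normal(scale=1.0, size=n)

X = np.column_stack([np.ones(n), treat, x_obs])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
# beta[1] stays near the true 2.0 (treatment effect unbiased);
# beta[2] is attenuated toward zero (covariate effect biased).
print(beta)
```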
In vitro dentine remineralization with a potential salivary phosphoprotein homologue.
Romero, Maria Jacinta Rosario H; Nakashima, Syozi; Nikaido, Toru; Sadr, Alireza; Tagami, Junji
2016-08-01
Advantages of introducing a salivary phosphoprotein homologue under standardized in vitro conditions to simulate the mineral-stabilizing properties of saliva have been proposed. This study longitudinally investigates the effects of casein, incorporated as a potential salivary phosphoprotein homologue in artificial saliva (AS) solutions with/without fluoride (F), on in vitro dentine lesion remineralization. Thin sections of bovine root dentine were demineralized and allocated randomly into 6 groups (n=18) having equivalent mineral loss (ΔZ) after transverse microradiography (TMR). The specimens were remineralized using AS solutions containing casein 0 μg/ml, F 0 ppm (C0-F0); casein 0 μg/ml, F 1 ppm (C0-F1); casein 10 μg/ml, F 0 ppm (C10-F0); casein 10 μg/ml, F 1 ppm (C10-F1); casein 100 μg/ml, F 0 ppm (C100-F0) or casein 100 μg/ml, F 1 ppm (C100-F1) for 28 days, with TMR taken every 7 days. Surface mineral precipitation, evident in group C0-F1, was apparently inhibited in groups with casein incorporation. Repeated measures ANOVA with Bonferroni correction revealed higher ΔZ for non-F and non-casein groups than for their counterparts (p<0.001). Subsequent multiple comparisons showed that mineral gain was higher (p<0.001) with 10 μg/ml casein than with 100 μg/ml when F was present in the earlier stages of remineralization, with both groups achieving almost complete remineralization after 28 days. Casein is a potential salivary phosphoprotein homologue that could be employed for in vitro dentine remineralization studies. Concentration-related effects may be clinically significant and thus must be further examined. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koenig, J.Q.; Covert, D.S.; Smith, M.S.
Separate exposures to 0.12 ppm ozone (O3) or 0.18 ppm nitrogen dioxide (NO2) have not demonstrated consistent changes in pulmonary function in adolescent subjects. However, in polluted urban air, O3 and NO2 occur in combination. Therefore, this project was designed to investigate the pulmonary effects of combined O3 and NO2 exposures during intermittent exercise in adolescent subjects. Twelve healthy and twelve well-characterized asthmatic adolescent subjects were exposed randomly to clean air or 0.12 ppm O3 and 0.30 ppm NO2 alone or in combination during 60 minutes of intermittent moderate exercise (32.5 l/min). The inhalation exposures were carried out while the subjects breathed on a rubber mouthpiece with nose clips in place. The following pulmonary functional values were measured before and after exposure: peak flow, total respiratory resistance, maximal flow at 50 and 75 percent of expired vital capacity, forced expiratory volume in one second and forced vital capacity (FVC). Statistical significance of pulmonary function changes was tested by analysis of covariance for repeated measures. After exposure to 0.12 ppm O3 a significant decrease was seen in maximal flow at 50% of FVC in asthmatic subjects. After exposure to 0.30 ppm NO2 a significant decrease was seen in FVC also in the asthmatic subjects. One possible explanation for these changes is the multiple comparison effect. No significant changes in any parameters were seen in the asthmatic subjects after the combined O3-NO2 exposure or in the healthy subjects after any of the exposures.
Austin, R S; Rodriguez, J M; Dunne, S; Moazzez, R; Bartlett, D W
2010-10-01
To investigate the effect of an aqueous sodium fluoride solution of increasing concentration on erosion and attrition of enamel and dentine in vitro. Enamel and dentine sections from caries-free human third molars were polished flat and taped (exposing a 3 mm x 3 mm area) before being randomly allocated to 1 of 5 groups per substrate (n=10/gp): G1 (distilled water control); G2 (225 ppm NaF); G3 (1450 ppm NaF); G4 (5000 ppm NaF); G5 (19,000 ppm NaF). All specimens were subjected to 5, 10 and 15 cycles of experimental wear [1 cycle=artificial saliva (2h, pH 7.0)+erosion (0.3% citric acid, pH 3.2, 5 min)+fluoride/control (5 min)+attrition (60 linear strokes in artificial saliva from enamel antagonists loaded to 300 g)]. Following tape removal, step height (SH) in μm was measured using optical profilometry. As the number of cycles increased, tooth surface loss increased significantly in enamel and dentine after attrition and erosion, and in dentine after attrition alone. Attrition and erosion resulted in greater surface loss than attrition alone after 15 cycles of experimental wear of enamel. The 5000 ppm and 19,000 ppm sodium fluoride solutions had a protective effect on erosive and attritional enamel tooth wear in vitro; however, no other groups showed significant differences. The more intensive the fluoride regime, the more protection was afforded to enamel from attrition and erosion. However, in this study no such protective effect was demonstrated for dentine. Copyright 2010 Elsevier Ltd. All rights reserved.
Regoui, Chaouki; Durand, Guillaume; Belliveau, Luc; Léger, Serge
2013-01-01
This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied to the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach. PMID:23984392
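A toy sketch of the general scheme, a key-driven randomized assignment of unique error-correcting DNA code words to extended-ASCII characters, might look as follows. This is not the authors' HyDEn construction; the word length, minimum distance, rejection-sampling codebook, and nearest-codeword decoding are all simplifying assumptions:

```python
import random

# Toy codebook: unique DNA words with pairwise Hamming distance >= 3, so a
# single nucleotide substitution is correctable by nearest-codeword search.
BASES = "ACGT"
WORD_LEN, MIN_DIST, N_CHARS = 10, 3, 256   # illustrative parameters

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def build_codebook(key=42):
    rng = random.Random(key)               # private key drives the randomization
    words = []
    while len(words) < N_CHARS:
        cand = "".join(rng.choice(BASES) for _ in range(WORD_LEN))
        if all(hamming(cand, w) >= MIN_DIST for w in words):
            words.append(cand)
    chars = list(range(N_CHARS))
    rng.shuffle(chars)                     # randomized character-to-word map
    return {chr(c): w for c, w in zip(chars, words)}

book = build_codebook()
inverse = {w: ch for ch, w in book.items()}

def decode_word(word):
    # correct up to one substitution by choosing the closest codeword
    return inverse[min(inverse, key=lambda w: hamming(w, word))]

enc = "".join(book[ch] for ch in "Hi")
dec = "".join(decode_word(enc[i:i + WORD_LEN]) for i in range(0, len(enc), WORD_LEN))
print(enc, "->", dec)
```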
Dietary oregano essential oil alleviates experimentally induced coccidiosis in broilers.
Mohiti-Asli, M; Ghanaatparast-Rashti, M
2015-06-15
An experiment was conducted to determine the effects of oregano essential oil on growth performance and coccidiosis prevention in mildly challenged broilers. A total of 250 1-d-old chicks were used in a completely randomized design with 5 treatments and 5 replicates with 10 birds in each replication. Experimental treatments included: (1) negative control (NC; unchallenged), (2) positive control (PC; challenged with sporulated oocysts of Eimeria), (3) PC fed 200 ppm Diclazuril in diet, (4) PC fed 300 ppm oregano oil in diet, and (5) PC fed 500 ppm oregano oil in diet. At 22 d of age, all the experimental groups except for NC were challenged with a 50-fold dose of Livacox T as a trivalent live attenuated coccidiosis vaccine. On d 28, two birds were slaughtered and intestinal coccidiosis lesions were scored 0-4. Moreover, droppings were scored on a scale of 0-3, and oocysts per gram of feces (OPG) were measured. Oregano oil at either supplementation rate increased body weight gain (P=0.039) and improved feed conversion ratio (P=0.010) from d 22 to 28, when compared with the PC group. Using 500 ppm oregano oil in the diet of challenged broilers increased the European efficiency factor relative to the PC group (P=0.020). Moreover, challenged broilers fed 500 ppm oregano oil or Diclazuril in diets displayed lower coccidiosis lesion scores in the upper (P=0.003) and middle (P=0.018) regions of the intestine than the PC group, with the effect being similar to unchallenged birds. In general, challenged birds fed 500 ppm oregano oil or Diclazuril in diets had lower OPG (P=0.001), dropping scores (P=0.001), litter scores (P=0.001), and pH of litter (P=0.001) than the PC group. It could be concluded that supplementation of oregano oil at the dose of 500 ppm in diet may have a beneficial effect on prevention of coccidiosis in broilers. Copyright © 2015 Elsevier B.V. All rights reserved.
Space-Based CO2 Active Optical Remote Sensing using 2-μm Triple-Pulse IPDA Lidar
NASA Astrophysics Data System (ADS)
Singh, Upendra; Refaat, Tamer; Ismail, Syed; Petros, Mulugeta
2017-04-01
Sustained high-quality column CO2 measurements from space are required to improve estimates of regional and global scale sources and sinks, to attribute them to specific biogeochemical processes, to improve models of carbon-climate interactions, and to reduce uncertainties in projecting future change. Several studies show that space-borne CO2 measurements offer many advantages, particularly over high latitudes, the tropics and the southern oceans. Current satellite-based sensing provides rapid CO2 monitoring with global-scale coverage and high spatial resolution. However, these sensors are based on passive remote sensing, which involves limitations such as incomplete seasonal and high-latitude coverage, poor sensitivity to the lower atmosphere, retrieval complexities and radiation path-length uncertainties. CO2 active optical remote sensing is an alternative technique that has the potential to overcome these limitations. The need for space-based CO2 active optical remote sensing using the Integrated Path Differential Absorption (IPDA) lidar has been advocated by the Advanced Space Carbon and Climate Observation of Planet Earth (A-Scope) and Active Sensing of CO2 Emission over Nights, Days, and Seasons (ASCENDS) studies in Europe and the USA. Space-based IPDA systems can provide sustained, high-precision and low-bias column CO2 in the presence of thin clouds and aerosols while covering critical regions such as high-latitude ecosystems, tropical ecosystems, the southern ocean, managed ecosystems, urban and industrial systems, and coastal systems. At NASA Langley Research Center, technology developments are in progress to provide a high-pulse-energy 2-μm IPDA that enables optimum, lower-troposphere-weighted column CO2 measurements from space. This system provides simultaneous ranging; information on aerosol and cloud distributions; measurements over regions of broken clouds; and reduced influences of surface complexities. Through continual support from the NASA Earth Science Technology Office, current efforts are focused on developing an aircraft-based 2-μm triple-pulse IPDA lidar for independent and simultaneous monitoring of CO2 and water vapor (H2O). Triple-pulse IPDA design, development and integration is based on the knowledge gathered from the successful demonstration of the airborne CO2 2-μm double-pulse IPDA lidar. IPDA transmitter enhancements include generating three successive high-energy (80 mJ), high-repetition-rate (50 Hz) pulses using a single pump pulse. IPDA receiver enhancements include an advanced, low-noise (1 fW/√Hz) MCT e-APD detection system for improved measurement sensitivity. In place of H2O sensing, the triple-pulse IPDA can be tuned to measure CO2 with two different weighting functions using two on-lines and a common off-line. Modeling of a space-based high-energy 2-µm triple-pulse IPDA lidar was conducted to demonstrate CO2 measurement capability and to evaluate random and systematic errors. Projected performance shows <0.12% random error and <0.07% residual systematic error. These translate to near-optimum 0.5 ppm precision and 0.3 ppm bias in low-tropospheric column CO2 mixing ratio measurements from space for 10-second signal averaging over the Railroad Valley reference surface using the US Standard atmospheric model. In addition, measurements can be optimized by tuning the on-lines based upon ground target scenarios, environment and science objectives. With 10 MHz detection bandwidth, surface ranging with an uncertainty of <3 m can be achieved, as demonstrated in earlier airborne flights.
Random Error in Judgment: The Contribution of Encoding and Retrieval Processes
ERIC Educational Resources Information Center
Pleskac, Timothy J.; Dougherty, Michael R.; Rivadeneira, A. Walkyria; Wallsten, Thomas S.
2009-01-01
Theories of confidence judgments have embraced the role random error plays in influencing responses. An important next step is to identify the source(s) of these random effects. To do so, we used the stochastic judgment model (SJM) to distinguish the contribution of encoding and retrieval processes. In particular, we investigated whether dividing…
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
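For reference, the bound in question has the standard form below, where N is the block length, R the rate in nats, Q the input distribution, and P(y|x) the channel transition probability; this is a textbook statement of the bound rather than anything specific to this paper's derivation:

```latex
\overline{P}_e \le e^{-N E_r(R)}, \qquad
E_r(R) = \max_{0 \le \rho \le 1} \bigl[ E_0(\rho) - \rho R \bigr], \qquad
E_0(\rho) = -\ln \sum_{y} \Bigl[ \sum_{x} Q(x)\, P(y \mid x)^{1/(1+\rho)} \Bigr]^{1+\rho}
```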
Multi-layer Retrievals of Greenhouse Gases from a Combined Use of GOSAT TANSO-FTS SWIR and TIR
NASA Astrophysics Data System (ADS)
Kikuchi, N.; Kuze, A.; Kataoka, F.; Shiomi, K.; Hashimoto, M.; Suto, H.; Knuteson, R. O.; Iraci, L. T.; Yates, E. L.; Gore, W.; Tanaka, T.; Yokota, T.
2016-12-01
The TANSO-FTS sensor onboard GOSAT has three frequency bands in the shortwave infrared (SWIR) and the fourth band in the thermal infrared (TIR). Observations of high-resolution spectra of reflected sunlight in the SWIR are extensively utilized to retrieve column-averaged concentrations of the major greenhouse gases such as carbon dioxide (XCO2) and methane (XCH4). Although global XCO2 and XCH4 distribution retrieved from SWIR data can reduce the uncertainty in the current knowledge about sources and sinks of these gases, information on the vertical profiles would be more useful to constrain the surface flux and also to identify the local emission sources. Based on the degrees of freedom for signal, Kulawik et al. (2016, IWGGMS-12 presentation) shows that 2-layer information on the concentration of CO2 can be extracted from TANSO-FTS SWIR measurements, and the retrieval error is predicted to be about 5 ppm in the lower troposphere. In this study, we present multi-layer retrievals of CO2 and CH4 from a combined use of measurements of TANSO-FTS SWIR and TIR. We selected GOSAT observations at Railroad Valley Playa in Nevada, USA, which is a vicarious calibration site for TANSO-FTS, as we have various ancillary data including atmospheric temperature and humidity taken by a radiosonde, surface temperature, and surface emissivity with a ground based FTS. All of these data are useful especially for retrievals using TIR spectra. Currently, we use the 700-800 cm-1 and 1200-1300 cm-1 TIR windows for CO2 and CH4 retrievals, respectively, in addition to the SWIR bands. We found that by adding TIR windows, 3-layer information can be extracted, and the predicted retrieval error in the CO2 concentration was reduced about 1 ppm in the lower troposphere. We expect that the retrieval error could be further reduced by optimizing TIR windows and by reducing systematic forward model errors.
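The "degrees of freedom for signal" quoted here is the standard optimal-estimation diagnostic: with Jacobian K, inverse noise covariance Se^-1 and prior covariance Sa, the averaging kernel is A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K and DOFS = trace(A). A minimal numpy sketch with illustrative matrices, not GOSAT's actual Jacobians:

```python
import numpy as np

rng = np.random.default_rng(1)
n_layers, n_channels = 3, 200              # 3 retrieval layers, many spectral points

K = rng.normal(size=(n_channels, n_layers))        # Jacobian d(spectrum)/d(layer CO2)
Se_inv = np.eye(n_channels) / 0.5**2               # inverse measurement-noise covariance
Sa_inv = np.linalg.inv(np.eye(n_layers) * 4.0**2)  # inverse prior covariance (ppm^2)

M = K.T @ Se_inv @ K + Sa_inv
A = np.linalg.solve(M, K.T @ Se_inv @ K)   # averaging kernel matrix
S_hat = np.linalg.inv(M)                   # posterior (retrieval error) covariance

print("DOFS:", np.trace(A))                # degrees of freedom for signal
print("posterior 1-sigma per layer (ppm):", np.sqrt(np.diag(S_hat)))
```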
Effect of tricaine methanesulfonate (MS-222) on hematocrit values in rainbow trout (Salmo gairdneri)
Reinitz, G.L.; Rix, J.
1977-01-01
1. Anesthesia of rainbow trout (Salmo gairdneri) with 70 ppm tricaine methanesulfonate (MS-222) for 3-9 min resulted in a linear increase in hematocrit. 2. Handling of unanesthetized trout caused a higher and more variable hematocrit reading than did exposure to MS-222 for up to 3 min. 3. The range and standard error of hematocrit readings were smallest in trout treated with MS-222 for 1 min.
Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors
NASA Astrophysics Data System (ADS)
Herschtal, A.; te Marvelde, L.; Mengersen, K.; Hosseinifard, Z.; Foroudi, F.; Devereux, T.; Pham, D.; Ball, D.; Greer, P. B.; Pichler, P.; Eade, T.; Kneebone, A.; Bell, L.; Caine, H.; Hindson, B.; Kron, T.
2015-02-01
Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts -19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements.
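A minimal Monte Carlo sketch of the modelling idea, with patient random-error variances drawn from an inverse gamma rather than fixed at a single population value, is given below. The IG parameters and the simple displacement-coverage criterion are placeholders; the paper's requirement is stated in terms of dose coverage through a penumbral model:

```python
import numpy as np

rng = np.random.default_rng(7)
n_patients = 10_000

# Patient-specific random-error variances sigma_i^2 ~ inverse gamma:
# if G ~ Gamma(shape=a, scale=1/b), then 1/G ~ InvGamma(a, scale=b).
a, b = 3.0, 8.0                                   # placeholder shape/scale (mm^2)
sigma = np.sqrt(1.0 / rng.gamma(a, 1.0 / b, size=n_patients))

# Simplified proxy for coverage: the margin containing 95% of a patient's
# 1D Gaussian fraction-to-fraction displacements is 1.96 * sigma_i.
per_patient_margin = 1.96 * sigma

m_ig = np.quantile(per_patient_margin, 0.90)      # covers 90% of patients
m_const = 1.96 * np.sqrt(np.mean(sigma**2))       # constant-sigma recipe (pop. RMS)
print(f"IG-based margin {m_ig:.2f} mm vs constant-sigma margin {m_const:.2f} mm")
```

Because the inverse gamma has a heavy right tail, the 90th-percentile patient needs a wider margin than the population-RMS recipe suggests, which is the qualitative effect the abstract reports.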
NASA Astrophysics Data System (ADS)
Tavakol, Hossein; Esfandyari, Maryam; Taheri, Salman; Heydari, Akbar
2011-08-01
In this work, two important opioid antagonists, naltrexone and oxycodone, were prepared from thebaine and were characterized by IR, 1H NMR and 13C NMR spectroscopy. Moreover, computational NMR and IR parameters were obtained using density functional theory (DFT) at the B3LYP/6-311++G** level of theory. Complete NMR and vibrational assignments were carried out using the observed and calculated spectra. The IR frequencies and NMR chemical shifts determined experimentally were compared with those obtained theoretically from DFT calculations and showed good agreement. The RMS errors observed between experimental and calculated data for the IR absorptions are 85 and 105 cm-1, for the 1H NMR peaks are 0.87 and 0.17 ppm, and for those of 13C NMR are 5.6 and 5.3 ppm, respectively, for naltrexone and oxycodone.
Respiratory responses of vigorously exercising children to 0. 12 ppm ozone exposure
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonnell, W.F. 3d.; Chapman, R.S.; Leigh, M.W.
1985-10-01
Changes in respiratory function have been suggested for children exposed to less than 0.12 ppm ozone (O3) while engaged in normal activities. Because the results of these studies have been confounded by other variables, such as temperature or the presence of other pollutants, or have been questioned as to the adequacy of exposure measurements, the authors determined the acute response of children exposed to 0.12 ppm O3 in a controlled chamber environment. Twenty-three white males 8 to 11 yr of age were exposed once to clean air and once to 0.12 ppm O3 in random order. Exposures were for 2.5 h and included 2 h of intermittent heavy exercise. Measures of forced expiratory volume in one second (FEV1) and the symptom cough were determined prior to and after each exposure. A significant decline in FEV1 was found after the O3 exposure compared to the air exposure, and it appeared to persist for 16 to 20 h. No significant increase in cough was found due to O3 exposure. Forced vital capacity, specific airways resistance, respiratory frequency, tidal volume, and other symptoms were measured in a secondary exploratory analysis of this study.
Gluten Contamination in Naturally or Labeled Gluten-Free Products Marketed in Italy.
Verma, Anil K; Gatti, Simona; Galeazzi, Tiziana; Monachesi, Chiara; Padella, Lucia; Baldo, Giada Del; Annibali, Roberta; Lionetti, Elena; Catassi, Carlo
2017-02-07
A strict and lifelong gluten-free diet is the only treatment of celiac disease. Gluten contamination has been frequently reported in nominally gluten-free products. The aim of this study was to test the level of gluten contamination in gluten-free products currently available in the Italian market. A total of 200 commercially available gluten-free products (including both naturally and certified gluten-free products) were randomly collected from different Italian supermarkets. The gluten content was determined by the R5 ELISA Kit approved by EU regulations. Gluten level was lower than 10 parts per million (ppm) in 173 products (86.5%), between 10 and 20 ppm in 9 (4.5%), and higher than 20 ppm in 18 (9%). In contaminated foodstuff (gluten > 20 ppm) the amount of gluten was almost exclusively in the range of a very low gluten content. Contaminated products most commonly belonged to oats-, buckwheat-, and lentils-based items. Certified and higher cost gluten-free products were less commonly contaminated by gluten. Gluten contamination in either naturally or labeled gluten-free products marketed in Italy is nowadays uncommon and usually mild on a quantitative basis. A program of systematic sampling of gluten-free food is needed to promptly disclose at-risk products.
Gluten Contamination in Naturally or Labeled Gluten-Free Products Marketed in Italy
Verma, Anil K.; Gatti, Simona; Galeazzi, Tiziana; Monachesi, Chiara; Padella, Lucia; Baldo, Giada Del; Annibali, Roberta; Lionetti, Elena; Catassi, Carlo
2017-01-01
Background: A strict and lifelong gluten-free diet is the only treatment of celiac disease. Gluten contamination has been frequently reported in nominally gluten-free products. The aim of this study was to test the level of gluten contamination in gluten-free products currently available in the Italian market. Method: A total of 200 commercially available gluten-free products (including both naturally and certified gluten-free products) were randomly collected from different Italian supermarkets. The gluten content was determined by the R5 ELISA Kit approved by EU regulations. Results: Gluten level was lower than 10 parts per million (ppm) in 173 products (86.5%), between 10 and 20 ppm in 9 (4.5%), and higher than 20 ppm in 18 (9%). In contaminated foodstuff (gluten > 20 ppm) the amount of gluten was almost exclusively in the range of a very low gluten content. Contaminated products most commonly belonged to oats-, buckwheat-, and lentils-based items. Certified and higher cost gluten-free products were less commonly contaminated by gluten. Conclusion: Gluten contamination in either naturally or labeled gluten-free products marketed in Italy is nowadays uncommon and usually mild on a quantitative basis. A program of systematic sampling of gluten-free food is needed to promptly disclose at-risk products. PMID:28178205
Particle Tracking on the BNL Relativistic Heavy Ion Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dell, G. F.
1986-08-07
Tracking studies including the effects of random multipole errors alone, as well as the combined effects of random and systematic multipole errors, have been made for RHIC. Initial results for operating at an off-diagonal working point are discussed.
Simulation of wave propagation in three-dimensional random media
NASA Technical Reports Server (NTRS)
Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1993-01-01
A quantitative error analysis for the simulation of wave propagation in three-dimensional random media, assuming narrow-angle scattering, is presented for plane-wave and spherical-wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
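One step of the kind of simulation analyzed here, a thin random phase screen followed by propagation to the next screen under the narrow-angle assumption, can be sketched as follows. The grid, screen spacing, turbulence strength, and spectral-synthesis normalization are simplified illustrations, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, dx = 256, 0.01                # grid points and spacing [m]
wavelength, dz = 1.0e-6, 2.0e3   # wavelength [m] and screen separation [m]
r0 = 0.1                         # Fried parameter [m] (turbulence strength)

fx = np.fft.fftfreq(N, dx)       # spatial frequencies [1/m]
FX, FY = np.meshgrid(fx, fx)
f2 = FX**2 + FY**2
f = np.sqrt(f2)
f[0, 0] = np.inf                 # suppress the undefined DC term

# Kolmogorov phase power spectral density: 0.023 r0^(-5/3) f^(-11/3)
psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
df = fx[1] - fx[0]

# Spectral synthesis of one screen: complex Gaussian Fourier amplitudes
# shaped by sqrt(psd) * df (normalization kept deliberately simple).
c = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) * np.sqrt(psd) * df
screen = np.real(np.fft.ifft2(c)) * N**2   # undo numpy's 1/N^2 ifft scaling

# Split step: thin phase screen, then angular-spectrum vacuum propagation
# over dz (the constant phase factor exp(i k dz) is omitted).
U = np.exp(1j * screen)                    # unit-amplitude plane wave input
H = np.exp(-1j * np.pi * wavelength * dz * f2)
U = np.fft.ifft2(np.fft.fft2(U) * H)

I = np.abs(U) ** 2
print("scintillation index:", I.var() / I.mean() ** 2)
```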
Reyes, Jeanette M; Xu, Yadong; Vizuete, William; Serre, Marc L
2017-01-01
The regulatory Community Multiscale Air Quality (CMAQ) model is a means to understanding the sources, concentrations and regulatory attainment of air pollutants within a model's domain. Substantial resources are allocated to the evaluation of model performance. The Regionalized Air quality Model Performance (RAMP) method introduced here explores novel ways of visualizing and evaluating CMAQ model performance and errors for daily Particulate Matter ≤ 2.5 micrometers (PM2.5) concentrations across the continental United States. The RAMP method performs a non-homogeneous, non-linear, non-homoscedastic model performance evaluation at each CMAQ grid. This work demonstrates that CMAQ model performance, for a well-documented 2001 regulatory episode, is non-homogeneous across space/time. The RAMP correction of systematic errors outperforms other model evaluation methods, as demonstrated by a 22.1% reduction in Mean Square Error compared to a constant domain-wide correction. The RAMP method is able to accurately reproduce simulated performance with a correlation of r = 76.1%. Most of the error coming from CMAQ is random error, with only a minority of error being systematic. Areas of high systematic error are collocated with areas of high random error, implying both error types originate from similar sources. Therefore, addressing underlying causes of systematic error will have the added benefit of also addressing underlying causes of random error.
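At a single grid cell, the systematic/random split used in this kind of evaluation reduces to familiar quantities: the mean model-minus-observation error is the systematic, correctable part, and the remaining spread is the random part. A toy sketch with fabricated daily PM2.5 values:

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.gamma(4.0, 3.0, size=365)                        # fabricated daily PM2.5 obs
model = 1.2 * obs - 2.0 + rng.normal(scale=4.0, size=365)  # biased, noisy "model"

error = model - obs
systematic = error.mean()          # correctable (systematic) component here
random_sd = error.std(ddof=1)      # spread that remains after bias removal

corrected = model - systematic
print(f"systematic error {systematic:.2f}, random error SD {random_sd:.2f}")
print(f"MSE before {np.mean(error**2):.2f}, after correction "
      f"{np.mean((corrected - obs)**2):.2f}")
```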
Francesconi, Carlos Fernando de Magalhães; Machado, Marta Brenner; Steinwurz, Flavio; Nones, Rodrigo Bremer; Quilici, Flávio Antonio; Catapani, Wilson Roberto; Miszputen, Sender Jankiel; Bafutto, Mauro
2016-01-01
Primary hypolactasia is a common condition where a reduced lactase activity in the intestinal mucosa is present. The presence of abdominal symptoms due to poor absorption of lactose, which are present in some cases, is a characteristic of lactose intolerance. Evaluate the efficacy of a product containing exogenous lactase in tablet form compared to a reference product with proven effectiveness in patients with lactose intolerance. Multicentre, randomized, parallel group, single-blind, comparative non-inferiority study. One hundred twenty-nine (129) adult lactose intolerance patients with hydrogen breath test results consistent with a diagnosis of hypolactasia were randomly assigned to receive the experimental product (Perlatte® - Eurofarma Laboratórios S.A.) or the reference product (Lactaid® - McNeil Nutritionals, USA) orally (one tablet, three times per day) for 42 consecutive days. Data from 128 patients who actually received the studied treatments were analysed (66 were treated with the experimental product and 62 with the reference product). The two groups presented with similar baseline clinical and demographic data. Mean exhaled hydrogen concentration tested at 90 minutes after the last treatment (Day 42) was significantly lower in the experimental product treated group (17±18 ppm versus 34±47 ppm) in the per protocol population. The difference between the means of the two groups was -17 ppm (95% confidence interval [95% CI]: -31.03; -3.17). The upper limit of the 95% CI did not exceed the a priori non-inferiority limit (7.5 ppm). Secondary efficacy analyses confirmed that the treatments were similar (per protocol and intention to treat population). The tolerability was excellent in both groups, and there were no reports of serious adverse events related to the study treatment. The experimental product was non-inferior to the reference product, indicating that it was an effective replacement therapy for endogenous lactase in lactose intolerance patients.
Yamada, Kazuki; Endo, Hirosuke; Tetsunaga, Tomonori; Miyake, Takamasa; Sanki, Tomoaki; Ozaki, Toshifumi
2018-01-01
The accuracy of various navigation systems used for total hip arthroplasty has been described, but no publications reported the accuracy of cup orientation in computed tomography (CT)-based 2D-3D (two-dimensional to three-dimensional) matched navigation. In a prospective, randomized controlled study, 80 hips including 44 with developmental dysplasia of the hips were divided into a CT-based 2D-3D matched navigation group (2D-3D group) and a paired-point matched navigation group (PPM group). The accuracy of cup orientation (absolute difference between the intraoperative record and the postoperative measurement) was compared between groups. Additionally, multiple logistic regression analysis was performed to evaluate patient factors affecting the accuracy of cup orientation in each navigation. The accuracy of cup inclination was 2.5° ± 2.2° in the 2D-3D group and 4.6° ± 3.3° in the PPM group (P = .0016). The accuracy of cup anteversion was 2.3° ± 1.7° in the 2D-3D group and 4.4° ± 3.3° in the PPM group (P = .0009). In the PPM group, the presence of roof osteophytes decreased the accuracy of cup inclination (odds ratio 8.27, P = .0140) and the absolute value of pelvic tilt had a negative influence on the accuracy of cup anteversion (odds ratio 1.27, P = .0222). In the 2D-3D group, patient factors had no effect on the accuracy of cup orientation. The accuracy of cup positioning in CT-based 2D-3D matched navigation was better than in paired-point matched navigation, and was not affected by patient factors. It is a useful system for even severely deformed pelvises such as developmental dysplasia of the hips. Copyright © 2017 Elsevier Inc. All rights reserved.
Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation
NASA Astrophysics Data System (ADS)
Li, C.
2012-07-01
POS, integrating GPS and INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, INS not only has systematic error, it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades, where usually only two vanishing points can be detected; the traditional calibration approach based on three orthogonal vanishing points is thus challenged. Firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY); how to set initial weights for the adjustment solution of single-image vanishing points is presented, and the vanishing points and their error distributions are solved based on an iterative method with variable weights, the co-factor matrix, and error ellipse theory. Thirdly, given the error ellipses of the two vanishing points (VX, VY) and the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion; the Monte Carlo methods utilized for this random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
Accuracy of Robotic Radiosurgical Liver Treatment Throughout the Respiratory Cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winter, Jeff D.; Wong, Raimond; Swaminath, Anand
Purpose: To quantify random uncertainties in robotic radiosurgical treatment of liver lesions with real-time respiratory motion management. Methods and Materials: We conducted a retrospective analysis of 27 liver cancer patients treated with robotic radiosurgery over 118 fractions. The robotic radiosurgical system uses orthogonal x-ray images to determine internal target position and correlates this position with an external surrogate to provide robotic corrections of linear accelerator positioning. Verification and update of this internal–external correlation model was achieved using periodic x-ray images collected throughout treatment. To quantify random uncertainties in targeting, we analyzed logged tracking information and isolated x-ray images collected immediately before beam delivery. For translational correlation errors, we quantified the difference between correlation model–estimated target position and actual position determined by periodic x-ray imaging. To quantify prediction errors, we computed the mean absolute difference between the predicted coordinates and actual modeled position calculated 115 milliseconds later. We estimated overall random uncertainty by quadratically summing correlation, prediction, and end-to-end targeting errors. We also investigated relationships between tracking errors and motion amplitude using linear regression. Results: The 95th percentile absolute correlation errors in each direction were 2.1 mm left–right, 1.8 mm anterior–posterior, 3.3 mm cranio–caudal, and 3.9 mm 3-dimensional radial, whereas 95th percentile absolute radial prediction errors were 0.5 mm. Overall 95th percentile random uncertainty was 4 mm in the radial direction. Prediction errors were strongly correlated with modeled target amplitude (r=0.53-0.66, P<.001), whereas only weak correlations existed for correlation errors. Conclusions: Study results demonstrate that model correlation errors are the primary random source of uncertainty in Cyberknife liver treatment and, unlike prediction errors, are not strongly correlated with target motion amplitude. Aggregate 3-dimensional radial position errors presented here suggest the target will be within 4 mm of the target volume for 95% of the beam delivery.
Quantitative susceptibility mapping of human brain at 3T: a multisite reproducibility study.
Lin, P-Y; Chao, T-C; Wu, M-L
2015-03-01
Quantitative susceptibility mapping of the human brain has demonstrated strong potential in examining iron deposition, which may help in investigating possible brain pathology. This study assesses the reproducibility of quantitative susceptibility mapping across different imaging sites. In this study, the susceptibility values of 5 regions of interest in the human brain were measured on 9 healthy subjects following calibration by using phantom experiments. Each of the subjects was imaged 5 times on 1 scanner with the same procedure repeated on 3 different 3T systems so that both within-site and cross-site quantitative susceptibility mapping precision levels could be assessed. Two quantitative susceptibility mapping algorithms, similar in principle, one by using iterative regularization (iterative quantitative susceptibility mapping) and the other with analytic optimal solutions (deterministic quantitative susceptibility mapping), were implemented, and their performances were compared. Results show that while deterministic quantitative susceptibility mapping had nearly 700 times faster computation speed, residual streaking artifacts seem to be more prominent compared with iterative quantitative susceptibility mapping. With quantitative susceptibility mapping, the putamen, globus pallidus, and caudate nucleus showed smaller imprecision on the order of 0.005 ppm, whereas the red nucleus and substantia nigra, closer to the skull base, had a somewhat larger imprecision of approximately 0.01 ppm. Cross-site errors were not significantly larger than within-site errors. Possible sources of estimation errors are discussed. The reproducibility of quantitative susceptibility mapping in the human brain in vivo is regionally dependent, and the precision levels achieved with quantitative susceptibility mapping should allow longitudinal and multisite studies such as aging-related changes in brain tissue magnetic susceptibility. © 2015 by American Journal of Neuroradiology.
Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-04-01
The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying as a random dynamical system the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.
NASA Astrophysics Data System (ADS)
Quatrevalet, M.; Ai, X.; Pérez-Serrano, A.; Adamiec, P.; Barbero, J.; Fix, A.; Rarity, J. G.; Ehret, G.; Esquivias, I.
2017-09-01
Carbon dioxide (CO2) is the major anthropogenic greenhouse gas contributing to global warming and climate change. Its concentration has recently reached the 400-ppm mark, representing a more than 40% increase with respect to its level prior to the industrial revolution.
Robbins, C.S.
1975-01-01
Adult male northern bobwhite (Colinus virginianus) were fed diets containing organophosphorus pesticides, and the birds' discrimination acquisition and reversal performance was evaluated. The birds received the pesticide-laced diets continually, beginning 2 d before behavioral testing and ending after the birds completed the test series consisting of an acquisition and 10 reversals. Bobwhites fed a diet containing 0.18 ppm monocrotophos made 118% more errors (p < 0.05) than controls; however, bobwhites fed the fenthion diet made 48% fewer errors (p < 0.05) in the reversals. When retested after 18 (monocrotophos) and 73 (fenthion) d on clean diets, no residual behavioral effects were detected. Brain cholinesterase activity was inhibited in all treatment groups.
Kreitzer, J.F.; Fleming, W.J.
1988-01-01
Adult male northern bobwhite (Colinus virginianus) were fed diets containing organophosphorus pesticides, and the birds' discrimination acquisition and reversal performance was evaluated. The birds received the pesticide-laced diets continually, beginning 2 d before behavioral testing and ending after the birds completed the test series consisting of an acquisition and 10 reversals. Bobwhites fed a diet containing 0.18 ppm monocrotophos made 118% more errors (p < 0.05) than controls; however, bobwhites fed the fenthion diet made 48% fewer errors (p < 0.05) in the reversals. When retested after 18 (monocrotophos) and 73 (fenthion) d on clean diets, no residual behavioral effects were detected. Brain cholinesterase activity was inhibited in all treatment groups.
NASA Astrophysics Data System (ADS)
Guggino, S. N.; Hervig, R. L.
2010-12-01
Fluorine (F) is a volatile constituent of magmas and hydrous minerals, and trace amounts of F are incorporated into nominally anhydrous minerals such as olivine and clinopyroxene. Microanalytical techniques are routinely used to measure trace amounts of F at both high sensitivity and high spatial resolution in glasses and crystals. However, there are few well-established F concentrations for the glass standards routinely used in microanalytical laboratories, particularly standards of low silica, basaltic composition. In this study, we determined the F content of fourteen commonly used microanalytical glass standards of basaltic, intermediate, and rhyolitic composition. To serve as calibration standards, five basaltic glasses with ~0.2 to 2.5 wt% F were synthesized and characterized. A natural tholeiite from the East Pacific Rise was mixed with variable amounts of CaF2. The mixture was heated in a 1 atmosphere furnace to 1440 °C at fO2 = NNO for 30 minutes and quenched in water. Portions of the run products were studied by electron probe microanalysis (EPMA) and secondary ion mass spectrometry (SIMS). The EPMA used a 15 µm diameter defocused electron beam with a 15 kV accelerating voltage and a 25 nA primary current, a TAP crystal for detecting FKα X-rays, and Biotite 3 as the F standard. The F contents by EPMA agreed with the F added to the basalts after correction for mass loss during melting. The SIMS analyses used a primary beam of 16O- and detection of low-energy negative ions (-5 kV) at a mass resolution that resolved 18OH. Both microanalytical techniques confirmed homogeneity, and the SIMS calibration defined by EPMA shows an excellent linear trend with backgrounds of 2 ppm or less. Analyses of basaltic glass standards based on our synthesized calibration standards gave the following F contents and 2σ errors (ppm): ALV-519 = 83 ± 3; BCR-2G = 359 ± 6; BHVO-2G = 322 ± 15; GSA-1G = 10 ± 1; GSC-1G = 11 ± 1; GSD-1G = 19 ± 2; GSE-1G = 173 ± 1; KL2G (MPI-DING) = 101 ± 1; ML3B-G (MPI-DING) = 49 ± 17. These values are lower than published values for BCR-2 and BHVO-2 (unmelted powders) and the “information values” for the MPI-DING glass standards. Proton Induced Gamma ray Emission (PIGE) was tested for the high silica samples. PIGE analyses (1.7 MeV Tandem Accelerator; reaction type: 19F(p, αγ)16O; primary current = 20-30 nA; incident beam voltage = 1.5 MeV) were calibrated with a crystal of fluor-topaz (F = 20.3 wt%) and gave F values of: NIST 610 = 266 ± 14 ppm; NIST 620 = 54 ± 5 ppm; and UTR-2 = 1432 ± 32 ppm. SIMS calibration defined by the PIGE analyses shows an excellent linear trend with low background similar to the basaltic calibration. The F concentrations of intermediate MPI-DING glasses were determined based on SIMS calibration generated from the PIGE analysis above. The F concentrations and 2σ errors (ppm) are: T1G = 219.9 ± 6.8; StHs/680-G = 278.0 ± 2.0 ppm. This study revealed a large matrix effect between the high-silica and basaltic glasses, thus requiring the use of appropriate standards and separate SIMS calibrations when analyzing samples of different compositions.
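The SIMS calibrations described, straight-line working curves of measured ion signal against the known F content of the synthesized standards with the background given by the intercept, amount to a simple linear fit. A sketch with hypothetical numbers, not the study's data:

```python
import numpy as np

# Hypothetical working curve: known F (ppm) of synthesized standards vs a
# measured SIMS ion-signal ratio; neither column is the study's data.
known_ppm = np.array([2000.0, 5000.0, 10000.0, 18000.0, 25000.0])
ion_ratio = np.array([0.0401, 0.1000, 0.2001, 0.3601, 0.5000])

slope, intercept = np.polyfit(known_ppm, ion_ratio, 1)
background_ppm = intercept / slope     # curve background in concentration units

def f_content(ratio):
    # invert the calibration line for an unknown sample
    return (ratio - intercept) / slope

print(f"background ~ {background_ppm:.1f} ppm")
print(f"unknown at ratio 0.0072 -> {f_content(0.0072):.0f} ppm F")
```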
What errors do peer reviewers detect, and does training improve their ability to detect them?
Schroter, Sara; Black, Nick; Evans, Stephen; Godlee, Fiona; Osorio, Lyda; Smith, Richard
2008-10-01
To analyse data from a trial and report the frequencies with which major and minor errors are detected at a general medical journal, the types of errors missed and the impact of training on error detection. 607 peer reviewers at the BMJ were randomized to two intervention groups receiving different types of training (face-to-face training or a self-taught package) and a control group. Each reviewer was sent the same three test papers over the study period, each of which had nine major and five minor methodological errors inserted. BMJ peer reviewers. The quality of review, assessed using a validated instrument, and the number and type of errors detected before and after training. The number of major errors detected varied over the three papers. The interventions had small effects. At baseline (Paper 1) reviewers found an average of 2.58 of the nine major errors, with no notable difference between the groups. The mean number of errors reported was similar for the second and third papers, 2.71 and 3.0, respectively. Biased randomization was the error detected most frequently in all three papers; over 60% of the reviewers who rejected a paper identified this error. Reviewers who did not reject the papers found fewer errors, and the proportion finding biased randomization was less than 40% for each paper. Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of study. Short training packages have only a slight impact on improving error detection.
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System, and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
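A discrete-time sketch of the approximation described, a sum of five first-order (Gauss-)Markov processes standing in for a power-law clock-noise spectrum, is shown below. The time constants and amplitudes are placeholders, not values derived from any particular Allan variance model:

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n_steps = 1.0, 100_000
taus = np.array([1e1, 1e2, 1e3, 1e4, 1e5])   # five correlation times [s]
amps = 1.0 / taus**0.25                      # placeholder stationary amplitudes

phi = np.exp(-dt / taus)                     # per-step decay factors
drive_sd = amps * np.sqrt(1.0 - phi**2)      # keeps each process stationary

x = np.zeros(5)
y = np.empty(n_steps)                        # fractional frequency error y(t)
for k in range(n_steps):
    x = phi * x + drive_sd * rng.normal(size=5)
    y[k] = x.sum()                           # superposition approximates the PSD

range_error = np.cumsum(y) * dt              # frequency error integrates into a
                                             # phase/range-like observable
```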
NASA Astrophysics Data System (ADS)
Boyce, J. W.; Hodges, K. V.
2001-12-01
Despite the lack of an official pronouncement, the fluorapatite of Cerro de Mercado, Durango, Mexico has become the de facto standard for (U-Th)/He geochronology. In addition to being relatively inclusion-free and easily obtained, these crystals are commonly in excess of 5 mm in diameter, permitting the removal of the outer skin of the crystal, theoretically eliminating the alpha-ejection correction. However, bulk analyses of the Durango fluorapatite indicate a substantial variation in U and Th concentrations from aliquot to aliquot (167-238 ppm Th; 9.7-12.3 ppm U, [1]). If similar variations were to occur on the sub-grain scale, small fragments of single crystals could contain helium excesses or deficiencies due to alpha-ejection exchange between zones with varying parent element content. We have performed a series of experiments to quantify the intra-grain variation in U and Th, in order to model the effect of this variation on ages determined on Durango fluorapatite. X-ray maps show concentric zonation in U and Th, with similar but apparently more pronounced zonation in Si and Cl. Preliminary laser-ablation ICPMS data indicate, not surprisingly, that intra-grain variations in U and Th concentrations obtained by analysis of ~35 μm spots are larger than those previously obtained by bulk analytical techniques (with overall concentrations greater than for bulk analyses). Thus far, analyses yield U concentrations varying from 11 to 16 ppm, and Th concentrations ranging from 220 to 340 ppm. Modeling underway suggests that parent element variations on the order of 50%, such as those observed, and the resulting differential alpha-exchange could lead to several percent error in age for ~100 μm fragments. The effect scales inversely with fragment size, with 300 μm fragments (roughly the size of a large, single-grain analysis) having only ~1% error. This may offer an explanation for the previously observed inability to reproduce ages for the Durango fluorapatite within theoretical uncertainty [2]. [1] Young, E.J. et al., 1969. Mineralogy and geochemistry of fluorapatite from Cerro de Mercado, Durango, Mexico. USGS Professional Paper 650-D, pp. D84-D93. [2] House, M.A. et al., 2000. Helium chronometry of apatite and titanite using Nd-YAG laser heating. Earth and Planetary Science Letters, v. 183, pp. 365-368.
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Mo, C. D.
1978-01-01
An empirical study of the performance of Viterbi decoders in bursty channels was carried out, and an improved algebraic decoder for nonsystematic codes was developed. The resulting hybrid algorithm was simulated for the (2,1), k = 7 code on a computer using 20 channels having various error statistics, ranging from pure random-error to pure bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case except the 1% random-error channel, where the Viterbi decoder produced one fewer bit decoding error.
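The report's 20 channel models are not specified here. A common way to generate error statistics that range from purely random to purely bursty is a two-state Gilbert-Elliott model; the sketch below uses illustrative parameters, not the study's.

```python
import random

def gilbert_elliott(n_bits, p_g2b, p_b2g, e_good, e_bad, seed=1):
    """Generate an error pattern from a two-state Gilbert-Elliott channel.

    p_g2b / p_b2g: transition probabilities good->bad and bad->good;
    e_good / e_bad: bit-error probabilities within each state.
    """
    rng = random.Random(seed)
    bad = False
    errors = []
    for _ in range(n_bits):
        if bad:
            bad = rng.random() >= p_b2g    # stay bad unless we escape
        else:
            bad = rng.random() < p_g2b
        errors.append(rng.random() < (e_bad if bad else e_good))
    return errors

# A nearly pure random-error channel versus a strongly bursty one.
random_ch = gilbert_elliott(10_000, 0.0, 1.0, 0.01, 0.0)
bursty_ch = gilbert_elliott(10_000, 0.001, 0.1, 0.0, 0.5)
print(sum(random_ch), sum(bursty_ch))   # similar totals, very different clustering
```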
Error threshold for color codes and random three-body Ising models.
Katzgraber, Helmut G; Bombin, H; Martin-Delgado, M A
2009-08-28
We study the error threshold of color codes, a class of topological quantum codes that allow a direct implementation of quantum Clifford gates suitable for entanglement distillation, teleportation, and fault-tolerant quantum computation. We map the error-correction process onto a statistical mechanical random three-body Ising model and study its phase diagram via Monte Carlo simulations. The obtained error threshold of p_c = 0.109(2) is very close to that of Kitaev's toric code, showing that enhanced computational capabilities do not necessarily imply lower resistance to noise.
Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2015-11-01
The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning the laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers lead to notable changes in the optimal pressure evaluation parameters. The effect of smaller-scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex-dominated laminar and turbulent wake flows.
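The paper's planar, time-resolved methodology is not reproduced here. As a much-reduced sketch of the same sensitivity question (one-dimensional, steady, invented parameters), Gaussian noise can be added to a synthetic velocity profile and propagated through a pressure-gradient integration:

```python
import numpy as np

rho, n, dx = 1.0, 200, 0.01
x = np.arange(n) * dx
u_true = 1.0 + 0.2 * np.sin(2 * np.pi * x)      # synthetic 1-D velocity profile

def pressure_from_velocity(u):
    # Steady 1-D Euler: dp/dx = -rho * u * du/dx, integrated from the inlet.
    dudx = np.gradient(u, dx)
    return np.concatenate(([0.0], np.cumsum(-rho * u[:-1] * dudx[:-1] * dx)))

p_true = pressure_from_velocity(u_true)
rng = np.random.default_rng(0)
for sigma_u in (0.005, 0.01, 0.02):             # random velocity error levels
    rms = np.mean([np.sqrt(np.mean((pressure_from_velocity(
        u_true + rng.normal(0.0, sigma_u, n)) - p_true) ** 2))
        for _ in range(100)])
    print(f"sigma_u={sigma_u}: rms pressure error ~ {rms:.4f}")
# The pressure error grows quickly with sigma_u because differentiation
# amplifies velocity noise -- the trade-off the optimization model addresses.
```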
ERIC Educational Resources Information Center
Quarm, Daisy
1981-01-01
Findings for couples (N=119) show that low between-spouse correlations for wife's work, money, and spare time are due in part to random measurement error. Suggests that increasing the reliability of measures by creating multi-item indices can also increase correlations. Low correlations for car purchase, vacation, and child discipline were not accounted for by random measurement…
Coding for Communication Channels with Dead-Time Constraints
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon
2004-01-01
Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes. These codes are decoded using an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d,k), where d is the minimum allowable number of zeroes between ones and k is the maximum allowable number of zeroes between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model illustrated in the figure. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d, ∞) have been investigated theoretically and computationally. The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM frames separated by d-slot dead times.
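A minimal sketch of the dead-time bookkeeping described above (the helper names are ours, not the authors' code): checking that a slot sequence respects a (d,k) constraint, and why unconstrained PPM can violate it.

```python
def satisfies_dk(slots, d, k=float("inf")):
    """True if every run of 0s between consecutive 1s has length in [d, k]."""
    ones = [i for i, s in enumerate(slots) if s == 1]
    return all(d <= (b - a - 1) <= k for a, b in zip(ones, ones[1:]))

def ppm_frame(symbol, M):
    """Map an M-ary PPM symbol (0..M-1) to a frame of M slots with one pulse."""
    return [1 if i == symbol else 0 for i in range(M)]

# Two 8-ary PPM frames back to back: a pulse in slot 7 followed by a pulse in
# slot 0 leaves a zero-slot gap, violating a d = 1 minimum dead time.
seq = ppm_frame(7, 8) + ppm_frame(0, 8)
print(satisfies_dk(seq, d=1))   # False -> a constrained code must forbid this
```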
Schmidt, M A; Wells, E J; Davison, K; Riddell, A M; Welsh, L; Saran, F
2017-02-01
MRI is a mandatory requirement to accurately plan Stereotactic Radiosurgery (SRS) for Vestibular Schwannomas. However, MRI may be distorted not only by inhomogeneity of the static magnetic field and gradients but also by susceptibility-induced effects, which are more prominent at higher magnetic fields. We assess geometrical distortions around air spaces and consider MRI protocol requirements for SRS planning at 3 T. Hardware-related distortion and the effect of incorrect shimming were investigated with structured test objects. The magnetic field was mapped over the head in five volunteers to assess susceptibility-related distortion in the naso-oro-pharyngeal cavities (NOPC) and around the internal auditory canal (IAC). Hardware-related geometric displacements were found to be less than 0.45 mm within the head volume, after distortion correction. Shimming errors can lead to displacements of up to 4 mm, but errors of this magnitude are unlikely to arise in practice. Susceptibility-related field inhomogeneity was under 3.4 ppm, 2.8 ppm, and 2.7 ppm for the head, the NOPC region and the IAC region, respectively. For the SRS planning protocol (890 Hz/pixel, approximately 1 mm³ isotropic), susceptibility-related displacements were less than 0.5 mm (head) and 0.4 mm (IAC and NOPC). Large displacements are possible in MRI examinations undertaken with lower receiver bandwidth values, commonly used in clinical MRI. A higher receiver bandwidth makes the protocol less vulnerable to sub-optimal shimming. The shimming volume and the CT-MR co-registration must be considered jointly. Geometric displacements can be kept under 1 mm in the vicinity of air spaces within the head at 3 T with appropriate setting of the receiver bandwidth, correct shimming and employing distortion correction. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
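The reported sub-0.5 mm displacements follow directly from the protocol numbers. A small check of that arithmetic, assuming the standard proton gyromagnetic ratio and the quoted 890 Hz/pixel readout bandwidth:

```python
gamma_hz_per_t = 42.58e6      # proton gyromagnetic ratio, Hz/T
b0 = 3.0                      # field strength, T
bw_per_pixel = 890.0          # readout bandwidth, Hz/pixel
pixel_mm = 1.0                # ~1 mm isotropic protocol

def displacement_mm(delta_ppm):
    # Off-resonance in Hz, then converted to pixels via the per-pixel bandwidth.
    off_resonance_hz = delta_ppm * 1e-6 * gamma_hz_per_t * b0
    return off_resonance_hz / bw_per_pixel * pixel_mm

for region, ppm in [("head", 3.4), ("NOPC", 2.8), ("IAC", 2.7)]:
    print(f"{region}: {displacement_mm(ppm):.2f} mm")
# ~0.49, 0.40, 0.39 mm -- consistent with the reported <0.5 mm (head)
# and <0.4 mm (IAC, NOPC) displacements.
```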
Burger, Joanna
2014-01-01
Relatively little attention has been devoted to the risks from mercury in saltwater fish caught by recreational fisherfolk. Although the US Food and Drug Administration has issued advisories based on mercury for four saltwater species or groups of fish, there are few data on how mercury levels vary by size, season, or location. This paper examines total mercury levels in muscle of bluefish (Pomatomus saltatrix) collected from coastal New Jersey, mainly by recreational fishermen. Of primary interest was whether there were differences in mercury levels as a function of location, weight and length of the fish, and season, and what risk mercury posed to the food chain, including people. Selenium was also measured because of its reported protective effects against mercury. Mercury levels averaged 0.35±0.02 (mean and standard error) ppm, and selenium levels averaged 0.37±0.01 ppm (N = 206). In this study, 41% of the fish had mercury levels above 0.3 ppm, 20% had levels above 0.5 ppm, and 4% had levels above 1 ppm. Size was highly correlated with mercury levels, but not with selenium. While selenium levels did not vary at all with season, mercury levels decreased significantly. This relationship was not due to differences in the size of fish, since the fish collected in the summer were the smallest, but had intermediate mercury levels. Mercury levels declined from early June until November, particularly for the smaller-sized fish. While there were significant locational differences in mercury levels (but not selenium), these differences could be a result of size. The levels of mercury in bluefish are not sufficiently high to cause problems for the bluefish themselves, based on known adverse health effects levels, but are high enough to cause potential adverse health effects in sensitive birds and mammals that eat them, and to provide a potential health risk to humans who consume them. Fish larger than 50 cm fork length averaged levels above 0.3 ppm, suggesting that eating them should be avoided by pregnant women, children, and others who are at risk. PMID:19643400
Yazar, K; Lundov, M D; Faurschou, A; Matura, M; Boman, A; Johansen, J D; Lidén, C
2015-07-01
In recent years, the prevalence of contact allergy to the preservative methylisothiazolinone (MI) has increased dramatically. Cosmetic products are one of the major sources of exposure. To examine whether allowed concentrations of MI in cosmetic rinse-off products have the potential to cause allergic contact dermatitis. Nineteen MI-allergic subjects and 19 controls without MI allergy applied two liquid hand soaps five times per day on areas of 5 × 10 cm² on the ventral side of their forearms. One soap contained 100 ppm MI, the maximum allowed concentration in cosmetics, and was used by 10 allergic subjects and all controls. Another liquid soap with 50 ppm MI was used by nine allergic subjects. As the negative control, all subjects used a similar soap that did not contain MI. The repeated open applications proceeded until a positive reaction occurred or up to 21 days. The study was conducted in a randomized and blinded fashion. Ten out of 10 MI-allergic subjects developed positive reactions to the soap with 100 ppm and seven out of nine reacted to the 50 ppm soap, while none of the 19 controls had a positive reaction during 21 days of application. No reactivity was seen to the soap without MI. The difference in reactivity to MI between MI-allergic subjects and controls was statistically significant (Fisher's exact test, P < 0.0001). Rinse-off products preserved with 50 ppm MI or more are not safe for consumers. No safe level has yet been identified. © 2015 British Association of Dermatologists.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
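A toy illustration of the distinction (synthetic data, not the letter's satellite measurements): for a measurement whose error grows with rain rate, additive residuals y − x are heteroscedastic, while log-space (multiplicative) residuals have near-constant variance.

```python
import numpy as np

rng = np.random.default_rng(42)
truth = rng.gamma(shape=0.5, scale=10.0, size=5000) + 0.1   # daily rain, mm (>0)

# Synthetic "measurement" whose error is multiplicative in construction.
measured = truth * np.exp(rng.normal(-0.1, 0.4, truth.size))

residuals = {
    "additive (y - x)":               measured - truth,
    "multiplicative (log y - log x)": np.log(measured) - np.log(truth),
}
light, heavy = truth < 5.0, truth >= 5.0
for name, r in residuals.items():
    print(f"{name}: sd(light rain)={r[light].std():.2f}"
          f"  sd(heavy rain)={r[heavy].std():.2f}")
# Additive residual spread grows strongly with rain rate; log-space spread
# stays roughly constant -- the behavior behind the letter's conclusion.
```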
One-step random mutagenesis by error-prone rolling circle amplification
Fujii, Ryota; Kitaoka, Motomitsu; Hayashi, Kiyoshi
2004-01-01
In vitro random mutagenesis is a powerful tool for altering properties of enzymes. We describe here a novel random mutagenesis method using rolling circle amplification, named error-prone RCA. This method consists of only one DNA amplification step followed by transformation of the host strain, without treatment with any restriction enzymes or DNA ligases, and results in a randomly mutated plasmid library with 3–4 mutations per kilobase. Specific primers or special equipment, such as a thermal-cycler, are not required. This method permits rapid preparation of randomly mutated plasmid libraries, enabling random mutagenesis to become a more commonly used technique. PMID:15507684
Real-time quantitative analysis of H2, He, O2, and Ar by quadrupole ion trap mass spectrometry.
Ottens, Andrew K; Harrison, W W; Griffin, Timothy P; Helms, William R
2002-09-01
The use of a quadrupole ion trap mass spectrometer (QITMS) for quantitative analysis of hydrogen and helium as well as of other permanent gases is demonstrated. Like commercial instruments, the customized QITMS uses mass-selective instability; however, this instrument operates at a greater trapping frequency and without a buffer gas. Thus, a useable mass range from 2 to over 50 daltons (Da) is achieved. The performance of the ion trap is evaluated using part-per-million (ppm) concentrations of hydrogen, helium, oxygen, and argon mixed into a nitrogen gas stream, as outlined by the National Aeronautics and Space Administration (NASA), which is interested in monitoring for cryogenic fuel leaks within the Space Shuttle during launch preparations. When quantitating the four analytes, relative accuracy and precision were better than the NASA-required minimum of 10% error and 5% deviation, respectively. Limits of detection were below the NASA requirement of 25-ppm hydrogen and 100-ppm helium; those for oxygen and argon were within the same order of magnitude as the requirements. These results were achieved at a fast data recording rate, and demonstrate the utility of the QITMS as a real-time quantitative monitoring device for permanent gas analysis. © 2002 American Society for Mass Spectrometry.
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency and economy of the snail survey. A 50 m × 50 m quadrat experimental field was selected in Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes of the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.221 7, 0.302 4 and 0.047 8, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach of lower cost and higher precision for the snail survey.
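A small synthetic illustration of why the stratified design can win (invented densities, not the survey's data): stratifying on a variable correlated with snail density removes between-stratum variance from the sampling error.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic field: snail counts per frame increase across three "altitude"
# bands, mimicking the stratum variable (values are assumed, illustrative).
strata_means = [0.2, 1.0, 3.0]
field = np.concatenate([rng.poisson(m, 900) for m in strata_means])  # 2700 frames

def srs_error(pop, n, reps=2000):
    # Empirical sampling error of the mean under simple random sampling.
    ests = [rng.choice(pop, n, replace=False).mean() for _ in range(reps)]
    return np.std(ests)

def stratified_error(strata, n_per, reps=2000):
    # Equal-size strata, so the overall estimate is the mean of stratum means.
    ests = [np.mean([rng.choice(s, n_per, replace=False).mean() for s in strata])
            for _ in range(reps)]
    return np.std(ests)

strata = np.split(field, 3)
print("SRS (n=300)        :", srs_error(field, 300))
print("stratified (3x100) :", stratified_error(strata, 100))
```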
Stochastic goal-oriented error estimation with memory
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Marotzke, Jochem; Korn, Peter
2017-11-01
We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
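A minimal sketch of the memory idea (illustrative parameters only, not the authors' estimator): modeling the local truncation error as an AR(1) sequence rather than white noise changes how the errors accumulate toward a goal quantity.

```python
import numpy as np

def correlated_truncation_error(n_steps, sigma, rho, rng):
    """Local truncation error modeled as an AR(1) sequence: each step's error
    is correlated with the previous step's (memory), unlike white noise."""
    e = np.empty(n_steps)
    e[0] = rng.normal(0.0, sigma)
    for k in range(1, n_steps):
        e[k] = rho * e[k - 1] + rng.normal(0.0, sigma * np.sqrt(1 - rho ** 2))
    return e

rng = np.random.default_rng(3)
white  = correlated_truncation_error(1000, 1e-3, 0.0, rng)
memory = correlated_truncation_error(1000, 1e-3, 0.9, rng)
# Accumulated error toward the goal: correlated errors cancel far less.
print(abs(white.sum()), abs(memory.sum()))
```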
Universal Decoder for PPM of any Order
NASA Technical Reports Server (NTRS)
Moision, Bruce E.
2010-01-01
A recently developed algorithm for demodulation and decoding of a pulse-position-modulation (PPM) signal is suitable as a basis for designing a single hardware decoding apparatus to be capable of handling any PPM order. Hence, this algorithm offers advantages of greater flexibility and lower cost, in comparison with prior such algorithms, which necessitate the use of a distinct hardware implementation for each PPM order. In addition, in comparison with the prior algorithms, the present algorithm entails less complexity in decoding at large orders. An unavoidably lengthy presentation of background information, including definitions of terms, is prerequisite to a meaningful summary of this development. As an aid to understanding, the figure illustrates the relevant processes of coding, modulation, propagation, demodulation, and decoding. An M-ary PPM signal has M time slots per symbol period. A pulse (signifying 1) is transmitted during one of the time slots; no pulse (signifying 0) is transmitted during the other time slots. The information intended to be conveyed from the transmitting end to the receiving end of a radio or optical communication channel is a K-bit vector u. This vector is encoded by an (N,K) binary error-correcting code, producing an N-bit vector a. In turn, the vector a is subdivided into blocks of m = log2(M) bits and each such block is mapped to an M-ary PPM symbol. The resultant coding/modulation scheme can be regarded as equivalent to a nonlinear binary code. The binary vector of PPM symbols, x, is transmitted over a Poisson channel, such that there is obtained, at the receiver, a Poisson-distributed photon count characterized by a mean background count nb during no-pulse time slots and a mean signal-plus-background count of ns+nb during a pulse time slot. In the receiver, demodulation of the signal is effected in an iterative soft decoding process that involves consideration of relationships among photon counts and conditional likelihoods of m-bit vectors of coded bits. Inasmuch as the likelihoods of all the m-bit vectors of coded bits mapping to the same PPM symbol are correlated, the best performance is obtained when the joint m-bit conditional likelihoods are utilized. Unfortunately, the complexity of decoding, measured in the number of operations per bit, grows exponentially with m, and can thus become prohibitively expensive for large PPM orders. For a system required to handle multiple PPM orders, the cost is even higher because it is necessary to have separate decoding hardware for each order. This concludes the prerequisite background information. In the present algorithm, the decoding process as described above is modified by, among other things, introduction of an l-bit marginalizer sub-algorithm. The term "l-bit marginalizer" signifies that instead of m-bit conditional likelihoods, the decoder computes l-bit conditional likelihoods, where l is fixed. Fixing l, regardless of the value of m, makes it possible to use a single hardware implementation for any PPM order. One could minimize the decoding complexity and obtain an especially simple design by fixing l at 1, but this would entail some loss of performance. An intermediate solution is to fix l at some value, greater than 1, that may be less than or greater than m. This solution makes it possible to obtain the desired flexibility to handle any PPM order while compromising between complexity and loss of performance.
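A minimal sketch of the Poisson-channel likelihood computation that underlies such decoders (the function and the counts below are our own illustration; terms common to all hypotheses are dropped):

```python
import numpy as np

def ppm_symbol_loglikelihoods(counts, ns, nb):
    """Relative log-likelihood of each PPM symbol given per-slot photon counts.

    Under the Poisson channel, slot j has mean ns+nb if pulsed and nb
    otherwise; after discarding terms common to all hypotheses, each
    candidate pulse position q scores k_q * log(1 + ns/nb) - ns."""
    counts = np.asarray(counts, dtype=float)
    return counts * np.log(1.0 + ns / nb) - ns

# Q = 4 PPM frame, pulse actually in slot 2 (counts are assumed values).
counts = [1, 0, 7, 2]
ll = ppm_symbol_loglikelihoods(counts, ns=5.0, nb=0.5)
print(int(np.argmax(ll)))   # -> 2, the maximum-likelihood symbol decision
# A soft decoder passes these per-symbol likelihoods (or bit marginals
# derived from them) to the error-correcting code instead of the hard argmax.
```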
Ocular response to hydrogen peroxide.
Paugh, J R; Brennan, N A; Efron, N
1988-02-01
A controlled, randomized, double-masked study was conducted on eight human subjects to determine the threshold level at which hydrogen peroxide is toxic when introduced into the eye via a high-water-content (75%; Durasoft 4) hydrogel contact lens. Subjective comfort, conjunctival hyperemia, corneal and conjunctival epithelial staining, and corneal oxygen uptake were assessed in response to 5-min wear of lenses that were presoaked in isotonic saline solutions of physiologic pH containing 0, 25, 50, 100, 200, 400, and 800 parts per million (ppm) hydrogen peroxide. Higher levels of hydrogen peroxide were associated with greater discomfort (p less than 0.05) and increased conjunctival hyperemia (p less than 0.001). The highest level of hydrogen peroxide tested (800 ppm) did not induce significant corneal or conjunctival epithelial staining or alter the corneal aerobic response. We conclude that residual concentrations of hydrogen peroxide in contact lens care systems should not exceed 100 ppm. Practitioners can use these data to estimate the level of residual hydrogen peroxide to which a patient may have been exposed upon lens application after neutralization.
Diagnostic brain residues of dieldrin: Some new insights
Heinz, G.H.; Johnson, R.W.; Lamb, D.W.; Kenaga, E.E.
1981-01-01
Forty adult male cowbirds were fed a diet containing 20 ppm dieldrin; 20 of the birds were randomly selected to die from dieldrin poisoning and 20 were sacrificed when dieldrin had made them too sick to eat. An average of 6.8 ppm dieldrin (range of 1.51 to 11.7) in the brain on a wet-weight basis was associated with a treatment-related cessation of feeding, whereas an average of 16.3 ppm (range of 9.84 to 23.5) was found in the brains of birds that died from dieldrin poisoning; the latter concentrations agreed with those determined in other studies. Dieldrin-induced starvation was generally irreversible; therefore, brain levels of dieldrin that are clearly sublethal may nevertheless present a grave hazard to birds by initiating a process that leads to death. Fatter cowbirds were able to survive longer on dieldrin treatment but contained brain residues similar to those in cowbirds that died sooner. Some cowbirds survived for 2 months or longer with unexpectedly large amounts of body fat remaining when they died or were sacrificed. Fatter cowbirds also survived longer after they had stopped eating.
QUANTIFYING UNCERTAINTY DUE TO RANDOM ERRORS FOR MOMENT ANALYSES OF BREAKTHROUGH CURVES
The uncertainty in moments calculated from breakthrough curves (BTCs) is investigated as a function of random measurement errors in the data used to define the BTCs. The method presented assumes moments are calculated by numerical integration using the trapezoidal rule, and is t...
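A short sketch of the setting (our own synthetic curve; the report's full method is truncated above): trapezoidal moments of a noisy breakthrough curve, with the uncertainty of the mean arrival time estimated by Monte Carlo over random measurement errors.

```python
import numpy as np

def moments(t, c):
    """Zeroth moment (area) and normalized first moment (mean arrival time)
    of a breakthrough curve, by trapezoidal integration."""
    m0 = np.trapz(c, t)
    m1 = np.trapz(c * t, t)
    return m0, m1 / m0

t = np.linspace(0.0, 50.0, 201)
c_true = np.exp(-((t - 20.0) ** 2) / 18.0)       # synthetic BTC

rng = np.random.default_rng(11)
sigma = 0.02                                      # random measurement error sd
arrival = np.array([moments(t, c_true + rng.normal(0.0, sigma, t.size))[1]
                    for _ in range(2000)])
print(f"mean arrival time: {moments(t, c_true)[1]:.3f}")
print(f"Monte Carlo sd at sigma={sigma}: {arrival.std():.3f}")
```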
Agouropoulos, A; Twetman, S; Pandis, N; Kavvadia, K; Papagiannoulis, L
2014-10-01
To evaluate the effect of biannual fluoride varnish applications in preschool children as an adjunct to school-based oral health promotion and supervised tooth brushing with 1000 ppm fluoride toothpaste. 424 preschool children, 2-5 years of age, from 10 different preschools in Athens were invited to this double-blind randomized controlled trial, and 328 children completed the 2-year programme. All children received oral health education with hygiene instructions twice yearly and attended supervised tooth brushing once daily. The test group was treated with fluoride varnish (0.9% difluorosilane) biannually while the control group had placebo applications. The primary endpoints were caries prevalence and increment; secondary outcomes were gingival health, mutans streptococci growth and salivary buffer capacity. The groups were balanced at baseline and no significant differences in caries prevalence or increment were displayed between the groups after 1 and 2 years, respectively. There were fewer new pre-cavitated enamel lesions during the second year of the study, but the decrease was not statistically significant (p=0.05). The secondary endpoints were unaffected by the varnish treatments. Under the present conditions, biannual fluoride varnish applications in preschool children did not show significant caries-preventive benefits when provided as an adjunct to school-based supervised tooth brushing with 1000 ppm fluoride toothpaste. In community-based caries prevention programmes for high-caries-risk preschool children, a fluoride varnish may add little to caries prevention when 1000 ppm fluoride toothpaste is used daily. Copyright © 2014 Elsevier Ltd. All rights reserved.
Random Versus Nonrandom Peer Review: A Case for More Meaningful Peer Review.
Itri, Jason N; Donithan, Adam; Patel, Sohil H
2018-05-10
Random peer review programs are not optimized to discover cases with diagnostic error and thus have inherent limitations with respect to educational and quality improvement value. Nonrandom peer review offers an alternative approach in which diagnostic error cases are targeted for collection during routine clinical practice. The objective of this study was to compare error cases identified through random and nonrandom peer review approaches at an academic center. During the 1-year study period, the number of discrepancy cases and score of discrepancy were determined for each approach. The nonrandom peer review process collected 190 cases, of which 60 were scored as 2 (minor discrepancy), 94 as 3 (significant discrepancy), and 36 as 4 (major discrepancy). In the random peer review process, 1,690 cases were reviewed, of which 1,646 were scored as 1 (no discrepancy), 44 were scored as 2 (minor discrepancy), and none were scored as 3 or 4. Several teaching lessons and quality improvement measures were developed as a result of analysis of error cases collected through the nonrandom peer review process. Our experience supports the implementation of nonrandom peer review as a replacement for random peer review, with nonrandom peer review serving as a more effective method for collecting diagnostic error cases with educational and quality improvement value. Copyright © 2018 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Singh, Pankaj; Singh, Manoj Kumar; Kumar, Vipin; Kumar, Mukesh; Malik, Sunil
2012-03-01
An experiment was done to assess the effect of various physico-chemical treatments on the ripening behavior and postharvest quality of mango cv. Amrapali. The experiment was planned under a completely randomized design (CRD) with three replications. Each treatment unit consisted of five fruits per replication. In total, 14 treatments were applied. Of these, fruits treated with ethrel at 750 ppm showed better results in respect of specific gravity (0.88), moisture loss (8.45%), decay (2.5%), total soluble solids (TSS, 20.7 °Brix), sugar content (14.39%) and acidity content (0.32%), followed by ethrel at 500 ppm: specific gravity (0.90), moisture loss (8.82%), decay (3.5%), TSS (20.7 °Brix), sugar content (13.99%) and acidity content (0.36%). The pedicellate fruits and ethrel+bavistin (750+1000 ppm) were also found to be significantly superior over the control in respect of specific gravity (0.88 and 0.86), moisture loss (9.10 and 9.33%), decay (4.0 and 5.33%), TSS (20.1 and 20.4 °Brix), sugar content (12.70 and 12.80%) and acidity content (0.42 and 0.38%), respectively. Based on the results of this study, it can be concluded that ethrel at 750 ppm was the most suitable treatment for improving physico-chemical traits, i.e. ripening, storage, quality and shelf-life, for commercial purposes in mango.
Gooding, M. A.; Minikhiem, D. L.
2016-01-01
L-carnitine (LC) is included in select adult feline diets for weight management. This study investigated whether feeding adult cats diets containing either 188 ppm of LC (LC188) or 121 ppm of LC (LC121) at 120% of maintenance energy requirement (MER) resulted in differences in total energy expenditure (EE), metabolic fuel selection, BW, body composition, and behavior. Cats (n = 20, 4 ± 1.2 yrs) were stratified for BCS and randomly assigned to one of two dietary treatments and fed for 16 weeks. BW was measured weekly, and indirect calorimetry, body composition, physical activity, play motivation, and cognition were measured at baseline and throughout the study. A mixed repeated-measures ANCOVA model was used. Cats in both treatments gained BW (P < 0.05) throughout the study, with no differences between treatments at any time point (P > 0.05). There were no differences in body composition between groups at baseline; however, body fat (g) and the body fat : lean mass ratio were greater in cats fed LC121 than in cats fed LC188 (P < 0.05) at week 16. No other outcomes differed between treatments (P > 0.05). Supplying dietary LC at a dose of at least 188 ppm may be beneficial for the health and well-being of cats fed above MER. PMID:27652290
NASA Astrophysics Data System (ADS)
Muktiani, A.; Kusumanti, E.; Harjanti, D. W.
2018-02-01
This study aimed to investigate the effect of different protein and energy contents and supplementation of zinc (Zn), chromium (Cr) and grape seed oil on milk production in dairy goats. A randomized block design (RBD) was used in this study. Twelve lactating Ettawah crossbreed goats were divided into three groups based on milk production. The treatment rations were: T1 = ration containing 16% CP and 66% TDN; T2 = ration containing 14% CP and 63% TDN supplemented with Zn 20 ppm + Cr 2 ppm; and T3 = T2 + 22 ml grape seed oil/head/day. The ration was a dry complete feed in the form of pellets. The feed ingredients used were rice bran, cassava, wheat pollard, soybean meal, coconut meal, molasses, coffee husk and corn straw. The experiment was conducted over 30 days. Results revealed that goats fed the ration supplemented with Zn and Cr (T2) produced a higher milk yield (1012.29 g/day) and more milk fat (P<0.05) compared with those fed T1 and T3. Feed intake decreased in the treatment supplemented with grape seed oil, T3 (P<0.05), but milk fat production did not differ significantly from T1. In conclusion, a dry complete feed containing 14% CP and 63% TDN supplemented with Zn 20 ppm + Cr 2 ppm is recommended for lactating dairy goats.
Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking
NASA Astrophysics Data System (ADS)
Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.
2012-08-01
We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
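A schematic of the estimate's arithmetic, using synthetic decay data and the standard interleaved-RB relation r = (d − 1)(1 − p_int/p_ref)/d (our own illustration, not the paper's measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, p, B):
    # Randomized-benchmarking fidelity model versus sequence length m.
    return A * p ** m + B

m = np.arange(1, 100, 5)
rng = np.random.default_rng(5)
# Synthetic sequence fidelities (assumed values standing in for data).
f_ref = decay(m, 0.5, 0.995, 0.5) + rng.normal(0, 0.002, m.size)
f_int = decay(m, 0.5, 0.990, 0.5) + rng.normal(0, 0.002, m.size)

(_, p_ref, _), _ = curve_fit(decay, m, f_ref, p0=[0.5, 0.99, 0.5])
(_, p_int, _), _ = curve_fit(decay, m, f_int, p0=[0.5, 0.99, 0.5])

d = 2                                   # single-qubit Hilbert-space dimension
r_gate = (d - 1) / d * (1 - p_int / p_ref)
print(f"estimated average gate error: {r_gate:.4f}")   # roughly 0.0025 here
```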
ERIC Educational Resources Information Center
Byun, Tara McAllister
2017-01-01
Purpose: This study documented the efficacy of visual-acoustic biofeedback intervention for residual rhotic errors, relative to a comparison condition involving traditional articulatory treatment. All participants received both treatments in a single-subject experimental design featuring alternating treatments with blocked randomization of…
Statistical Analysis Experiment for Freshman Chemistry Lab.
ERIC Educational Resources Information Center
Salzsieder, John C.
1995-01-01
Describes a laboratory experiment dissolving zinc from galvanized nails in which data can be gathered very quickly for statistical analysis. The data have sufficient significant figures and the experiment yields a nice distribution of random errors. Freshman students can gain an appreciation of the relationships between random error, number of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsipoura, Nellie; Burger, Joanna; Environmental and Occupational Health Sciences Institute, Piscataway, NJ 08854
2008-06-15
The New Jersey Meadowlands is an important natural area, a diverse mosaic of wetland habitats positioned within the heavily urbanized NY City Metropolitan area and the NY/NJ Harbor. Persistent contaminants may pose threats to wildlife inhabiting these habitats, affecting reproduction, egg hatchability, nestling survivorship, and neurobehavioral development. Metals of concern in the Meadowlands include arsenic, cadmium, chromium, lead, and mercury. These metals were analyzed in feathers and blood of three passerine birds breeding in wetland habitats, including red-winged blackbirds (Agelaius phoeniceus), marsh wrens (Cistothorus palustris), and tree swallow (Tachycineta bicolor), as well as eggs of the first two species. These widespread species are abundant in wetland habitats across the Meadowlands District, and eat insects and other invertebrates. Lead levels were low in eggs, higher in feathers and very elevated in blood in all species compared to those that have been reported for other bird species. Lead levels were especially high in blood of marsh wren (mean of 0.8 ppm) and swallow (mean of 0.94 ppm, wet weight). Levels of lead in the blood for all three species sampled were higher than the negative impact threshold of 0.4 ppm. Mercury levels, while below the levels considered biologically harmful, were higher in eggs (mean of 0.2 ppm, wet weight) and feathers (3.2 ppm, dry weight) of marsh wren from the Meadowlands than those seen in other passerines, and even some fish-eating birds. Furthermore, unhatched wren eggs had higher mercury levels (0.3 ppm, wet weight) than eggs randomly selected before hatch (0.18 ppm, wet weight). Blood tissue levels of mercury were low in all three species (mean of less than 0.035 ppm, wet weight). Chromium levels were relatively high in eggs and in blood, but lower in feathers when compared to those reported in the literature. Cadmium and arsenic levels were generally low for all tissues and in all species studied compared to those measured in other studies. Finally, all metal levels for tree swallow tissues in our study were much lower than those reported previously for this species in the Meadowlands District.
Lowe, James
2018-01-01
High reactivity and the absence of harmful residues make ozone an effective disinfectant for farm hygiene and biosecurity. Our objectives were therefore to (1) characterize the killing capacity of aqueous and gaseous ozone at different operational conditions on dairy cattle manure-based pathogens (MBP) contaminating different surfaces (plastic, metal, nylon, rubber, and wood); (2) determine the effect of microbial load on the killing capacity of aqueous ozone. In a crossover design, 14 strips of each material were randomly assigned into 3 groups: treatment (n = 6), positive control (n = 6), and negative control (n = 2). The strips were soaked in dairy cattle manure with an inoculum level of 10^7-10^8 for 60 minutes. The treatment strips were exposed to aqueous ozone of 2, 4, and 9 ppm and gaseous ozone of 1 and 9 ppm for 2, 4, and 8 minutes. 3M™ Petrifilm™ rapid aerobic count plates and a plate reader were used for bacterial culture. On smooth surfaces, plastic and metal, aqueous ozone at 4 ppm reduced MBP to a safe level (≥5-log10) within 2 minutes (6.1 and 5.1-log10, respectively). However, gaseous ozone at 9 ppm for 4 minutes inactivated 3.3-log10 of MBP. Aqueous ozone of 9 ppm was sufficient to reduce MBP to a safe level, 6.0 and 5.4-log10, on nylon and rubber surfaces within 2 and 8 minutes, respectively. On complex surfaces, wood, both aqueous and gaseous ozone at up to 9 ppm were unable to reduce MBP to a safe level (3.6 and 0.8-log10, respectively). The bacterial load was a strong predictor of the reduction in MBP (P<0.0001, R2 = 0.72). We conclude that aqueous ozone of 4 and 9 ppm for 2 minutes may provide an efficient method to reduce MBP to a safe level on smooth and moderately rough surfaces, respectively. However, ozone alone may not be an adequate means of controlling MBP on complex surfaces. PMID:29758045
NASA Astrophysics Data System (ADS)
Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.
2006-06-01
The sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single or a few catchments. A more important issue, i.e. how a model's response to input data errors changes as catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. For this study, a monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in Central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of mean monthly precipitation values were added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law of zero mean and constant standard deviation, taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other factors, on the type of error, the magnitude of the error, the physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to random errors than to systematic errors. Catchments with smaller runoff coefficients were more influenced by input data errors than were catchments with higher values. Dry months were more sensitive to precipitation errors than were wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
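A sketch of how such corrupted input scenarios can be generated, following the error definitions in the abstract (the helper and the synthetic series below are ours, not the study's code or data):

```python
import numpy as np

def corrupt_precipitation(p_monthly, sys_frac=0.0, rand_frac=0.0, rng=None):
    """Build a corrupted input scenario in the spirit of the study:
    a systematic offset of sys_frac times the mean monthly precipitation,
    plus independent Gaussian noise with sd = rand_frac times the monthly sd."""
    if rng is None:
        rng = np.random.default_rng()
    p = np.asarray(p_monthly, dtype=float)
    out = p + sys_frac * p.mean()
    if rand_frac > 0.0:
        out = out + rng.normal(0.0, rand_frac * p.std(), p.size)
    return np.clip(out, 0.0, None)        # precipitation cannot be negative

rng = np.random.default_rng(0)
p = rng.gamma(2.0, 30.0, 120)             # ten years of synthetic monthly data, mm
scenario_systematic = corrupt_precipitation(p, sys_frac=0.10, rng=rng)   # +10%
scenario_random = corrupt_precipitation(p, rand_frac=0.15, rng=rng)      # noise
```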
Analysis of Aerosols and Fallout from High-Explosive Dust Clouds. Volume 2
1977-03-01
... to the situation at hand, where S is an absolute error ... It is to be noted, however, that the Poisson formula specifies the ... sphere TNT detonation near Grand Junction, Colorado, on November 13, 1972. Data from the resulting dust cloud were collected by two aircraft and include ... variations. Measurements of carbon monoxide were inconclusive due to an unusually high noise level (6 to 8 ppm, or considerably higher than the carbon ...
Modified zirconium-eriochrome cyanine R determination of fluoride
Thatcher, L.L.
1957-01-01
The Eriochrome Cyanine R method for determining fluoride in natural water has been modified to provide a single, stable reagent solution, eliminate interference from oxidizing agents, extend the concentration range to 3 p.p.m., and extend the phosphate tolerance. Temperature effect was minimized; sulfate error was eliminated by precipitation. The procedure is sufficiently tolerant to interferences found in natural and polluted waters to permit the elimination of prior distillation for most samples. The method has been applied to 500 samples.
LeBlanc, Mallory; Allen, Joseph G; Herrick, Robert F; Stewart, James H
2018-03-01
The Advanced Reach Tool V1.5 (ART) is a mathematical model for occupational exposures conceptually based on, but implemented differently than, the "classic" Near Field/Far Field (NF/FF) exposure model. The NF/FF model conceptualizes two distinct exposure "zones": the near field, within approximately 1 m of the breathing zone, and the far field, consisting of the rest of the room in which the exposure occurs. ART has been reported to provide "realistic and reasonable worst case" estimates of the exposure distribution. In this study, benzene exposure during the use of a metal parts washer was modeled using ART V1.5, and compared to actual measured worker samples and to NF/FF model results from three previous studies. Next, the exposure concentrations expected to be exceeded 25%, 10% and 5% of the time for the exposure scenario were calculated using ART. Lastly, ART exposure estimates were compared with and without Bayesian adjustment. The modeled parts-washing benzene exposure scenario included distinct tasks, e.g. spraying, brushing, rinsing and soaking/drying. Because ART can directly incorporate specific types of tasks that are part of the exposure scenario, the present analysis identified each task's determinants of exposure and performance time, thus extending the work of the previous three studies, where the process of parts washing was modeled as one event. The ART 50th percentile exposure estimate for benzene (0.425 ppm) more closely approximated the reported measured mean value of 0.50 ppm than the NF/FF model estimates of 0.33 ppm, 0.070 ppm or 0.2 ppm obtained from other modeling studies of this exposure scenario. The ART model with the Bayesian analysis provided the closest estimate to the measured value (0.50 ppm). ART (with Bayesian adjustment) was then used to assess the 75th, 90th and 95th percentile exposures, predicting that on randomly selected days during this parts-washing exposure scenario, 25% of the benzene exposures would be above 0.70 ppm; 10% above 0.95 ppm; and 5% above 1.15 ppm. These exposure estimates at the three different percentiles of the ART exposure distribution refer to the modeled exposure scenario, not a specific workplace or worker. This study provides a detailed comparison of modeling tools currently available to occupational hygienists and other exposure assessors. Possible applications are considered. Copyright © 2017 Elsevier GmbH. All rights reserved.
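For orientation, the "classic" NF/FF model referred to above reduces at steady state to two algebraic terms; the sketch below uses invented emission and ventilation parameters, not the study's inputs.

```python
def nf_ff_steady(g_mg_min, q_m3_min, beta_m3_min):
    """Steady-state classic two-zone (NF/FF) model: the far field sees G/Q;
    the near field adds the inter-zone ventilation term G/beta."""
    c_ff = g_mg_min / q_m3_min
    c_nf = c_ff + g_mg_min / beta_m3_min
    return c_nf, c_ff                     # mg/m3

def mg_m3_to_ppm(c, mol_weight):
    return c * 24.45 / mol_weight         # ideal gas at 25 C and 1 atm

# Illustrative parameters only: emission rate G = 20 mg/min, room
# ventilation Q = 50 m3/min, near-field air exchange beta = 10 m3/min.
c_nf, c_ff = nf_ff_steady(g_mg_min=20.0, q_m3_min=50.0, beta_m3_min=10.0)
print(f"NF = {mg_m3_to_ppm(c_nf, 78.11):.2f} ppm, "
      f"FF = {mg_m3_to_ppm(c_ff, 78.11):.2f} ppm")   # benzene, MW 78.11
```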
Yang, Xiao-Xing; Critchley, Lester A; Joynt, Gavin M
2011-01-01
Thermodilution cardiac output using a pulmonary artery catheter is the reference method against which all new methods of cardiac output measurement are judged. However, thermodilution lacks precision and has a quoted precision error of ± 20%. There is uncertainty about its true precision and this causes difficulty when validating new cardiac output technology. Our aim in this investigation was to determine the current precision error of thermodilution measurements. A test rig was assembled through which water circulated at different constant rates, with ports to insert catheters into a flow chamber. Flow rate was measured by an externally placed transonic flowprobe and meter. The meter was calibrated by timed filling of a cylinder. Arrow and Edwards 7Fr thermodilution catheters, connected to a Siemens SC9000 cardiac output monitor, were tested. Thermodilution readings were made by injecting 5 mL of ice-cold water. Precision error was divided into random and systematic components, which were determined separately. Between-readings (random) variability was determined for each catheter by taking sets of 10 readings at different flow rates. Coefficient of variation (CV) was calculated for each set and averaged. Between-catheter-systems (systematic) variability was derived by plotting calibration lines for sets of catheters. Slopes were used to estimate the systematic component. Performances of 3 cardiac output monitors were compared: Siemens SC9000, Siemens Sirecust 1261, and Philips MP50. Five Arrow and 5 Edwards catheters were tested using the Siemens SC9000 monitor. Flow rates between 0.7 and 7.0 L/min were studied. The CV (random error) for Arrow was 5.4% and for Edwards was 4.8%. The random precision error was ± 10.0% (95% confidence limits). CV (systematic error) was 5.8% and 6.0%, respectively. The systematic precision error was ± 11.6%. The total precision error of a single thermodilution reading was ± 15.3%, and ± 13.0% for triplicate readings. The precision error increased by 45% when using the Sirecust monitor and 100% when using the Philips monitor. In vitro testing of pulmonary artery catheters enabled us to measure both the random and systematic error components of thermodilution cardiac output measurement, and thus calculate the precision error. Using the Siemens monitor, we established a precision error of ± 15.3% for a single reading and ± 13.0% for triplicate readings, which was similar to the previous estimate of ± 20%. However, this precision error was significantly worsened by using the Sirecust and Philips monitors. Clinicians should recognize that the precision error of thermodilution cardiac output is dependent on the selection of catheter and monitor model.
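The quoted precision errors combine the two components by root-sum-of-squares, with averaging shrinking only the random part; a quick reconstruction of that arithmetic from the reported CVs (our sketch of the calculation, not the authors' code):

```python
import math

def precision_error(cv_random_pct, cv_systematic_pct, n_readings=1):
    """95% precision error from random and systematic CVs (in percent).
    Averaging n readings shrinks only the random component, by sqrt(n)."""
    random_err = 1.96 * cv_random_pct / math.sqrt(n_readings)
    systematic_err = 1.96 * cv_systematic_pct
    return math.sqrt(random_err ** 2 + systematic_err ** 2)

cv_rand, cv_sys = 5.1, 5.9   # averaged random and systematic CVs from the study
print(f"single reading:     +/-{precision_error(cv_rand, cv_sys):.1f}%")
print(f"triplicate average: +/-{precision_error(cv_rand, cv_sys, 3):.1f}%")
# ~15.3% and ~12.9%, matching the reported +/-15.3% and +/-13.0% figures.
```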
Goni, Leticia; Qi, Lu; Cuervo, Marta; Milagro, Fermín I; Saris, Wim H; MacDonald, Ian A; Langin, Dominique; Astrup, Arne; Arner, Peter; Oppert, Jean-Michel; Svendstrup, Mathilde; Blaak, Ellen E; Sørensen, Thorkild Ia; Hansen, Torben; Martínez, J Alfredo
2017-09-01
Background: Circulating branched-chain amino acids (BCAAs) and aromatic amino acids (AAAs) have been shown to be associated with insulin resistance and diabetes risk. The common rs1440581 T allele in the protein phosphatase Mg2+/Mn2+ dependent 1K (PPM1K) gene has been related to elevated BCAA concentrations and risk of type 2 diabetes. Objective: In the present study, we tested whether dietary fat and carbohydrate intakes influenced the association between the rs1440581 PPM1K genetic variant and glucose-metabolism traits during weight loss. Design: The rs1440581 PPM1K genetic variant was genotyped in a total of 757 nondiabetic individuals who were randomly assigned to 1 of 2 energy-restricted diets that differed in macronutrient composition (low-fat diet: 20-25% fat, 15% protein, and 60-65% carbohydrate; high-fat diet: 40-45% fat, 15% protein, and 40-45% carbohydrate). The changes in fasting glucose, fasting insulin, insulin resistance (homeostasis model assessment of insulin resistance) and homeostasis model assessment of β cell function (HOMA-B) were measured after a mean ± SD weight loss of 6.8 ± 3.4 kg over 10 wk and analyzed according to the presence of the T allele of rs1440581. Results: The rs1440581 T allele was associated with a smaller improvement in glucose concentrations after the 10-wk dietary intervention (β ± SE: 0.05 ± 0.02 mg/dL; P = 0.03). In addition, significant gene-diet interactions were shown for the rs1440581 PPM1K genetic variant in relation to changes in insulin and HOMA-B (P-interaction = 0.006 and 0.002, respectively). In response to the high-fat diet, the T allele was associated with a higher reduction of insulin (β ± SE: -0.77 ± 0.40 μU/mL; P = 0.04) and HOMA-B (β ± SE: -13.2 ± 3.81; P = 0.003). An opposite effect was observed in the low-fat diet group, although in this group the T allele was marginally (P = 0.10) and not significantly (P = 0.24) associated with insulin and HOMA-B, respectively. Conclusion: PPM1K rs1440581 may affect changes in glucose metabolism during weight loss, and this effect is dependent on dietary fat and carbohydrate intakes. This trial was registered at controlled-trials.com as ISRCTN25867281. © 2017 American Society for Nutrition.
Slavov, Svetoslav H; Wilkes, Jon G; Buzatu, Dan A; Kruhlak, Naomi L; Willard, James M; Hanig, Joseph P; Beger, Richard D
2014-12-01
Modified 3D-SDAR fingerprints combining (13)C and (15)N NMR chemical shifts augmented with inter-atomic distances were used to model the potential of chemicals to induce phospholipidosis (PLD). A curated dataset of 328 compounds (some of which were cationic amphiphilic drugs) was used to generate 3D-QSDAR models based on tessellations of the 3D-SDAR space with grids of different density. Composite PLS models averaging the aggregated predictions from 100 fully randomized individual models were generated. On each of the 100 runs, the activities of an external blind test set comprised of 294 proprietary chemicals were predicted and averaged to provide composite estimates of their PLD-inducing potentials (PLD+ if PLD is observed, otherwise PLD-). The best-performing 3D-QSDAR model utilized a grid with a density of 8 ppm × 8 ppm in the C-C region, 8 ppm × 20 ppm in the C-N region and 20 ppm × 20 ppm in the N-N region. The classification predictive performance parameters of this model evaluated on the basis of the external test set were as follows: accuracy = 0.70, sensitivity = 0.73 and specificity = 0.66. A projection of the most frequently occurring bins on the standard coordinate space suggested a toxicophore composed of an aromatic ring with a centroid 3.5-7.5 Å distant from an amino group. The presence of a second aromatic ring separated by a 4-5 Å spacer from the first ring and at a distance of between 5.5 Å and 7 Å from the amino group was also associated with a PLD+ effect. These models provide comparable predictive performance to previously reported models for PLD, with the added benefit of being based entirely on non-confidential, publicly available training data and with good predictive performance when tested in a rigorous, external validation exercise. Published by Elsevier Ltd.
Chloroatranol, an extremely potent allergen hidden in perfumes: a dose-response elicitation study.
Johansen, Jeanne Duus; Andersen, Klaus Ejner; Svedman, Cecilia; Bruze, Magnus; Bernard, Guillaume; Giménez-Arnau, Elena; Rastogi, Suresh Chandra; Lepoittevin, Jean-Pierre; Menné, Torkil
2003-10-01
Oak moss absolute is a long-known, popular natural extract widely used in perfumes. It is reported as the cause of allergic reactions in a significant number of those with perfume allergy. Oak moss absolute has been the target of recent research to identify its allergenic components. Recently, chloroatranol, a hitherto unknown fragrance allergen, was identified in oak moss absolute. The objective was to assess the clinical importance of chloroatranol as a fragrance allergen by characterizing its elicitation profile. 13 patients previously showing a positive patch test to oak moss absolute and chloroatranol were included, together with a control group of 10 patients without sensitization to either of the 2 materials. A serial dilution patch test was performed on the upper back with concentrations ranging from 200 to 0.0063 p.p.m. of chloroatranol in ethanol. Simultaneously, the participant performed an open test simulating the use of perfumes on the volar aspect of the forearms in a randomized and double-blinded design. A solution with 5 p.p.m. chloroatranol was used for 14 days, and, in case of no reaction, the applications were continued for another 14 days with a solution containing 25 p.p.m. All test subjects (13/13) developed an allergic reaction at the site of application of the solution containing chloroatranol. Among them, 12/13 (92%) gave a positive reaction to the 5 p.p.m. solution and 1 to 25 p.p.m. None of the controls reacted (P < 0.001). The use test was terminated at median day 4. The dose eliciting a reaction in 50% of the test subjects at patch testing was 0.2 p.p.m. In conclusion, the hidden exposure to a potent allergen widely used in perfumes has caused a highly sensitized cohort of individuals. Judged from the elicitation profile, chloroatranol is the most potent allergen present in consumer products today.
An Integrated Circuit for a Biomedical Capacitive Pressure Transducer
NASA Astrophysics Data System (ADS)
Smith, Michael John Sebastian
Medical research has an urgent need for a small, accurate, stable, low-power, biocompatible and inexpensive pressure sensor with a zero to full-scale range of 0-300 mmHg. An integrated circuit (IC) for use with a capacitive pressure transducer was designed, built and tested. The random pressure measurement error due to resolution and non-linearity is ±0.4 mmHg (at mid-range with a full-scale of 300 mmHg). The long-term systematic error due to falling battery voltage is ±0.6 mmHg. These figures were calculated from measurements of temperature, supply dependence and non-linearity on completed integrated circuits. The sensor IC allows measurement of temperature to ±0.1 °C to allow for temperature compensation of the transducer. Novel micropower circuit design of the system components enabled these levels of accuracy to be reached. Capacitance is measured by a new ratiometric scheme employing an on-chip reference capacitor. This method greatly reduces the effects of voltage supply, temperature and manufacturing variations on the sensor circuit performance. The limits on performance of the bandgap reference circuit fabricated with a standard bipolar process using ion-implanted resistors were determined. Measurements confirm the limits of temperature stability as approximately ±300 ppm/°C. An exact analytical expression for the period of the Schmitt trigger oscillator, accounting for non-constant capacitor charging current, was formulated. Experiments to test agreement with theory showed that prediction of the oscillator period was very accurate. The interaction of fundamental and practical limits on the scaling of the transducer size was investigated, including a correction to previous theoretical analysis of jitter in an RC oscillator. An areal reduction of 4 times should be achievable.
NASA Technical Reports Server (NTRS)
Vilnrotter, Victor A.
2012-01-01
Initial optical communications experiments with a Vertex polished aluminum panel have been described. The polished panel was mounted on the main reflector of the DSN's research antenna at DSS-13. The point-spread function (PSF) was recorded via a remotely controlled digital camera mounted on the subreflector structure. The initial PSF, generated using Jupiter as a source, showed significant tilt error and some mechanical deformation. After upgrades, the PSF improved significantly, leading to much better concentration of light. The communications performance of the initial and upgraded panel structures was compared. After the upgrades, the simulated PPM symbol error probability decreased by six orders of magnitude. Work is continuing to demonstrate closed-loop tracking of sources from zenith to horizon and to better characterize communications performance in realistic daytime background environments.
Thin film absorption characterization by focus error thermal lensing
NASA Astrophysics Data System (ADS)
Domené, Esteban A.; Schiltz, Drew; Patel, Dinesh; Day, Travis; Jankowska, E.; Martínez, Oscar E.; Rocca, Jorge J.; Menoni, Carmen S.
2017-12-01
A simple, highly sensitive technique for measuring absorbed power in thin film dielectrics based on thermal lensing is demonstrated. Absorption of an amplitude-modulated or pulsed incident pump beam by a thin film acts as a heat source that induces thermal lensing in the substrate. A second, continuous-wave collimated probe beam defocuses after passing through the sample. Determination of absorption is achieved by quantifying the change of the probe beam profile at the focal plane using a four-quadrant detector and cylindrical lenses to generate a focus error signal. This signal is inherently insensitive to deflection, which removes the noise contribution from beam pointing instability. A linear dependence of the focus error signal on the absorbed power is shown for a dynamic range of over 10^5. This technique was used to measure absorption loss in dielectric thin films deposited on fused silica substrates. In the pulsed configuration, a single-shot sensitivity of about 20 ppm is demonstrated, providing a unique technique for the characterization of moving targets as found in thin film growth instrumentation.
On-board error correction improves IR earth sensor accuracy
NASA Astrophysics Data System (ADS)
Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.
1989-10-01
Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of errors in a scanning infra-red earth sensor are analyzed in this paper. The systematic errors arising from seasonal variation of infra-red radiation, the oblate shape of the earth, the ambient temperature of the sensor, and changes in scan/spin rates have been analyzed. Simple relations are derived using least-squares curve fitting for on-board correction of these errors. Random errors arising out of noise from the detector and amplifiers, instability of alignment, and localized radiance anomalies are analyzed and possible correction methods are suggested. Sun and Moon interference on earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on-board. It is possible to obtain an eightfold improvement in sensing accuracy, which will be comparable with ground-based post facto attitude refinement.
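The on-board correction idea lends itself to a short numeric illustration. The sketch below (synthetic data, not the authors' derived relations) fits a low-order polynomial to a seasonal systematic error by least squares and subtracts it, the same curve-fitting pattern the abstract describes:

```python
# Minimal sketch of least-squares error correction; data are synthetic,
# and the sinusoidal "seasonal" bias is a stand-in for the paper's
# radiance-driven systematic errors.
import numpy as np

rng = np.random.default_rng(7)
day = np.linspace(0, 365, 73)                      # day of year
bias = 0.08 * np.sin(2 * np.pi * day / 365)        # deg, seasonal IR variation
measured = bias + rng.normal(0, 0.01, day.size)    # noisy calibration residuals

coeffs = np.polyfit(day, measured, deg=3)          # least-squares fit
corrected = measured - np.polyval(coeffs, day)     # on-board subtraction

print(f"rms error before: {measured.std():.3f} deg, after: {corrected.std():.3f} deg")
```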
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.;
2006-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5 deg resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5 deg resolution is relatively small (less than 6% at 5 mm/day) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm/day, with proportionate reductions in latent heating sampling errors.
Doan, L; Forrest, H; Fakis, A; Craig, J; Claxton, L; Khare, M
2012-10-01
Clostridium difficile spores can survive in the environment for months or years, and contaminated environmental surfaces are important sources of nosocomial C. difficile transmission. To compare the clinical and cost effectiveness of eight C. difficile environmental disinfection methods for the terminal cleaning of hospital rooms contaminated with C. difficile spores. This was a novel randomized prospective study undertaken in three phases. Each empty hospital room was disinfected, then contaminated with C. difficile spores and disinfected with one of eight disinfection products: hydrogen peroxide vapour (HPV; Bioquell Q10) at 350-700 parts per million (ppm); dry ozone at 25 ppm (Meditrox); 1000 ppm chlorine-releasing agent (Actichlor Plus); microfibre cloths (Vermop) used with and without a chlorine-releasing agent; high-temperature superheated dry atomized steam cleaning (Polti steam) in combination with a sanitizing solution (HPMed); steam cleaning (Osprey steam); and peracetic acid wipes (Clinell). Swabs were inoculated on to C. difficile-selective agar and colony counts were performed pre- and post-disinfection for each method. A cost-effectiveness analysis was also undertaken, comparing all methods to the current method of 1000 ppm chlorine-releasing agent (Actichlor Plus). Products were ranked according to the log10 reduction in colony count from the contamination phase to disinfection. The three most effective products, with statistically significant reductions, were hydrogen peroxide vapour (2.303), 1000 ppm chlorine-releasing agent (2.223), and peracetic acid wipes (2.134). The cheaper traditional method of using a chlorine-releasing agent for disinfection was as effective as modern methods. Copyright © 2012 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
Combination effect of fluoride dentifrices and varnish on deciduous enamel demineralization.
Gatti, Alessandra; Camargo, Lucila Basto; Imparato, José Carlos Pettorossi; Mendes, Fausto Medeiros; Raggio, Daniela Prócida
2011-01-01
The aim of this study was to evaluate the anticaries potential of 500 or 1100 ppm F dentifrices combined with fluoride varnish using a pH-cycling regimen. Seventy primary canines were covered with nail polish, leaving a 4×4 mm window on their buccal surface, and randomly assigned into 7 groups (n = 10): S: sound enamel not submitted to the pH-cycling regimen or treatment; N: negative control, submitted to the pH-cycling regimen without any treatment; D1 and D2: subjected to the pH-cycling regimen and treated twice daily with 1100 or 500 ppm F dentifrice, respectively; VF: fluoride varnish (subjected to F-varnish before and in the middle of the pH-cycling regimen); and VF+D1 and VF+D2: the F-varnish combined with each dentifrice. After 10 days, the teeth were sectioned, and enamel demineralization was assessed by cross-sectional hardness at different distances from the dental surface. Data were analyzed using a two-way ANOVA followed by Tukey's test. The dentifrice with 1100 ppm F and the combinations of F-varnish with the dentifrices significantly reduced enamel demineralization compared with the negative control (p < 0.05), but the isolated effects of F-varnish and of the low-concentration dentifrice were not significant (p > 0.05). The effect of combining F-varnish with the dentifrices was not greater than the effect of the dentifrices alone (p < 0.05). The data suggest that the combination of F-varnish with dentifrices containing 500 and 1100 ppm F is not more effective in reducing demineralization in primary teeth than the isolated effect of dentifrice containing 1100 ppm F.
Reduced exercise time in competitive simulations consequent to low level ozone exposure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schelegle, E.S.; Adams, W.C.
Ten highly trained endurance athletes were studied to determine the effects of exposure to low ozone (O3) concentrations on simulated competitive endurance performance and associated physiological and subjective symptom responses. Each subject was randomly exposed to filtered air (FA), and to 0.12, 0.18, and 0.24 ppm O3 while performing a 1 h competitive simulation protocol on a bicycle ergometer. Endurance performance was evaluated by the number of subjects unable to complete rides (last 30 min at an intense work load of approximately 86% VO2max). All subjects completed the FA exposure, whereas one, five, and seven subjects did not complete the 0.12, 0.18, and 0.24 ppm O3 exposures, respectively. Statistical analysis indicated a significant (P less than 0.05) increase in the inability of subjects to complete the competitive simulations with increasing O3 concentration, including a significant difference between the 0.24 ppm O3 and FA exposures. Significant decreases (P less than 0.05) were also observed following the 0.18 and 0.24 ppm O3 exposures, respectively, in forced vital capacity (-7.8 and -9.9%) and forced expiratory volume in 1 s (-5.8 and -10.5%). No significant O3 effect was observed for exercise respiratory metabolism or ventilatory pattern responses. However, the number of reported subjective symptoms increased significantly following the 0.18 and 0.24 ppm O3 protocols. These data demonstrate significant decrements in simulated competitive endurance performance and in pulmonary function, with accompanying enhanced subjective symptoms, following exposure to low O3 levels commonly observed in numerous metropolitan environments during the summer months.
Physiological effects of hydrogen sulfide inhalation during exercise in healthy men.
Bhambhani, Y; Singh, M
1991-11-01
Occupational exposure to hydrogen sulfide (H2S) is prevalent in a variety of industries. H2S when inhaled 1) is oxidized into a sulfate or a thiosulfate by oxygen bound to hemoglobin and 2) suppresses aerobic metabolism by inhibiting cytochrome oxidase (c and aa3) activity in the electron transport chain. The purpose of this study was to examine the acute effects of oral inhalation of H2S on the physiological responses during graded cycle exercise performed to exhaustion in healthy male subjects. Sixteen volunteers were randomly exposed to 0 (control), 0.5, 2.0, and 5.0 ppm H2S on four separate occasions. Compared with the control values, the results indicated that the heart rate and expired ventilation were unaffected as a result of the H2S exposures during submaximal and maximal exercise. The oxygen uptake had a tendency to increase, whereas carbon dioxide output had a tendency to decrease as a result of the H2S exposures, but only the 5.0 ppm exposure resulted in a significantly higher maximum oxygen uptake. Blood lactate concentrations increased significantly during submaximal and maximal exercise as a result of the 5.0 ppm exposure. Despite these large increases in lactate concentration, the maximal power output of the subjects was not significantly altered as a result of the 5.0 ppm H2S exposure. It was concluded that healthy young male subjects could safely exercise at their maximum metabolic rates while breathing 5.0 ppm H2S without experiencing a significant reduction in their maximum physical work capacity during short-term incremental exercise.
Presence of lead in paint of toys sold in stores of the formal market of Bogotá, Colombia.
Mateus-García, A; Ramos-Bonilla, J P
2014-01-01
Lead (Pb) is a non-essential metal. Exposure to lead has been associated with adverse health effects in both children and adults. Lead content in paint used in toys or children's products has been identified as both a potential and preventable source of childhood lead exposure. Twenty-four stores located in Bogotá (Colombia) were selected by cluster sampling to participate in the study. A random sample of 96 toys was purchased at these stores. Since one toy can have different paint colors, a total of 116 paint samples from 96 toys were analyzed for lead content. Paint samples were prepared by microwave digestion and lead was quantified using ICP-OES. For quality control purposes of the analytical method, spike samples and a certified reference material (NIST SRM 2582) were used. The lead content in paint ranged from below the method detection limit (5 ppm) to 47,600 ppm, with an average Pb concentration of 1024 ppm and a median concentration of 5 ppm. Eight (8) paint samples removed from five toys had lead concentrations exceeding the US regulatory limit for total lead content (90 ppm). Brown paint and toys manufactured in Colombia were significantly associated with high concentrations of lead in paint. Furthermore, a statistically significant interaction between these two variables was also found. The results suggest that there is a potential risk of lead exposure from paint of toys sold in the formal market of Bogotá. Therefore, the implementation of a national surveillance program of lead content in children's products is urgently needed. The risk of children's lead exposure identified in this study, which is completely preventable, could also be present in other developing countries. © 2013 Published by Elsevier Inc.
What Randomized Benchmarking Actually Measures
Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; ...
2017-09-28
Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
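To make the quantities concrete, here is a hedged sketch of the standard RB analysis the abstract critiques: fit the survival probabilities to P(m) = A·p^m + B and convert the decay p into the error metric r. The numbers are synthetic single-qubit values, not the paper's data:

```python
# Illustrative RB decay fit (not the paper's new theories): survival
# probability is modeled as P(m) = A*p**m + B; for a single qubit (d = 2)
# the conventional RB number is r = (d - 1) * (1 - p) / d.
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    return A * p**m + B

lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])
rng = np.random.default_rng(0)
true_A, true_p, true_B = 0.5, 0.995, 0.5
survival = rb_decay(lengths, true_A, true_p, true_B) \
           + rng.normal(0, 0.005, lengths.size)      # finite-sampling noise

(A, p, B), _ = curve_fit(rb_decay, lengths, survival, p0=[0.5, 0.99, 0.5])
d = 2
r = (d - 1) * (1 - p) / d
print(f"decay p = {p:.5f}, RB error rate r = {r:.2e}")
```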
Quantifying errors without random sampling.
Phillips, Carl V; LaPole, Luwanna M
2003-06-12
All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
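A toy version of the Monte Carlo recommendation, with hypothetical inputs rather than the paper's foodborne-illness figures: each non-sampled input gets a judgment-based distribution, and the simulation propagates them into an uncertainty interval:

```python
# Toy Monte Carlo propagation in the spirit of the paper; all numbers and
# distributions below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
cases_reported = rng.normal(50_000, 5_000, n)   # surveillance count
underreport = rng.uniform(10, 40, n)            # multiplier, expert-judgment range
attribution = rng.beta(8, 2, n)                 # fraction attributable to the cause

total = cases_reported * underreport * attribution
lo, med, hi = np.percentile(total, [2.5, 50, 97.5])
print(f"median {med:,.0f}, 95% uncertainty interval {lo:,.0f} - {hi:,.0f}")
```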
NASA Technical Reports Server (NTRS)
Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.
1993-01-01
Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategies (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
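The morning/afternoon error scheme reduces to a simple variance identity: if the two retrievals are independent, unbiased estimates of the same monthly mean, then Var(am − pm) = 2σ². A hedged numeric sketch with synthetic numbers, not SSM/I data:

```python
# Sketch of the error-estimation idea: the spread of (morning - afternoon)
# differences recovers the per-estimate random error sigma. Values invented.
import numpy as np

rng = np.random.default_rng(2)
truth = 5.0                                # mm/day, one 5 deg x 5 deg box
sigma = 1.5                                # per-estimate random error
am = truth + rng.normal(0, sigma, 1000)    # many box-months
pm = truth + rng.normal(0, sigma, 1000)

sigma_hat = np.std(am - pm) / np.sqrt(2)   # per-estimate error
mean_err = sigma_hat / np.sqrt(2)          # error of the combined (am+pm)/2 mean
print(f"recovered sigma = {sigma_hat:.2f}, error of combined mean = {mean_err:.2f}")
```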
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-01-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
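For readers unfamiliar with the estimator, a minimal linear TSRI fit with one of the recommended corrections (a nonparametric bootstrap standard error) might look like the sketch below; the data-generating values are invented for illustration and this is not the authors' code:

```python
# Hedged sketch of linear two-stage residual inclusion (TSRI) with a
# bootstrap SE. Simulated genotype instrument z, confounder u, true
# causal effect 0.2.
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
z = rng.binomial(2, 0.3, n).astype(float)   # genotype instrument (0/1/2)
u = rng.normal(0, 1, n)                     # unmeasured confounder
x = 0.5 * z + u + rng.normal(0, 1, n)       # exposure
y = 0.2 * x + u + rng.normal(0, 1, n)       # outcome

def tsri(z, x, y):
    b1 = np.polyfit(z, x, 1)                # first stage: exposure on instrument
    resid = x - np.polyval(b1, z)           # first-stage residual
    X = np.column_stack([np.ones_like(x), x, resid])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta[1]                          # coefficient on exposure

est = tsri(z, x, y)
boots = [tsri(z[i], x[i], y[i])
         for i in (rng.integers(0, n, n) for _ in range(500))]
print(f"TSRI estimate {est:.3f}, bootstrap SE {np.std(boots):.3f}")
```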
High-fluoride toothpaste: a multicenter randomized controlled trial in adults
Srinivasan, Murali; Schimmel, Martin; Riesen, Martine; Ilgner, Alexander; Wicht, Michael J; Warncke, Michael; Ellwood, Roger P; Nitschke, Ina; Müller, Frauke; Noack, Michael J
2014-01-01
Objective: The aim of this single-blind, multicenter, parallel, randomized controlled trial was to evaluate the effectiveness of the application of a high-fluoride toothpaste on root caries in adults. Methods: Adult patients (n = 130, ♂ = 74, ♀ = 56; mean age ± SD: 56.9 ± 12.9) from three participating centers, diagnosed with root caries, were randomly allocated into two groups: Test (n = 64, ♂ = 37, ♀ = 27; lesions = 144; mean age: 59.0 ± 12.1; intervention: high-fluoride toothpaste with 5000 ppm F) and Control (n = 66, ♂ = 37, ♀ = 29; lesions = 160; mean age: 54.8 ± 13.5; intervention: regular-fluoride toothpaste with 1350 ppm F). Clinical examinations and surface hardness scoring of the carious lesions were performed for each subject at specified time intervals (T0 – at baseline before intervention, T1 – at 3 months and T2 – at 6 months after intervention). Mean surface hardness scores (HS) were calculated for each patient. Statistical analyses comprised two-way analysis of variance and post hoc comparisons using the Bonferroni–Dunn correction. Results: At T0, there was no statistical difference between the two groups with regard to gender (P = 0.0682, unpaired t-test) or age (P = 0.9786, chi-squared test), or for the overall HS (Test group: HS = 3.4 ± 0.61; Control group: HS = 3.4 ± 0.66; P = 0.8757, unpaired t-test). The ANOVA revealed significantly better HS for the test group than for the control group (T1: Test group: HS = 2.9 ± 0.67; Control group: HS = 3.1 ± 0.75; T2: Test group: HS = 2.4 ± 0.81; Control group: HS = 2.8 ± 0.79; P < 0.0001). However, the interaction term time-point*group was not significant. Conclusions: The twice-daily application of a high-fluoride (5000 ppm F) dentifrice in adults significantly improves the surface hardness of otherwise untreated root caries lesions when compared with the use of regular-fluoride (1350 ppm F) toothpastes. PMID:24354454
Small, J R
1993-01-01
This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
batman: BAsic Transit Model cAlculatioN in Python
NASA Astrophysics Data System (ADS)
Kreidberg, Laura
2015-11-01
I introduce batman, a Python package for modeling exoplanet transit light curves. The batman package supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically. The code uses C extension modules to speed up model calculation and is parallelized with OpenMP. For a typical light curve with 100 data points in transit, batman can calculate one million quadratic limb-darkened models in 30 seconds with a single 1.7 GHz Intel Core i5 processor. The same calculation takes seven minutes using the four-parameter nonlinear limb darkening model (computed to 1 ppm accuracy). Maximum truncation error for integrated models is an input parameter that can be set as low as 0.001 ppm, ensuring that the community is prepared for the precise transit light curves we anticipate measuring with upcoming facilities. The batman package is open source and publicly available at https://github.com/lkreidberg/batman.
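For reference, the package's documented quickstart pattern is roughly the following; the parameter values are arbitrary placeholders, not from the paper:

```python
# Minimal batman light-curve calculation following the package docs.
import batman
import numpy as np

params = batman.TransitParams()
params.t0 = 0.0                      # time of inferior conjunction [days]
params.per = 1.0                     # orbital period [days]
params.rp = 0.1                      # planet radius [stellar radii]
params.a = 15.0                      # semi-major axis [stellar radii]
params.inc = 87.0                    # orbital inclination [deg]
params.ecc = 0.0                     # eccentricity
params.w = 90.0                      # longitude of periastron [deg]
params.u = [0.1, 0.3]                # limb-darkening coefficients
params.limb_dark = "quadratic"       # limb-darkening law

t = np.linspace(-0.05, 0.05, 100)    # times at which to compute the curve
m = batman.TransitModel(params, t)   # initialize the model once
flux = m.light_curve(params)         # relative flux at each time
print(flux.min())
```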
Performance Analysis for the New g-2 Experiment at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stratakis, Diktys; Convery, Mary; Crmkovic, J.
2016-06-01
The new g-2 experiment at Fermilab aims to measure the muon anomalous magnetic moment to a precision of ±0.14 ppm - a fourfold improvement over the 0.54 ppm precision obtained in the BNL E821 g-2 experiment. Achieving this goal requires the delivery of highly polarized 3.094 GeV/c muons with a narrow ±0.5% Δp/p acceptance to the g-2 storage ring. In this study, we describe a muon capture and transport scheme that should meet this requirement. First, we present the conceptual design of our proposed scheme wherein we describe its basic features. Then, we detail its performance numerically by simulating the pion production in the g-2 production target, the muon collection by the downstream beamline optics, as well as the beam polarization and spin-momentum correlation up to the storage ring. The sensitivity in performance of our proposed channel against key parameters such as magnet apertures and magnet positioning errors is analyzed.
Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian
The THOR neutral particle transport code enables simulation of complex geometries for various problems, from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V effort requiring computational efficiency. This has motivated various improvements, including angular parallelization, outer iteration acceleration, and development of peripheral tools. For guiding future improvements to the code's efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL's Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former's accuracy is bounded by the variability of communication on Falcon, while the latter has an error on the order of 1%.
Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation
NASA Astrophysics Data System (ADS)
Lychak, Oleh V.; Holyns'kiy, Ivan S.
2016-03-01
The use of the Williams' series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is the development of a method for estimating the standard deviation of the random errors of the Williams' series parameters obtained from the measured components of the stress field. A criterion for choosing the optimal number of terms in the truncated Williams' series, such that the parameters are derived with minimal errors, is also proposed. The method was applied to the evaluation of the Williams' parameters obtained from stress-field data measured by the digital image correlation technique on a three-point bending specimen.
Large Uncertainty in Estimating pCO2 From Carbonate Equilibria in Lakes
NASA Astrophysics Data System (ADS)
Golub, Malgorzata; Desai, Ankur R.; McKinley, Galen A.; Remucal, Christina K.; Stanley, Emily H.
2017-11-01
Most estimates of carbon dioxide (CO2) evasion from freshwaters rely on calculating partial pressure of aquatic CO2 (pCO2) from two out of three CO2-related parameters using carbonate equilibria. However, the pCO2 uncertainty has not been systematically evaluated across multiple lake types and equilibria. We quantified random errors in pH, dissolved inorganic carbon, alkalinity, and temperature from the North Temperate Lakes Long-Term Ecological Research site in four lake groups across a broad gradient of chemical composition. These errors were propagated onto pCO2 calculated from three carbonate equilibria, and for overlapping observations, compared against uncertainties in directly measured pCO2. The empirical random errors in CO2-related parameters were mostly below 2% of their median values. Resulting random pCO2 errors ranged from ±3.7% to ±31.5% of the median depending on alkalinity group and choice of input parameter pairs. Temperature uncertainty had a negligible effect on pCO2. When compared with direct pCO2 measurements, all parameter combinations produced biased pCO2 estimates with less than one third of total uncertainty explained by random pCO2 errors, indicating that systematic uncertainty dominates over random error. Multidecadal trend of pCO2 was difficult to reconstruct from uncertain historical observations of CO2-related parameters. Given poor precision and accuracy of pCO2 estimates derived from virtually any combination of two CO2-related parameters, we recommend direct pCO2 measurements where possible. To achieve consistently robust estimates of CO2 emissions from freshwater components of terrestrial carbon balances, future efforts should focus on improving accuracy and precision of CO2-related parameters (including direct pCO2) measurements and associated pCO2 calculations.
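As a hedged illustration of why small input errors matter, the sketch below propagates random pH and alkalinity errors into pCO2 through a deliberately simplified freshwater carbonate system (alkalinity ≈ [HCO3-]; K1 and KH are approximate 25 °C values). The paper's calculations use full carbonate equilibria, so treat this only as a toy:

```python
# Toy random-error propagation for pCO2 from pH + alkalinity, assuming
# alkalinity ~ [HCO3-]. K1, KH are approximate 25 C freshwater constants;
# the input error magnitudes are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
K1, KH = 10**-6.35, 10**-1.47           # mol/L and mol/(L atm), approximate

pH = rng.normal(8.00, 0.02, n)          # measured pH with random error
alk = rng.normal(800e-6, 8e-6, n)       # alkalinity [mol/L], ~1% random error

h = 10.0**(-pH)
co2_star = h * alk / K1                 # dissolved CO2 from the equilibrium
pco2 = co2_star / KH * 1e6              # partial pressure in micro-atm
print(f"pCO2 = {pco2.mean():.0f} +/- {pco2.std():.0f} uatm "
      f"({100 * pco2.std() / pco2.mean():.1f}%)")
```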
Toward Space-like Photometric Precision from the Ground with Beam-shaping Diffusers
NASA Astrophysics Data System (ADS)
Stefansson, Gudmundur; Mahadevan, Suvrath; Hebb, Leslie; Wisniewski, John; Huehnerhoff, Joseph; Morris, Brett; Halverson, Sam; Zhao, Ming; Wright, Jason; O'rourke, Joseph; Knutson, Heather; Hawley, Suzanne; Kanodia, Shubham; Li, Yiting; Hagen, Lea M. Z.; Liu, Leo J.; Beatty, Thomas; Bender, Chad; Robertson, Paul; Dembicky, Jack; Gray, Candace; Ketzeback, William; McMillan, Russet; Rudyk, Theodore
2017-10-01
We demonstrate a path to hitherto unachievable differential photometric precisions from the ground, both in the optical and near-infrared (NIR), using custom-fabricated beam-shaping diffusers produced using specialized nanofabrication techniques. Such diffusers mold the focal plane image of a star into a broad and stable top-hat shape, minimizing photometric errors due to non-uniform pixel response, atmospheric seeing effects, imperfect guiding, and telescope-induced variable aberrations seen in defocusing. This PSF reshaping significantly increases the achievable dynamic range of our observations, increasing our observing efficiency and thus better averaging over scintillation. Diffusers work in both collimated and converging beams. We present diffuser-assisted optical observations demonstrating 62 (+26/-16) ppm precision in 30 minute bins on a nearby bright star 16 Cygni A (V = 5.95) using the ARC 3.5 m telescope - within a factor of ~2 of Kepler's photometric precision on the same star. We also show a transit of WASP-85-Ab (V = 11.2) and TRES-3b (V = 12.4), where the residuals bin down to 180 (+66/-41) ppm in 30 minute bins for WASP-85-Ab - a factor of ~4 of the precision achieved by the K2 mission on this target - and to 101 ppm for TRES-3b. In the NIR, where diffusers may provide even more significant improvements over the current state of the art, our preliminary tests demonstrated 137 (+64/-36) ppm precision for a KS = 10.8 star on the 200-inch Hale Telescope. These photometric precisions match or surpass the expected photometric precisions of TESS for the same magnitude range. This technology is inexpensive, scalable, easily adaptable, and can have an important and immediate impact on the observations of transits and secondary eclipses of exoplanets.
NASA Astrophysics Data System (ADS)
Baylon, Jorge L.; Stremme, Wolfgang; Grutter, Michel; Hase, Frank; Blumenstock, Thomas
2017-07-01
In this investigation we analyze two common optical configurations to retrieve CO2 total column amounts from solar absorption infrared spectra. The noise errors using either a KBr or a CaF2 beam splitter, a main component of a Fourier transform infrared spectrometer (FTIR), are quantified in order to assess the relative precisions of the measurements. The configuration using a CaF2 beam splitter, as deployed by the instruments which contribute to the Total Carbon Column Observing Network (TCCON), shows a slightly better precision. However, we show that the precisions in XCO2 (= 0.2095 · [total column CO2] / [total column O2]) retrieved from > 96% of the spectra measured with a KBr beam splitter fall well below 0.2%. A bias in XCO2 (KBr - CaF2) of +0.56 ± 0.25 ppm was found when using an independent data set as reference. This value, which corresponds to +0.14 ± 0.064%, is slightly larger than the mean precisions obtained. A 3-year XCO2 time series from FTIR measurements at the high-altitude site of Altzomoni in central Mexico presents clear annual and diurnal cycles, and a trend of +2.2 ppm per year could be determined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G.
Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with brain tumors of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at different depths from 10 to 20 cm with various field sizes from 2 × 2 to 15 × 15 cm². Two nominal spot sizes, 4.0 and 6.6 mm of 1 σ in water at isocenter, were used for treatment planning. The SS ranged from 0.5 σ to 1.5 σ, which is 2-6 mm for the small spot size and 3.3-9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and of systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the value of the percent dose error at the maximum dose difference (PDE[ΔDmax]) were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE[ΔDmax] is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, PDE[ΔDmax] is not noticeably affected by the change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1 σ resulted in a 2% maximum dose error; a SS larger than 1.25 σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed for multiple spot errors (both systematic and random). Systematic PE can lead to noticeable hot spots along the field edges, which may be near critical structures. However, random PE showed minimal dose error. Conclusions: Dose error dependence on PE was quantitatively and systematically characterized, and an analytic tool was built to simulate systematic and random errors for patient-specific IMPT. This information facilitates the determination of facility-specific spot position error thresholds.
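A one-dimensional toy version of the perturbation study conveys the hot/cold-spot mechanism; the parameters loosely follow the abstract, but this is not the clinical planning system:

```python
# 1D toy: a flat dose built from equally spaced Gaussian spots; shifting
# one spot by PE creates the paired hot/cold spots, reported as a percent
# dose error. Numbers illustrate the mechanism only.
import numpy as np

sigma = 4.0                                   # spot size, mm (1 sigma)
ss = 1.0 * sigma                              # spot spacing = 1 sigma
pe = 1.2                                      # position error, mm
x = np.linspace(-40, 40, 2001)
centers = np.arange(-30, 30 + ss, ss)

def dose(centers):
    return sum(np.exp(-(x - c)**2 / (2 * sigma**2)) for c in centers)

nominal = dose(centers)
perturbed_centers = centers.copy()
perturbed_centers[len(centers) // 2] += pe    # single-spot position error
perturbed = dose(perturbed_centers)

pde = 100 * (perturbed - nominal) / nominal.max()
print(f"hot/cold percent dose error: +{pde.max():.2f}% / {pde.min():.2f}%")
```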
Testing the Recognition and Perception of Errors in Context
ERIC Educational Resources Information Center
Brandenburg, Laura C.
2015-01-01
This study tests the recognition of errors in context and whether the presence of errors affects the reader's perception of the writer's ethos. In an experimental, posttest only design, participants were randomly assigned a memo to read in an online survey: one version with errors and one version without. Of the six intentional errors in version…
Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel
ERIC Educational Resources Information Center
Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.
2007-01-01
A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…
Affum, A O; Shiloh, D O; Adomako, D
2013-06-01
In Ghana, anti-malaria herbal medicines or products are used to complement commercial drugs in the treatment and prevention of Plasmodium falciparum infections. In this study, four common aqueous-based anti-malaria herbal products (coded HEB, KFE, MDM and NIB), used by the Ghanaian population, were blindly and randomly sampled from pharmacy/herbal stores in the Madina area, Accra, for cadmium (Cd), arsenic (As) and lead (Pb) analysis using the atomic absorption spectrophotometry technique. Arsenic concentrations were 1.087 μg/mL (108.7%), 1.027 μg/mL (102.7%), 0.330 μg/mL (33.0%) and 0.274 μg/mL (27.4%) in MDM, KFE, NIB and HEB, respectively. The arsenic concentrations determined in MDM and KFE were above the maximum permissible limit of 1.0 ppm set by WHO/FAO. The cadmium concentration in each of the four products, as well as the lead concentration in KFE, NIB and HEB, were below the detection limits of <0.002 mg/mL (Cd) and <0.005 mg/mL (Pb), respectively. The maximum permissible limits for Pb and Cd set by WHO/FAO are 10.0 ppm and 0.3 ppm, respectively. Thus, random assessment of the safety of ready-to-use aqueous-based anti-malaria herbal products on the market is necessary to prevent public health hazards associated with consuming these plant extracts. Although the lead and cadmium concentrations in the anti-malaria herbal products were below the maximum permissible limits, their cumulative effect on the health of an individual consuming the recommended volume of not less than 1000 mL for effective malaria parasite clearance cannot be ignored. Copyright © 2013 Elsevier Ltd. All rights reserved.
Shabbir, Javid
2018-01-01
In the present paper we propose an improved class of estimators in the presence of measurement error and non-response under stratified random sampling for estimating the finite population mean. The theoretical and numerical studies reveal that the proposed class of estimators performs better than other existing estimators. PMID:29401519
Perceptions of Randomness: Why Three Heads Are Better than Four
ERIC Educational Resources Information Center
Hahn, Ulrike; Warren, Paul A.
2009-01-01
A long tradition of psychological research has lamented the systematic errors and biases in people's perception of the characteristics of sequences generated by a random mechanism such as a coin toss. It is proposed that once the likely nature of people's actual experience of such processes is taken into account, these "errors" and "biases"…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, C.J.; McVey, B.; Quimby, D.C.
The level of field errors in an FEL is an important determinant of its performance. We have computed the 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code, which now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to the stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
Statistical model for speckle pattern optimization.
Su, Yong; Zhang, Qingchuan; Gao, Zeren
2017-11-27
Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of the image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas for the optimal speckle radius are presented. Although the primary motivation is from the field of DIC, we believe that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.
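The generation model described, speckles of fixed radius dropped at Poisson-random positions, can be sketched in a few lines; the parameters below are illustrative, not the paper's optimized values:

```python
# Sketch of a filtered Poisson speckle pattern: the speckle count is
# Poisson-distributed, positions are uniform, and each speckle is a
# Gaussian profile of fixed radius.
import numpy as np

rng = np.random.default_rng(5)
size, radius, density = 256, 3.0, 0.02        # px, speckle radius, speckles/px^2

n_speckles = rng.poisson(density * size * size)
xs, ys = rng.uniform(0, size, (2, n_speckles))

yy, xx = np.mgrid[0:size, 0:size]
pattern = np.zeros((size, size))
for cx, cy in zip(xs, ys):                    # superpose Gaussian speckles
    pattern += np.exp(-((xx - cx)**2 + (yy - cy)**2) / (2 * radius**2))
pattern = 255 * pattern / pattern.max()       # normalize to an 8-bit-like range
print(pattern.shape, f"{n_speckles} speckles")
```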
The decline and fall of Type II error rates
Steve Verrill; Mark Durst
2005-01-01
For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
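A worked example of the claim for a one-sided z-test of a mean shift δ with noise σ: the Type II error is β(n) = Φ(z_α − √n·δ/σ), whose Gaussian tail decays roughly like exp(−nδ²/(2σ²)) for large n. The values below are illustrative:

```python
# Type II error of a one-sided z-test versus sample size, showing the
# rapid (roughly exponential) decline. Effect size and alpha are examples.
from scipy.stats import norm

alpha, delta, sigma = 0.05, 0.5, 1.0
z_alpha = norm.ppf(1 - alpha)
for n in (4, 16, 36, 64, 100):
    beta = norm.cdf(z_alpha - (n**0.5) * delta / sigma)
    print(f"n = {n:3d}: Type II error = {beta:.2e}")
```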
Asymmetric Memory Circuit Would Resist Soft Errors
NASA Technical Reports Server (NTRS)
Buehler, Martin G.; Perlman, Marvin
1990-01-01
Some nonlinear error-correcting codes are more efficient in the presence of asymmetry. A combination of circuit-design and coding concepts is expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets", due to ionizing radiation). An integrated circuit of the new type is made deliberately more susceptible to one kind of bit error than to the other, and the associated error-correcting code is adapted to exploit this asymmetry in error probabilities.
Lidar Observations of Atmospheric CO2 Column During 2014 Summer Flight Campaigns
NASA Technical Reports Server (NTRS)
Lin, Bing; Harrison, F. Wallace; Fan, Tai-Fang
2015-01-01
Advanced knowledge of atmospheric CO2 is critical for reducing large uncertainties in predictions of the Earth's future climate. Thus, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) from space was recommended to NASA by the U.S. National Research Council. As part of the preparation for the ASCENDS mission, NASA Langley Research Center (LaRC) and Exelis, Inc. have been collaborating in the development and demonstration of the Intensity-Modulated Continuous-Wave (IM-CW) lidar approach for measuring the atmospheric CO2 column from space. Airborne laser absorption lidars such as the Multi-Functional Fiber Laser Lidar (MFLL) and the ASCENDS CarbonHawk Experiment Simulator (ACES), operating in the 1.57 micron CO2 absorption band, have been developed and tested to obtain precise atmospheric CO2 column measurements using the integrated path differential absorption technique and to evaluate the potential of the space ASCENDS mission. This presentation reports the results of our lidar atmospheric CO2 column measurements from the 2014 summer flight campaign. Analysis shows that for the 27 Aug OCO-2 underflight over northern California forest regions, significant variations of the CO2 column (approximately 2 ppm) in the lower troposphere were observed, which may be a challenge for space measurements owing to complicated topographic conditions, heterogeneity of surface reflection, and differences in vegetation evapotranspiration. Compared to the observed 2011 summer CO2 drawdown (about 8 ppm) over the mid-west, the 2014 summer drawdown measured in the same region was much weaker (approximately 3 ppm). The observed drawdown difference could be the result of changes in both meteorological states and the phases of the growing seasons. Individual lidar CO2 column measurements of 0.1-s integration were within 1-2 ppm of the CO2 estimates obtained from on-board in-situ sensors. For weak surface reflection conditions such as ocean surfaces, the 1-s integrated signal-to-noise ratio (SNR) of lidar measurements at 11 km altitude reached 376, which is equivalent to a 10-s CO2 error of 0.33 ppm. For the entire processed 2014 summer flight campaign data, the mean difference between lidar remote-sensed and in-situ estimated CO2 values was about -0.013 ppm. These results indicate that the current laser absorption lidar approach could meet space measurement requirements for CO2 science goals.
N-acetylcysteine for therapy-resistant tobacco use disorder: a pilot study.
Prado, Eduardo; Maes, Michael; Piccoli, Luiz Gustavo; Baracat, Marcela; Barbosa, Décio Sabattini; Franco, Olavo; Dodd, Seetal; Berk, Michael; Vargas Nunes, Sandra Odebrecht
2015-09-01
N-Acetylcysteine (NAC) may have efficacy in treating tobacco use disorder (TUD) by reducing craving and smoking reward. This study examines whether treatment with NAC may have clinical efficacy in the treatment of TUD. A 12-week double-blind randomized controlled trial was conducted to compare the clinical efficacy of NAC 3 g/day versus placebo. We recruited 34 outpatients with therapy-resistant TUD concurrently treated with smoking-focused group behavioral therapy. Participants had assessments of daily cigarette use (primary outcome), exhaled carbon monoxide (CO(EXH)) (secondary outcome), and quit rates as defined by CO(EXH) < 6 ppm. Depression was measured with the Hamilton Depression Rating Scale (HDRS). Data were analyzed using conventional and modified intention-to-treat endpoint analyses. NAC treatment significantly reduced the daily number of cigarettes used (Δ mean ± SD = -10.9 ± 7.9 in the NAC-treated versus -3.2 ± 6.1 in the placebo group) and CO(EXH) (Δ mean ± SD = -10.4 ± 8.6 ppm in the NAC-treated versus -1.5 ± 4.5 ppm in the placebo group); 47.1% of those treated with NAC versus 21.4% of placebo-treated patients were able to quit smoking as defined by CO(EXH) < 6 ppm. NAC treatment significantly reduced the HDRS score in patients with tobacco use disorder. These data show that treatment with NAC may have clinical efficacy in TUD. NAC combined with appropriate psychotherapy appears to be an efficient treatment option for TUD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linn, W.S.; Avol, E.L.; Shamoo, D.A.
1986-07-01
Twenty-four healthy, well-conditioned young adult male volunteers, free of asthma or clinical respiratory allergies, were exposed to purified air containing ozone (O3) at 0.16, 0.14, 0.12, 0.10, 0.08, and 0.00 part per million (ppm). Exposures were separated by 2-week intervals, occurred in random order, and lasted 2 hours each. Temperature was 32 ± 1 °C and relative humidity was 38 ± 3%, simulating Los Angeles area smog conditions. Subjects exercised 15 minutes of each half hour, attaining ventilation rates averaging 68 L/min (approximately 35 L/min per m² body surface area). Lung function was measured pre-exposure and after 1 hr and 2 hr of exposure. Airway responsiveness to a cold-air challenge was measured immediately following the 2-hr exposure. Symptoms were recorded before, during, and for one-week periods following exposures. For the group as a whole, no meaningful untoward effects were found except for a mild, typical respiratory irritant response after 2 hr of exposure to 0.16 ppm O3. Two individual subjects showed possible responses at 0.14 ppm, and one of them also at 0.12 ppm. In comparison to some previous investigations, this study showed generally less response to O3. The comparative lack of response may relate to the favorable clinical status of the subjects, the pattern of exercise during exposure, or some other factor not yet identified.
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.
2004-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating/drying profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and non-convective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud resolving model simulations, and from the Bayesian formulation itself. Synthetic rain rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in instantaneous rain rate estimates at 0.5 deg resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. These errors represent about 70-90% of the mean random deviation between collocated passive microwave and spaceborne radar rain rate estimates. The cumulative algorithm error in TMI estimates at monthly, 2.5 deg resolution is relatively small (less than 6% at 5 mm/day) compared to the random error due to infrequent satellite temporal sampling (8-35% at the same rain rate).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yueqi; Lava, Pascal; Reu, Phillip
This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.
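As a hedged numeric companion, a commonly quoted closed form consistent with the abstract's statement puts the displacement random error near √2·σ_noise divided by the root of the summed squared intensity gradients over the subset; whether this matches the paper's exact solution is an assumption here:

```python
# Hedged sketch: predicted displacement random error of subset DIC from
# image noise and subset intensity gradients. The closed form used here
# is the commonly cited approximation, assumed (not verified) to match
# the paper's generalized solution. Subset data are synthetic.
import numpy as np

rng = np.random.default_rng(6)
subset = rng.uniform(0, 255, (21, 21))   # synthetic speckle subset
sigma_noise = 2.0                        # image noise, gray levels

gx = np.gradient(subset, axis=1)         # x-direction intensity gradient
sigma_u = np.sqrt(2) * sigma_noise / np.sqrt((gx**2).sum())
print(f"predicted displacement random error: {sigma_u:.4f} px")
```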
NASA Astrophysics Data System (ADS)
Maftuch
2017-05-01
The negative impacts of antibiotic and chemical use in aquaculture demand that researchers discover more efficient yet environmentally friendly alternatives for overcoming fish diseases. One alternative is Bawang Dayak (Eleutherine palmifolia (L.) Merr). This research aimed to reveal the effect of Bawang Dayak crude extract on the inhibition zones of A. hydrophila, V. harveyi, and P. fluorescens bacteria. It was also conducted to investigate the hematology of carp (C. carpio) infected with A. hydrophila bacteria and to find the most appropriate dose of Bawang Dayak crude extract to inhibit the bacteria. This experimental research was performed using a completely randomized design with 4 treatments and 3 replications. The best result of the inhibition zone test for A. hydrophila bacteria was at the dose of 70 ppm, while for V. harveyi and P. fluorescens bacteria it was at the dose of 85 ppm. Fish hematology was found to be best at the dose of 80 ppm. Bawang Dayak crude extract had a significant effect on the inhibition zones of A. hydrophila, V. harveyi and P. fluorescens bacteria and on the hematology of carp infected with A. hydrophila bacteria.
Effects of dietary cadmium on mallard ducklings
Cain, B.W.; Sileo, L.; Franson, J.C.; Moore, J.
1983-01-01
Mallard (Anas platyrhynchos) ducklings were fed cadmium in the diet at 0, 5, 10, or 20 ppm from 1 day of age until 12 weeks of age. At 4-week intervals, six males and six females from each dietary group were randomly selected, bled by jugular venipuncture, and necropsied. Significant decreases in packed cell volume (PCV) and hemoglobin (Hb) concentration and a significant increase in serum glutamic pyruvic transaminase (GPT) were found at 8 weeks of age in ducklings fed 20 ppm cadmium. Mild to severe kidney lesions were evident in ducklings fed 20 ppm cadmium for 12 weeks. No other blood chemistry measurement, hematological parameter, or tissue histopathological measurement indicated a reaction to cadmium ingestion. Body weight, liver weight, and the ratio of femur weight to length were not affected by dietary cadmium. Femur cadmium concentration in all ducklings at 12 weeks of age declined from the values detected at 4 and 8 weeks of age. Liver cadmium concentrations were significantly higher in relation to the increased dietary levels and in relation to the length of time the ducklings were fed the cadmium diets. At 12 weeks of age, the cadmium concentration in liver tissue was twice that in the diet.
Combinatorial pulse position modulation for power-efficient free-space laser communications
NASA Technical Reports Server (NTRS)
Budinger, James M.; Vanderaar, M.; Wagner, P.; Bibyk, Steven
1993-01-01
A new modulation technique called combinatorial pulse position modulation (CPPM) is presented as a power-efficient alternative to quaternary pulse position modulation (QPPM) for direct-detection, free-space laser communications. The special case of 16C4 PPM is compared to QPPM in terms of data throughput and bit error rate (BER) performance for similar laser power and pulse duty cycle requirements. The increased throughput from CPPM enables the use of forward error correction (FEC) encoding for a net decrease in the amount of laser power required for a given data throughput compared to uncoded QPPM. A specific, practical case of coded CPPM is shown to reduce the amount of power required to transmit and receive a given data sequence by at least 4.7 dB. Hardware techniques for maximum likelihood detection and symbol timing recovery are presented.
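As a quick cross-check of the throughput claim, the sketch below (Python, with parameters taken from the abstract's 16C4 case; not the paper's implementation) counts the information carried per slot by QPPM and by 16C4 CPPM. Both place one pulse per four slots on average, but the combinatorial format carries 10 bits per 16 slots versus 8 for QPPM, and that 25% headroom is what admits FEC coding without a throughput penalty.

```python
from math import comb, floor, log2

def ppm_bits_per_slot(n_slots: int, n_pulses: int) -> float:
    """Usable bits per slot when k of n slots contain pulses (floored to whole bits)."""
    return floor(log2(comb(n_slots, n_pulses))) / n_slots

qppm = ppm_bits_per_slot(4, 1)    # quaternary PPM: log2(4) = 2 bits per 4 slots
cppm = ppm_bits_per_slot(16, 4)   # 16C4 CPPM: floor(log2(1820)) = 10 bits per 16 slots
print(f"QPPM: {qppm:.3f} bits/slot, CPPM: {cppm:.3f} bits/slot")
print(f"relative throughput: {cppm / qppm:.2f}x at the same 25% pulse duty cycle")
```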
Cluster-randomized xylitol toothpaste trial for early childhood caries prevention.
Chi, Donald L; Tut, Ohnmar; Milgrom, Peter
2014-01-01
The purpose of this study was to assess the efficacy of supervised toothbrushing with xylitol toothpaste to prevent early childhood caries (ECC) and reduce mutans streptococci. In this cluster-randomized efficacy trial, 196 four- to five-year-old children in four Head Start classrooms in the Marshall Islands were randomly assigned to supervised toothbrushing with 1,400 ppm fluoride/31 percent xylitol toothpaste or 1,450 ppm fluoride/sorbitol toothpaste. We hypothesized that there would be no difference in efficacy between the two types of toothpaste. The primary outcome was the surface-level primary molar caries increment (d(2-3)mfs) after six months. A single examiner was blinded to classroom assignments. Two classrooms were assigned to the fluoride-xylitol group (85 children), and two classrooms were assigned to the fluoride-sorbitol group (83 children). The child-level analyses accounted for clustering. There was no difference between the two groups in baseline or end-of-trial mean d(2-3)mfs. The mean d(2-3)mfs increment was greater in the fluoride-xylitol group than in the fluoride-sorbitol group (2.5 and 1.4 d(2-3)mfs, respectively), but the difference was not significant (95% confidence interval: -0.17, 2.37; P=.07). No adverse effects were reported. After six months, brushing with a low-strength xylitol/fluoride toothpaste is no more efficacious in reducing ECC than a fluoride-only toothpaste in a high-caries-risk child population.
Cluster-randomized xylitol toothpaste trial for early childhood caries prevention
Chi, Donald L.; Tut, Ohnmar K.; Milgrom, Peter
2013-01-01
Purpose: We assessed the efficacy of supervised toothbrushing with xylitol toothpaste to prevent early childhood caries (ECC) and to reduce mutans streptococci (MS). Methods: In this cluster-randomized efficacy trial, 4 Head Start classrooms in the Marshall Islands were randomly assigned to supervised toothbrushing with 1,400 ppm fluoride/31% xylitol toothpaste (Epic Dental, Provo, UT) or 1,450 ppm fluoride/sorbitol toothpaste (Colgate-Palmolive, New York, NY) (N = 196 children, ages 4-5 yrs). We hypothesized no difference in efficacy between the two types of toothpaste. The primary outcome was the primary molar d2-3mfs increment after 6 mos. A single examiner was blinded to classroom assignments. Two classrooms were assigned to the fluoride-xylitol group (85 children) and 2 classrooms to the fluoride-sorbitol group (83 children). The child-level analyses accounted for clustering. Results: There was no difference between the two groups in baseline or end-of-trial mean d2-3mfs. The mean d2-3mfs increment was greater in the fluoride-xylitol group than in the fluoride-sorbitol group (2.5 and 1.4 d2-3mfs, respectively), but the difference was not significant (95% CI: -0.17, 2.37; P = 0.07). No adverse effects were reported. Conclusion: After 6 mos, brushing with a low-strength xylitol/fluoride toothpaste is no more efficacious in reducing ECC than a fluoride-only toothpaste in a high-caries-risk child population. PMID:24709430
NASA Astrophysics Data System (ADS)
Semenov, Z. V.; Labusov, V. A.
2017-11-01
Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.
NASA Astrophysics Data System (ADS)
Sun, Hong; Wu, Qian-zhong
2013-09-01
To improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed that addresses the tracking error and random drift of the gyroscope sensor. Following the principles of time-series analysis of random sequences, an AR model of the gyro random error is established and the gyro output signals are filtered repeatedly with a Kalman filter. An ARM microcontroller drives the servo motor under a fuzzy PID full-closed-loop control algorithm, with lead-correction and feed-forward links added to reduce the response lag to angle inputs: the feed-forward link makes the output follow the input closely, while the lead-correction link shortens the response to input signals and thereby reduces errors. A wireless video monitoring module and remote monitoring software (Visual Basic 6.0) track the servo motor state in real time; the module gathers video signals and sends them to an upper computer, which displays the motor's running state in the Visual Basic 6.0 window. A detailed analysis of the main error sources is also given: quantitative analysis of the errors from the bandwidth and the gyro sensor makes the proportion of each error in the total error more intuitive and consequently helps decrease the system error. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
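As a concrete illustration of the AR-plus-Kalman drift-suppression idea described above, here is a minimal sketch. The AR(1) coefficient and noise variances are assumed placeholder values, not parameters from the paper; in practice they would be fitted to recorded gyro data by time-series analysis.

```python
import numpy as np

# Model gyro random drift as an AR(1) process x_k = a*x_{k-1} + w_k and filter
# noisy readings z_k = x_k + v_k with a scalar Kalman filter. a, q, r are
# illustrative assumptions.
a, q, r = 0.95, 1e-4, 1e-2
rng = np.random.default_rng(0)

# simulate a drift sequence and its noisy measurements
n = 500
x_true = np.zeros(n)
for k in range(1, n):
    x_true[k] = a * x_true[k - 1] + rng.normal(0.0, np.sqrt(q))
z = x_true + rng.normal(0.0, np.sqrt(r), size=n)

# scalar Kalman filter: predict with the AR(1) model, then update with z_k
x_hat, p = 0.0, 1.0
estimates = []
for zk in z:
    x_hat, p = a * x_hat, a * a * p + q        # predict
    kgain = p / (p + r)                        # Kalman gain
    x_hat += kgain * (zk - x_hat)              # update
    p *= (1.0 - kgain)
    estimates.append(x_hat)

print(f"raw noise std {np.std(z - x_true):.4f} -> "
      f"filtered std {np.std(np.array(estimates) - x_true):.4f}")
```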
Improved EPMA Trace Element Accuracy Using a Matrix Iterated Quantitative Blank Correction
NASA Astrophysics Data System (ADS)
Donovan, J. J.; Wark, D. A.; Jercinovic, M. J.
2007-12-01
At trace element levels below several hundred ppm, accuracy rather than precision is more often the limiting factor for EPMA quantification. Modern EPMA instruments equipped with low-noise detectors, counting electronics, and large-area analyzing crystals can now routinely achieve sensitivities for most elements at the 10 to 100 ppm level (or even lower). But due to various sample and instrumental artifacts in the x-ray continuum, absolute accuracy is often the limiting factor for ultra-trace element quantification. These artifacts have various mechanisms, but are usually attributed to sample artifacts (e.g., sample matrix absorption edges) [1], detector artifacts (e.g., Ar or Xe absorption edges) [2], and analyzing crystal artifacts (extended peak tails preventing accurate determination of the true background, and "negative peaks" or "holes" in the x-ray continuum), the latter first described by Self et al. [3] and recently documented for the Ti Kα-in-quartz geothermometer [4]. The general magnitude of these artifacts can be seen in replicate analyses of Ti Kα in a synthetic quartz standard on five spectrometer/crystal combinations, which gave average apparent Ti concentrations of -0.00146, -0.00031, -0.00180, 0.00013, and 0.00240 wt% (Si 46.7430 wt%, O 53.2563 wt%, total 99.9983 wt%); that is, values varying systematically from -18 ppm to +24 ppm. The exact mechanism for these continuum "holes" is not known, but may be related to secondary lattice diffraction occurring at certain Bragg angles, depending on crystal mounting orientation, for non-isometric analyzing crystals [5]. These x-ray continuum artifacts can produce systematic errors at levels up to 100 ppm or more depending on the particular analytical situation. To correct for these inaccuracies, a "blank" correction has been developed that applies a quantitative correction to the measured x-ray intensities during the matrix iteration, by calculating the intensity contribution from the systematic quantitative offset from a known (usually zero-level) blank standard. Preliminary results from this new matrix-iterated trace element blank correction demonstrate that systematic errors can be reduced to single-digit ppm levels for many situations. [1] Robinson, B.W., Ware, N.G., and Smith, D.G.W. (1998). "Modern Electron-Microprobe Trace-Element Analysis in Mineralogy." In Cabri, L.J., and Vaughan, D.J. (Eds.), Modern Approaches to Ore and Environmental Mineralogy, Short Course 27, Mineralogical Association of Canada, Ottawa, 153-180. [2] Remond, G., Myklebust, R., Fialin, M., Nockolds, C., Phillips, M., and Roques-Carmes, C. (2002). "Decomposition of Wavelength Dispersive X-ray Spectra." J. Res. Natl. Inst. Stand. Technol. 107, 509-529. [3] Self, P.G., Norrish, K., Milnes, A.R., Graham, J., and Robinson, B.W. (1990). "Holes in the Background in XRS." X-ray Spectrom. 19(2), 59-61. [4] Wark, D.A., and Watson, E.B. (2006). "TitaniQ: A Titanium-in-Quartz Geothermometer." Contributions to Mineralogy and Petrology 152, 743-754.
Error Sources in Asteroid Astrometry
NASA Technical Reports Server (NTRS)
Owen, William M., Jr.
2000-01-01
Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.
Immobile defects in ferroelastic walls: Wall nucleation at defect sites
NASA Astrophysics Data System (ADS)
He, X.; Salje, E. K. H.; Ding, X.; Sun, J.
2018-02-01
Randomly distributed, static defects are enriched in ferroelastic domain walls. The relative concentration of defects in walls, N_d, follows a power-law distribution as a function of the total defect concentration C: N_d ∼ C^α with α = 0.4. The enrichment N_d/C ranges from ~50 times when C = 10 ppm to ~3 times when C = 1000 ppm. The enrichment results from nucleation at defect sites, as observed in large-scale MD simulations. The dynamics of domain nucleation and switching depends on the defect concentration. The energy distribution follows a power law, with exponents during yield between ε ≈ 1.82 and 2.0 as the defect concentration increases. The power-law exponent is ε ≈ 2.7 in the plastic regime, independent of the defect concentration.
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-11-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
Hydrocarbon-Fueled Scramjet Research at Hypersonic Mach Numbers
2005-03-31
...oxide; O, atomic oxygen; O2, molecular oxygen; OH, hydroxyl radical; ppm, parts per million; PD, photodiode; PLIF, planar laser-induced fluorescence; PMT, photomultiplier tube; RAM, random access memory; RANS, Reynolds-averaged Navier-Stokes; RET, rotational energy transfer; TDLAS, tunable diode laser absorption spectroscopy... here extend this knowledge base to flight at Mach 11.5. Griffiths (2004) used a tunable diode laser absorption spectroscopy (TDLAS) system to measure...
Health plan auditing: 100-percent-of-claims vs. random-sample audits.
Sillup, George P; Klimberg, Ronald K
2011-01-01
The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach.
Palmblad, Magnus; van der Burgt, Yuri E M; Dalebout, Hans; Derks, Rico J E; Schoenmaker, Bart; Deelder, André M
2009-05-02
Accurate mass determination enhances peptide identification in mass spectrometry-based proteomics. Here we describe the combination of two previously published open-source software tools to improve mass measurement accuracy in Fourier transform ion cyclotron resonance mass spectrometry (FTICRMS). The first program, msalign, aligns one MS/MS dataset with one FTICRMS dataset. The second, recal2, uses peptides identified from the MS/MS data for automated internal calibration of the FTICR spectra, resulting in sub-ppm mass measurement errors.
Enhanced orbit determination filter sensitivity analysis: Error budget development
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Burkhart, P. D.
1994-01-01
An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.
Attia, T E; Shendi, E H; Shehata, M A
2015-02-01
A detailed gamma-ray spectrometry survey was carried out as part of an environmental impact assessment of urbanization and industrialization in Port Said city, Egypt. The concentrations of the measured radioelements U-238 and Th-232 (in ppm) and K-40 (in %), together with the total counts, were mapped for three randomly selected dumping sites (A, B, and C). The concentration maps provide a baseline for radioactivity in the study area against which any future radioactive contamination can be detected. The concentrations range between 0.2 and 21 ppm for U-238 and 0.01 and 13.4 ppm for Th-232, as well as 0.15 to 3.8% for K-40, whereas the total count values range from 8.7 to 123.6 μR. Moreover, the dose rate was mapped using the same spectrometer and survey parameters in order to assess the radiological effect of these radioelements; the dose rate values range from 0.12 to 1.61 mSv/year. Eighteen soil samples were collected from the sites with high radioelement concentrations and dose rates to determine the activity concentrations of Ra-226, Th-232, and K-40 using an HPGe spectrometer. The activity concentrations of Ra-226, Th-232, and K-40 in the measured samples range from 18.03 to 398.66 Bq kg(-1), 5.28 to 75.7 Bq kg(-1), and 583.12 to 3,237.88 Bq kg(-1), respectively. In addition, heavy-metal analysis of the two highest-reading samples (a1 and a10) gave Cd concentrations of 40 ppm (a1) and 42 ppm (a10) and Zn concentrations of 0.90 ppm (a1) and 0.97 ppm (a10), values in the range of phosphate fertilizer products, suggesting dumped man-made waste in site A. All measured values for the soil samples in two of the three sites fall within the world ranges for soil in areas with normal levels of radioactivity, while site A shows a potential radiological risk for human beings, making it important to carry out a dose assessment program with a specifically detailed periodic monitoring program.
Caries-preventive effect of anti-erosive and nano-hydroxyapatite-containing toothpastes in vitro.
Esteves-Oliveira, M; Santos, N M; Meyer-Lueckel, H; Wierichs, R J; Rodrigues, J A
2017-01-01
The aim of the study was to investigate the caries-preventive effect of newly developed fluoride and fluoride-free toothpastes specially designed for erosion prevention. The hypothesis was that these products might also show a superior caries-inhibiting effect compared with regular fluoride toothpastes, since they were designed for stronger erosive acid challenges. Enamel specimens were obtained from bovine teeth and pre-demineralized (pH = 4.95/21 days) to create artificial caries lesions. Baseline mineral loss (ΔZ_B) and lesion depth (LD_B) were determined using transversal microradiography (TMR). Ninety specimens with a median ΔZ_B (SD) of 6027 ± 1546 vol% × μm were selected and randomly allocated to five groups (n = 18). Treatments during pH-cycling (14 days, 4 × 60 min demineralization/day) were brushing 2×/day with AmF (1,400 ppm F-, anti-caries [AC]); AmF/NaF/SnCl2/chitosan (700 ppm F-/700 ppm F-/3,500 ppm Sn2+, anti-erosion [AE1]); NaF/KNO3 (1,400 ppm F-, anti-erosion [AE2]); nano-hydroxyapatite-containing (0 ppm F-, [nHA]); and fluoride-free (0 ppm F-, negative control [NC]) toothpastes. Toothpaste slurries were prepared with mineral salt solution (1:3 wt/wt). After pH-cycling, specimens presenting lesion surface loss (mainly in the NC and nHA groups) were discarded. For the remaining 77 specimens, new TMR analyses (ΔZ_E/LD_E) were performed. Changes in mineral loss (ΔΔZ = ΔZ_B - ΔZ_E) and lesion depth (ΔLD = LD_B - LD_E) were calculated. All toothpastes caused significantly less demineralization (lower ΔΔZ) than NC (p < 0.05, ANOVA) except for nHA. The fluoride toothpastes did not differ significantly regarding ΔΔZ and ΔLD (p > 0.05, ANOVA). While both anti-erosive and anti-caries toothpastes reduced mineral loss to a similar extent, the fluoride-free nano-hydroxyapatite-containing toothpaste seemed not to be suitable for inhibiting caries demineralization in vitro.
Does McRuer's Law Hold for Heart Rate Control via Biofeedback Display?
NASA Technical Reports Server (NTRS)
Courter, B. J.; Jex, H. R.
1984-01-01
Some persons can control their pulse rate with the aid of a biofeedback display. If the biofeedback display is modified to show the error between a commanded pulse rate and the measured rate, a compensatory (error-correcting) heart-rate tracking control loop can be created. The dynamic response characteristics of this control loop when subjected to step and quasi-random disturbances were measured. The control loop includes a beat-to-beat cardiotachometer differenced with a forcing function from a quasi-random input generator; the resulting pulse-rate error is displayed as feedback. The subject acts to null the displayed pulse-rate error, thereby closing a compensatory control loop. McRuer's Law should hold for this case. A few subjects already skilled in voluntary pulse-rate control were tested for heart-rate control response. Control-loop properties such as crossover frequency, stability margins, and closed-loop bandwidth are derived and evaluated for a range of forcing functions and for step as well as random disturbances.
Synthesis of hover autopilots for rotary-wing VTOL aircraft
NASA Technical Reports Server (NTRS)
Hall, W. E.; Bryson, A. E., Jr.
1972-01-01
The practical situation is considered where imperfect information on only a few rotor and fuselage state variables is available. Filters are designed to estimate all the state variables from noisy measurements of fuselage pitch/roll angles and from noisy measurements of both fuselage and rotor pitch/roll angles. The mean square response of the vehicle to a very gusty, random wind is computed using various filter/controllers and is found to be quite satisfactory although, of course, not so good as when one has perfect information (idealized case). The second part of the report considers precision hover over a point on the ground. A vehicle model without rotor dynamics is used and feedback signals in position and integral of position error are added. The mean square response of the vehicle to a very gusty, random wind is computed, assuming perfect information feedback, and is found to be excellent. The integral error feedback gives zero position error for a steady wind, and smaller position error for a random wind.
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
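For reference, both accuracy measures named above are simple functions of a 2×2 forecast/observation contingency table; the sketch below uses hypothetical verification counts, not values from the study.

```python
def percent_correct(hits: int, misses: int, false_alarms: int, correct_neg: int) -> float:
    """PC: fraction of all yes/no forecasts that verified."""
    return (hits + correct_neg) / (hits + misses + false_alarms + correct_neg)

def hanssen_kuipers(hits: int, misses: int, false_alarms: int, correct_neg: int) -> float:
    """HKD: hit rate minus false-alarm rate; 0 = no skill, 1 = perfect."""
    return hits / (hits + misses) - false_alarms / (false_alarms + correct_neg)

# hypothetical counts for a dichotomized contrail-occurrence forecast
h, m, f, c = 420, 80, 310, 4190
print(f"PC = {percent_correct(h, m, f, c):.3f}, HKD = {hanssen_kuipers(h, m, f, c):.3f}")
```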
Yago, Martín
2017-05-01
QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can be made easily by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. A statistical model was used to construct charts for the 1ks and X̄/χ2 rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error to the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. 1ks rules are simple, all-around rules; their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X̄/χ2 rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision. © 2017 American Association for Clinical Chemistry.
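The mechanism linking increased random error to erroneously reported results can be sketched with a simple Gaussian model: if the allowable total error (TEa, an acceptability limit assumed here for illustration; the abstract does not name one) sits z analytical SDs from the target value, inflating the SD by a factor f moves TEa to z/f SDs, and the tail fraction of unacceptable results grows sharply.

```python
from math import erf, sqrt

def frac_unacceptable(z: float, f: float) -> float:
    """Fraction of results beyond +/-TEa when TEa = z analytical SDs and the SD grows by f."""
    return 1.0 - erf((z / f) / sqrt(2.0))  # two-sided Gaussian tail mass

for f in (1.0, 1.5, 2.0, 3.0):
    print(f"SD x {f}: {frac_unacceptable(4.0, f):.4f} of results exceed TEa")
```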
Meta-analysis in evidence-based healthcare: a paradigm shift away from random effects is overdue.
Doi, Suhail A R; Furuya-Kanamori, Luis; Thalib, Lukman; Barendregt, Jan J
2017-12-01
Each year up to 20 000 systematic reviews and meta-analyses are published whose results influence healthcare decisions, thus making the robustness and reliability of meta-analytic methods one of the world's top clinical and public health priorities. The evidence synthesis makes use of either fixed-effect or random-effects statistical methods. The fixed-effect method has largely been replaced by the random-effects method as heterogeneity of study effects led to poor error estimation. However, despite the widespread use and acceptance of the random-effects method to correct this, it too remains unsatisfactory and continues to suffer from defective error estimation, posing a serious threat to decision-making in evidence-based clinical and public health practice. We discuss here the problem with the random-effects approach and demonstrate that there exist better estimators under the fixed-effect model framework that can achieve optimal error estimation. We argue for an urgent return to the earlier framework with updates that address these problems and conclude that doing so can markedly improve the reliability of meta-analytical findings and thus decision-making in healthcare.
NASA Astrophysics Data System (ADS)
Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.
2015-10-01
All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here, we are applying a consistent approach based on auto- and cross-covariance functions to quantify the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining data sets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time lag eliminates these effects (provided the time lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time lag. Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.
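A minimal sketch of one recipe consistent with the description above (not the authors' code): take the flux as the cross-covariance at a prescribed lag, and estimate the total random flux error from the spread of the cross-covariance at lags far outside the peak, where no true flux signal remains.

```python
import numpy as np

def cross_cov(w: np.ndarray, c: np.ndarray, lag: int) -> float:
    """Cross-covariance of vertical wind w and scalar concentration c at an integer lag."""
    w = w - w.mean()
    c = c - c.mean()
    n = len(w)
    if lag >= 0:
        return float(np.mean(w[:n - lag] * c[lag:]))
    return float(np.mean(w[-lag:] * c[:n + lag]))

def flux_and_random_error(w, c, prescribed_lag, far_lags=range(2000, 3000)):
    """far_lags must lie well outside the covariance peak and well within the series length."""
    flux = cross_cov(w, c, prescribed_lag)  # fixed lag avoids the noise-induced flux bias
    tails = [cross_cov(w, c, lag) for lag in far_lags]
    tails += [cross_cov(w, c, -lag) for lag in far_lags]
    return flux, float(np.std(tails))       # std of the tails estimates the random flux error
```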
NASA Astrophysics Data System (ADS)
Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.
2015-03-01
All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. We are here applying a consistent approach based on auto- and cross-covariance functions to quantifying the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time-lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time-lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining datasets from several analysers and using simulations we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time-lag eliminates these effects (provided the time-lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time-lag. Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.
NASA Astrophysics Data System (ADS)
Guarieiro, Lílian Lefol Nani; Pereira, Pedro Afonso de Paula; Torres, Ednildo Andrade; da Rocha, Gisele Olimpio; de Andrade, Jailson B.
Biodiesel is emerging as a renewable fuel and thus a promising alternative to fossil fuels. Biodiesel can form blends with diesel in any ratio and could therefore partially, or even totally, replace diesel fuel in diesel engines, which would bring a number of environmental, economical, and social advantages. Although a number of studies are available on regulated substances, there is a gap in studies on unregulated substances, such as carbonyl compounds (CC), emitted during the combustion of biodiesel, biodiesel-diesel, and/or ethanol-biodiesel-diesel blends. CC are a class of hazardous pollutants known to participate in photochemical smog formation. In this work a comparison was carried out between the two most widely used CC collection methods: C18 cartridges coated with an acid solution of 2,4-dinitrophenylhydrazine (2,4-DNPH) and impinger bottles filled with 2,4-DNPH solution. Sampling optimization was performed using a 2² factorial design. Samples were collected from the exhaust emissions of a diesel engine fueled with biodiesel and operated on a steady-state dynamometer. In the central body of the factorial design, the average of the sum of CC concentrations collected using impingers was 33.2 ppmV, but it was only 6.5 ppmV for C18 cartridges. In addition, the relative standard deviation (RSD) was 4% for impingers and 37% for C18 cartridges. Clearly, the impinger system is able to collect CC more efficiently, with lower error, than the C18 cartridge system; furthermore, propionaldehyde was hardly sampled by the C18 system at all. For these reasons, the impinger system was chosen in our study. The optimized sampling conditions applied throughout this study were two serially connected impingers, each containing 10 mL of 2,4-DNPH solution, at a flow rate of 0.2 L min-1 for 5 min. A profile of the C1-C4 vapor-phase carbonyl compound emissions was obtained from the exhaust of pure diesel (B0), pure biodiesel (B100), and biodiesel-diesel blends (B2, B5, B10, B20, B50, B75). The ΣCC emission concentrations were 20.5 ppmV for B0 and 15.7 ppmV for B100. For the fuel blends, the measured ΣCC were 21.4, 22.5, 20.4, 14.2, 11.4, and 14.7 ppmV for B2, B5, B10, B20, B50, and B75, respectively. Among the target CC, formaldehyde and acetaldehyde were the major contributors to the observed total CC levels. Except for acrolein and formaldehyde, all CC showed a clear trend of reduced emissions from B2 to B100 (40% reduction, on average). Both individual and total CC emission factors (pg g-1 of fuel burnt) were calculated for all tested biodiesel-diesel blends. The lowest total CC emission factor (2,271 pg g-1) was found for B50; the individual emission factors (pg g-1) were 539.7 (formaldehyde), 1,411 (acetaldehyde), 30.83 (acrolein), and 310.7 (propionaldehyde).
ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.
Safak, Erdal; Boore, David M.
1986-01-01
A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white-noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternate random process, filtered shot-noise process, eliminates these errors.
Amaechi, Bennett T; Karthikeyan, Ramalingam; Mensinkai, Poornima K; Najibfard, Kaveh; Mackey, Allen C; Karlinsey, Robert L
2010-01-01
Purpose: An in situ study evaluated the remineralization potential of 225 ppm fluoride (F) rinses with and without a calcium phosphate agent (TCP-Si-Ur) on eroded enamel. Methods: 20 human participants took part in this IRB-approved study. Enamel blocks extracted from 20 human molars were assigned to each of the three study phases (G1, G2, G3). Each block was eroded using 1% citric acid (pH = 2.5), with a slice cut from each block to establish baseline lesion parameters (i.e., integrated mineral loss ΔZ and lesion depth LD) using transverse microradiography (TMR). Participants and assigned blocks were randomly divided into three 28-day phases. The blocks were mounted into modified orthodontic brackets and bonded to the buccal surface of one of the subject's mandibular molars. The appliance remained in the subject's mouth for 28 days. Prior to each study phase, participants observed a one-week washout period using a fluoride-free dentifrice. In each phase, participants brushed with the fluoride-free dentifrice for 1 min, followed by one of the following coded treatments: G1: 225 ppm F + 40 ppm TCP-Si-Ur rinse (1 min); G2: 225 ppm F rinse (1 min); G3: no rinse (saliva only). After each phase, appliances were removed and specimens were analyzed using TMR. Results: TMR data (i.e., ΔZ and LD) revealed that all three groups significantly remineralized eroded enamel (paired t-tests, P < 0.001). Net mineralization (percent change in ΔZ, LD) was as follows (mean (SD)): G1: 44.1 (22.6), 30.5 (27.0); G2: 30.0 (7.4), 29.4 (10.5); G3: 23.8 (16.4), 25.7 (15.5). Furthermore, G1 was found to cause significantly more remineralization than G2 (P = 0.039) and G3 (P = 0.002). Conclusion: A mouthrinse containing 225 ppm F plus TCP-Si-Ur provided significantly greater remineralization relative to 225 ppm F only or saliva alone. PMID:23662086
Classification of echolocation clicks from odontocetes in the Southern California Bight.
Roch, Marie A; Klinck, Holger; Baumann-Pickering, Simone; Mellinger, David K; Qui, Simon; Soldevilla, Melissa S; Hildebrand, John A
2011-01-01
This study presents a system for classifying echolocation clicks of six species of odontocetes in the Southern California Bight: Visually confirmed bottlenose dolphins, short- and long-beaked common dolphins, Pacific white-sided dolphins, Risso's dolphins, and presumed Cuvier's beaked whales. Echolocation clicks are represented by cepstral feature vectors that are classified by Gaussian mixture models. A randomized cross-validation experiment is designed to provide conditions similar to those found in a field-deployed system. To prevent matched conditions from inappropriately lowering the error rate, echolocation clicks associated with a single sighting are never split across the training and test data. Sightings are randomly permuted before assignment to folds in the experiment. This allows different combinations of the training and test data to be used while keeping data from each sighting entirely in the training or test set. The system achieves a mean error rate of 22% across 100 randomized three-fold cross-validation experiments. Four of the six species had mean error rates lower than the overall mean, with the presumed Cuvier's beaked whale clicks showing the best performance (<2% error rate). Long-beaked common and bottlenose dolphins proved the most difficult to classify, with mean error rates of 53% and 68%, respectively.
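A sketch of the sighting-stratified evaluation described above, written with scikit-learn for brevity (the study's own implementation is not claimed here, and the variable names are hypothetical): one Gaussian mixture model per species over cepstral features, with GroupKFold keeping all clicks from a single sighting on one side of the train/test split.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import GroupKFold

def fit_per_species_gmms(X, y, n_components=8):
    """Fit one GMM per species label over cepstral feature vectors X."""
    return {s: GaussianMixture(n_components=n_components, covariance_type="diag",
                               random_state=0).fit(X[y == s])
            for s in np.unique(y)}

def classify(gmms, X):
    """Assign each click to the species whose GMM gives the highest log-likelihood."""
    species = list(gmms)
    ll = np.column_stack([gmms[s].score_samples(X) for s in species])
    return np.asarray(species)[ll.argmax(axis=1)]

# X: click cepstra, y: species labels, sightings: sighting id per click (hypothetical)
# for train, test in GroupKFold(n_splits=3).split(X, y, groups=sightings):
#     gmms = fit_per_species_gmms(X[train], y[train])
#     error_rate = np.mean(classify(gmms, X[test]) != y[test])
```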
Effects of random tooth profile errors on the dynamic behaviors of planetary gears
NASA Astrophysics Data System (ADS)
Xun, Chao; Long, Xinhua; Hua, Hongxing
2018-02-01
In this paper, a nonlinear random model is built to describe the dynamics of planetary gear trains (PGTs), in which the time-varying mesh stiffness, tooth profile modification (TPM), tooth contact loss, and random tooth profile error are considered. A stochastic method based on the method of multiple scales (MMS) is extended to analyze the statistical properties of the dynamic performance of PGTs. With the proposed multiple-scales-based stochastic method, the distributions of the dynamic transmission errors (DTEs) are investigated, and the lower and upper bounds are determined based on the 3σ principle. The Monte Carlo method is employed to verify the proposed method. Results indicate that the proposed method can determine the distribution of the DTE of PGTs with high efficiency and provides a link between the manufacturing precision and the dynamic response. In addition, the effects of tooth profile modification on the distributions of vibration amplitudes and on the probability of tooth contact loss are studied for different manufacturing tooth profile errors. The results show that the manufacturing precision affects the distribution of dynamic transmission errors dramatically and that appropriate TPMs help decrease both the nominal value and the deviation of the vibration amplitudes.
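The Monte Carlo verification step can be pictured with a toy stand-in for the PGT response (the function and all numbers below are illustrative assumptions, not the paper's model): draw random profile errors, propagate them through the response, and read off the 3σ bounds of the resulting transmission error.

```python
import numpy as np

# Toy Monte Carlo: propagate a random tooth-profile error through a stand-in
# response function and report 3-sigma bounds on the transmission error.
rng = np.random.default_rng(1)

def response(profile_error: np.ndarray) -> np.ndarray:
    # hypothetical mildly nonlinear response; not the PGT equations of motion
    return 1.0 + 0.8 * profile_error + 0.3 * profile_error**2

dte = response(rng.normal(0.0, 5e-6, size=100_000))  # micron-class profile error
mu, sigma = dte.mean(), dte.std()
print(f"DTE mean {mu:.8f}, 3-sigma bounds [{mu - 3*sigma:.8f}, {mu + 3*sigma:.8f}]")
```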
A multi-site analysis of random error in tower-based measurements of carbon and energy fluxes
Andrew D. Richardson; David Y. Hollinger; George G. Burba; Kenneth J. Davis; Lawrence B. Flanagan; Gabriel G. Katul; J. William Munger; Daniel M. Ricciuto; Paul C. Stoy; Andrew E. Suyker; Shashi B. Verma; Steven C. Wofsy; Steven C. Wofsy
2006-01-01
Measured surface-atmosphere fluxes of energy (sensible heat, H, and latent heat, LE) and CO2 (FCO2) represent the "true" flux plus or minus potential random and systematic measurement errors. Here, we use data from seven sites in the AmeriFlux network, including five forested sites (two of which include "tall tower" instrumentation), one grassland site, and one...
The Boron Isotopic Composition of Elephant Dung: Inputs to the Global Boron Budget
NASA Astrophysics Data System (ADS)
Williams, L. B.; Hervig, R. L.
2011-12-01
A survey of boron in kerogen showed isotopically light δ11B values (0 to -50‰) that are distinctly different from most mineral and natural-water B reservoirs. Diagenesis releases this isotopically light B into pore fluids when hydrocarbons are generated, thus enriching oilfield brines in 10B. This observation suggests that borated biomolecules (BM) are primarily tetrahedral, favoring 10B, whereas 11B is preferred in trigonal coordination. Plants, with optimal concentrations up to 100 ppm, contribute more B than animal remains to sediment. Elephants are among the largest herbivores on earth, consuming 200-250 kg of plant material per day and producing 50 kg of manure per day. They are inefficient at digestion, so the manure contains >50% undigested plant material. Dung samples are therefore ideal for studying the δ11B of both the food input and the digested output of a significant B supply to sedimentary systems. Horse and rabbit manure were studied for comparison, to evaluate B isotope variations in the food supply and potential vital effects on the output. The B content and isotopic composition of dung plant material and digested fractions were measured in the solid state by secondary ion mass spectrometry. The digests were rinsed in 1.8% mannitol, a B-complexing agent, to remove surface-adsorbed B, then air dried and Au-coated for charge compensation. Results showed that the elephant diet contains 3-13 ppm B, with an average δ11B of -20 ± 0.8‰ (1σ), while rabbit food had 88 ppm B with a δ11B of -50 ± 1.3‰. The digested fraction of the elephant dung contains 4-10 ppm B with average δ11B values of -12 ± 1.2‰. In comparison, horse manure with 11-21 ppm B has a δ11B of -10.7 ± 0.5‰, and rabbit manure contains 2-3 ppm B with a δ11B of -8.8 ± 1‰. The boron isotope compositions of these manures are indistinguishable within error. Clearly, plant material is a major contributor of isotopically light B to sediments. The herbivores studied fractionate their total B intake in favor of 11B, thus increasing the δ11B of the (solid) digested material relative to the food source. This would not affect the overall BM input to the sediment, because the dung contains the undigested plants. If we assume that the average B isotopic composition of dung, ~10 ppm B at -20‰, represents an average BM in sediment, and that the mass of sediments (1E24 g) is comparable to the mass of seawater, with an average 5 ppm B at +40‰, then it is clear that BM plays a major role in balancing the global B budget. Note: This research was NOT funded by taxpayer dollars. The Phoenix Zoo kindly approved the proposal to sample elephant dung for this study, and their support is greatly appreciated.
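The closing budget argument is easy to check numerically with the values quoted in the abstract; the sketch below shows that the mass-weighted mean δ11B of the two reservoirs comes out near 0‰, i.e., the isotopically light biomolecular boron can balance the isotopically heavy seawater reservoir.

```python
# Back-of-envelope check of the budget argument, using the values quoted above.
m_sed, c_sed, d_sed = 1e24, 10e-6, -20.0  # sediment mass (g), B fraction, delta-11B (per mil)
m_sw,  c_sw,  d_sw  = 1e24,  5e-6, +40.0  # seawater mass (g), B fraction, delta-11B (per mil)
b_sed, b_sw = m_sed * c_sed, m_sw * c_sw  # boron inventories: 1e19 g and 5e18 g
d_mix = (b_sed * d_sed + b_sw * d_sw) / (b_sed + b_sw)
print(f"mass-weighted mean delta-11B = {d_mix:.1f} per mil")  # ~0: the budget closes
```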
Statistical error model for a solar electric propulsion thrust subsystem
NASA Technical Reports Server (NTRS)
Bantell, M. H.
1973-01-01
The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.
NASA Technical Reports Server (NTRS)
Kwon, Jin H.; Lee, Ja H.
1989-01-01
The far-field beam pattern and the power-collection efficiency are calculated for a multistage laser-diode-array amplifier consisting of about 200,000 5-W laser diode arrays with random distributions of phase and orientation errors and random diode failures. The numerical calculation shows that the far-field beam pattern is little affected by random failures of up to 20 percent of the laser diodes, with reference to 80 percent receiving efficiency in the center spot. Random phase differences among laser diodes due to probable manufacturing errors can be tolerated up to about 0.2 times the wavelength. The maximum allowable orientation error is about 20 percent of the diffraction angle of a single laser diode aperture (about 1 cm). The preliminary results indicate that the amplifier could be used for space beam-power transmission with an efficiency of about 80 percent for a moderate-size (3-m-diameter) receiver placed at a distance of less than 50,000 km.
An Analysis of Computational Errors in the Use of Division Algorithms by Fourth-Grade Students.
ERIC Educational Resources Information Center
Stefanich, Greg P.; Rokusek, Teri
1992-01-01
Presents a study that analyzed errors made by randomly chosen fourth grade students (25 of 57) while using the division algorithm and investigated the effect of remediation on identified systematic errors. Results affirm that error pattern diagnosis and directed remediation lead to new learning and long-term retention. (MDH)
ERIC Educational Resources Information Center
Shear, Benjamin R.; Zumbo, Bruno D.
2013-01-01
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
Physical layer one-time-pad data encryption through synchronized semiconductor laser networks
NASA Astrophysics Data System (ADS)
Argyris, Apostolos; Pikasis, Evangelos; Syvridis, Dimitris
2016-02-01
Semiconductor lasers (SLs) have proven to be key devices in the generation of ultrafast true random bit streams. Their potential to emit chaotic signals with desirable statistics under appropriate conditions establishes them as a low-cost solution covering various needs, from large-volume key generation to real-time encrypted communications. Usually, only undemanding post-processing is needed to convert the acquired analog time series to digital sequences that pass all established tests of randomness. A novel architecture that can generate and exploit these true random sequences is a fiber network in which the nodes are semiconductor lasers coupled and synchronized to a central hub laser. In this work we show experimentally that laser nodes in such a star network topology can synchronize with each other through complex broadband signals that seed true random bit sequences (TRBS) generated at several Gb/s. The potential for each node to access random bit streams that are generated in real time and synchronized with the rest of the nodes, through the fiber-optic network, allows the implementation of a one-time-pad encryption protocol that mixes the synchronized true random bit sequence with real data at Gb/s rates. Forward error correction methods are used to reduce the errors in the TRBS and the final error rate at the data-decoding level. An appropriate selection of the sampling methodology and properties, as well as of the physical properties of the chaotic seed signal through which the network locks in synchronization, allows error-free performance.
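The one-time-pad step itself reduces to a byte-wise XOR of the payload with the synchronized random stream; in the minimal sketch below, a locally generated key stands in for the network-synchronized TRBS.

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each data byte with a key byte; decryption is the same
    # operation. Security requires a key that is truly random, at least as long
    # as the data, and never reused -- the role the chaotic-laser TRBS plays above.
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"physical-layer OTP demo"
key = secrets.token_bytes(len(message))  # stand-in for the synchronized TRBS
ciphertext = otp_xor(message, key)
assert otp_xor(ciphertext, key) == message  # XOR with the same pad recovers the data
```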
A predictability study of Lorenz's 28-variable model as a dynamical system
NASA Technical Reports Server (NTRS)
Krishnamurthy, V.
1993-01-01
The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
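The growth-and-saturation behavior described above can be reproduced in miniature with the classic 3-variable Lorenz system standing in for the 28-variable model (an illustration under that substitution, not the study's code): the early slope of log(error) approximates the largest Liapunov exponent, and saturation marks the predictability limit.

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    k1 = f(s); k2 = f(s + 0.5 * dt * k1); k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt, nsteps = 0.01, 2000
a = np.array([1.0, 1.0, 1.0])
for _ in range(1000):            # spin up onto the attractor
    a = rk4_step(lorenz63, a, dt)
b = a + 1e-8                     # small initial error

errors = []
for _ in range(nsteps):
    a = rk4_step(lorenz63, a, dt)
    b = rk4_step(lorenz63, b, dt)
    errors.append(np.linalg.norm(b - a))

# slope of log(error) in the early, exponential phase ~ largest Liapunov exponent
t = dt * np.arange(nsteps)
early = slice(0, 500)
lam = np.polyfit(t[early], np.log(errors)[early], 1)[0]
print(f"estimated largest Liapunov exponent ~ {lam:.2f} (Lorenz-63 literature ~0.9)")
```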
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS and VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements, and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
Liquid Medication Dosing Errors by Hispanic Parents: Role of Health Literacy and English Proficiency
Harris, Leslie M.; Dreyer, Benard; Mendelsohn, Alan; Bailey, Stacy C.; Sanders, Lee M.; Wolf, Michael S.; Parker, Ruth M.; Patel, Deesha A.; Kim, Kwang Youn A.; Jimenez, Jessica J.; Jacobson, Kara; Smith, Michelle; Yin, H. Shonna
2016-01-01
Objective: Hispanic parents in the US are disproportionately affected by low health literacy and limited English proficiency (LEP). We examined associations between health literacy, LEP, and liquid medication dosing errors in Hispanic parents. Methods: Cross-sectional analysis of data from a multisite randomized controlled experiment to identify best practices for the labeling/dosing of pediatric liquid medications (SAFE Rx for Kids study) at 3 urban pediatric clinics. Analyses were limited to Hispanic parents of children <8 years with health literacy and LEP data (n=1126). Parents were randomized to 5 groups that varied by the pairing of units of measurement on the label/dosing tool. Each parent measured 9 doses [3 amounts (2.5, 5, 7.5 mL) using 3 tools (2 syringes (0.2 and 0.5 mL increments) and 1 cup)] in random order. Dependent variable: dosing error, defined as >20% dose deviation. Predictor variables: health literacy (Newest Vital Sign) [limited = 0-3; adequate = 4-6] and LEP (speaks English less than "very well"). Results: 83.1% made dosing errors (mean (SD) errors/parent = 2.2 (1.9)). Parents with limited health literacy and LEP had the greatest odds of making a dosing error compared with parents with adequate health literacy who were English proficient (% trials with errors/parent = 28.8 vs. 12.9%; AOR = 2.2 [1.7-2.8]). Parents with limited health literacy who were English proficient were also more likely to make errors (% trials with errors/parent = 18.8%; AOR = 1.4 [1.1-1.9]). Conclusion: Dosing errors are common among Hispanic parents; those with both LEP and limited health literacy are at particular risk. Further study is needed to examine how the redesign of medication labels and dosing tools could reduce literacy- and language-associated disparities in dosing errors. PMID:28477800
Combinatorial neural codes from a mathematical coding theory perspective.
Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L
2013-07-01
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
Effects of learning climate and registered nurse staffing on medication errors.
Chang, Yunkyung; Mark, Barbara
2011-01-01
Despite increasing recognition of the significance of learning from errors, little is known about how learning climate contributes to error reduction. The purpose of this study was to investigate whether learning climate moderates the relationship between error-producing conditions and medication errors. A cross-sectional descriptive study was done using data from 279 nursing units in 146 randomly selected hospitals in the United States. Error-producing conditions included work environment factors (work dynamics and nurse mix), team factors (communication with physicians and nurses' expertise), personal factors (nurses' education and experience), patient factors (age, health status, and previous hospitalization), and medication-related support services. Poisson models with random effects were used with the nursing unit as the unit of analysis. A significant negative relationship was found between learning climate and medication errors. It also moderated the relationship between nurse mix and medication errors: When learning climate was negative, having more registered nurses was associated with fewer medication errors. However, no relationship was found between nurse mix and medication errors at either positive or average levels of learning climate. Learning climate did not moderate the relationship between work dynamics and medication errors. The way nurse mix affects medication errors depends on the level of learning climate. Nursing units with fewer registered nurses and frequent medication errors should examine their learning climate. Future research should be focused on the role of learning climate as related to the relationships between nurse mix and medication errors.
NASA Astrophysics Data System (ADS)
He, Lidong; Anderson, Lissa C.; Barnidge, David R.; Murray, David L.; Hendrickson, Christopher L.; Marshall, Alan G.
2017-05-01
With the rapid growth of therapeutic monoclonal antibodies (mAbs), stringent quality control is needed to ensure clinical safety and efficacy. Monoclonal antibody primary sequence and post-translational modifications (PTM) are conventionally analyzed with labor-intensive, bottom-up tandem mass spectrometry (MS/MS), which is limited by incomplete peptide sequence coverage and introduction of artifacts during the lengthy analysis procedure. Here, we describe top-down and middle-down approaches with the advantages of fast sample preparation with minimal artifacts, ultrahigh mass accuracy, and extensive residue cleavages by use of 21 tesla FT-ICR MS/MS. The ultrahigh mass accuracy yields an RMS error of 0.2-0.4 ppm for antibody light chain, heavy chain, heavy chain Fc/2, and Fd subunits. The corresponding sequence coverages are 81%, 38%, 72%, and 65% with MS/MS RMS error 4 ppm. Extension to a monoclonal antibody in human serum as a monoclonal gammopathy model yielded 53% sequence coverage from two nano-LC MS/MS runs. A blind analysis of five therapeutic monoclonal antibodies at clinically relevant concentrations in human serum resulted in correct identification of all five antibodies. Nano-LC 21 T FT-ICR MS/MS provides nonpareil mass resolution, mass accuracy, and sequence coverage for mAbs, and sets a benchmark for MS/MS analysis of multiple mAbs in serum. This is the first time that extensive cleavages for both variable and constant regions have been achieved for mAbs in a human serum background.
Repeatability and Accuracy of Exoplanet Eclipse Depths Measured with Post-cryogenic Spitzer
NASA Astrophysics Data System (ADS)
Ingalls, James G.; Krick, J. E.; Carey, S. J.; Stauffer, John R.; Lowrance, Patrick J.; Grillmair, Carl J.; Buzasi, Derek; Deming, Drake; Diamond-Lowe, Hannah; Evans, Thomas M.; Morello, G.; Stevenson, Kevin B.; Wong, Ian; Capak, Peter; Glaccum, William; Laine, Seppo; Surace, Jason; Storrie-Lombardi, Lisa
2016-08-01
We examine the repeatability, reliability, and accuracy of differential exoplanet eclipse depth measurements made using the InfraRed Array Camera (IRAC) on the Spitzer Space Telescope during the post-cryogenic mission. We have re-analyzed an existing 4.5 μm data set, consisting of 10 observations of the XO-3b system during secondary eclipse, using seven different techniques for removing correlated noise. We find that, on average, for a given technique, the eclipse depth estimate is repeatable from epoch to epoch to within 156 parts per million (ppm). Most techniques derive eclipse depths that do not vary by more than a factor 3 of the photon noise limit. All methods but one accurately assess their own errors: for these methods, the individual measurement uncertainties are comparable to the scatter in eclipse depths over the 10 epoch sample. To assess the accuracy of the techniques as well as to clarify the difference between instrumental and other sources of measurement error, we have also analyzed a simulated data set of 10 visits to XO-3b, for which the eclipse depth is known. We find that three of the methods (BLISS mapping, Pixel Level Decorrelation, and Independent Component Analysis) obtain results that are within three times the photon limit of the true eclipse depth. When averaged over the 10 epoch ensemble, 5 out of 7 techniques come within 60 ppm of the true value. Spitzer exoplanet data, if obtained following current best practices and reduced using methods such as those described here, can measure repeatable and accurate single eclipse depths, with close to photon-limited results.
The (mis)reporting of statistical results in psychology journals.
Bakker, Marjan; Wicherts, Jelte M
2011-09-01
In order to study the prevalence, nature (direction), and causes of reporting errors in psychology, we checked the consistency of reported test statistics, degrees of freedom, and p values in a random sample of high- and low-impact psychology journals. In a second study, we established the generality of reporting errors in a random sample of recent psychological articles. Our results, on the basis of 281 articles, indicate that around 18% of statistical results in the psychological literature are incorrectly reported. Inconsistencies were more common in low-impact journals than in high-impact journals. Moreover, around 15% of the articles contained at least one statistical conclusion that proved, upon recalculation, to be incorrect; that is, recalculation rendered the previously significant result insignificant, or vice versa. These errors were often in line with researchers' expectations. We classified the most common errors and contacted authors to shed light on the origins of the errors.
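The consistency check at the heart of this study can be automated. Below is a minimal sketch, assuming a two-sided t test; the function name and the reporting tolerance are illustrative choices, not the authors' procedure.

```python
# Recompute the p value implied by a reported test statistic and degrees of
# freedom, and flag a mismatch with the reported p value.
from scipy import stats

def check_t_report(t_value, df, reported_p, tol=0.0005):
    """Return (recomputed p, consistent?) for a reported two-sided t test."""
    recomputed = 2 * stats.t.sf(abs(t_value), df)
    return recomputed, abs(recomputed - reported_p) <= tol

# Example: a report of "t(28) = 2.05, p = .04" recomputes to p ~= .0499,
# so it would be flagged as inconsistent.
p, ok = check_t_report(t_value=2.05, df=28, reported_p=0.04)
print(f"recomputed p = {p:.4f}, consistent: {ok}")
```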
Random synaptic feedback weights support error backpropagation for deep learning
NASA Astrophysics Data System (ADS)
Lillicrap, Timothy P.; Cownden, Daniel; Tweed, Douglas B.; Akerman, Colin J.
2016-11-01
The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
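The mechanism is simple enough to sketch directly. The toy example below trains a one-hidden-layer network on a linear teacher, propagating the output error backward through a fixed random matrix B instead of the transpose of the forward weights; the architecture, task, and learning rate are arbitrary choices, not the paper's experiments.

```python
# Minimal sketch of the mechanism described above ("feedback alignment"):
# the backward pass uses a fixed random matrix B in place of W2.T.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 10, 32, 2
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))      # fixed random feedback weights

W1_target = rng.normal(size=(n_out, n_in))  # linear teacher to learn
lr = 0.02
for step in range(5000):
    x = rng.normal(size=(n_in, 20))
    y_target = W1_target @ x
    h = np.tanh(W1 @ x)                     # forward pass
    y = W2 @ h
    e = y - y_target                        # output error
    dW2 = e @ h.T
    dh = (B @ e) * (1 - h**2)               # random feedback, not W2.T @ e
    dW1 = dh @ x.T
    W2 -= lr * dW2 / x.shape[1]
    W1 -= lr * dW1 / x.shape[1]

# the error typically decreases despite the random feedback pathway
print(f"final batch MSE: {np.mean(e**2):.4f}")
```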
Pricing Employee Stock Options (ESOs) with Random Lattice
NASA Astrophysics Data System (ADS)
Chendra, E.; Chin, L.; Sukmana, A.
2018-04-01
Employee Stock Options (ESOs) are stock options granted by companies to their employees. Unlike standard options that can be traded by typical institutional or individual investors, employees cannot sell or transfer their ESOs to other investors. The sale restrictions may induce the ESO's holder to exercise earlier. In a much-cited paper, Hull and White propose a binomial lattice for valuing ESOs which assumes that employees will voluntarily exercise their ESOs if the stock price reaches a horizontal psychological barrier. Due to nonlinearity errors, the numerical pricing results oscillate significantly, which may lead to large pricing errors. In this paper, we use the random lattice method to price the Hull-White ESO model. This method reduces the nonlinearity error by aligning a layer of nodes of the random lattice with the psychological barrier.
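For orientation, here is a minimal sketch of the exercise-barrier model being priced, on a standard (deterministic) CRR binomial lattice. The random node placement that the paper uses to align a lattice layer with the barrier is omitted, as are features such as forfeiture and employee exit rates, and all parameters are illustrative.

```python
# Simplified sketch of the Hull-White exercise-barrier idea on a plain CRR
# binomial lattice (the paper's randomized lattice is not implemented here).
import numpy as np

def eso_binomial(S0, K, r, sigma, T, steps, barrier_mult, vest_frac=0.25):
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1 / u
    p = (np.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = np.exp(-r * dt)
    # terminal stock prices and payoffs
    S = S0 * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
    V = np.maximum(S - K, 0.0)
    for n in range(steps - 1, -1, -1):
        S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
        V = disc * (p * V[:-1] + (1 - p) * V[1:])
        vested = n * dt >= vest_frac * T
        # voluntary exercise when the stock hits the psychological barrier
        hit = vested & (S >= barrier_mult * K)
        V[hit] = S[hit] - K
    return V[0]

print(eso_binomial(S0=50, K=50, r=0.05, sigma=0.3, T=10, steps=500,
                   barrier_mult=2.0))
```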
The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-)root-MUSIC.
Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik
2014-11-11
Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. According to Monte-Carlo simulations, this novel approach yields a speedup by a factor of more than 100 in terms of CPU-time for a one-dimensional case and by a factor of 56 for a two-dimensional case.
Contamination of mercury in tongkat Ali hitam herbal preparations.
Ang, H H; Lee, K L
2006-08-01
The Drug Control Authority (DCA) of Malaysia implemented phase three registration of traditional medicines on 1 January 1992. A total of 100 products in various pharmaceutical dosage forms of a herbal preparation found in Malaysia, containing tongkat Ali hitam in either single or combined preparations, were analyzed for the presence of the toxic heavy metal mercury using atomic absorption spectrophotometry, after simple random sampling to give each sample an equal chance of being selected in an unbiased manner. Results showed that 26% of these products contained 0.53-2.35 ppm of mercury and therefore did not comply with the quality requirement for traditional medicines in Malaysia, which limits mercury to not more than 0.5 ppm. Of these 26 products, four were already registered with the DCA; the rest were not.
Responses of subjects with chronic obstructive pulmonary disease after exposures to 0. 3 ppm ozone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kehrl, H.R.; Hazucha, M.J.; Solic, J.J.
1985-05-01
The authors previously reported that the respiratory mechanics of intermittently exercising persons with chronic obstructive pulmonary disease (COPD) were unaffected by a 2-h exposure to 0.2 ppm ozone. Employing a single-blind, cross-over design protocol, 13 white men with nonreversible COPD (9 current smokers; mean FEV1/FVC, 56%) were randomly exposed on 2 consecutive days for 2 h to air and 0.3 ppm ozone. During exposures, subjects exercised (minute ventilation, 26.4 +/- 3.0 L/min) for 7.5 min every 30 min; ventilation and gas exchange measured during exercise showed no difference between exposure days. Pulmonary function tests (spirometry, body plethysmography) obtained before and after exposures were unchanged on the air day. On the ozone day the mean airway resistance and specific airway resistance showed the largest (25 and 22%) changes (p = 0.086 and 0.058, respectively). Arterial oxygen saturation (SaO2) obtained in 8 subjects during the last exercise interval showed a mean decrement of 0.95% on the ozone exposure day; this change did not attain significance (p = 0.074). Nevertheless, arterial oxygen desaturation may be a true consequence of low-level ozone exposure in this compromised patient group. As normal subjects undergoing exposures to ozone with slightly higher exercise intensities show a threshold for changes in their respiratory mechanics at approximately 0.3 ppm, these data indicate that persons with COPD are not unduly sensitive to the effects of low-level ozone exposure.
Non-parametric estimation of low-concentration benzene metabolism.
Cox, Louis A; Schnatter, A Robert; Boogaard, Peter J; Banton, Marcy; Ketelslegers, Hans B
2017-12-25
Two apparently contradictory findings in the literature on low-dose human metabolism of benzene are as follows. First, metabolism is approximately linear at low concentrations, e.g., below 10 ppm. This is consistent with decades of quantitative modeling of benzene pharmacokinetics and dose-dependent metabolism. Second, measured benzene exposure and metabolite concentrations for occupationally exposed benzene workers in Tianjin, China show that dose-specific metabolism (DSM) ratios of metabolite concentrations per ppm of benzene in air decrease steadily with benzene concentration, with the steepest decreases below 3 ppm. This has been interpreted as indicating that metabolism at low concentrations of benzene is highly nonlinear. We reexamine the data using non-parametric methods. Our main conclusion is that both findings are correct; they are not contradictory. Low-concentration metabolism can be linear, with metabolite concentrations proportional to benzene concentrations in air, and yet DSM ratios can still decrease with benzene concentrations. This is because a ratio of random variables can be negatively correlated with its own denominator even if the mean of the numerator is proportional to the denominator. Interpreting DSM ratios that decrease with air benzene concentrations as evidence of nonlinear metabolism is therefore unwarranted when plots of metabolite concentrations against benzene ppm in air show approximately straight-line relationships between them, as in the Tianjin data. Thus, an apparent contradiction that has fueled heated discussions in the recent literature can be resolved by recognizing that highly nonlinear, decreasing DSM ratios are consistent with linear metabolism.
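The statistical point is easy to reproduce numerically. The sketch below uses one mechanism consistent with the argument, multiplicative noise in the measured denominator (air concentration), while keeping mean metabolite production exactly proportional to true exposure; the distributions and parameters are arbitrary, not the Tianjin data.

```python
# Even with strictly linear metabolism, the DSM ratio (metabolite per ppm of
# measured benzene) is negatively correlated with the measured denominator.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_exposure = rng.lognormal(0.5, 1.0, n)                 # benzene "truth"
measured_air = true_exposure * rng.lognormal(0.0, 0.5, n)  # noisy measurement
metabolite = 2.0 * true_exposure * rng.lognormal(0.0, 0.3, n)  # linear metabolism

dsm = metabolite / measured_air
print("corr(measured air, DSM ratio):",
      round(np.corrcoef(measured_air, dsm)[0, 1], 3))      # clearly negative
```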
A measurement of G with a cryogenic torsion pendulum.
Newman, Riley; Bantel, Michael; Berg, Eric; Cross, William
2014-10-13
A measurement of Newton's gravitational constant G has been made with a cryogenic torsion pendulum operating below 4 K in a dynamic mode in which G is determined from the change in torsional period when a field source mass is moved between two orientations. The source mass was a pair of copper rings that produced an extremely uniform gravitational field gradient, whereas the pendulum was a thin fused silica plate, a combination that minimized the measurement's sensitivity to error in pendulum placement. The measurement was made using an as-drawn CuBe torsion fibre, a heat-treated CuBe fibre, and an as-drawn Al5056 fibre. The pendulum operated with a set of different large torsional amplitudes. The three fibres yielded high Q-values: 82 000, 120 000 and 164 000, minimizing experimental bias from fibre anelasticity. G-values found with the three fibres are, respectively: {6.67435(10), 6.67408(15), 6.67455(13)} × 10⁻¹¹ m³ kg⁻¹ s⁻², with corresponding uncertainties 14, 22 and 20 ppm. Relative to the CODATA2010 G-value, these are higher by 77, 37 and 107 ppm, respectively. The unweighted average of the three G-values, with the unweighted average of their uncertainties, is 6.67433(13) × 10⁻¹¹ m³ kg⁻¹ s⁻² (19 ppm).
Blaya, J A; Shin, S S; Yale, G; Suarez, C; Asencios, L; Contreras, C; Rodriguez, P; Kim, J; Cegielski, P; Fraser, H S F
2010-08-01
To evaluate the impact of the e-Chasqui laboratory information system in reducing reporting errors compared to the current paper system. Cluster randomized controlled trial in 76 health centers (HCs) between 2004 and 2008. Baseline data were collected every 4 months for 12 months. HCs were then randomly assigned to intervention (e-Chasqui) or control (paper). Further data were collected for the same months the following year. Comparisons were made between intervention and control HCs, and before and after the intervention. Intervention HCs had respectively 82% and 87% fewer errors in reporting results for drug susceptibility tests (2.1% vs. 11.9%, P = 0.001, OR 0.17, 95%CI 0.09-0.31) and cultures (2.0% vs. 15.1%, P < 0.001, OR 0.13, 95%CI 0.07-0.24), than control HCs. Preventing missing results through online viewing accounted for at least 72% of all errors. e-Chasqui users sent on average three electronic error reports per week to the laboratories. e-Chasqui reduced the number of missing laboratory results at point-of-care health centers. Clinical users confirmed viewing electronic results not available on paper. Reporting errors to the laboratory using e-Chasqui promoted continuous quality improvement. The e-Chasqui laboratory information system is an important part of laboratory infrastructure improvements to support multidrug-resistant tuberculosis care in Peru.
[Exploration of the concept of genetic drift in genetics teaching of undergraduates].
Wang, Chun-ming
2016-01-01
Genetic drift is one of the difficult topics in teaching genetics because its randomness and probabilistic nature can easily cause conceptual misunderstanding. The "sampling error" in its definition is often confused with "sampling" as a research method, as if sampling by researchers disturbed the results and caused the random changes in allele frequency. I analyzed and compared the definitions of genetic drift in domestic and international genetics textbooks, and found that definitions containing "sampling error" are widely adopted but correctly interpreted in only a few textbooks. Here, the history of research on genetic drift, i.e., the contributions of Wright, Fisher and Kimura, is introduced. Moreover, I describe two representative articles recently published on the teaching of genetic drift to undergraduates, which point out that misconceptions among undergraduates are inevitable during learning and which offer a preliminary solution. Combined with my own teaching practice, I suggest that the definition of genetic drift containing "sampling error" can be adopted with further interpretation: "sampling error" refers to the random sampling, from the gamete pool, of the gametes that form the next generation's alleles (equivalent to a random sample of all gametes participating in mating) and has no relationship to the artificial sampling used in general genetics studies. This article may provide some help in genetics teaching.
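The intended meaning of "sampling error" here, random sampling of gametes between generations, is exactly what a Wright-Fisher simulation implements, so a short sketch may help in teaching; the population size and generation count below are arbitrary.

```python
# Minimal Wright-Fisher sketch of the "sampling error" discussed above:
# each generation's allele pool is a binomial random sample of the previous
# generation's gametes, so allele frequency drifts with no selection at all.
import numpy as np

rng = np.random.default_rng(42)
N = 50          # diploid population size -> 2N gametes per generation
p = 0.5         # initial allele frequency
trajectory = [p]
for generation in range(200):
    # random sampling among gametes forming the next generation
    count = rng.binomial(2 * N, p)
    p = count / (2 * N)
    trajectory.append(p)
    if p in (0.0, 1.0):   # allele lost or fixed purely by chance
        break
print(f"final frequency after {len(trajectory) - 1} generations: {p}")
```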
Dalpasquale, Giovanna; Delbem, Alberto Carlos Botazzo; Pessan, Juliano Pelim; Nunes, Gabriel Pereira; Gorup, Luiz Fernando; Neto, Francisco Nunes Souza; de Camargo, Emerson Rodrigues; Danelon, Marcelle
2017-06-01
This study evaluated the effect of toothpastes containing 1100 ppm F associated with nano-sized sodium hexametaphosphate (HMPnano) on enamel demineralization in vitro using a pH-cycling model. Bovine enamel blocks (4 mm × 4 mm, n = 72) selected by initial surface hardness (SHi) were randomly allocated into six groups (n = 12), according to the test toothpastes: without fluoride or HMPnano (Placebo), 550 ppm F (550F), 1100 ppm F (1100F), 1100F plus HMPnano at concentrations of 0.25% (1100F/0.25%HMPnano), 0.5% (1100F/0.5%HMPnano), and 1.0% (1100F/1.0%HMPnano). Blocks were treated 2×/day with slurries of toothpastes and submitted to five pH cycles (demineralizing/remineralizing solutions) at 37 °C. Next, final surface hardness (SHf), integrated loss of subsurface hardness (ΔKHN), integrated mineral loss (g HAp cm⁻³), and enamel fluoride (F) concentrations were determined. Data were analyzed by ANOVA and Student-Newman-Keuls test (p < 0.001). Toothpaste with 1100F/0.5%HMPnano led to the lowest mineral loss and the highest mineral concentration among all groups, which were 26% (SHf) and 21% (ΔKHN) lower and ~58% higher (g HAp cm⁻³) when compared to 1100F (p < 0.001). Similar values of enamel F were observed for all fluoridated toothpastes (p > 0.001). The addition of 0.5%HMPnano to a 1100 ppm F toothpaste significantly enhances its effects against enamel demineralization when compared to its counterpart without HMPnano in vitro. Toothpaste containing 1100 ppm F associated with HMPnano has a higher potential to reduce demineralization compared to 1100 ppm F alone. This toothpaste could be a viable alternative for patients at high risk of caries.
Cemin, H S; Vieira, S L; Stefanello, C; Kindlein, L; Ferreira, T Z; Fireman, A K
2018-05-01
A study was conducted to evaluate growth performance, carcass and breast yields, and the occurrence and severity of white striping (WS) and wooden breast (WB) myopathies of broilers fed diets supplemented with increasing dietary levels of an organic source of selenium (Zn-L-SeMet). Broilers were fed 6 treatments with 12 replications of 26 birds in a 4-phase feeding program from 1 to 42 days. Corn-soy-based diets were supplemented with 0, 0.2, 0.4, 0.6, 0.8, and 1.0 ppm of Zn-L-SeMet. At 42 d, 6 birds were randomly selected from each pen (n = 72) and processed for carcass and breast yields. Breast fillets were scored for WS and WB at 42 days. Increasing Zn-L-SeMet led to quadratic responses (P < 0.05) for feed conversion ratio (FCR) from 1 to 7 d, body weight gain (BWG) from 22 to 35 d, and for both responses from 8 to 21 d and 36 to 42 d, as well as in the overall period of 42 days. Carcass and breast yields presented a quadratic improvement (P < 0.01) with increasing Zn-L-SeMet supplementation, and Se requirements were estimated at 0.85 and 0.86 ppm, respectively. In the overall period, estimates of Se requirements were 0.64 ppm for BWG and 0.67 ppm for FCR. White striping and WB scores presented quadratic increases (P < 0.01), and maximum scores were observed at 0.68 and 0.67 ppm, respectively. Broilers fed diets formulated without Se supplementation had a higher percentage of normal fillets compared to other Se supplementation levels (quadratic, P < 0.05). In conclusion, increasing Se supplementation to reach maximum growth performance led to higher degrees of severity of WS and WB. Selenium requirements determined in the present study were significantly higher than current commercial recommendations.
Caecal microbiota of chickens fed diets containing propolis.
Eyng, C; Murakami, A E; Pedroso, A A; Duarte, C R A; Picoli, K P
2017-06-01
The present study aimed to evaluate the effect of different levels of ethanolic extract of propolis (EEP) and raw propolis (RP) on broiler performance and on selected bacterial groups in caecal microbiota using fluorescent in situ hybridization (FISH) measured by fluorescent activated cell sorting. Two experiments were conducted, each with 120 male chicks from 1 to 21 days of age, raised in cages and distributed in a completely randomized experimental design; there were five replicates with four birds per experimental unit and six treatments for each experiment (trial 1 - EEP - 0, 1000, 2000, 3000, 4000 and 5000 ppm; trial 2 - RP - 0, 100, 200, 300, 400 and 500 ppm). Fluorescent probes were used against the bacterial groups in caecal samples collected at 21 days of age. The data were subjected to one-way ANOVA followed by Tukey's test, and regression analyses were used to assess the relationship between dietary levels of EEP or RP and performance and intestinal microbiota (p < 0.05). In trial 1, results showed that the EEP did not cause any significant (p > 0.05) modification in the performance and caecal microbiota. In trial 2, RP inclusion did not affect the performance but changed the bacterial composition (p < 0.05). Clostridiaceae, Gammaproteobacteria excluding Enterobacteriaceae, and Lactobacillus spp. showed a quadratic response (p < 0.05), with the lowest values predicted to occur at 240 ppm, 221 ppm and 213 ppm of RP, respectively. The proportion of Bacteroidaceae and Gammaproteobacteria did not differ (p > 0.05) among the experimental groups. The inclusion of ethanolic extract of propolis did not affect the performance and intestinal microbiota, whereas the supplementation of raw propolis modulated the caecal microbiota composition without any effects on chicken performance.
Douglas-Escobar, Martha; Mendes, Monique; Rossignol, Candace; Bliznyuk, Nikolay; Faraji, Ariana; Ahmad, Abdullah S; Doré, Sylvain; Weiss, Michael D
2018-01-01
Objective: The objective of this pilot study was to begin evaluating the efficacy and safety (i.e., the carboxyhemoglobin concentration produced) of carbon monoxide (CO) as a putative neuroprotective therapy in neonates. Study Design: Neonatal C57BL/6 mice were exposed to CO at a concentration of either 200 or 250 ppm for a period of 1 h. The pups were then sacrificed at 0, 10, 20, 60, 120, 180, and 240 min after exposure to either concentration of CO, and blood was collected for analysis of carboxyhemoglobin. Following the safety study, 7-day-old pups underwent a unilateral carotid ligation. After recovery, the pups were exposed to a humidified gas mixture of 8% oxygen and 92% nitrogen for 20 min in a hypoxia chamber. One hour after the hypoxia exposure, the pups were randomized to one of two groups: air (HI+A) or carbon monoxide (HI+CO). An inhaled dose of 250 ppm of CO was administered to the pups for 1 h per day for a period of 3 days. At 7 days post-injury, the pups were sacrificed and the brains analyzed for cortical and hippocampal volumes. Results: CO exposure at 200 and 250 ppm produced peak carboxyhemoglobin concentrations of 21.52 ± 1.18% and 27.55 ± 3.58%, respectively. The carboxyhemoglobin concentrations decreased rapidly, reaching control concentrations by 60 min post exposure. At 14 days of age (7 days post injury), the HI+CO group (treated with 1 h per day of 250 ppm of CO for 3 days post injury) had significant preservation of the ratio of ipsilateral to contralateral cortex (median 1.07; 25th percentile 0.97, 75th percentile 1.23; n = 10) compared to the HI+A group (p < 0.05). Conclusion: CO exposure at 250 ppm did not reach carboxyhemoglobin concentrations that would induce acute neurologic abnormalities and was effective in preserving cortical volumes following hypoxic-ischemic injury.
Analysis of air quality in Dire Dawa, Ethiopia.
Kasim, Oluwasinaayomi Faith; Woldetisadik Abshare, Muluneh; Agbola, Samuel Babatunde
2017-12-07
Ambient air quality was monitored and analyzed to develop an air quality index and assess its implications for livability and climate change in Dire Dawa, Ethiopia. Using a survey research design, 16 georeferenced locations representing different land uses were randomly selected and assessed for sulfur dioxide (SO2), nitrogen dioxide (NO2), carbon dioxide (CO2), carbon monoxide (CO), volatile organic compounds (VOCs), and meteorological parameters (temperature and relative humidity). The study found mean concentrations across all land uses of 0.37 ± 0.08 ppm for SO2, 0.13 ± 0.17 ppm for NO2, 465.65 ± 28.63 ppm for CO2, 3.35 ± 2.04 ppm for CO, and 1850.67 ± 402 ppm for VOCs. An air quality index indicated that ambient air quality was very poor for SO2 and moderate to very poor for NO2, whereas the CO rating was moderate. Significant positive correlations existed between temperature and NO2, CO2, and CO, and between humidity and VOCs. Significant relationships were also recorded between CO2 and NO2 and between CO and CO2. Poor urban planning, inadequate pollution control measures, and weak capacity to monitor air quality have implications for energy usage, air quality, and local meteorological parameters, with subsequent feedback into global climate change. The need to develop and implement programs to monitor and control emissions in Dire Dawa is therefore urgent; such programs would provide substantial health, economic, and environmental benefits to the city, and the economic effects of air quality improvement are expected to offset the expenditures for pollution control. Strategies that focus jointly on air quality and climate change also present a unique opportunity to engage different stakeholders in an inclusive and sustainable development agenda for Dire Dawa.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Sen; Li, Guangjun; Wang, Maojie
The purpose of this study was to investigate the effect of multileaf collimator (MLC) leaf position, collimator rotation angle, and accelerator gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma. To compare dosimetric differences between the simulating plans and the clinical plans with evaluation parameters, 6 patients with nasopharyngeal carcinoma were selected for simulation of systematic and random MLC leaf position errors, collimator rotation angle errors, and accelerator gantry rotation angle errors. There was a high sensitivity to dose distribution for systematic MLC leaf position errors in response to field size. When the systematic MLC position errors were 0.5, 1, and 2 mm, respectively, the maximum values of the mean dose deviation, observed in parotid glands, were 4.63%, 8.69%, and 18.32%, respectively. The dosimetric effect was comparatively small for systematic MLC shift errors. For random MLC errors up to 2 mm and collimator and gantry rotation angle errors up to 0.5°, the dosimetric effect was negligible. We suggest that quality control be regularly conducted for MLC leaves, so as to ensure that systematic MLC leaf position errors are within 0.5 mm. Because the dosimetric effect of 0.5° collimator and gantry rotation angle errors is negligible, it can be concluded that setting a proper threshold for allowed errors of collimator and gantry rotation angle may increase treatment efficacy and reduce treatment time.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.
Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportional estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for the simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.
NASA Astrophysics Data System (ADS)
Thomas, A. V.; Pasteris, J. D.; Bray, C. J.; Spooner, E. T. C.
1990-03-01
Fluid inclusions in tourmaline and quartz from the footwall contact of the Tanco granitic pegmatite, S.E. Manitoba were studied using microthermometry (MT), laser Raman spectroscopy (LRS) and gas chromatography (GC). CH4-bearing, aqueous inclusions occur in metasomatic tourmaline of the footwall amphibolite contact. The internal pressures estimated from MT are lower than those obtained from LRS (mean difference = 54 ± 19 bars). The difference is probably due to errors in the measurement of Th CH4(V) and to the presence of clathrate at Th CH4(V) into which CO2 had been preferentially partitioned. LRS estimates of pressure (125-184 bars) are believed to be more accurate. Aqueous phase salinities based on LRS estimates of pressure are higher than those derived using the data from MT: 10-20 eq. wt% NaCl. The composition of the inclusions determined by GC bulk analysis is 97.3 mol% H2O, 2.2 mol% CH4, 0.4 mol% CO2, 250 ppm C2H6, 130 ppm N2, 33 ppm C3H8, 11 ppm C2H4, and 3 ppm C3H6, plus trace amounts of C4 hydrocarbons. The composition is broadly similar to that calculated from MT (92% H2O and 8% CH4, with 7 eq. wt% NaCl dissolved in the aqueous phase and 2 mol% CO2 dissolved in the CH4 phase), as expected due to the dominance of a single generation of inclusions in the tourmaline. However, two important differences in composition are: (i) the CH4 to CO2 ratio of this fluid determined by GC is 5.33, which is significantly lower than that indicated by MT (49.0); and (ii) the H2O content estimated from MT is 92 mol% compared to 98 mol% from GC. GC analyses may have been contaminated by the presence of secondary inclusions in the tourmaline. However, the rarity of the latter suggests that they cannot be completely responsible for the discrepancy. The differences may be accounted for by the presence of clathrate during measurement of Th CH4 (critical), which would reduce CO2 relative to CH4 in the residual fluid, and by errors in visually estimating vol% H2O. The compositions of the primary inclusions in tourmaline are unlike any of those found within the pegmatite and indicate that the fluid was externally derived, probably of metamorphic origin. Inclusions in quartz of the border unit of the pegmatite are secondary and are either aqueous (18 to 30 eq. wt% CaCl2; Th total = 184 ± 14 °C) or carbonic. Tm CO2 for the carbonic inclusions ranges from -57.5 to -65.4 °C and is positively correlated with Th CO2. Analyses of X CH4 based on LRS agree within 5 mol% of those derived from MT and together indicate a range of compositions from 5 to 50 mol% CH4 in the CO2 phase. Bulk analysis by GC gives 99.0 mol% H2O, 0.6 mol% CO2, 0.4 mol% CH4, 160 ppm N2, 7 ppm C2H6, 4 ppm C3H8, and 2 ppm C2H4, with trace amounts of COS (carbonyl sulphide) and C3H6. The level of H2O in the analysis is consistent with the dominance of the aqueous inclusions in these samples, and the CH4:CO2 ratios are consistent with estimates from MT and LRS. The preservation of variable ratios of CH4:CO2 in inclusions < 50 μm apart indicates that neither H2 diffusion out of the inclusions nor reduction of fluids leaving the pegmatite were responsible for the more oxidized chemistries of the border unit inclusions relative to those in the tourmaline of the metasomatised amphibolite. The compositions of the inclusions in the quartz lie between those of the fluid trapped by the tourmaline (externally derived) and the measured composition of a CO2-bearing pegmatitic fluid, which indicates that the secondary fluids trapped in the border unit quartz were produced by late mixing.
Center of mass perception and inertial frames of reference.
Bingham, G P; Muchisky, M M
1993-11-01
Center of mass perception was investigated by varying the shape, size, and orientation of planar objects. Shape was manipulated to investigate symmetries as information. The number of reflective symmetry axes, the amount of rotational symmetry, and the presence of radial symmetry were varied. Orientation affected systematic errors. Judgments tended to undershoot the center of mass. Random errors increased with size and decreased with symmetry. Size had no effect on random errors for maximally symmetric objects, although orientation did. The spatial distributions of judgments were elliptical. Distribution axes were found to align with the principal moments of inertia. Major axes tended to align with gravity in maximally symmetric objects. A functional and physical account was given in terms of the repercussions of error. Overall, judgments were very accurate.
Seo, Hogyu David; Lee, Daeyoup
2018-05-15
Random mutagenesis of a target gene is commonly used to identify mutations that yield the desired phenotype. Of the methods that may be used to achieve random mutagenesis, error-prone PCR is a convenient and efficient strategy for generating a diverse pool of mutants (i.e., a mutant library). Error-prone PCR is the method of choice when a researcher seeks to mutate a pre-defined region, such as the coding region of a gene while leaving other genomic regions unaffected. After the mutant library is amplified by error-prone PCR, it must be cloned into a suitable plasmid. The size of the library generated by error-prone PCR is constrained by the efficiency of the cloning step. However, in the fission yeast, Schizosaccharomyces pombe, the cloning step can be replaced by the use of a highly efficient one-step fusion PCR to generate constructs for transformation. Mutants of desired phenotypes may then be selected using appropriate reporters. Here, we describe this strategy in detail, taking as an example, a reporter inserted at centromeric heterochromatin.
Wiemeyer, Stanley N.; Bunck, C.M.; Krynitsky, A.J.
1988-01-01
Osprey (Pandion haliaetus) eggs were collected in 14 states in 1970-79 and analyzed for organochlorine pesticides, polychlorinated biphenyls (PCBs), and mercury. Moderate shell thinning occurred in eggs from several areas. DDE was detected in all eggs, PCBs in 99%, DDD in 96%, dieldrin in 52%, and other compounds less frequently. Concentrations of DDT and its metabolites declined in eggs from Cape May County, New Jersey between 1970-72 and 1978-79. Eggs from New Jersey in the early 1970s contained the highest concentrations of DDE. Dieldrin concentrations declined in eggs from the Potomac River, Maryland during 1971-77. Five different contaminants were significantly negatively correlated with shell thickness; DDE was most closely correlated. Ten percent shell thinning was associated with 2.0 ppm DDE, 15% with 4.2 ppm, and 20% with 8.7 ppm in eggs collected from randomly selected nests before egg loss. Shell thickness could not be accurately predicted from DDE concentrations in eggs collected after failure to hatch, presumably because the eggs with the thinnest shells had been broken and were unavailable for sampling. DDE was also significantly negatively correlated with brood size. Other contaminants did not appear to adversely affect shell thickness or reproductive success.
Olfactory recognition memory is disrupted in young mice with chronic low-level lead exposure
Flores-Montoya, Mayra Gisel; Alvarez, Juan Manuel; Sobin, Christina
2015-01-01
Chronic developmental lead exposure yielding very low blood lead burden is an unresolved child public health problem. Few studies have attempted to model neurobehavioral changes in young animals following very low level exposure, and studies are needed to identify tests that are sensitive to the neurobehavioral changes that may occur. Mechanisms of action are not yet known however results have suggested that hippocampus/dentate gyrus may be uniquely vulnerable to early chronic low-level lead exposure. This study examined the sensitivity of a novel odor recognition task to differences in pre-adolescent C57BL/6J mice chronically exposed from birth to PND 28, to 0 ppm (control), 30 ppm (low-dose), or 330 ppm (higher-dose) lead acetate (N = 33). Blood lead levels (BLLs) determined by ICP-MS ranged from 0.02 to 20.31 µg/dL. Generalized linear mixed model analyses with litter as a random effect showed a significant interaction of BLL × sex. As BLLs increased olfactory recognition memory decreased in males. Among females, non-linear effects were observed at lower but not higher levels of lead exposure. The novel odor detection task is sensitive to effects associated with early chronic low-level lead exposure in young C57BL/6J mice. PMID:25936521
Nutritional and sensory characteristics of sari tempe formulated from import soybean (glycine max)
NASA Astrophysics Data System (ADS)
Kurniadi, Muhamad; Andriani, Martina; Sari, Intan Indriana; Angwar, Mukhamad; Nurhayati, Rifa; Khasanah, Yuniar; Wiyono, Tri
2017-01-01
Tempe is a traditional Indonesian food made by Rhizopus sp. fermentation of soybean. The aim of this research is to determine the effect of the addition of water and CMC on the nutritional and sensory characteristics of sari tempe formulated from imported soybean. The experiment used a completely randomized design (CRD) with two factors: the ratio of tempe to water (1:3, 1:5 and 1:7) and the concentration of added CMC (0.05%, 0.10% and 0.15%). Sensory data were analyzed statistically using one-way ANOVA; significant results were followed by Duncan's Multiple Range Test (DMRT) at significance level α = 0.05. The results showed the best formula of sari tempe was F6, with a 1:5 tempe-to-water ratio and 0.15% CMC concentration. Folate and vitamin B6 contents decreased during processing of sari tempe by factors of 10.3 and 2.7, respectively, whereas the vitamin B12 content increased by a factor of 1.7. The best formula of sari tempe contained 90.96% water, 0.08% ash, 0.36% fat, 23.41 ppm vitamin B6, 337.49 ppm vitamin B12 and 17.31 ppm folate.
Smooth empirical Bayes estimation of observation error variances in linear systems
NASA Technical Reports Server (NTRS)
Martz, H. F., Jr.; Lian, M. W.
1972-01-01
A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.
Within-Tunnel Variations in Pressure Data for Three Transonic Wind Tunnels
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2014-01-01
This paper compares the results of pressure measurements made on the same test article with the same test matrix in three transonic wind tunnels. A comparison is presented of the unexplained variance associated with polar replicates acquired in each tunnel. The impact of a significant component of systematic (not random) unexplained variance is reviewed, and the results of analyses of variance are presented to assess the degree of significant systematic error in these representative wind tunnel tests. Total uncertainty estimates are reported for 140 samples of pressure data, quantifying the effects of within-polar random errors and between-polar systematic bias errors.
The accuracy of the measurements in Ulugh Beg's star catalogue
NASA Astrophysics Data System (ADS)
Krisciunas, K.
1992-12-01
The star catalogue compiled by Ulugh Beg and his collaborators in Samarkand (ca. 1437) is the only catalogue primarily based on original observations between the times of Ptolemy and Tycho Brahe. Evans (1987) has given convincing evidence that Ulugh Beg's star catalogue was based on measurements made with a zodiacal armillary sphere graduated to 15′, with interpolation to 0.2 units. He and Shevchenko (1990) were primarily interested in the systematic errors in ecliptic longitude. Shevchenko's analysis of the random errors was limited to the twelve zodiacal constellations. We have analyzed all 843 ecliptic longitudes and latitudes attributed to Ulugh Beg by Knobel (1917). This required multiplying all the longitude errors by the respective values of the cosine of the celestial latitudes. We find a random error of ±17.7′ for ecliptic longitude and ±16.5′ for ecliptic latitude. On the whole, the random errors are largest near the ecliptic, decreasing towards the ecliptic poles. For all of Ulugh Beg's measurements (excluding outliers) the mean systematic error is -10.8′ ± 0.8′ for ecliptic longitude and +7.5′ ± 0.7′ for ecliptic latitude, with the errors in the sense "computed minus Ulugh Beg". For the brighter stars (those designated alpha, beta, and gamma in the respective constellations), the mean systematic errors are -11.3′ ± 1.9′ for ecliptic longitude and +9.4′ ± 1.5′ for ecliptic latitude. Within the errors this matches the systematic error in both coordinates for alpha Vir. With greater confidence we may conclude that alpha Vir was the principal reference star in the catalogues of Ulugh Beg and Ptolemy. References: Evans, J. 1987, J. Hist. Astr., 18, 155. Knobel, E. B. 1917, Ulugh Beg's Catalogue of Stars, Washington, D.C.: Carnegie Institution. Shevchenko, M. 1990, J. Hist. Astr., 21, 187.
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced.
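As background to the computational burden discussed above, the toy sketch below builds the mixed model equations for a univariate model y = Xb + Zu + e and reads the prediction error (co)variance of u off the inverse coefficient matrix. The design, variance components, and unrelated-animals assumption (A = I) are all illustrative; this is the direct computation that the paper's correction is designed to avoid at scale.

```python
# For y = Xb + Zu + e with var(u) = sigma_u^2 * A and var(e) = sigma_e^2 * I,
# the prediction error (co)variance of u is sigma_e^2 times the u-block of
# the inverse mixed-model-equation coefficient matrix. All values illustrative.
import numpy as np

n_obs, n_groups, n_animals = 8, 2, 4
X = np.kron(np.eye(n_groups), np.ones((n_obs // n_groups, 1)))    # group effects
Z = np.kron(np.ones((n_obs // n_animals, 1)), np.eye(n_animals))  # animal incidence
A_inv = np.eye(n_animals)            # unrelated animals, for simplicity
sigma_e2, sigma_u2 = 1.0, 0.5
lam = sigma_e2 / sigma_u2

# mixed model equations coefficient matrix
C = np.block([[X.T @ X,            X.T @ Z],
              [Z.T @ X, Z.T @ Z + lam * A_inv]])
C_inv = np.linalg.inv(C)
pev_u = sigma_e2 * C_inv[n_groups:, n_groups:]   # PEV block for u
print(np.round(pev_u, 3))
```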
Recycling and Mantle Stirring Determined by 142Nd/144Nd Isotopic Ratios
NASA Astrophysics Data System (ADS)
Jacobsen, S. B.; Ranen, M. C.
2004-12-01
It is now well established that 146Sm was live in the early solar system with an initial uniform 146Sm/144Sm ratio of ~0.008. Harper and Jacobsen (1992) discovered that a sample from Isua (~3.8 Ga old) had a positive 142Nd/144Nd anomaly of 33 ppm when compared to normal terrestrial and chondritic Nd. Furthermore, Jacobsen and Harper (1996) reported results from other Isua as well as Acasta (~4 Ga old) samples. Three other Isua samples had a possible small range (about -15 to +15), while two Acasta samples had no anomalies (normal to within 5 ppm). The presence of 142Nd anomalies at Isua has recently been confirmed by two other groups (Boyet et al. 2003; Caro et al. 2003). The available data demonstrate both the existence of early depleted mantle and that the early mantle was isotopically heterogeneous. As discussed by Jacobsen and Harper (1996), the recycling rate can be determined by tracing the decay of the average 142Nd/144Nd value of the depleted mantle. In addition, by using the 142Nd/144Nd heterogeneity in the depleted mantle through time we can determine the stirring rate of the mantle (Kellogg, Jacobsen and O'Connell, 2002) as a function of time. For this project our goal is to obtain a resolution in 142Nd/144Nd measurements of ~1 ppm. We have thus compared results obtained for the Nd isotope composition and 142Nd enriched standards for three different TIMS instruments: the Finnigan MAT 262 at Harvard, and the Isoprobe-T and Finnigan TRITON mass spectrometers in GV Instruments' and Thermo Electron's demo laboratories in Manchester and Bremen, respectively. The Finnigan TRITON was designed in response to a request from the senior author for such an instrument. The results obtained so far demonstrate that all three instruments yield the same 142Nd/144Nd, 143Nd/144Nd and 145Nd/144Nd isotopic ratios to within a few ppm, while 148Nd/144Nd and 150Nd/144Nd ratios agree to within 10-20 ppm, when all ratios are normalized to 146Nd/144Nd using the exponential law. Given the excellent agreement among results from the three instruments, we conclude that reports claiming that such measurements cannot be reproduced at the 5 ppm level, whether on our 15-year-old MAT 262 or on the newer instruments, must be in error. Acknowledgements: We thank GV Instruments and Thermo Electron Corporation for making measurements of our 142Nd enriched standards.
Random access to mobile networks with advanced error correction
NASA Technical Reports Server (NTRS)
Dippold, Michael
1990-01-01
A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen, and reservation is implicit by the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that high channel utilization and superior throughput can be achieved with this scheme, which also has quite low implementation complexity. For the example of an interleaved Rayleigh channel and soft decision, utilization and mean delay are calculated. A utilization of 40 percent may be achieved for a frame with the number of slots equal to half the station number under high traffic load. The effects of feedback channel errors and some countermeasures are discussed.
Predicting the random drift of MEMS gyroscope based on K-means clustering and OLS RBF Neural Network
NASA Astrophysics Data System (ADS)
Wang, Zhen-yu; Zhang, Li-jie
2017-10-01
Measurement error of a sensor can be effectively compensated by prediction. Aiming at the large random drift error of MEMS (Micro-Electro-Mechanical System) gyroscopes, an improved learning algorithm for Radial Basis Function (RBF) Neural Networks (NN) based on K-means clustering and Orthogonal Least Squares (OLS) is proposed in this paper. The algorithm first selects typical samples as the initial cluster centers of the RBF NN, then refines candidate centers with the K-means algorithm, and finally optimizes the candidate centers with the OLS algorithm, which makes the network structure simpler and the prediction performance better. Experimental results show that the proposed K-means clustering OLS learning algorithm can predict the random drift of a MEMS gyroscope effectively, with a prediction error of 9.8019e-7 °/s and a prediction time of 2.4169e-6 s.
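A stripped-down version of this kind of predictor can be sketched as follows: K-means chooses the RBF centers and the output weights are fit by linear least squares (standing in for the paper's OLS subset-selection step). The synthetic drift signal, window length, number of centers, and kernel width are all arbitrary choices.

```python
# RBF network drift predictor: K-means centers + least-squares output weights.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
t = np.linspace(0, 10, 500)
drift = np.cumsum(rng.normal(0, 0.01, t.size)) + 0.05 * np.sin(t)  # toy gyro drift

# one-step-ahead prediction from a window of past samples
window = 10
Xwin = np.lib.stride_tricks.sliding_window_view(drift[:-1], window)
y = drift[window:]

centers = KMeans(n_clusters=15, n_init=10, random_state=0).fit(Xwin).cluster_centers_
width = np.median(np.linalg.norm(Xwin[:, None] - centers[None], axis=2))
Phi = np.exp(-np.linalg.norm(Xwin[:, None] - centers[None], axis=2)**2
             / (2 * width**2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # output weights
pred = Phi @ w
print(f"RMS prediction error: {np.sqrt(np.mean((pred - y)**2)):.5f}")
```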
Quantifying Errors in TRMM-Based Multi-Sensor QPE Products Over Land in Preparation for GPM
NASA Technical Reports Server (NTRS)
Peters-Lidard, Christa D.; Tian, Yudong
2011-01-01
Determining uncertainties in satellite-based multi-sensor quantitative precipitation estimates over land of fundamental importance to both data producers and hydro climatological applications. ,Evaluating TRMM-era products also lays the groundwork and sets the direction for algorithm and applications development for future missions including GPM. QPE uncertainties result mostly from the interplay of systematic errors and random errors. In this work, we will synthesize our recent results quantifying the error characteristics of satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMap). For systematic errors, we devised an error decomposition scheme to separate errors in precipitation estimates into three independent components, hit biases, missed precipitation and false precipitation. This decomposition scheme reveals hydroclimatologically-relevant error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. For the random errors, we calculated the measurement spread from the ensemble of these six quasi-independent products, and thus produced a global map of measurement uncertainties. The map yields a global view of the error characteristics and their regional and seasonal variations, reveals many undocumented error features over areas with no validation data available, and provides better guidance to global assimilation of satellite-based precipitation data. Insights gained from these results and how they could help with GPM will be highlighted.
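The decomposition has a simple closed form once rain/no-rain categories are fixed. The sketch below generates synthetic reference and estimate fields and verifies that hit bias, missed precipitation, and false precipitation sum exactly to the total error; the fields and the rain/no-rain definition (exactly zero versus positive) are illustrative, not the products' data.

```python
# Hit bias + missed precipitation + false precipitation == total error.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
raining = rng.random(n) < 0.3
ref = np.where(raining, rng.gamma(2.0, 2.0, n), 0.0)     # "truth", mm/h

est = ref * 1.2                                  # systematic overestimate on hits
est[rng.random(n) < 0.05] = 0.0                  # some rain missed entirely
fa = (ref == 0) & (rng.random(n) < 0.03)         # some false precipitation
est[fa] = rng.gamma(1.0, 1.0, fa.sum())

hit = (ref > 0) & (est > 0)
miss = (ref > 0) & (est == 0)
false = (ref == 0) & (est > 0)

hit_bias = np.sum(est[hit] - ref[hit])
missed = -np.sum(ref[miss])                      # rain present but not estimated
false_p = np.sum(est[false])                     # rain estimated but not present
print(hit_bias + missed + false_p, np.sum(est - ref))   # identical by construction
```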
NASA Technical Reports Server (NTRS)
Crozier, Stewart N.
1990-01-01
Random access signaling, which allows slotted packets to spill over into adjacent slots, is investigated. It is shown that sloppy-slotted ALOHA can always provide higher throughput than conventional slotted ALOHA. The degree of improvement depends on the timing error distribution. Throughput performance is presented for Gaussian timing error distributions, modified to include timing error corrections. A general channel capacity lower bound, independent of the specific timing error distribution, is also presented.
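As a baseline for the comparison made above, the sketch below simulates conventional slotted ALOHA, where a slot succeeds only when exactly one packet arrives, and checks it against the classical G·exp(-G) throughput curve. The spill-over behavior of sloppy-slotted ALOHA and the timing error corrections are not modeled here.

```python
# Conventional slotted ALOHA throughput versus the G*exp(-G) theory curve.
import numpy as np

rng = np.random.default_rng(0)
n_slots = 200_000
for G in (0.25, 0.5, 1.0, 1.5):
    arrivals = rng.poisson(G, n_slots)       # offered load per slot
    throughput = np.mean(arrivals == 1)      # success iff exactly one packet
    print(f"G={G:4.2f}  simulated={throughput:.3f}  theory={G*np.exp(-G):.3f}")
```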
Sevillano, David; Mínguez, Cristina; Sánchez, Alicia; Sánchez-Reyes, Alberto
2016-01-01
To obtain specific margin recipes that take into account the dosimetric characteristics of the treatment plans used in a single institution. We obtained dose-population histograms (DPHs) of 20 helical tomotherapy treatment plans for prostate cancer by simulating the effects of different systematic errors (Σ) and random errors (σ) on these plans. We obtained dosimetric margins and margin reductions due to random errors (random margins) by fitting the theoretical results of coverages for Gaussian distributions with coverages of the planned D99% obtained from the DPHs. The dosimetric margins obtained for helical tomotherapy prostate treatments were 3.3 mm, 3 mm, and 1 mm in the lateral (Lat), anterior-posterior (AP), and superior-inferior (SI) directions. Random margins showed parabolic dependencies, yielding expressions of 0.16σ², 0.13σ², and 0.15σ² for the Lat, AP, and SI directions, respectively. When focusing on values up to σ = 5 mm, random margins could be fitted considering Gaussian penumbras with standard deviations (σp) equal to 4.5 mm Lat, 6 mm AP, and 5.5 mm SI. Despite complex dose distributions in helical tomotherapy treatment plans, we were able to simplify the behaviour of our plans against treatment errors to single values of dosimetric and random margins for each direction. These margins allowed us to develop specific margin recipes for the respective treatment technique. The method is general and could be used for any treatment technique provided that DPHs can be obtained.
Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers
NASA Technical Reports Server (NTRS)
Ha, Eunho; North, Gerald R.
1995-01-01
Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
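The mechanism is easy to reproduce numerically. Below is a toy sketch (an assumed saturating Tb-R relation and a mixed-lognormal field; all parameter values are invented for illustration): because Tb(R) is concave, averaging Tb over a heterogeneous footprint and then inverting underestimates the true mean rain rate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy nonlinear brightness-temperature model (assumed, not from the paper):
T0, dT, R0 = 150.0, 130.0, 8.0                       # K, K, mm/h
tb = lambda R: T0 + dT * (1.0 - np.exp(-R / R0))     # concave, saturating
tb_inv = lambda T: -R0 * np.log(1.0 - (T - T0) / dT)

def beam_filling_bias(mean_rain=4.0, cv=1.0, p_rain=0.4, n=100_000):
    """Mixed-lognormal footprint: rain occurs with probability p_rain;
    where it rains, R is lognormal with the given unconditional mean and
    coefficient of variation cv."""
    mu_ln = np.log(mean_rain / p_rain) - 0.5 * np.log(1 + cv**2)
    s_ln = np.sqrt(np.log(1 + cv**2))
    R = np.where(rng.random(n) < p_rain, rng.lognormal(mu_ln, s_ln, n), 0.0)
    r_true = R.mean()
    r_retrieved = tb_inv(tb(R).mean())   # invert the footprint-mean Tb
    return r_true, r_retrieved

rt, rr = beam_filling_bias()
print(f"true mean rain {rt:.2f} mm/h, retrieved {rr:.2f} mm/h")
```

With these invented numbers the retrieval comes back noticeably below the true footprint mean, and the underestimate grows with the skewness of the field, consistent with the finding that simple corrections degrade for highly skewed mixed-lognormal fields.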
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballhausen, Hendrik, E-mail: hendrik.ballhausen@med.uni-muenchen.de; Hieber, Sheila; Li, Minglun
2014-08-15
Purpose: To identify the relevant technical sources of error of a system based on three-dimensional ultrasound (3D US) for patient positioning in external beam radiotherapy. To quantify these sources of error in a controlled laboratory setting. To estimate the resulting end-to-end geometric precision of the intramodality protocol. Methods: Two identical free-hand 3D US systems at both the planning-CT and the treatment room were calibrated to the laboratory frame of reference. Every step of the calibration chain was repeated multiple times to estimate its contribution to overall systematic and random error. Optimal margins were computed given the identified and quantified systematic and random errors. Results: In descending order of magnitude, the identified and quantified sources of error were: alignment of calibration phantom to laser marks 0.78 mm, alignment of lasers in treatment vs planning room 0.51 mm, calibration and tracking of 3D US probe 0.49 mm, alignment of stereoscopic infrared camera to calibration phantom 0.03 mm. Under ideal laboratory conditions, these errors are expected to limit ultrasound-based positioning to an accuracy of 1.05 mm radially. Conclusions: The investigated 3D ultrasound system achieves an intramodal accuracy of about 1 mm radially in a controlled laboratory setting. The identified systematic and random errors require an optimal clinical target volume to planning target volume margin of about 3 mm. These inherent technical limitations do not prevent clinical use, including hypofractionation or stereotactic body radiation therapy.
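The reported radial accuracy follows from adding the independent error sources in quadrature; a quick check with the values from the abstract:

```python
import math

# Independent 1-sigma error sources from the laboratory study (mm):
sources = {
    "phantom-to-laser alignment": 0.78,
    "treatment vs planning lasers": 0.51,
    "3D US probe calibration/tracking": 0.49,
    "IR camera to phantom": 0.03,
}
radial = math.sqrt(sum(v**2 for v in sources.values()))
print(f"combined radial error: {radial:.2f} mm")   # ~1.05 mm, as reported
```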
NASA Astrophysics Data System (ADS)
Osibanjo, Olabosipo O.
The objectives of this work are to calculate surface fluxes for rolling terrain using observational data collected during one week in September 2014 from a monitoring site in Echo, Oregon, and to investigate the log law in the ABL. The site is located in the Columbia Basin with rolling terrain, irrigated farmland, and over 100 wind turbines. The 10 m tower was placed in a small valley depression to isolate nighttime temperature inversions. This thesis presents observations of momentum, sensible heat, moisture, and CO2 fluxes from data collected at a sampling frequency of 10 Hz at four heights. Results show a strong correlation between temperature inversions and CO2 flux. A logarithmic layer could not be identified, as the estimated von Kármán constant (~0.62) differs substantially from the accepted value of 0.41. The impact of the irrigated farmland near the measurement site was observed in the latent heat flux, where the advection of moisture was evident in the tower moisture gradient. A strong relationship was also observed between fluxes of sensible heat, latent heat, CO2, and atmospheric stability. The average nighttime CO2 concentration observed was ~407 ppm, and daytime ~388 ppm, compared to the 2013 global average CO2 concentration of 395 ppm. The maximum CO2 concentration (~485 ppm) was observed on the strongest temperature inversion night. There are a few uncertainties in the measurements. The manufacturer of the eddy covariance instruments (EC 150) quotes an uncertainty of +/- 0.1°C for temperature between 0°C and 40°C. Error bars were generated on the estimated surface sensible heat flux using the standard deviation and mean values. Under the most stable atmospheric conditions, uncertainty (assumed to be the variability in the flux estimates) was close to the minimum (~+/- 5 W m^-2).
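For context, the core of an eddy-covariance flux estimate is a covariance of high-rate fluctuations; a minimal sketch with assumed constants and synthetic data (simple Reynolds averaging only; real processing adds detrending, coordinate rotation, and density corrections):

```python
import numpy as np

def sensible_heat_flux(w, T, rho=1.2, cp=1005.0):
    """Eddy-covariance sensible heat flux H = rho*cp*cov(w', T') in W m^-2.
    w: vertical wind (m/s); T: sonic temperature (K), both at e.g. 10 Hz.
    Fluctuations are taken about the block mean."""
    return rho * cp * np.mean((w - w.mean()) * (T - T.mean()))

# Synthetic 30-min block at 10 Hz with a weak positive w'-T' correlation:
rng = np.random.default_rng(0)
n = 30 * 60 * 10
w = 0.3 * rng.standard_normal(n)
T = 293.0 + 0.5 * rng.standard_normal(n) + 0.2 * (w / 0.3)
print(f"H = {sensible_heat_flux(w, T):.1f} W m^-2")
```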
Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
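This ensemble view is easy to reproduce with scrambled quasi-random sequences; a sketch (assuming SciPy's stats.qmc module; the integrand is an arbitrary smooth test function with exact integral 1, so each run's deviation from 1 is the integration error):

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(2)
f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)  # exact integral = 1
d, n, trials = 5, 1024, 200

mc_err, qmc_err = [], []
for t in range(trials):
    x_mc = rng.random((n, d))                     # pseudo-random points
    x_q = qmc.Sobol(d, scramble=True, seed=t).random(n)  # scrambled Sobol set
    mc_err.append(f(x_mc).mean() - 1.0)
    qmc_err.append(f(x_q).mean() - 1.0)

print("MC  error std :", np.std(mc_err))
print("QMC error std :", np.std(qmc_err))        # typically much smaller
```

Scrambling provides exactly the kind of ensemble of quasi-random point sets over which an error distribution can be tabulated and compared with the normal-sampling central limit.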
77 FR 27130 - Ametoctradin; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-09
...; grape at 5.0 ppm; grape, raisin at 8 ppm; hop, dried cones at 9 ppm; onion, bulb, subgroup 3-07A at 1.2 ppm; onion, green, subgroup 3-07B at 16 ppm; vegetable, cucurbit, group 9 at 4.5 ppm; vegetable... 50 ppm; grape at 4.0 ppm; grape, raisin at 8.0 ppm; hop, dried cones at 10 ppm; onion, bulb, subgroup...
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
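A skeleton of such an experiment (not the authors' setup), using the SALib package with a deliberately trivial stand-in for the snow model; the variable names, bounds, and response function are all hypothetical:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy stand-in for a snow model: SWE responds to bias in precipitation,
# air temperature, and shortwave radiation forcings.
problem = {
    "num_vars": 3,
    "names": ["P_bias", "T_bias", "SW_bias"],
    "bounds": [[-0.5, 0.5], [-2.0, 2.0], [-50.0, 50.0]],
}

def swe_response(x):
    p, t, sw = x
    return 300.0 * (1.0 + p) - 25.0 * t - 0.4 * sw   # hypothetical response

X = saltelli.sample(problem, 1024)                   # Saltelli design
Y = np.apply_along_axis(swe_response, 1, X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["ST"])))         # total-order indices
```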
1998-09-01
Chloroprene is used almost exclusively in the manufacture of neoprene (polychloroprene). Chloroprene was chosen for study because it is a high-volume production chemical with limited information on its carcinogenic potential and because it is the 2-chloro analogue of 1,3-butadiene, a potent, multi-species, multi-organ carcinogen. Male and female F344/N rats and B6C3F1 mice were exposed to chloroprene (greater than 96% pure) by inhalation for 16 days, 13 weeks, or 2 years. Genetic toxicology studies were conducted in Salmonella typhimurium, Drosophila melanogaster, and B6C3F1 mice (bone marrow cells and peripheral blood erythrocytes). 16-Day Study in Rats: Groups of 10 male and 10 female F344/N rats were exposed to 0, 32, 80, 200, or 500 ppm chloroprene by inhalation, 6 hours per day, 5 days per week, for 16 days. Three 500 ppm males died on day 2 or 3 of the study. Mean body weight gains of 200 ppm males and females and 500 ppm females were significantly less than those of the chamber control groups. On the first day of exposure, rats exposed to 500 ppm were hypoactive and unsteady and had rapid shallow breathing. These effects were also observed to some degree in animals exposed to 200 ppm. After the second day of exposure, the effects in these groups worsened, and hemorrhage from the nose was observed. A normocytic, normochromic, responsive anemia; thrombocytopenia; and increases in serum activities of alanine aminotransferase, glutamate dehydrogenase, and sorbitol dehydrogenase occurred on day 4 in 200 ppm females and 500 ppm males. Kidney weights of 80 and 500 ppm females were significantly greater than those of the chamber control group, as were the liver weights of 200 and 500 ppm females. The incidences of minimal to mild olfactory epithelial degeneration of the nose in all exposed groups of males and females were significantly greater than those in the chamber control groups. The incidence of squamous metaplasia of the respiratory epithelium was significantly increased in 500 ppm males. The incidences of centrilobular to random hepatocellular necrosis in 500 ppm males and 200 ppm females were significantly greater than those in the chamber control groups. 16-Day Study in Mice: Groups of 10 male and 10 female B6C3F1 mice were exposed to 0, 12, 32, 80, or 200 ppm chloroprene by inhalation, 6 hours per day, 5 days per week, for 16 days. All males and females exposed to 200 ppm died on day 2 or day 3 of the study. Mean body weight gains of males exposed to 32 or 80 ppm were significantly less than that of the chamber control group. Mice exposed to 200 ppm exhibited narcosis during exposure and were hypoactive with reduced body tone after the first day of exposure. In general, hematology and clinical chemistry parameters measured for exposed males and females were similar to those of the chamber control groups. Thymus weights of 80 ppm males and females were significantly less than those of the chamber control groups. Liver weights of 80 ppm females were significantly greater than those of the chamber control groups. Increased incidences of multifocal random hepatocellular necrosis occurred in males and females exposed to 200 ppm. Hypertrophy of the myocardium, foci of hemorrhage, and mucosal erosion were observed in three males and three females exposed to 200 ppm. Squamous epithelial hyperplasia of the forestomach was observed in two males and two females exposed to 80 ppm. 
Thymic necrosis, characterized by karyorrhexis of thymic lymphocytes, was observed in all males and females in the 200 ppm groups. 13-Week Study in Rats: Groups of 10 male and 10 female F344/N rats were exposed to chloroprene at concentrations of 0, 5, 12, 32, 80, or 200 ppm by inhalation, 6 hours per day, 5 days per week, for 13 weeks. One male exposed to 200 ppm died during the study. The final mean body weights and body weight gains of all exposed groups of males and females were similar to those of the chamber control groups. Clinical findings in 200 ppm males included red or clear discharge around the nose and eye region. At week 13, a normocytic, normochromic, and non-responsive anemia occurred in 200 ppm males and females. A thrombocytopenia occurred in 200 ppm males and females on day 2 and in 80 and 200 ppm females on day 22. However, at week 13, platelet counts rebounded and were minimally increased in 200 ppm males and females. On day 2, a minimal to mild increase in activated partial thromboplastin time and prothrombin time occurred in 200 ppm males and females. The 200 ppm males and females also had increased activities of serum alanine aminotransferase, glutamate dehydrogenase, and sorbitol dehydrogenase on day 22; these increases were transient, and by week 13 the serum activities of these enzymes were similar to those of the chamber controls. An alkaline phosphatase enzymeuria occurred in 200 ppm females on day 22; at week 13, an alkaline phosphatase enzymeuria occurred in 32, 80, and 200 ppm males and 200 ppm females. At week 13, a proteinuria occurred in 200 ppm males. Liver nonprotein sulfhydryl concentrations in male rats immediately following 1 day or 12 weeks of exposure to 200 ppm and in females exposed to 200 ppm for 12 weeks were significantly less than those of the chamber control groups. Kidney weights of 200 ppm males and females and 80 ppm females were significantly greater than those of the chamber control groups. Sperm motility of 200 ppm males was significantly less than that of the controls. In neurobehavioral assessments, horizontal activity was increased in male rats exposed to 32 ppm or greater and total activity was increased in 32 and 200 ppm males. Increased incidences of minimal to mild olfactory epithelial degeneration and respiratory metaplasia occurred in males and females exposed to 80 or 200 ppm. The incidence of olfactory epithelial degeneration in 32 ppm females was also significantly greater than that in the chamber control group. The incidence of hepatocellular necrosis in 200 ppm females was significantly greater than that in the chamber control group. Scattered chronic inflammation also occurred in the liver of male and female rats in the 200 ppm groups; the incidence in 200 ppm females was significantly greater than that in the chamber control group. The incidences of hemosiderin pigmentation were significantly increased in males and females exposed to 200 ppm. 13-Week Study in Mice: Groups of 10 male and 10 female B6C3F1 mice were exposed to chloroprene at concentrations of 0, 5, 12, 32, or 80 ppm by inhalation, 6 hours per day, 5 days per week, for 13 weeks. All male and female mice survived to the end of the study. The final mean body weight and body weight gain of males exposed to 80 ppm were significantly less than those of the chamber control group. Hematocrit concentrations of females exposed to 32 or 80 ppm and erythrocyte counts of 80 ppm females were significantly less than those of the chamber control group. 
Platelet counts of 32 and 80 ppm females were also greater than that of the chamber control group. Increased incidences of squamous epithelial hyperplasia of the forestomach occurred in males and females exposed to 80 ppm. 2-Year Study in Rats: Groups of 50 male and 50 female F344/N rats were exposed to chloroprene at concentrations of 0, 12.8, 32, or 80 ppm by inhalation, 6 hours per day, 5 days per week, for 2 years. Survival, Body Weights, and Clinical Findings: Survival of males exposed to 32 or 80 ppm was significantly less than that of the chamber control group. Mean body weights of males exposed to 80 ppm were less than those of the chamber controls after week 93. Masses of the torso were observed during the study in exposed female groups, and these clinical findings correlated with mammary gland fibroadenomas observed at necropsy. Pathology Findings: The incidences of squamous cell papilloma and squamous cell papilloma or squamous cell carcinoma (combined) of the oral cavity in male rats exposed to 32 ppm and male and female rats exposed to 80 ppm were significantly greater than those in the chamber controls and exceeded the historical control ranges. The incidences of thyroid gland follicular cell adenoma or carcinoma (combined) in males exposed to 32 or 80 ppm were significantly greater than that in the chamber control group and exceeded the historical control range. Although the incidences of follicular cell adenoma and follicular cell adenoma or carcinoma (combined) in 80 ppm females were not significantly greater than those in the chamber controls, they did exceed the historical control range for these neoplasms. The incidences of alveolar epithelial hyperplasia of the lung were significantly greater in all exposed groups of males and females than in the chamber control groups. The incidences of alveolar/bronchiolar carcinoma and alveolar/bronchiolar adenoma or carcinoma (combined) in 80 ppm males were slightly greater than those in the chamber control group. Although these neoplasm incidences were not significant, they exceeded the historical control range. The incidence of alveolar/bronchiolar adenoma, although not significant, was greater in 80 ppm females than in the chamber control group. The incidences of multiple fibroadenoma of the mammary gland in all exposed groups of females were greater than that in the chamber control group. The incidences of fibroadenoma (including multiple fibroadenoma) in 32 and 80 ppm females were significantly greater than that in the chamber controls. The incidences of fibroadenoma in the chamber control group and in all exposed groups of females exceeded the historical control range. The severity of nephropathy in exposed groups of male and female rats was slightly greater than in the chamber controls. Renal tubule adenoma and hyperplasia were observed in males and females. Additional kidney sections from male and female control and exposed rats were examined to provide a clearer indication of the potential effects of chloroprene on the kidney. The combined single- and step-section incidences of renal tubule hyperplasia in 32 and 80 ppm males and 80 ppm females and the incidences of adenoma and adenoma or carcinoma (combined) in all exposed males were significantly greater than those in the chamber controls. A slight increase in the incidence of transitional epithelium carcinoma of the urinary bladder was observed in 80 ppm females. 
In addition, one 32 ppm male had a transitional epithelium carcinoma and one 80 ppm male had a transitional cell papilloma. These findings are noteworthy because no urinary bladder neoplasms have been observed in chamber control male or female F344/N rats. In the nose, the incidences of atrophy, basal cell hyperplasia, metaplasia, and necrosis of the olfactory epithelium in 32 and 80 ppm males and females and of atrophy and necrosis in 12.8 ppm males were significantly greater than those in the chamber control groups. The incidences of chronic inflammation were significantly increased in males exposed to 12.8 or 32 ppm and in males and females exposed to 80 ppm. The incidences of fibrosis and adenomatous hyperplasia in 80 ppm males and females were significantly greater than those in the chamber controls. Generally, lesions in the nasal cavity were mild to moderate in severity. 2-Year Study in Mice: Groups of 50 male and 50 female B6C3F1 mice were exposed to chloroprene at concentrations of 0, 12.8, 32, or 80 ppm by inhalation, 6 hours per day, 5 days per week, for 2 years. Survival, Body Weights, and Clinical Findings: Survival of males exposed to 32 or 80 ppm and of all exposed female groups was significantly less than that of the chamber controls. The mean body weights of 80 ppm females were significantly less than those of the chamber control group after week 75. Clinical findings included masses of the head, which correlated with harderian gland adenoma and/or carcinoma in 32 ppm males and 80 ppm males and females. Dorsal and lateral torso masses of female mice correlated with mammary gland neoplasms in 32 and 80 ppm females and subcutaneous sarcomas in 12.8, 32, and 80 ppm females. Pathology Findings: The incidences of alveolar/bronchiolar neoplasms in the lungs of all groups of exposed males and females were significantly greater than those in the chamber control groups and generally exceeded the historical control ranges. The incidences of multiple alveolar/bronchiolar adenoma and alveolar/bronchiolar carcinoma were increased in all exposed groups of males and females. The incidences of bronchiolar hyperplasia in all exposed groups of males and females were significantly greater than those in the chamber control groups. Male mice had a pattern of nonneoplastic liver lesions along with silver-staining helical organisms within the liver consistent with infection with Helicobacter hepaticus. An organism compatible with H. hepaticus was confirmed with a polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP)-based assay. In NTP studies with H. hepaticus-associated hepatitis, increased incidences of hemangiosarcoma have been seen in the livers of male mice. Therefore, hemangiosarcomas of the liver were excluded from the analyses of circulatory (endothelial) neoplasms in males in this study. Even with this exclusion, the combined occurrence of hemangioma or hemangiosarcoma at other sites was significantly increased at all chloroprene exposure concentrations in males and in 32 ppm females. Incidences of neoplasms at other sites in this study of chloroprene were not considered to have been significantly impacted by the infection with H. hepaticus or its associated hepatitis. The incidences of harderian gland adenoma and harderian gland adenoma or carcinoma (combined) in males exposed to 32 or 80 ppm and females exposed to 80 ppm were significantly greater than those in the chamber controls. 
The incidences of harderian gland adenoma or carcinoma (combined) in 32 ppm males and 80 ppm males and females exceeded the historical control ranges. The incidences of mammary gland carcinoma and adenoacanthoma or carcinoma (combined) in 80 ppm females were significantly greater than those in the chamber control group. The incidences of mammary gland carcinoma and of adenoacanthoma in 32 and 80 ppm females exceeded the historical control ranges. Multiple mammary gland carcinomas occurred in exposed females. The incidences of hepatocellular carcinoma in all exposed female groups and hepatocellular adenoma or carcinoma (combined) in 32 and 80 ppm females were significantly greater than those in the chamber controls; in the 80 ppm group, the incidence exceeded the historical control ranges for carcinoma and adenoma or carcinoma (combined). The incidence of eosinophilic foci in 80 ppm females was also significantly greater than that in the chamber controls. The incidences of sarcoma of the skin were significantly greater in all exposed groups of females than in the chamber controls. The incidences of sarcoma of the mesentery were also increased in all exposed groups of females. The incidence of squamous cell papilloma in 80 ppm females was greater than that in the chamber controls; the difference was not significant, but the incidence exceeded the historical control range. Males also showed a positive trend in the incidence of squamous cell papilloma of the forestomach. In males and females exposed to 80 ppm, the incidences of hyperplasia of the forestomach epithelium were significantly greater than those in the chamber controls. Carcinomas of the Zymbal's gland were seen in three 80 ppm females, and two carcinomas metastasized to the lung. Zymbal's gland carcinomas have not been reported in control female mice in the NTP historical database. The incidence of renal tubule adenoma in 80 ppm males was greater than that in the chamber controls. Though this difference was not significant, the incidence of this rare neoplasm exceeded the historical control range. The incidences of renal tubule hyperplasia in males exposed to 32 or 80 ppm were significantly greater than that in the chamber controls. Additional sections of kidney were examined from control and exposed males to verify these findings. The combined single- and step-section incidence of renal tubule adenoma in 80 ppm males and the combined incidences of renal tubule hyperplasia in all groups of exposed male mice were greater than those in the chamber controls. The incidences of olfactory epithelial atrophy, adenomatous hyperplasia, and metaplasia in 80 ppm males and females were significantly greater than those in the chamber controls. The incidences of hematopoietic proliferation of the spleen in 32 and 80 ppm males and in all groups of exposed females were significantly greater than those in the chamber controls. Genetic Toxicology: Chloroprene was not mutagenic in any of the tests performed by the NTP. No induction of mutations was noted in any of four strains of S. typhimurium in the presence or the absence of S9 metabolic activation enzymes, and no induction of sex-linked recessive lethal mutations was observed in germ cells of male D. melanogaster treated with chloroprene via feeding or injection. In male mice exposed to chloroprene by inhalation for 12 days over a 16-day period, no induction of chromosomal aberrations, sister chromatid exchanges, or micronucleated erythrocytes in bone marrow or peripheral blood occurred. 
Results of a second micronucleus assay in male and female mice after 13 weeks of exposure to chloroprene via inhalation were also negative. Conclusion: Under the conditions of these 2-year inhalation studies, there was clear evidence of carcinogenic activity of chloroprene in male F344/N rats based on increased incidences of neoplasms of the oral cavity; increased incidences of neoplasms of the thyroid gland, lung, and kidney were also attributed to chloroprene exposure. There was clear evidence of carcinogenic activity of chloroprene in female F344/N rats based on increased incidences of neoplasms of the oral cavity; increased incidences of neoplasms of the thyroid gland, mammary gland, and kidney were also attributed to exposure to chloroprene. Low incidences of urinary bladder neoplasms in male and female rats and lung neoplasms in female rats may also have been related to exposure to chloroprene. There was clear evidence of carcinogenic activity of chloroprene in male B6C3F1 mice based on increased incidences of neoplasms of the lung, circulatory system (hemangiomas and hemangiosarcomas), and harderian gland; increased incidences of neoplasms of the forestomach and kidney were also attributed to exposure to chloroprene. There was clear evidence of carcinogenic activity of chloroprene in female B6C3F1 mice based on increased incidences of neoplasms of the lung, circulatory system (hemangiomas and hemangiosarcomas), harderian gland, mammary gland, liver, skin, and mesentery; increased incidences of neoplasms of the forestomach and Zymbal's gland were also attributed to exposure to chloroprene. Exposure of male and female rats to chloroprene was associated with increased incidences of alveolar epithelial hyperplasia in the lung; nephropathy; and several nonneoplastic effects in the nose including olfactory epithelial atrophy, fibrosis, adenomatous hyperplasia, basal cell hyperplasia, chronic inflammation, respiratory metaplasia, and necrosis. Exposure of male and female mice to chloroprene was associated with increased incidences of bronchiolar hyperplasia and histiocytic cell infiltration in the lung; epithelial hyperplasia in the forestomach; renal tubule hyperplasia (males only); several effects in the nose including olfactory epithelial atrophy, respiratory metaplasia, and adenomatous hyperplasia; and hematopoietic cell proliferation in the spleen. Synonyms: Chlorobutadiene, 2-chlorobuta-1,3-diene, 2-chloro-1,3-butadiene, β-chloroprene
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-23
... Brassica vegetables) (crop group 4) at 15 ppm; milk at 0.01 ppm; milk, fat at 0.04 ppm; oilseeds, except...) at 8 ppm; cattle, fat at 0.01 ppm; cattle, liver at 0.04 ppm; cattle, meat at 0.01 ppm; cattle, meat... group 8-10) at 2 ppm; goat, fat at 0.01 ppm; goat, liver at 0.04 ppm; goat, meat at 0.01 ppm; goat, meat...
NASA Astrophysics Data System (ADS)
Pengvanich, P.; Chernin, D. P.; Lau, Y. Y.; Luginsland, J. W.; Gilgenbach, R. M.
2007-11-01
Motivated by the current interest in mm-wave and THz sources, which use miniature, difficult-to-fabricate circuit components, we evaluate the statistical effects of random fabrication errors on a helix traveling wave tube amplifier's small signal characteristics. The small signal theory is treated in a continuum model in which the electron beam is assumed to be monoenergetic, and axially symmetric about the helix axis. Perturbations that vary randomly along the beam axis are introduced in the dimensionless Pierce parameters b, the beam-wave velocity mismatch; C, the gain parameter; and d, the cold tube circuit loss. Our study shows, as expected, that perturbation in b dominates the other two. The extensive numerical data have been confirmed by our analytic theory. They show in particular that the standard deviation of the output phase is linearly proportional to the standard deviation of the individual perturbations in b, C, and d. Simple formulas have been derived which yield the output phase variations in terms of the statistical random manufacturing errors. This work was supported by AFOSR and by ONR.
Accuracy of indirect estimation of power output from uphill performance in cycling.
Millet, Grégoire P; Tronche, Cyrille; Grappe, Frédéric
2014-09-01
To use measurement by cycling power meters (Pmes) to evaluate the accuracy of commonly used models for estimating uphill cycling power (Pest). Experiments were designed to explore the influence of wind speed and steepness of climb on the accuracy of Pest. The authors hypothesized that the random error in Pest would be largely influenced by windy conditions, that the bias would be diminished in steeper climbs, and that windy conditions would induce larger bias in Pest. Sixteen well-trained cyclists performed 15 uphill-cycling trials (range: length 1.3-6.3 km, slope 4.4-10.7%) in a random order. Trials included different riding positions in a group (lead or follow) and different wind speeds. Pmes was quantified using a power meter, and Pest was calculated with a methodology used by journalists reporting on the Tour de France. Overall, the difference between Pmes and Pest was -0.95% (95%CI: -10.4%, +8.5%) for all trials and 0.24% (-6.1%, +6.6%) in conditions without wind (<2 m/s). The relationship between percent slope and the error between Pest and Pmes was considered trivial. Aerodynamic drag (affected by wind velocity and orientation, frontal area, drafting, and speed) is the most important confounding factor. The mean estimated values are close to the power-output values measured by power meters, but the random error is between ±6% and ±10%. Moreover, at the power outputs (>400 W) produced by professional riders, this error is likely to be higher. This observation calls into question the validity of releasing individual values without reporting the range of random errors.
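The kind of indirect model being evaluated is a simple power balance; a sketch with assumed, purely illustrative drag and rolling-resistance parameters (not the journalists' exact procedure):

```python
import math

def climb_power(mass_kg, speed_ms, slope, cda=0.35, crr=0.004,
                rho=1.2, wind_ms=0.0, drivetrain_eff=0.975):
    """Physics-based estimate of cycling power on a climb; cda, crr, rho,
    and drivetrain_eff are assumed values, not measured ones."""
    g = 9.81
    theta = math.atan(slope)
    p_gravity = mass_kg * g * math.sin(theta) * speed_ms
    p_rolling = mass_kg * g * math.cos(theta) * crr * speed_ms
    p_aero = 0.5 * rho * cda * (speed_ms + wind_ms) ** 2 * speed_ms
    return (p_gravity + p_rolling + p_aero) / drivetrain_eff

# 78 kg rider+bike climbing 8% at 5.6 m/s (~20 km/h), still air:
print(f"{climb_power(78, 5.6, 0.08):.0f} W")   # ~400 W
```

The aerodynamic term is small on a steep climb (roughly 10% of the total here), but it depends on the square of air speed, so a few m/s of unmeasured wind shifts the estimate by several percent, which is consistent with the ±6-10% random error reported above.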
NASA Technical Reports Server (NTRS)
Abshire, J. B.; Weaver, C. J.; Riris, H.; Mao, J.; Sun, X; Allan, G. R.; Hasselbrack, W. E.; Browell, E. V.
2012-01-01
We have developed a pulsed lidar technique for measuring the tropospheric CO2 concentrations as a candidate for NASA's ASCENDS mission and have demonstrated the CO2 and O2 measurements from aircraft. Our technique uses two pulsed lasers allowing simultaneous measurement of a single CO2 absorption line near 1572 nm, O2 extinction in the Oxygen A-band, surface height and backscatter profile. The lasers are stepped in wavelength across the CO2 line and an O2 line doublet during the measurement. The column densities for the CO2 and O2 are estimated from the differential optical depths (DOD) of the scanned absorption lines via the IPDA technique. For the 2009 ASCENDS campaign we flew the CO2 lidar on a Lear-25 aircraft, and measured the absorption line shapes of the CO2 line using 20 wavelength samples per scan. Measurements were made at stepped altitudes from 3 to 12.6 km over Lamont, OK, central Illinois, North Carolina, and over the Virginia Eastern Shore. Although the received signal energies were weaker than expected for ASCENDS, clear CO2 line shapes were observed at all altitudes. Most flights had 5-6 altitude steps with 200-300 seconds of recorded measurements per step. We averaged every 10 seconds of measurements and used a cross-correlation approach to estimate the range to the scattering surface and the echo pulse energy at each wavelength. We then solved for the best-fit CO2 absorption line shape, and calculated the DOD of the fitted CO2 line, and computed its statistics at the various altitude steps. We compared them to CO2 optical depths calculated from spectroscopy based on HITRAN 2008 and the column number densities calculated from the airborne in-situ readings. The 2009 measurements have been analyzed and they were similar on all flights. The results show clear CO2 line shape and absorption signals, which follow the expected changes with aircraft altitude from 3 to 13 km. They showed the expected nearly linear dependence of DOD vs altitude. The measurements showed 1 ppm random errors for 8-10 km altitudes and 30 sec averaging times. For the 2010 ASCENDS campaigns we flew the CO2 lidar on the NASA DC-8 and added an O2 lidar channel. During July 2010 we made measurements of CO2 and O2 column absorption during longer flights over Railroad Valley NV, the Pacific Ocean and over Lamont OK. CO2 measurements were made with 30 steps/scan, 300 scans/sec and improved line resolution and receiver sensitivity. Analysis of the 2010 CO2 measurements shows the expected linear change of DOD with altitude. For measurements at altitudes > 6 km the random errors were 0.3 ppm for 80 sec averaging times. For the summer 2011 ASCENDS campaigns we made further improvements to the lidar's CO2 line scan and receiver sensitivity. We demonstrated measurements over the California Central Valley, to stratus cloud tops over the Pacific Ocean, over mountain regions with snow, and over several areas with broken clouds. Details of the lidar measurements and their analysis will be described in the presentation.
NASA Technical Reports Server (NTRS)
Abshire, J. B.; Weaver, C. J.; Riris, H.; Mao, J.; Sun, X.; Allan, G. R.; Hasselbrack, W. E.; Browell, E. V.
2012-01-01
We have developed a pulsed lidar technique for measuring the tropospheric CO2 concentrations as a candidate for NASA's ASCENDS mission and have demonstrated the CO2 and O2 measurements from aircraft. Our technique uses two pulsed lasers allowing simultaneous measurement of a single CO2 absorption line near 1572 nm, O2 extinction in the Oxygen A-band, surface height and backscatter profile. The lasers are stepped in wavelength across the CO2 line and an O2 line doublet during the measurement. The column densities for the CO2 and O2 are estimated from the differential optical depths (DOD) of the scanned absorption lines via the IPDA technique. For the 2009 ASCENDS campaign we flew only the CO2 lidar on a Lear-25 aircraft, and measured the absorption line shapes of the CO2 line using 20 wavelength samples per scan. Measurements were made at stepped altitudes from 3 to 12.6 km over Lamont, OK, central Illinois, North Carolina, and over the Virginia Eastern Shore. Although the received signal energies were weaker than expected for ASCENDS, clear CO2 line shapes were observed at all altitudes. Most flights had 5-6 altitude steps with 200-300 seconds of recorded measurements per step. We averaged every 10 seconds of measurements and used a cross-correlation approach to estimate the range to the scattering surface and the echo pulse energy at each wavelength. We then solved for the best-fit CO2 absorption line shape, and calculated the DOD of the fitted CO2 line, and computed its statistics at the various altitude steps. We compared them to CO2 optical depths calculated from spectroscopy based on HITRAN 2008 and the column number densities calculated from the airborne in-situ readings. The 2009 measurements have been analyzed in detail and they were similar on all flights. The results show clear CO2 line shape and absorption signals, which follow the expected changes with aircraft altitude from 3 to 13 km. They showed the expected nearly linear dependence of DOD vs altitude. The measurements showed ~1 ppm random errors for 8-10 km altitudes and ~30 sec averaging times. For the 2010 ASCENDS campaigns we flew the CO2 lidar on the NASA DC-8 and added an O2 lidar channel. During July 2010 we made measurements of CO2 and O2 column absorption during longer flights over Railroad Valley NV, the Pacific Ocean and over Lamont OK. CO2 measurements were made with 30 steps/scan, 300 scans/sec and improved line resolution and receiver sensitivity. Analysis of the 2010 CO2 measurements shows the expected linear change of DOD with altitude. For measurements at altitudes > 6 km the random errors were 0.3 ppm for 80 sec averaging times. For the summer 2011 ASCENDS campaigns we made further improvements to the lidar's CO2 line scan and receiver sensitivity. The seven flights in the 2011 ASCENDS campaign were flown over a wide variety of surface and cloud conditions in the US, which produced a wide variety of lidar signal conditions. Details of the lidar measurements and their analysis will be described in the presentation.
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-01
... 5.5 parts per million (ppm); potato, wet peel at 6.0 ppm; potato, whole at 2.0 ppm; cattle, fat at 0.2 ppm; cattle, meat at 0.02 ppm; cattle, meat byproducts, except fat at 0.02 ppm; goat, fat at 0.2 ppm; goat, meat at 0.02 ppm; goat, meat byproducts, except fat at 0.02 ppm; horse, fat at 0.2 ppm...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, Timothy; Rudinger, Kenneth; Young, Kevin
Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
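However r is interpreted, the quantity RB actually reports comes from fitting the exponential decay of survival probability versus sequence length; a minimal sketch using the standard A*p^m + B model on synthetic single-qubit data:

```python
import numpy as np
from scipy.optimize import curve_fit

decay = lambda m, A, B, p: A * p**m + B   # standard RB decay model

def rb_error_rate(lengths, survival, d=2):
    """Fit the decay and return the RB number r = (d-1)(1-p)/d
    (d = 2 for a single qubit)."""
    (A, B, p), _ = curve_fit(decay, lengths, survival,
                             p0=(0.5, 0.5, 0.99), maxfev=10_000)
    return (d - 1) * (1 - p) / d

# Synthetic data with p = 0.995 plus sampling noise:
rng = np.random.default_rng(3)
m = np.arange(1, 200, 10)
y = decay(m, 0.5, 0.5, 0.995) + rng.normal(0, 0.005, m.size)
print(f"r ~ {rb_error_rate(m, y):.2e}")   # expect ~2.5e-3
```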
Numerical Error Estimation with UQ
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Korn, Peter; Marotzke, Jochem
2014-05-01
Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method, these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists of extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that we can choose a sensible parameter by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process. This is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References [1] F. RAUSER: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010 [2] F. RAUSER, J. MAROTZKE, P. KORN: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted
Creeth, Jonathan E; Karwal, Ritu; Hara, Anderson T; Zero, Domenick T
2018-01-01
This study aimed to determine the effect of zinc ions and F concentration in a dentifrice on remineralization of early caries lesions in situ and on resistance to subsequent demineralization. This was a single-center, 6-period, 6-product, blinded (examiner, subject, analyst), randomized (n = 62), crossover study. Products (all NaF) were: 0, 250, 1,150 and 1,426 ppm F (dose-response controls), "Zn-A" (0.3% ZnCl2, 1,426 ppm F), and "Zn-B" (as Zn-A, with high-foaming surfactants) in a conventional silica base. Subjects wore palatal appliances holding partially demineralized bovine enamel specimens. They brushed their teeth with 1.5 g test dentifrice (25 s), then swished the slurry ensuring even exposure of specimens (95 s), expectorated, and rinsed (15 mL water, 10 s). After 4 h intraoral remineralization, specimens were removed and acid-challenged in vitro. Surface microhardness (SMH), measured pre-experimental, post-initial acid exposure, post-remineralization, and post-second acid exposure, was used to calculate recovery (SMHR), net acid resistance (NAR), and a new, specifically demineralization-focused calculation, "comparative acid resistance" (CAR). Enamel fluoride uptake (EFU) was also measured. For the F dose-response controls, all measures showed significant relationships with dentifrice F concentration (p < 0.0001). The presence of zinc counteracted the ability of F to promote remineralization in this model. Compared to the 1,426 ppm F control, the zinc formulations gave reduced SMHR, EFU, and NAR (all p < 0.0001); however, they showed evidence of increased CAR (Zn-A: p = 0.0040; Zn-B: p = 0.0846). Products were generally well tolerated. In this study, increasing dentifrice F concentration progressively increased in situ remineralization and demineralization resistance of early caries enamel lesions. Zinc ions reduced remineralization but could increase demineralization resistance. © 2018 S. Karger AG, Basel.
Intramyocellular lipid quantification: repeatability with 1H MR spectroscopy.
Torriani, Martin; Thomas, Bijoy J; Halpern, Elkan F; Jensen, Megan E; Rosenthal, Daniel I; Palmer, William E
2005-08-01
To prospectively determine the repeatability and variability of tibialis anterior intramyocellular lipid (IMCL) quantifications performed by using 1.5-T hydrogen 1 (1H) magnetic resonance (MR) spectroscopy in healthy subjects. Institutional review board approval and written informed consent were obtained for this Health Insurance Portability and Accountability Act-compliant study. The authors examined the anterior tibial muscles of 27 healthy subjects aged 19-48 years (12 men, 15 women; mean age, 25 years) by using single-voxel short-echo-time point-resolved 1H MR spectroscopy. During a first visit, the subjects underwent 1H MR spectroscopy before and after being repositioned in the magnet bore, with voxels carefully placed on the basis of osseous landmarks. Measurements were repeated after a mean interval of 12 days. All spectra were fitted by using Java-based MR user interface (jMRUI) and LCModel software, and lipid peaks were scaled to the unsuppressed water peak (at 4.7 ppm) and the total creatine peak (at approximately 3.0 ppm). A one-way random-effects variance components model was used to determine intraday and intervisit coefficients of variation (CVs). A power analysis was performed to determine the detectable percentage change in lipid measurements for two subject sample sizes. Measurements of the IMCL methylene protons peak at a resonance of 1.3 ppm scaled to the unsuppressed water peak (IMCL(W)) that were obtained by using jMRUI software yielded the lowest CVs overall (intraday and intervisit CVs, 13.4% and 14.4%, respectively). The random-effects variance components model revealed that nonbiologic factors (equipment and repositioning) accounted for 50% of the total variability in IMCL quantifications. Power analysis for a sample size of 20 subjects revealed that changes in IMCL(W) of greater than 15% could be confidently detected between 1H MR spectroscopic measurements obtained on different days. 1H MR spectroscopy is feasible for repeatable quantification of IMCL concentrations in longitudinal studies of muscle metabolism.
Cembrowski, G S; Hackney, J R; Carey, N
1993-04-01
The Clinical Laboratory Improvement Act of 1988 (CLIA 88) has dramatically changed proficiency testing (PT) practices having mandated (1) satisfactory PT for certain analytes as a condition of laboratory operation, (2) fixed PT limits for many of these "regulated" analytes, and (3) an increased number of PT specimens (n = 5) for each testing cycle. For many of these analytes, the fixed limits are much broader than the previously employed Standard Deviation Index (SDI) criteria. Paradoxically, there may be less incentive to identify and evaluate analytically significant outliers to improve the analytical process. Previously described "control rules" to evaluate these PT results are unworkable as they consider only two or three results. We used Monte Carlo simulations of Kodak Ektachem analyzers participating in PT to determine optimal control rules for the identification of PT results that are inconsistent with those from other laboratories using the same methods. The analysis of three representative analytes, potassium, creatine kinase, and iron was simulated with varying intrainstrument and interinstrument standard deviations (si and sg, respectively) obtained from the College of American Pathologists (Northfield, Ill) Quality Assurance Services data and Proficiency Test data, respectively. Analytical errors were simulated in each of the analytes and evaluated in terms of multiples of the interlaboratory SDI. Simple control rules for detecting systematic and random error were evaluated with power function graphs, graphs of probability of error detected vs magnitude of error. Based on the simulation results, we recommend screening all analytes for the occurrence of two or more observations exceeding the same +/- 1 SDI limit. For any analyte satisfying this condition, the mean of the observations should be calculated. For analytes with sg/si ratios between 1.0 and 1.5, a significant systematic error is signaled by the mean exceeding 1.0 SDI. Significant random error is signaled by one observation exceeding the +/- 3-SDI limit or the range of the observations exceeding 4 SDIs. For analytes with higher sg/si, significant systematic or random error is signaled by violation of the screening rule (having at least two observations exceeding the same +/- 1 SDI limit). Random error can also be signaled by one observation exceeding the +/- 1.5-SDI limit or the range of the observations exceeding 3 SDIs. We present a practical approach to the workup of apparent PT errors.
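A direct transcription of the recommended rules into code (a sketch; the threshold logic follows the abstract literally, with one PT event of five results expressed in interlaboratory SDI units, and the function name is hypothetical):

```python
def screen_pt_event(sdi, sg_over_si):
    """Flag one proficiency-testing event per the rules above.
    sdi: the five results in SDI units; sg_over_si: ratio of
    interlaboratory to intralaboratory standard deviation."""
    screen = max(sum(x > 1.0 for x in sdi),
                 sum(x < -1.0 for x in sdi)) >= 2   # >=2 beyond same +/-1 SDI
    mean = sum(sdi) / len(sdi)
    spread = max(sdi) - min(sdi)
    peak = max(abs(x) for x in sdi)
    flags = []
    if 1.0 <= sg_over_si <= 1.5:
        if screen and abs(mean) > 1.0:
            flags.append("systematic error")
        if peak > 3.0 or spread > 4.0:
            flags.append("random error")
    else:  # higher interlab/intralab ratio
        if screen:
            flags.append("systematic or random error")
        if peak > 1.5 or spread > 3.0:
            flags.append("random error")
    return flags or ["no flag"]

print(screen_pt_event([1.8, 1.3, 0.9, 0.4, 1.1], 1.2))  # -> systematic error
```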
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact and in practice its accuracy is limited only by the quality of the uniform distribution random number generator, inaccuracies in computer function evaluation, and arithmetic precision. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
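A sketch of the standard construction such routines use (the conditional/Cholesky form for two variables; this is the textbook method, not necessarily Campbell's exact FORTRAN formulation):

```python
import numpy as np

def bivariate_normal_pairs(n, mu1, mu2, s1, s2, rho, rng=None):
    """Exact generation of correlated normal pairs from two independent
    standard normals: y is built from the same z1 plus fresh noise z2,
    which gives correlation exactly rho."""
    rng = rng or np.random.default_rng()
    z1 = rng.standard_normal(n)
    z2 = rng.standard_normal(n)
    x = mu1 + s1 * z1
    y = mu2 + s2 * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)
    return x, y

x, y = bivariate_normal_pairs(100_000, 1.0, -2.0, 2.0, 0.5, 0.7,
                              np.random.default_rng(4))
print(np.corrcoef(x, y)[0, 1])   # ~0.7, limited only by sampling noise
```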
Furlan, Leonardo; Sterr, Annette
2018-01-01
Motor learning studies face the challenge of differentiating between real changes in performance and random measurement error. While the traditional p-value-based analyses of difference (e.g., t-tests, ANOVAs) provide information on the statistical significance of a reported change in performance scores, they do not inform as to the likely cause or origin of that change, that is, the contribution of both real modifications in performance and random measurement error to the reported change. One way of differentiating between real change and random measurement error is through the utilization of the statistics of standard error of measurement (SEM) and minimal detectable change (MDC). SEM is estimated from the standard deviation of a sample of scores at baseline and a test-retest reliability index of the measurement instrument or test employed. MDC, in turn, is estimated from SEM and a degree of confidence, usually 95%. The MDC value might be regarded as the minimum amount of change that needs to be observed for it to be considered a real change, or a change to which the contribution of real modifications in performance is likely to be greater than that of random measurement error. A computer-based motor task was designed to illustrate the applicability of SEM and MDC to motor learning research. Two studies were conducted with healthy participants. Study 1 assessed the test-retest reliability of the task and Study 2 consisted of a typical motor learning study, where participants practiced the task for five consecutive days. In Study 2, the data were analyzed with a traditional p-value-based analysis of difference (ANOVA) and also with SEM and MDC. The findings showed good test-retest reliability for the task and that the p-value-based analysis alone identified statistically significant improvements in performance over time even when the observed changes could in fact have been smaller than the MDC and thereby caused mostly by random measurement error, as opposed to by learning. We suggest therefore that motor learning studies could complement their p-value-based analyses of difference with statistics such as SEM and MDC in order to inform as to the likely cause or origin of any reported changes in performance.
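The two statistics are one-liners once the baseline SD and a reliability index are in hand; a sketch using the standard formulas SEM = SD*sqrt(1 - ICC) and MDC95 = 1.96*sqrt(2)*SEM, with invented example numbers:

```python
import math

def sem(sd_baseline, reliability):
    """Standard error of measurement from the baseline SD and a
    test-retest reliability index (e.g., an ICC)."""
    return sd_baseline * math.sqrt(1.0 - reliability)

def mdc(sd_baseline, reliability, z=1.96):
    """Minimal detectable change at ~95% confidence; sqrt(2) accounts
    for measurement error in both the test and the retest scores."""
    return z * math.sqrt(2.0) * sem(sd_baseline, reliability)

# Example: baseline SD of 120 ms on a timed motor task, ICC = 0.85
print(f"SEM = {sem(120, 0.85):.1f} ms, MDC95 = {mdc(120, 0.85):.1f} ms")
```

Any observed improvement smaller than the printed MDC95 (about 129 ms in this invented example) would be attributed mostly to measurement error rather than learning.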
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boche, H., E-mail: boche@tum.de, E-mail: janis.noetzel@tum.de; Nötzel, J., E-mail: boche@tum.de, E-mail: janis.noetzel@tum.de
2014-12-15
This work is motivated by a quite general question: Under which circumstances are the capacities of information transmission systems continuous? The research is explicitly carried out on finite arbitrarily varying quantum channels (AVQCs). We give an explicit example that answers the recent question whether the transmission of messages over AVQCs can benefit from assistance by distribution of randomness between the legitimate sender and receiver in the affirmative. The specific class of channels introduced in that example is then extended to show that the unassisted capacity does have discontinuity points, while it is known that the randomness-assisted capacity is always continuous in the channel. We characterize the discontinuity points and prove that the unassisted capacity is always continuous around its positivity points. After having established shared randomness as an important resource, we quantify the interplay between the distribution of finite amounts of randomness between the legitimate sender and receiver, the (nonzero) probability of a decoding error with respect to the average error criterion and the number of messages that can be sent over a finite number of channel uses. We relate our results to the entanglement transmission capacities of finite AVQCs, where the role of shared randomness is not yet well understood, and give a new sufficient criterion for the entanglement transmission capacity with randomness assistance to vanish.
Noise in two-color electronic distance meter measurements revisited
Langbein, J.
2004-01-01
Frequent, high-precision geodetic data have temporally correlated errors. Temporal correlations directly affect both the estimate of rate and its standard error; the rate of deformation is a key product from geodetic measurements made in tectonically active areas. Various models of temporally correlated errors are developed and these provide relations between the power spectral density and the data covariance matrix. These relations are applied to two-color electronic distance meter (EDM) measurements made frequently in California over the past 15-20 years. Previous analysis indicated that these data have significant random walk error. Analysis using the noise models developed here indicates that the random walk model is valid for about 30% of the data. A second 30% of the data can be better modeled with power law noise with a spectral index between 1 and 2, while another 30% of the data can be modeled with a combination of band-pass-filtered plus random walk noise. The remaining 10% of the data can be best modeled as a combination of band-pass-filtered plus power law noise. This band-pass-filtered noise is a product of an annual cycle that leaks into adjacent frequency bands. For time spans of more than 1 year these more complex noise models indicate that the precision in rate estimates is better than that inferred by just the simpler, random walk model of noise.
Helical tomotherapy setup variations in canine nasal tumor patients immobilized with a bite block.
Kubicek, Lyndsay N; Seo, Songwon; Chappell, Richard J; Jeraj, Robert; Forrest, Lisa J
2012-01-01
The purpose of our study was to compare setup variation in four degrees of freedom (vertical, longitudinal, lateral, and roll) between canine nasal tumor patients immobilized with a mattress and bite block, versus a mattress alone. Our secondary aim was to define a clinical target volume (CTV) to planning target volume (PTV) expansion margin based on our mean systematic error values associated with nasal tumor patients immobilized by a mattress and bite block. We evaluated six parameters for setup corrections: systematic error, random error, patient-to-patient variation in systematic errors, the magnitude of patient-specific random errors (root mean square [RMS]), distance error, and the variation of setup corrections from zero shift. The variations in all parameters were statistically smaller in the group immobilized by a mattress and bite block. The mean setup corrections in the mattress and bite block group ranged from 0.91 mm to 1.59 mm for the translational errors, and the mean roll correction was 0.5°. Although most veterinary radiation facilities do not have access to image-guided radiotherapy (IGRT), we identified a need for more rigid fixation, established the value of adding IGRT to veterinary radiation therapy, and defined the CTV-PTV setup error margin for canine nasal tumor patients immobilized in a mattress and bite block. © 2012 Veterinary Radiology & Ultrasound.
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
Effect of phase errors in stepped-frequency radar systems
NASA Astrophysics Data System (ADS)
Vanbrundt, H. E.
1988-04-01
Stepped-frequency waveforms are being considered for inverse synthetic aperture radar (ISAR) imaging from ship and airborne platforms and for detailed radar cross section (RCS) measurements of ships and aircraft. These waveforms make it possible to achieve resolutions of 1.0 foot by using existing radar designs and processing technology. One problem not yet fully resolved in using stepped-frequency waveforms for ISAR imaging is the deterioration in signal level caused by random frequency error. Random frequency error of the stepped-frequency source results in reduced peak responses and increased null responses. The resulting reduced signal-to-noise ratio is range dependent. Two of the major concerns addressed in this report are radar range limitations for ISAR and the error in calibration for RCS measurements caused by differences in range between a passive reflector used for an RCS reference and the target to be measured. In addressing these concerns, NOSC developed an analysis to assess the tolerable frequency error in terms of the resulting loss in signal power and signal-to-phase-noise ratio.
Michael, Claire W; Naik, Kalyani; McVicker, Michael
2013-05-01
We developed a value stream map (VSM) of the Papanicolaou test procedure to identify opportunities to reduce waste and errors, created a new VSM, and implemented a new process emphasizing Lean tools. Preimplementation data revealed the following: (1) processing time (PT) for 1,140 samples averaged 54 hours; (2) 27 accessioning errors were detected on review of 357 random requisitions (7.6%); (3) 5 of the 20,060 tests had labeling errors that had gone undetected in the processing stage. Four were detected later during specimen processing but 1 reached the reporting stage. Postimplementation data were as follows: (1) PT for 1,355 samples averaged 31 hours; (2) 17 accessioning errors were detected on review of 385 random requisitions (4.4%); and (3) no labeling errors were undetected. Our results demonstrate that implementation of Lean methods, such as first-in first-out processes and minimizing batch size by staff actively participating in the improvement process, allows for higher quality, greater patient safety, and improved efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimura, K.; Ohmi, K.; Tottori University Electronic Display Research Center, 101 Minami4-chome, Koyama-cho, Tottori-shi, Tottori 680-8551
With increasing density of memory devices, the issue of soft errors generated by cosmic rays is becoming more and more serious. Therefore, the irradiation resistance of resistance random access memory (ReRAM) to cosmic radiation has to be elucidated for practical use. In this paper, we investigated the data retention characteristics of ReRAM with a Pt/NiO/ITO structure against ultraviolet irradiation. Soft errors were confirmed to be caused by ultraviolet irradiation in both low- and high-resistance states. An analysis of the wavelength dependence of light irradiation on data retention characteristics suggested that the errors were caused by electronic excitation from the valence to the conduction band and to the energy level generated by the introduction of oxygen vacancies. Based on statistically estimated soft error rates, the errors were suggested to be caused by the cohesion and dispersion of oxygen vacancies owing to the generation of electron-hole pairs and valence changes by the ultraviolet irradiation.
Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S. R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R; Kocak-Uzel, Esengul; Fuller, Clifton D.
2016-01-01
Larynx may alternatively serve as a target or organ-at-risk (OAR) in head and neck cancer (HNC) image-guided radiotherapy (IGRT). The objective of this study was to estimate IGRT parameters required for larynx positional error independent of isocentric alignment and suggest population-based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT-on-rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior-anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other 6 points were calculated post-isocentric alignment. Subsequently, using the first scan as a reference, the magnitude of vector differences for all 6 points for all scans over the course of treatment was calculated. Residual systematic and random error, and the necessary compensatory CTV-to-PTV and OAR-to-PRV margins, were calculated using both observational cohort data and a bootstrap-resampled population estimator. The grand mean displacement for all anatomical points was 5.07 mm, with mean systematic error of 1.1 mm and mean random setup error of 2.63 mm, while the bootstrapped POIs grand mean displacement was 5.09 mm, with mean systematic error of 1.23 mm and mean random setup error of 2.61 mm. The required margin for CTV-PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimator of the equivalent margin was 4.9 mm. The calculated OAR-to-PRV expansion for the observed residual set-up error was 2.7 mm, with a bootstrap-estimated expansion of 2.9 mm. We conclude that the interfractional larynx setup error is a significant source of RT set-up/delivery error in HNC, both when the larynx is considered as a CTV or OAR. We estimate the need for a uniform expansion of 5 mm to compensate for set-up error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a non-laryngeal bony isocenter. PMID:25679151
Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S.R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R.; Kocak‐Uzel, Esengul
2014-01-01
Larynx may alternatively serve as a target or organs at risk (OAR) in head and neck cancer (HNC) image‐guided radiotherapy (IGRT). The objective of this study was to estimate IGRT parameters required for larynx positional error independent of isocentric alignment and suggest population‐based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT on‐rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior‐anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other six points were calculated postisocentric alignment. Subsequently, using the first scan as a reference, the magnitude of vector differences for all six points for all scans over the course of treatment was calculated. Residual systematic and random error and the necessary compensatory CTV‐to‐PTV and OAR‐to‐PRV margins were calculated, using both observational cohort data and a bootstrap‐resampled population estimator. The grand mean displacements for all anatomical points was 5.07 mm, with mean systematic error of 1.1 mm and mean random setup error of 2.63 mm, while bootstrapped POIs grand mean displacement was 5.09 mm, with mean systematic error of 1.23 mm and mean random setup error of 2.61 mm. Required margin for CTV‐PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimator of the equivalent margin was 4.9 mm. The calculated OAR‐to‐PRV expansion for the observed residual setup error was 2.7 mm and bootstrap estimated expansion of 2.9 mm. We conclude that the interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, both when the larynx is considered as a CTV or OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a nonlaryngeal bony isocenter. PACS numbers: 87.55.D‐, 87.55.Qr
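The margins reported in these two records are numerically consistent with the widely used van Herk CTV-to-PTV recipe (2.5Σ + 0.7σ) and the McKenzie OAR-to-PRV recipe (1.3Σ + 0.5σ), although the abstracts do not name their formula; a minimal sketch under that assumption:

    def ctv_to_ptv(sigma_sys, sigma_rand):
        """van Herk population margin recipe: 2.5*Sigma + 0.7*sigma (mm)."""
        return 2.5 * sigma_sys + 0.7 * sigma_rand

    def oar_to_prv(sigma_sys, sigma_rand):
        """McKenzie PRV margin recipe: 1.3*Sigma + 0.5*sigma (mm)."""
        return 1.3 * sigma_sys + 0.5 * sigma_rand

    # Cohort values from the abstract: Sigma = 1.1 mm, sigma = 2.63 mm
    print(ctv_to_ptv(1.1, 2.63))   # ~4.59 mm, matching the reported 4.6 mm
    print(oar_to_prv(1.1, 2.63))   # ~2.75 mm, matching the reported 2.7 mm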
Analysis of space telescope data collection system
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Schoggen, W. O.
1982-01-01
An analysis of the expected performance for the Multiple Access (MA) system is provided. The analysis covers the expected bit error rate performance, the effects of synchronization loss, the problem of self-interference, and the problem of phase ambiguity. The problem of false acceptance of a command word due to data inversion is discussed. A mathematical determination of the probability of accepting an erroneous command word due to a data inversion is presented. The problem is examined for three cases: (1) a data inversion only, (2) a data inversion and a random error within the same command word, and (3) a block (up to 256 48-bit words) containing both a data inversion and a random error.
Modeling methodology for MLS range navigation system errors using flight test data
NASA Technical Reports Server (NTRS)
Karmali, M. S.; Phatak, A. V.
1982-01-01
Flight test data was used to develop a methodology for modeling MLS range navigation system errors. The data used corresponded to the constant velocity and glideslope approach segment of a helicopter landing trajectory. The MLS range measurement was assumed to consist of low frequency and random high frequency components. The random high frequency component was extracted from the MLS range measurements. This was done by appropriate filtering of the range residual generated from a linearization of the range profile for the final approach segment. This range navigation system error was then modeled as an autoregressive moving average (ARMA) process. Maximum likelihood techniques were used to identify the parameters of the ARMA process.
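The ARMA identification step described above can be reproduced with present-day tools via maximum likelihood estimation in statsmodels; the residual series and the (2, 1) order below are placeholders, not values from the flight tests:

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    # Placeholder for the filtered high-frequency MLS range residuals
    residuals = rng.standard_normal(500)

    # An ARMA(p, q) process is ARIMA(p, 0, q); parameters are fit by
    # maximum likelihood, as in the methodology described above
    result = ARIMA(residuals, order=(2, 0, 1)).fit()
    print(result.summary())   # AR/MA coefficients and their standard errors
    print(result.aic)         # compare candidate orders via information criteria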
Acute Exposure to Low-to-Moderate Carbon Dioxide Levels and Submariner Decision Making.
Rodeheffer, Christopher D; Chabal, Sarah; Clarke, John M; Fothergill, David M
2018-06-01
Submarines routinely operate with higher levels of ambient carbon dioxide (CO2) (i.e., 2000 - 5000 ppm) than what is typically considered normal (i.e., 400 - 600 ppm). Although significant cognitive impairments are rarely reported at these elevated CO2 levels, recent studies using the Strategic Management Simulation (SMS) test have found impairments in decision-making performance during acute CO2 exposure at levels as low as 1000 ppm. This is a potential concern for submarine operations, as personnel regularly make mission-critical decisions that affect the safety and efficiency of the vessel and its crew while exposed to similar levels of CO2. The objective of this study was to determine if submariner decision-making performance is impacted by acute exposure to levels of CO2 routinely present in the submarine atmosphere during sea patrols. Using a subject-blinded balanced design, 36 submarine-qualified sailors were randomly assigned to receive 1 of 3 CO2 exposure conditions (600, 2500, or 15,000 ppm). After a 45-min atmospheric acclimation period, participants completed an 80-min computer-administered SMS test as a measure of decision making. There were no significant differences for any of the nine SMS measures of decision making between the CO2 exposure conditions. In contrast to recent research demonstrating cognitive deficits on the SMS test in students and professional-grade office workers, we were unable to replicate this effect in a submariner population-even with acute CO2 exposures more than an order of magnitude greater than those used in previous studies that demonstrated such effects.Rodeheffer CD, Chabal S, Clarke JM, Fothergill DM. Acute exposure to low-to-moderate carbon dioxide levels and submariner decision making. Aerosp Med Hum Perform. 2018; 89(6):520-525.
Effect of toothpaste with nano-sized trimetaphosphate on dental caries: In situ study.
Danelon, Marcelle; Pessan, Juliano Pelim; Neto, Francisco Nunes Souza; de Camargo, Emerson Rodrigues; Delbem, Alberto Carlos Botazzo
2015-07-01
This in situ study aimed to evaluate the remineralizing effect of a fluoride toothpaste supplemented with nano-sized sodium trimetaphosphate (TMP). This blind, cross-over study was performed in 4 phases of 3 days each. Twelve subjects used palatal appliances containing four bovine enamel blocks with artificial caries lesions. Volunteers were randomly assigned to the following treatment groups: placebo (without F and TMP), 1100 ppm F (1100), 1100 supplemented with 3% micrometric TMP (1100 TMP), and 1100 supplemented with 3% nano-sized TMP (1100 TMPnano). Volunteers were instructed to brush their natural teeth with the palatal appliances in the mouth for 1 min (3 times/day), so that blocks were treated with natural slurries of toothpastes. After each phase, the percentage of surface hardness recovery (%SHR), integrated mineral recovery (IMR) and integrated differential mineral area profile (ΔIMR) in enamel lesions were calculated. F in enamel was also determined. Data were analyzed by ANOVA and the Student-Newman-Keuls test. Enamel surface became 20% harder when treated with 1100 TMPnano in comparison with 1100 (p<0.001). 1100 TMPnano showed remineralizing capacity (IMR; ΔIMR) 66% higher when compared with 1100 TMP (p<0.001). Enamel F uptake in the 1100 TMPnano group was 2-fold higher when compared to its counterpart without TMP (p<0.001). The addition of 3% TMPnano to a conventional toothpaste promoted an additional remineralizing effect on artificial caries lesions. Toothpaste containing 1100 ppm F associated with TMPnano showed a higher remineralizing potential than 1100 ppm F alone or 1100 ppm F with micrometric TMP. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bahrololoomi, Zahra; Sorouri, Milad
2015-01-01
Objectives: Fluoride therapy is important for control and prevention of dental caries. Laser irradiation can increase fluoride uptake especially when combined with topical fluoride application. The objective of this study was to compare the effects of CO2 and diode lasers on enamel fluoride uptake in primary teeth. Materials and Methods: Forty human primary molars were randomly assigned to four groups (n=10). The roots were removed and the crowns were sectioned mesiodistally into buccal and lingual halves as the experimental and control groups. All samples were treated with 5% sodium fluoride (NaF) varnish. The experimental samples in the four groups were irradiated with 5 or 7 W diode or 1 or 2 W CO2 laser for 15 seconds and were compared with the controls in terms of fluoride uptake, which was determined using an ion selective electrode after acid dissolution of the specimens. Data were analyzed by SPSS version 16 using ANOVA treating the control measurements as covariates. Results: The estimated amount of fluoride uptake was 59.5 ± 16.31 ppm, 66.5 ± 14.9 ppm, 78.6 ± 12.43 ppm and 90.4 ± 11.51 ppm for 5 W and 7 W diode and 1 W and 2 W CO2 lasers, respectively, which were significantly greater than the values in the conventional topical fluoridation group (P<0.005). There were no significant differences between 7 W diode laser and 1 W CO2 laser, 5 W and 7 W diode laser, or 1 W and 2 W CO2 laser in this regard. Conclusion: The results showed that enamel surface irradiation by CO2 and diode lasers increases the fluoride uptake. PMID:27123018
NASA Astrophysics Data System (ADS)
Sun, Xiaole; Djordjevic, Ivan B.; Neifeld, Mark A.
2016-03-01
Free-space optical (FSO) channels are characterized by random power fluctuations due to atmospheric turbulence, known as scintillation. Weak-coherent-source FSO quantum key distribution (QKD) systems suffer from the scintillation effect because during deep channel fading the expected detection rate drops, which gives an eavesdropper an opportunity to gain additional information about the protocol by performing a photon number splitting (PNS) attack and blocking single-photon pulses without changing the QBER. To overcome this problem, in this paper we study a large-alphabet QKD protocol, achieved by using a pulse-position modulation (PPM)-like approach that utilizes the time-frequency uncertainty relation of the weak coherent photon state, called here the TF-PPM-QKD protocol. We first complete a finite-size analysis for the TF-PPM-QKD protocol to give practical bounds against non-negligible statistical fluctuation due to finite resources in practical implementations. The impact of scintillation under the strong atmospheric turbulence regime is then studied. To overcome the secret key rate degradation of TF-PPM-QKD caused by scintillation, we propose an adaptation method that compensates for the scintillation impact. By changing the source intensity according to the channel state information (CSI), obtained over a classical channel, the adaptation method improves the secret key rate of the QKD system. The CSI of a time-varying channel can be predicted using stochastic models, such as autoregressive (AR) models. Based on the channel state predictions, we change the source intensity to the optimal value to achieve a higher secret key rate. We demonstrate that the improvement of the adaptation method depends on the prediction accuracy.
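The adaptation step described above, predicting the next channel state with an AR model and scaling the source intensity accordingly, might be sketched as follows; the AR(1) fit, the synthetic fading trace, and the intensity rule are all illustrative assumptions rather than the paper's implementation:

    import numpy as np

    rng = np.random.default_rng(7)
    # Illustrative log-irradiance fading trace with temporal correlation
    h = np.zeros(1000)
    for t in range(1, 1000):
        h[t] = 0.95 * h[t - 1] + 0.1 * rng.standard_normal()

    # Fit an AR(1) coefficient by least squares and predict one step ahead
    phi = np.dot(h[1:], h[:-1]) / np.dot(h[:-1], h[:-1])
    h_pred = phi * h[-1]

    # Illustrative adaptation: scale the source intensity so the mean
    # photon number arriving at the receiver stays near its optimum
    mu_opt = 0.5
    mu_source = mu_opt / np.exp(h_pred)   # channel transmittance ~ exp(h)
    print(f"phi={phi:.3f}, predicted fade={h_pred:.3f}, mu={mu_source:.3f}")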
Bahrololoomi, Zahra; Fotuhi Ardakani, Faezeh; Sorouri, Milad
2015-08-01
Fluoride therapy is important for control and prevention of dental caries. Laser irradiation can increase fluoride uptake especially when combined with topical fluoride application. The objective of this study was to compare the effects of CO2 and diode lasers on enamel fluoride uptake in primary teeth. Forty human primary molars were randomly assigned to four groups (n=10). The roots were removed and the crowns were sectioned mesiodistally into buccal and lingual halves as the experimental and control groups. All samples were treated with 5% sodium fluoride (NaF) varnish. The experimental samples in the four groups were irradiated with 5 or 7 W diode or 1 or 2 W CO2 laser for 15 seconds and were compared with the controls in terms of fluoride uptake, which was determined using an ion selective electrode after acid dissolution of the specimens. Data were analyzed by SPSS version 16 using ANOVA treating the control measurements as covariates. The estimated amount of fluoride uptake was 59.5 ± 16.31 ppm, 66.5 ± 14.9 ppm, 78.6 ± 12.43 ppm and 90.4 ± 11.51 ppm for 5 W and 7 W diode and 1 W and 2 W CO2 lasers, respectively, which were significantly greater than the values in the conventional topical fluoridation group (P<0.005). There were no significant differences between 7 W diode laser and 1 W CO2 laser, 5 W and 7 W diode laser, or 1 W and 2 W CO2 laser in this regard. The results showed that enamel surface irradiation by CO2 and diode lasers increases the fluoride uptake.
NASA Astrophysics Data System (ADS)
Ghanbari, M.; Najafi, G.; Ghobadian, B.; Mamat, R.; Noor, M. M.; Moosavian, A.
2015-12-01
This paper studies the use of an adaptive neuro-fuzzy inference system (ANFIS) to predict the performance parameters and exhaust emissions of a diesel engine operating on nanodiesel blended fuels. In order to predict the engine parameters, the whole experimental dataset was randomly divided into training and testing data. For ANFIS modelling, the Gaussian curve membership function (gaussmf) and 200 training epochs (iterations) were found to be optimum choices for the training process. The results demonstrate that ANFIS is capable of predicting the diesel engine performance and emissions. In the experimental step, carbon nanotubes (CNT) (40, 80 and 120 ppm) and nano silver particles (40, 80 and 120 ppm) with nanostructure were prepared and added as additives to the diesel fuel. A six-cylinder, four-stroke diesel engine was fuelled with these new blended fuels and operated at different engine speeds. Experimental test results indicated that adding nanoparticles to diesel fuel increased diesel engine power and torque output. For nano-diesel it was found that the brake specific fuel consumption (bsfc) was decreased compared to the neat diesel fuel. The results proved that with an increase of nanoparticle concentrations (from 40 ppm to 120 ppm) in diesel fuel, CO2 emission increased. CO emission with the nanoparticle-blended fuels was significantly lower compared to pure diesel fuel. UHC emission with silver nano-diesel blended fuel decreased, while with fuels containing CNT nanoparticles it increased. The trend of NOx emission was inverse compared to the UHC emission. With the addition of nanoparticles to the blended fuels, NOx increased compared to the neat diesel fuel. The tests revealed that silver and CNT nanoparticles can be used as additives in diesel fuel to improve combustion of the fuel and reduce the exhaust emissions significantly.
NASA Astrophysics Data System (ADS)
Lloyd, A. S.; Newcombe, M. E.; Plank, T. A.
2016-12-01
Although olivine-hosted melt inclusions (MIs) remain the gold standard for recovering volatile concentrations of primitive magmas, later-fractionating minerals may be more appropriate for assessing magma storage conditions immediately prior to eruption. We present volatile analyses of MIs entrapped in early (Mg# 81-83) olivine and later (Mg# 70-80) clinopyroxene (Cpx) from the 1977 eruption of Seguam volcano, to assess the ascent history prior to this violent strombolian eruption. The olivine-hosted MIs contain average volatile concentrations (n=16) of 3.79 wt% H2O, 167 ppm CO2, 592 ppm Cl, and 133 ppm F, consistent with an entrapment pressure of 200 to 300 MPa (~10-13 km depth) if the CO2 contained in the bubble is taken into account (Moore et al., 2015). Cpx phenocrysts contain two distinct MI assemblages; the inner assemblage consists of randomly distributed, rounded MIs which never contain a vapor bubble. Average volatile concentrations of the inner assemblage MIs (n=11) are 0.96 wt% H2O, 98 ppm CO2, 798 ppm Cl, and 280 ppm F, consistent with entrapment at much shallower depth, ~2 km. The outer assemblage contains inclusions too small for routine volatile analysis. Inner assemblage Cpx-hosted MIs preserve average enrichments of 1.3x and 2x for Cl and F respectively, and are similarly enriched in incompatible minor and trace elements (up to a factor of 5x). Two potential scenarios can explain these observations. The enrichments may represent the entrapment of an unrelated highly-fractionated, shallow magma (which is unsupported by the whole rock record at Seguam). A second possibility is enrichment through boundary layer entrapment during a period of rapid crystal growth during ascent through the upper crust. Boundary layer entrapment during MI formation is further supported by a negative correlation between the degree of enrichment and the diffusivity of individual elements, which is consistent with growth rates of ~10^-8 m/s. Although the olivine-hosted MIs record a volatile-rich storage region, the later-fractionating Cpx indicate a phase of rapid crystallization, likely driven by water loss from the melt at shallow depths. This work highlights the information added by analyzing multiple phases in order to reconstruct the degassing path of magma prior to eruption.
Gemmell, Isla; Dunn, Graham
2011-03-01
In a partially randomized preference trial (PRPT) patients with no treatment preference are allocated to groups at random, but those who express a preference receive the treatment of their choice. It has been suggested that the design can improve the external and internal validity of trials. We used computer simulation to illustrate the impact that an unmeasured confounder could have on the results and conclusions drawn from a PRPT. We generated 4000 observations ("patients") that reflected the distribution of the Beck Depression Inventory (BDI) in trials of depression. Half were randomly assigned to a randomized controlled trial (RCT) design and half were assigned to a PRPT design. In the RCT, "patients" were evenly split between treatment and control groups, whereas in the preference arm, to reflect patient choice, 87.5% of patients were allocated to the experimental treatment and 12.5% to the control. Unadjusted analyses of the PRPT data consistently overestimated the treatment effect and its standard error. This led to Type I errors when the true treatment effect was small and Type II errors when the confounder effect was large. The PRPT design is not recommended as a method of establishing an unbiased estimate of treatment effect due to the potential influence of unmeasured confounders. Copyright © 2011 John Wiley & Sons, Ltd.
An improved procedure for the validation of satellite-based precipitation estimates
NASA Astrophysics Data System (ADS)
Tang, Ling; Tian, Yudong; Yan, Fang; Habib, Emad
2015-09-01
The objective of this study is to propose and test a new procedure to improve the validation of remote-sensing, high-resolution precipitation estimates. Our recent studies show that many conventional validation measures do not accurately capture the unique error characteristics in precipitation estimates to better inform both data producers and users. The proposed new validation procedure has two steps: 1) an error decomposition approach to separate the total retrieval error into three independent components: hit error, false precipitation and missed precipitation; and 2) the hit error is further analyzed based on a multiplicative error model. In the multiplicative error model, the error features are captured by three model parameters. In this way, the multiplicative error model separates systematic and random errors, leading to more accurate quantification of the uncertainties. The proposed procedure is used to quantitatively evaluate the recent two versions (Version 6 and 7) of TRMM's Multi-sensor Precipitation Analysis (TMPA) real-time and research product suite (3B42 and 3B42RT) for seven years (2005-2011) over the continental United States (CONUS). The gauge-based National Centers for Environmental Prediction (NCEP) Climate Prediction Center (CPC) near-real-time daily precipitation analysis is used as the reference. In addition, the radar-based NCEP Stage IV precipitation data are also model-fitted to verify the effectiveness of the multiplicative error model. The results show that winter total bias is dominated by the missed precipitation over the west coastal areas and the Rocky Mountains, and the false precipitation over large areas in Midwest. The summer total bias is largely coming from the hit bias in Central US. Meanwhile, the new version (V7) tends to produce more rainfall in the higher rain rates, which moderates the significant underestimation exhibited in the previous V6 products. Moreover, the error analysis from the multiplicative error model provides a clear and concise picture of the systematic and random errors, with both versions of 3B42RT have higher errors in varying degrees than their research (post-real-time) counterparts. The new V7 algorithm shows obvious improvements in reducing random errors in both winter and summer seasons, compared to its predecessors V6. Stage IV, as expected, surpasses the satellite-based datasets in all the metrics over CONUS. Based on the results, we recommend the new procedure be adopted for routine validation of satellite-based precipitation datasets, and we expect the procedure will work effectively for higher resolution data to be produced in the Global Precipitation Measurement (GPM) era.
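The multiplicative error model referenced above is commonly written as ln(sat) = ln(α) + β·ln(ref) + ε, with α the multiplicative bias, β the nonlinearity, and ε the zero-mean Gaussian random error whose spread σ quantifies the random component. A fitting sketch on hypothetical hit-event pairs (the data below are synthetic stand-ins, not TMPA values):

    import numpy as np

    rng = np.random.default_rng(3)
    # Hypothetical hit events: reference and satellite rain both > 0 (mm/day)
    ref = rng.gamma(shape=2.0, scale=5.0, size=2000)
    sat = 0.9 * ref**1.1 * np.exp(0.3 * rng.standard_normal(2000))

    # Fit ln(sat) = ln(alpha) + beta * ln(ref) + eps by least squares
    beta, ln_alpha = np.polyfit(np.log(ref), np.log(sat), 1)
    eps = np.log(sat) - (ln_alpha + beta * np.log(ref))
    print(f"alpha={np.exp(ln_alpha):.2f} (systematic), beta={beta:.2f}, "
          f"sigma={eps.std(ddof=1):.2f} (random)")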
Amusan, A A S; Idowu, A B; Arowolo, F S
2005-09-01
The ethanolic extracts of orange peel (Citrus sinensis) and bush tea leaves (Hyptis suaveolens) were compared for their toxicity to larvae of the yellow fever mosquito Aedes aegypti collected from disused tyres beside the College of Natural Sciences building, University of Agriculture, Abeokuta, Nigeria. Eight graded concentrations (0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3 and 0.2 ppm) of both plant extracts were tested on the larvae. The LD10 was 0.15 ppm for C. sinensis and 0.01 ppm for H. suaveolens; the LD50 was 0.4 ppm for C. sinensis and 0.60 ppm for H. suaveolens; and the LD90 was 0.9 ppm for C. sinensis and 1.45 ppm for H. suaveolens. For the control, the LD10 was 0.65 ppm, the LD50 0.9 ppm and the LD90 2.0 ppm. The extract of C. sinensis peel caused higher larval mortality at concentrations of 0.8 ppm (95%) and 0.3 ppm (90%), while the extract of H. suaveolens caused high larval mortality at concentrations of 0.9 ppm (80%) and 0.3 ppm (80%). Significant differences were observed between untreated and treated larvae (exposed to either extract) at the various concentrations (P<0.05).
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-23
....05 ppm; grass, hay at 0.05 ppm; and grass, forage at 1.5 ppm. An enforcement method for plants has...; sorghum, stover at 0.35 ppm; grass, hay at 2.5 ppm; and grass, forage at 10 ppm. The analytical method is... on sorghum, grain at 0.2 ppm; grass, hay at 5.0 ppm; and grass, forage at 18 ppm. Since bromoxynil...
Shah, Priya; Wyatt, Jeremy C; Makubate, Boikanyo; Cross, Frank W
2011-01-01
Objective: Expert authorities recommend clinical decision support systems to reduce prescribing error rates, yet large numbers of insignificant on-screen alerts presented in modal dialog boxes persistently interrupt clinicians, limiting the effectiveness of these systems. This study compared the impact of modal and non-modal electronic (e-) prescribing alerts on prescribing error rates, to help inform the design of clinical decision support systems. Design: A randomized study of 24 junior doctors each performing 30 simulated prescribing tasks in random order with a prototype e-prescribing system. Using a within-participant design, doctors were randomized to be shown one of three types of e-prescribing alert (modal, non-modal, no alert) during each prescribing task. Measurements: The main outcome measure was prescribing error rate. Structured interviews were performed to elicit participants' preferences for the prescribing alerts and their views on clinical decision support systems. Results: Participants exposed to modal alerts were 11.6 times less likely to make a prescribing error than those not shown an alert (OR 11.56, 95% CI 6.00 to 22.26). Those shown a non-modal alert were 3.2 times less likely to make a prescribing error (OR 3.18, 95% CI 1.91 to 5.30) than those not shown an alert. The error rate with non-modal alerts was 3.6 times higher than with modal alerts (95% CI 1.88 to 7.04). Conclusions: Both kinds of e-prescribing alerts significantly reduced prescribing error rates, but modal alerts were over three times more effective than non-modal alerts. This study provides new evidence about the relative effects of modal and non-modal alerts on prescribing outcomes. PMID:21836158
Statistics of the epoch of reionization 21-cm signal - I. Power spectrum error-covariance
NASA Astrophysics Data System (ADS)
Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman
2016-02-01
The non-Gaussian nature of the epoch of reionization (EoR) 21-cm signal has a significant impact on the error variance of its power spectrum P(k). We have used a large ensemble of seminumerical simulations and an analytical model to estimate the effect of this non-Gaussianity on the entire error-covariance matrix C_ij. Our analytical model shows that C_ij has contributions from two sources. One is the usual variance for a Gaussian random field, which scales inversely with the number of modes that go into the estimation of P(k). The other is the trispectrum of the signal. Using the simulated 21-cm Signal Ensemble, an ensemble of the Randomized Signal, and Ensembles of Gaussian Random Ensembles, we have quantified the effect of the trispectrum on the error variance C_ii. We find that its relative contribution is comparable to or larger than that of the Gaussian term for the k range 0.3 ≤ k ≤ 1.0 Mpc^-1, and can be even ~200 times larger at k ~ 5 Mpc^-1. We also establish that the off-diagonal terms of C_ij have statistically significant non-zero values which arise purely from the trispectrum. This further signifies that the errors in different k modes are not independent. We find a strong correlation between the errors at large k values (≥0.5 Mpc^-1), and a weak correlation between the smallest and largest k values. There is also a small anticorrelation between the errors in the smallest and intermediate k values. These results are relevant for the k range that will be probed by the current and upcoming EoR 21-cm experiments.
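In schematic form, the two contributions described above are commonly written as below, where N_i is the number of Fourier modes in the i-th bin, V the observation volume, and T̄ the bin-averaged trispectrum; this compact notation is our own shorthand, not quoted from the paper:

$$C_{ij} = \frac{P^2(k_i)}{N_i}\,\delta_{ij} + \frac{\bar{T}(k_i, k_j)}{V}$$

The first (Gaussian) term vanishes off the diagonal, so any statistically significant off-diagonal covariance is a direct signature of the trispectrum term.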
NASA Astrophysics Data System (ADS)
Sarkar, Arnab; Karki, Vijay; Aggarwal, Suresh K.; Maurya, Gulab S.; Kumar, Rohit; Rai, Awadhesh K.; Mao, Xianglei; Russo, Richard E.
2015-06-01
Laser induced breakdown spectroscopy (LIBS) was applied for elemental characterization of high alloy steel using partial least squares regression (PLSR), with the objective of evaluating the analytical performance of this multivariate approach. The optimization of the number of principal components for minimizing error in the PLSR algorithm was investigated. The effect of different pre-treatment procedures on the raw spectral data before PLSR analysis was evaluated based on several statistical parameters (standard error of prediction, percentage relative error of prediction, etc.). The pre-treatment with the "NORM" parameter gave the optimum statistical results. The analytical performance of the PLSR model improved with an increasing number of laser pulses accumulated per spectrum as well as by truncating the spectrum to an appropriate wavelength region. It was found that the statistical benefit of truncating the spectrum can also be accomplished by increasing the number of laser pulses per accumulation without spectral truncation. The constituents (Co and Mo) present at hundreds of ppm were determined with a relative precision of 4-9% (2σ), whereas the major constituents Cr and Ni (present at a few percent levels) were determined with a relative precision of ~2% (2σ).
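A sketch of the PLSR workflow described above (norm pre-treatment, then choosing the number of latent components that minimizes cross-validated prediction error); the spectra and concentrations below are synthetic stand-ins for LIBS data:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    X = rng.random((60, 800))   # stand-in for 60 LIBS spectra, 800 channels
    y = 3 * X[:, 100] + X[:, 400] + 0.05 * rng.standard_normal(60)

    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # "NORM" pre-treatment

    # Pick the number of latent variables minimizing CV prediction error
    mse = []
    for n in range(1, 11):
        score = cross_val_score(PLSRegression(n_components=n), X, y,
                                cv=5, scoring="neg_mean_squared_error")
        mse.append(-score.mean())
    best = int(np.argmin(mse)) + 1
    print(f"optimal components: {best}, SEP ~ {mse[best - 1] ** 0.5:.3f}")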
High-precision thermal expansion measurements using small Fabry-Perot etalons
NASA Astrophysics Data System (ADS)
Davis, Mark J.; Hayden, Joseph S.; Farber, Daniel L.
2007-09-01
Coefficient of thermal expansion (CTE) measurements using small Fabry-Perot etalons were conducted on high and low thermal expansion materials differing in CTE by a factor of nearly 400. The smallest detectable change in length was ~10^-12 m. The sample consisted of a mm-sized Fabry-Perot etalon equipped with spherical mirrors; the material-under-test served as the 2.5 mm-thick spacer between the mirrors. A heterodyne optical setup was used, with one laser locked to an ~780 nm hyperfine line of Rb gas and the other locked to a resonance of the sample etalon; changes in the beat frequency between the two lasers as a function of temperature directly provided a CTE value. The measurement system was tested using the high-CTE SCHOTT optical glass N-KF9 (CTE = 9.5 ppm/K at 23 °C). Measurements conducted under reproducibility conditions using five identically-prepared N-KF9 etalons demonstrate a precision of 0.1 ppm/K; absolute values (accuracy) agree within 2-sigma errors with those made using mechanical dilatometers with 100-mm long sample rods. Etalon-based CTE measurements were also made on a high-CTE (~10.5 ppm/K), proprietary glass-ceramic used for high peak-pressure electrical feedthroughs and revealed statistically significant differences among parts made under what were assumed to be identical conditions. Finally, CTE measurements were made on etalons constructed from SCHOTT's ultra-low CTE Zerodur® glass-ceramic (CTE about -20 ppb/K at 50 °C for the material tested herein).
The Beam Dynamics and Beam Related Uncertainties in Fermilab Muon $g-2$ Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Wanwei
The anomaly of the muon magnetic moment, $a_\mu \equiv (g-2)/2$, has played an important role in constraining physics beyond the Standard Model for many years. Currently, the Standard Model prediction for $a_\mu$ is accurate to 0.42 parts per million (ppm). The most recent muon $g-2$ experiment was done at Brookhaven National Laboratory (BNL) and determined $a_\mu$ to 0.54 ppm, with a central value that differs from the Standard Model prediction by 3.3-3.6 standard deviations and provides a strong hint of new physics. The Fermilab Muon $g-2$ Experiment has a goal to measure $a_\mu$ to unprecedented precision: 0.14 ppm, which could provide an unambiguous answer to the question whether there are new particles and forces that exist in nature. To achieve this goal, several items have been identified to lower the systematic uncertainties. In this work, we focus on the beam dynamics and beam associated uncertainties, which are important and must be better understood. We will discuss the electrostatic quadrupole system, particularly the hardware-related quad plate alignment and the quad extension and readout system. We will review the beam dynamics in the muon storage ring, present discussions on the beam related systematic errors, simulate the 3D electric fields of the electrostatic quadrupoles and examine the beam resonances. We will use a fast rotation analysis to study the muon radial momentum distribution, which provides the key input for evaluating the electric field correction to the measured $a_\mu$.
Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Barker, W. Howard
2004-07-01
The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeated sampling, and averaging those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of ~3.
NASA Technical Reports Server (NTRS)
Rahmat-Samii, Y.
1983-01-01
Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
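The classical Ruze (1966) result underlying this model relates gain loss to the rms surface error δ as G/G0 = exp(−(4πδ/λ)²). A minimal sketch for a uniform-rms reflector; the generalized model in the abstract additionally allows annulus-dependent rms values and illumination tapers, which this sketch does not attempt:

    import numpy as np

    def ruze_gain_loss_db(rms_m, wavelength_m):
        """Gain loss (dB) for a reflector with rms surface error rms_m."""
        delta_phase = 4.0 * np.pi * rms_m / wavelength_m
        return 10.0 * np.log10(np.exp(-delta_phase**2))

    # Example: 0.5 mm rms surface error at 32 GHz (lambda ~ 9.4 mm)
    print(f"{ruze_gain_loss_db(0.5e-3, 9.4e-3):.2f} dB")   # ~ -1.9 dB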
1983-12-01
Narrowband (LPC-10) Vocoder Performance Under Combined Effects of Random Bit Errors and Jet Aircraft Noise. Smith, C. P.; Rome Air Development Center, Griffiss AFB, NY. In-house report, June 1982 - September 1983 (AD-A141 333). Recoverable abstract fragment: the ... Compartment and NCA Compartment noise conditions were alike in their effects on overall vocoder performance.
Chang, Matthew S; Minaya, Maria T; Cheng, Jianfeng; Connor, Bradley A; Lewis, Suzanne K; Green, Peter H R
2011-10-01
Small intestinal bacterial overgrowth (SIBO) is one cause of a poor response to a gluten-free diet (GFD) and persistent symptoms in celiac disease. Rifaximin has been reported to improve symptoms in non-controlled trials. To determine the effect of rifaximin on gastrointestinal symptoms and lactulose-hydrogen breath tests in patients with poorly responsive celiac disease. A single-center, double-blind, randomized, controlled trial of patients with biopsy-proven celiac disease and persistent gastrointestinal symptoms despite a GFD was conducted. Patients were randomized to placebo (n = 25) or rifaximin (n = 25) 1,200 mg daily for 10 days. They completed the Gastrointestinal Symptom Rating Scale (GSRS) and underwent lactulose-hydrogen breath tests at weeks 0, 2, and 12. An abnormal breath test was defined as: (1) a rise in hydrogen of ≥20 parts per million (ppm) within 100 min, or (2) two peaks ≥20 ppm over baseline. GSRS scores were unaffected by treatment with rifaximin, regardless of baseline breath tests. In a multivariable regression model, the duration of patients' gastrointestinal symptoms significantly predicted their overall GSRS scores (estimate 0.029, p < 0.006). According to criteria 1 and 2, respectively, SIBO was present in 55 and 8% of patients at baseline, intermittently present in 28 and 20% given placebo, and 28 and 12% given rifaximin. There was no difference in the prevalence of SIBO between placebo and treatment groups at weeks 2 and 12. Rifaximin does not improve patients' reporting of gastrointestinal symptoms and hydrogen breath tests do not reliably identify who will respond to antibiotic therapy.
78 FR 70864 - Metaldehyde; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-27
... plus cob with husks removed at 0.05 ppm; grass, forage at 1.5 ppm; grass, hay at 1.8 ppm; leaf petioles...: ``Metaldehyde; Human Health Risk Assessment for Proposed Uses on Grass Grown for Seed, Leaf Petioles [Crop....10 ppm; grass, forage from 1.5 ppm to 2.0 ppm; grass, hay from 1.8 ppm to 2.0 ppm; leaf petioles...
Insight into organic reactions from the direct random phase approximation and its corrections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruzsinszky, Adrienn; Zhang, Igor Ying; Scheffler, Matthias
2015-10-14
The performance of the random phase approximation (RPA) and beyond-RPA approximations for the treatment of electron correlation is benchmarked on three different molecular test sets. The test sets are chosen to represent three typical sources of error which can contribute to the failure of most density functional approximations in chemical reactions. The first test set (atomization and n-homodesmotic reactions) offers a gradually increasing balance of error from the chemical environment. The second test set (Diels-Alder reaction cycloaddition = DARC) reflects more the effect of weak dispersion interactions in chemical reactions. Finally, the third test set (self-interaction error 11 = SIE11) represents reactions which are exposed to noticeable self-interaction errors. This work seeks to answer whether any one of the many-body approximations considered here successfully addresses all these challenges.
Quantifying Adventitious Error in a Covariance Structure as a Random Effect
Wu, Hao; Browne, Michael W.
2017-01-01
We present an approach to quantifying errors in covariance structures in which adventitious error, identified as the process underlying the discrepancy between the population and the structured model, is explicitly modeled as a random effect with a distribution, and the estimated dispersion parameter of this distribution gives a measure of misspecification. Analytical properties of the resultant procedure are investigated and the measure of misspecification is found to be related to the RMSEA. An algorithm is developed for numerical implementation of the procedure. The consistency and asymptotic sampling distributions of the estimators are established under a new asymptotic paradigm and an assumption weaker than the standard Pitman drift assumption. Simulations validate the asymptotic sampling distributions and demonstrate the importance of accounting for the variations in the parameter estimates due to adventitious error. Two examples are also given as illustrations. PMID:25813463
Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alcock, Simon G., E-mail: simon.alcock@diamond.ac.uk; Nistea, Ioana; Sawhney, Kawal
2016-05-15
We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM's autocollimator adds into the overall measured value of the mirror's slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.
Random Weighting, Strong Tracking, and Unscented Kalman Filter for Soft Tissue Characterization.
Shin, Jaehyun; Zhong, Yongmin; Oetomo, Denny; Gu, Chengfan
2018-05-21
This paper presents a new nonlinear filtering method based on the Hunt-Crossley model for online nonlinear soft tissue characterization. This method overcomes the problem of performance degradation in the unscented Kalman filter due to contact model error. It adopts the concept of Mahalanobis distance to identify contact model error, and further incorporates a scaling factor in the predicted state covariance to compensate for the identified model error. This scaling factor is determined according to the principle of innovation orthogonality to avoid the cumbersome computation of the Jacobian matrix, and the random weighting concept is adopted to improve the estimation accuracy of the innovation covariance. A master-slave robotic indentation system is developed to validate the performance of the proposed method. Simulation and experimental results as well as comparison analyses demonstrate the efficacy of the proposed method for online characterization of soft tissue parameters in the presence of contact model error.
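The model-error detection step described above, a Mahalanobis distance test on the filter innovation, can be sketched as follows; the dimensions and threshold are illustrative, and the paper's random-weighting estimate of the innovation covariance is simplified here to a given matrix:

    import numpy as np
    from scipy.stats import chi2

    def model_error_detected(innovation, S, alpha=0.05):
        """Chi-square test of the squared Mahalanobis distance of the
        innovation against its predicted covariance S."""
        d2 = innovation @ np.linalg.solve(S, innovation)
        return d2 > chi2.ppf(1.0 - alpha, df=innovation.size)

    # Illustrative 2-D innovation and predicted innovation covariance
    S = np.array([[0.5, 0.1], [0.1, 0.4]])
    nu = np.array([2.0, -1.5])
    if model_error_detected(nu, S):
        # Inflate the predicted state covariance by a scaling factor > 1
        # so the filter tracks the data more closely (strong-tracking idea)
        print("contact-model error detected: apply covariance scaling")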
NASA Astrophysics Data System (ADS)
Krisciunas, Kevin
2007-12-01
A gnomon, or vertical pointed stick, can be used to determine the north-south direction at a site, as well as one's latitude. If one has accurate time and knows one's time zone, it is also possible to determine one's longitude. From observations on the first day of winter and the first day of summer one can determine the obliquity of the ecliptic. Since we can obtain accurate geographical coordinates from Google Earth or a GPS device, analysis of a set of shadow-length measurements can be used by students to learn about astronomical coordinate systems, time systems, systematic errors, and random errors. Systematic latitude errors of student datasets are typically 30 nautical miles (0.5 degree) or more, but with care one can achieve systematic and random errors of less than 8 nautical miles. One of the advantages of this experiment is that it can be carried out during the day. Also, it is possible to determine if a student has made up his data.
Biometrics encryption combining palmprint with two-layer error correction codes
NASA Astrophysics Data System (ADS)
Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang
2017-07-01
To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometrics encryption method is proposed based on combining palmprint with two-layer error correction codes. First, the randomly generated original keys are encoded by convolutional and cyclic two-layer coding. The first layer uses a convolutional code to correct burst errors. The second layer uses a cyclic code to correct random errors. Then, the palmprint features are extracted from the palmprint images. Next, they are fused together by an XOR operation, and the result is stored in a smart card. Finally, the original keys are extracted by XORing the information in the smart card with the user's palmprint features and then decoding with the convolutional and cyclic two-layer code. The experimental results and security analysis show that the method can recover the original keys completely. The proposed method is more secure than a single password factor, and has higher accuracy than a single biometric factor.
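The binding and recovery steps described above amount to a fuzzy-commitment construction: the codeword XORed with the palmprint feature vector is stored, and the key is recovered when a fresh feature vector is close enough for the code to correct the residual bits. A toy sketch, with a plain repetition code standing in for the convolutional-plus-cyclic layers used in the paper:

    import numpy as np

    rng = np.random.default_rng(11)
    key = rng.integers(0, 2, 16)                # original key bits
    codeword = np.repeat(key, 5)                # toy 5x repetition code
    enroll = rng.integers(0, 2, codeword.size)  # enrolled palmprint features
    stored = codeword ^ enroll                  # helper data on the smart card

    # Verification: a noisy reading of the same palm (3% bit flips)
    flips = (rng.random(codeword.size) < 0.03).astype(int)
    query = enroll ^ flips
    recovered = stored ^ query                  # codeword plus residual noise
    decoded = (recovered.reshape(-1, 5).sum(axis=1) >= 3).astype(int)
    print("key recovered:", np.array_equal(decoded, key))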
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
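A rough sketch of the two MSEP_uncertain(X) components as described: the squared bias estimated from hindcasts, and the model variance from an ensemble varying structure, parameters, and inputs. All numbers are synthetic, and the random-effects ANOVA partitioning of the variance term is not attempted here:

    import numpy as np

    rng = np.random.default_rng(2)
    obs = rng.normal(8.0, 1.0, 40)              # observed yields (t/ha)
    hindcast = obs + rng.normal(0.4, 0.8, 40)   # one model's hindcasts
    squared_bias = np.mean(hindcast - obs) ** 2

    # Ensemble of predictions at one target situation, varying model
    # structure, parameters, and inputs; its spread is the variance term
    ensemble = rng.normal(8.5, 0.9, 200)
    model_variance = ensemble.var(ddof=1)

    msep_uncertain = squared_bias + model_variance
    print(f"MSEP_uncertain(X) ~ {msep_uncertain:.2f}")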
Development of a Self-Calibrated MEMS Gyrocompass for North-Finding and Tracking
NASA Astrophysics Data System (ADS)
Prikhodko, Igor P.
This Ph.D. dissertation presents the development of a microelectromechanical (MEMS) gyrocompass for north-finding and north-tracking applications. The central part of this work enabling these applications is control and self-calibration architectures for drift mitigation over thermal environments, validated using a MEMS quadruple mass gyroscope. The thesis contributions are the following: • Adapted and implemented a bias and scale-factor drift compensation algorithm relying on temperature self-sensing for MEMS gyroscopes with high quality factors. The real-time self-compensation reduced the total bias error to 2 °/hr and the scale-factor error to 500 ppm over a temperature range of 25 °C to 55 °C (on par with the state-of-the-art). • Adapted and implemented a scale-factor self-calibration algorithm previously employed for the macroscale hemispherical resonator gyroscope to MEMS Coriolis vibratory gyroscopes. An accuracy of 100 ppm was demonstrated by simultaneously measuring the true and estimated scale-factors over temperature variations (on par with the state-of-the-art). • Demonstrated north-finding accuracy satisfying a typical mission requirement of 4 meter target location error at 1 kilometer stand-off distance (on par with GPS accuracy). Analyzed north-finding mechanization trade-offs for MEMS vibratory gyroscopes and demonstrated measurements of the Earth's rotation (15 °/hr). • Demonstrated, for the first time, an angle-measuring MEMS gyroscope operation for north-tracking applications in a +/-500 °/s rate range and 100 Hz bandwidth, eliminating both bandwidth and range constraints of conventional open-loop Coriolis vibratory gyroscopes. • Investigated the hypothesis that surface-tension-driven glass-blowing microfabrication can create highly spherical shells for 3-D MEMS. Without any trimming or tuning of the natural frequencies, a 1 MHz glass-blown 3-D microshell resonator demonstrated a 0.63% frequency mismatch between two degenerate 4-node wineglass modes. • Multi-axis rotation detection for a nuclear magnetic resonance (NMR) gyroscope was proposed and developed. The analysis of cross-axis sensitivities for the NMR gyroscope was performed. The framework for the analysis of NMR gyroscope dynamics for both open-loop and closed-loop modes of operation was developed.
Hybrid computer technique yields random signal probability distributions
NASA Technical Reports Server (NTRS)
Cameron, W. D.
1965-01-01
Hybrid computer determines the probability distributions of instantaneous and peak amplitudes of random signals. This combined digital and analog computer system reduces the errors and delays of manual data analysis.
Pérula de Torres, Luis Angel; Pulido Ortega, Laura; Pérula de Torres, Carlos; González Lama, Jesús; Olaya Caro, Inmaculada; Ruiz Moral, Roger
2014-10-21
To evaluate the effectiveness of an intervention based on motivational interviewing to reduce medication errors in chronic patients over 65 with polypharmacy. Cluster randomized trial that included doctors and nurses of 16 Primary Care centers and chronic patients over 65 years with polypharmacy. The professionals were assigned to the experimental or the control group using stratified randomization. Interventions consisted of training of professionals and revision of patient treatments, with application of motivational interviewing in the experimental group, versus the usual approach in the control group. The primary endpoint (medication error) was analyzed at the individual level, and was estimated with the absolute risk reduction (ARR), relative risk reduction (RRR), number needed to treat (NNT) and by multiple logistic regression analysis. Thirty-two professionals were randomized (19 doctors and 13 nurses); 27 of them recruited 154 patients consecutively (13 professionals in the experimental group recruited 70 patients and 14 professionals in the control group recruited 84 patients) and completed 6 months of follow-up. The mean age of patients was 76 years (68.8% women). A decrease in the average number of medication errors was observed over the period. The reduction was greater in the experimental than in the control group (F=5.109, P=.035). ARR 29% (95% confidence interval [95% CI] 15.0-43.0%), RRR 0.59 (95% CI: 0.31-0.76), and NNT 3.5 (95% CI 2.3-6.8). Motivational interviewing is more efficient than the usual approach to reduce medication errors in patients over 65 with polypharmacy. Copyright © 2013 Elsevier España, S.L.U. All rights reserved.
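The effect measures quoted above follow their usual definitions (ARR = risk difference, RRR = ARR divided by the control-group risk, NNT = 1/ARR). A sketch with illustrative group risks chosen to be consistent with the reported values; the risks themselves are not given in the abstract:

    def effect_measures(risk_control, risk_treated):
        arr = risk_control - risk_treated   # absolute risk reduction
        rrr = arr / risk_control            # relative risk reduction
        nnt = 1.0 / arr                     # number needed to treat
        return arr, rrr, nnt

    # Hypothetical risks of a medication error matching the abstract
    arr, rrr, nnt = effect_measures(risk_control=0.49, risk_treated=0.20)
    print(f"ARR={arr:.2f}, RRR={rrr:.2f}, NNT={nnt:.1f}")  # 0.29, 0.59, 3.5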
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
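To make the role of the Gumbel assumption concrete, the short simulation below draws i.i.d. Gumbel errors, applies utility maximization, and compares the empirical choice shares with the closed-form logit probabilities. It is a sketch, not the authors' test; the utility values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
V = np.array([1.0, 0.5, 0.0])                  # systematic utilities (illustrative)
eps = rng.gumbel(size=(200_000, 3))            # the distributional assumption under test
choices = np.argmax(V + eps, axis=1)           # utility maximization
empirical = np.bincount(choices, minlength=3) / len(choices)
logit = np.exp(V) / np.exp(V).sum()            # closed-form MNL probabilities
print(empirical, logit)                        # the two should agree closely
```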
Dörnberger, V; Dörnberger, G
1987-01-01
Comparative volumetry was performed on 99 testes from corpses (age at death between 26 and 86 years). With the testes left in their surrounding capsules (without scrotal skin and tunica dartos), volume was first measured by real-time sonography in a water bath (7.5 MHz linear scan); afterwards length, breadth and height were measured with a sliding caliper, the largest diameter (the length) of the testis was determined with Schirren's circle, and finally testis size was assessed with Prader's orchidometer. The testes were then surgically exposed and their volume (in milliliters) was determined according to Archimedes' principle. Whereas a random mean error of 7% must be accepted even for the Archimedes method, sonographic determination of the volume showed a random mean error of 15%. The accuracy of measurement increases with increasing volume, so both methods should be used with caution for volumes below 4 ml, since the possibilities of error are rather large. The volumes measured with Prader's orchidometer were on average higher (+27%), with a random mean error of 19.5%. With Schirren's circle the mean value was even higher (+52%) in comparison to the "real" volume by Archimedes' principle, with a random mean error of 19%. Measurement of the testes within their capsules by sliding caliper can be optimized by applying a correction factor f(caliper) = 0.39 when calculating the testis volume as an ellipsoid; this yields the same mean value as Archimedes' principle with a standard mean error of only 9%. If the correction factor from real-time sonography of the testis, f(sono) = 0.65, were applied instead, the mean value of the caliper measurements would be 68.8% too high, with a standard mean error of 20.3%. For caliper measurements, the calculation of testis volume as an ellipsoid should therefore use the smaller factor f(caliper) = 0.39, because in this way the capsule surrounding the testis and the epididymis are taken into account.
77 FR 14291 - Penthiopyrad; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-09
...-methyl-3-trifluoromethyl-1H-pyrazole-4-carboxamide) in animal commodities hog, meat at 0.01 ppm; hog, fat...; cattle, meat at 0.05 ppm; cattle, fat at 0.05 ppm; cattle, liver at 0.2 ppm; cattle, kidney at 0.1 ppm; cattle, meat byproducts at 0.2 ppm; sheep, meat at 0.01 ppm; sheep, fat at 0.02 ppm; sheep, liver at 0.05...
[Limonoids in Phellodendron amurense (Kihada)].
Miyake, M; Inaba, N; Ayano, S; Ozaki, Y; Maeda, H; Ifuku, Y; Hasegawa, S
1992-05-01
Limonoids and their glucosides in the seeds and barks of Phellodendron amurense (Kihada) were analyzed. The seeds contained limonin (1950 ppm), obakunone (20 ppm), limonin 17-beta-D-glucopyranoside (820 ppm) and obakunone 17-beta-D-glucopyranoside (1360 ppm). The barks contained limonin (6760 ppm), obakunone (1240 ppm) and nomilin (270 ppm).
NASA Astrophysics Data System (ADS)
Jung, Jae Hong; Jung, Joo-Young; Bae, Sun Hyun; Moon, Seong Kwon; Cho, Kwang Hwan
2016-10-01
The purpose of this study was to compare patient setup deviations for different image-guided protocols (weekly vs. biweekly) that are used in TomoDirect three-dimensional conformal radiotherapy (TD-3DCRT) for whole-breast radiation therapy (WBRT). A total of 138 defined megavoltage computed tomography (MVCT) image sets from 46 breast cancer cases were divided into two groups based on the imaging acquisition times: weekly or biweekly. The mean error, three-dimensional setup displacement error (3D-error), systematic error (Σ), and random error (σ) were calculated for each group. The 3D-errors were 4.29 ± 1.11 mm and 5.02 ± 1.85 mm for the weekly and biweekly groups, respectively; the biweekly error was 14.6% higher than the weekly error. The systematic errors in the roll angle and the x, y, and z directions were 0.48°, 1.72 mm, 2.18 mm, and 1.85 mm for the weekly protocol and 0.21°, 1.24 mm, 1.39 mm, and 1.85 mm for the biweekly protocol. Random errors in the roll angle and the x, y, and z directions were 25.7%, 40.6%, 40.0%, and 40.8% higher in the biweekly group than in the weekly group. The proportions of treatments with setup displacement less than 5 mm in the x, y, and z directions were 98.6%, 91.3%, and 94.2% in the weekly group and 94.2%, 89.9%, and 82.6% in the biweekly group. Moreover, the proportions of roll angles within 0-1° were 79.7% and 89.9% in the weekly and biweekly groups, respectively. Overall, the evaluation of setup deviations for the two protocols revealed no significant differences (p > 0.05). Reducing the frequency of MVCT imaging could have promising effects on imaging doses and machine times during treatment. However, the biweekly protocol was associated with increased random setup deviations in the treatment. We have demonstrated a biweekly protocol of TD-3DCRT for WBRT, and we anticipate that our method may provide an alternative approach for considering the uncertainties in the patient setup.
Alexander, John H; Levy, Elliott; Lawrence, Jack; Hanna, Michael; Waclawski, Anthony P; Wang, Junyuan; Califf, Robert M; Wallentin, Lars; Granger, Christopher B
2013-09-01
In ARISTOTLE, apixaban resulted in a 21% reduction in stroke, a 31% reduction in major bleeding, and an 11% reduction in death. However, approval of apixaban was delayed to investigate a statement in the clinical study report that "7.3% of subjects in the apixaban group and 1.2% of subjects in the warfarin group received, at some point during the study, a container of the wrong type." Rates of study medication dispensing error were characterized through reviews of study medication container tear-off labels in 6,520 participants from randomly selected study sites. The potential effect of dispensing errors on study outcomes was statistically simulated in sensitivity analyses in the overall population. The rate of medication dispensing error resulting in treatment error was 0.04%. Rates of participants receiving at least 1 incorrect container were 1.04% (34/3,273) in the apixaban group and 0.77% (25/3,247) in the warfarin group. Most of the originally reported errors were data entry errors in which the correct medication container was dispensed but the wrong container number was entered into the case report form. Sensitivity simulations in the overall trial population showed no meaningful effect of medication dispensing error on the main efficacy and safety outcomes. Rates of medication dispensing error were low and balanced between treatment groups. The initially reported dispensing error rate was the result of data recording and data management errors and not true medication dispensing errors. These analyses confirm the previously reported results of ARISTOTLE. © 2013.
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2002-01-01
Two new equivalent linearization implementations for geometrically nonlinear random vibrations are presented. Both implementations are based upon a novel approach for evaluating the nonlinear stiffness within commercial finite element codes and are suitable for use with any finite element code having geometrically nonlinear static analysis capabilities. The formulation includes a traditional force-error minimization approach and a relatively new version of a potential energy-error minimization approach, which has been generalized for multiple degree-of-freedom systems. Results for a simply supported plate under random acoustic excitation are presented and comparisons of the displacement root-mean-square values and power spectral densities are made with results from a nonlinear time domain numerical simulation.
Olfactory recognition memory is disrupted in young mice with chronic low-level lead exposure.
Flores-Montoya, Mayra Gisel; Alvarez, Juan Manuel; Sobin, Christina
2015-07-02
Chronic developmental lead exposure yielding very low blood lead burden is an unresolved child public health problem. Few studies have attempted to model neurobehavioral changes in young animals following very low level exposure, and studies are needed to identify tests that are sensitive to the neurobehavioral changes that may occur. Mechanisms of action are not yet known however results have suggested that hippocampus/dentate gyrus may be uniquely vulnerable to early chronic low-level lead exposure. This study examined the sensitivity of a novel odor recognition task to differences in pre-adolescent C57BL/6J mice chronically exposed from birth to PND 28, to 0 ppm (control), 30 ppm (low-dose), or 330 ppm (higher-dose) lead acetate (N=33). Blood lead levels (BLLs) determined by ICP-MS ranged from 0.02 to 20.31 μg/dL. Generalized linear mixed model analyses with litter as a random effect showed a significant interaction of BLL×sex. As BLLs increased olfactory recognition memory decreased in males. Among females, non-linear effects were observed at lower but not higher levels of lead exposure. The novel odor detection task is sensitive to effects associated with early chronic low-level lead exposure in young C57BL/6J mice. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrow, P.E.; Utell, M.J.; Bauer, M.A.
1992-02-01
Symptoms and changes in pulmonary function of subjects with chronic obstructive pulmonary disease (COPD) and elderly normal subjects, induced by a 4-h exposure to 0.3 ppm NO2, were investigated using a double-blind, crossover design with purified air. The 5-day experimental protocol required approximately 2 wk with at least a 5-day separation between randomized 4-h exposures to either NO2 or air which included several periods of exercise. Over a 2-yr period, COPD subjects, all with a history of smoking, consisting of 13 men and 7 women (mean age of 60.0 yr) and 20 elderly normal subjects of comparable age and sex were evaluated. During intermittent light exercise, COPD subjects demonstrated progressive decrements in FVC and FEV1 compared with baseline with 0.3 ppm NO2, but not with air. Differences in percent changes from baseline data (air-NO2) showed an equivocal reduction in FVC by repeated measures of analysis of variance and cross-over t tests (p less than 0.10). Subgroup analyses suggested that responsiveness to NO2 decreased with severity of COPD; in elderly normal subjects, NO2-induced reduction in FEV1 was greater among smokers than never-smokers. A comparison of COPD and elderly normal subjects also revealed distinctions in NO2-induced responsiveness.
Pourhossein, Zohreh; Qotbi, Ali Ahmad Alaw; Seidavi, Alireza; Laudadio, Vito; Centoducati, Gerardo; Tufarelli, Vincenzo
2015-01-01
This experiment was conducted to evaluate the effects of different levels of sweet orange (Citrus sinensis) peel extract (SOPE) on humoral immune system responses in broiler chickens. Three hundred 1-day-old broilers (Ross-308) were randomly allocated to treatments varying in supplemental SOPE added to the drinking water. The experimental groups consisted of three treatments fed for 42 days as follows: a control treatment without the extract, a treatment containing 1000 ppm of SOPE and a treatment containing 1250 ppm of SOPE. All treatments were isocaloric and isonitrogenous. Broilers were vaccinated with Newcastle disease virus (NDV), avian influenza (AI), infectious bursal disease (IBD) and infectious bronchitis virus (IBV) vaccines. Antibody titer response to sheep red blood cells (SRBC) was higher in the group fed 1250 ppm of SOPE (P < 0.05), as were immunoglobulin G (IgG) and IgM. Similarly, antibody titer responses to all vaccines were consistently elevated (P < 0.05) by SOPE enrichment in a dose-dependent manner. Relative weights of spleen and bursa of Fabricius were unaffected by treatments. Dietary SOPE supplementation may improve the immune response and disease resistance, indicating that it can constitute a useful additive in broiler feeding. Thus, supplying SOPE in rations may help to improve the relative immune response in broiler chickens. © 2014 Japanese Society of Animal Science.
Binns, Helen J; Gray, Kimberly A; Chen, Tianyue; Finster, Mary E; Peneff, Nicholas; Schaefer, Peter; Ovsey, Victor; Fernandes, Joyce; Brown, Mavis; Dunlap, Barbara
2004-10-01
This study was designed primarily to evaluate the effectiveness of landscape coverings in reducing the potential for exposure to lead-contaminated soil in an urban neighborhood. Residential properties were randomized into three groups: application of ground coverings/barriers plus placement of a raised garden bed (RB), application of ground coverings/barriers only (no raised bed, NRB), and control. Outcomes evaluated were soil lead concentration (employing a weighting method to assess acute hazard soil lead [areas not fully covered] and potential hazard soil lead [all soil surfaces regardless of covering status]), density of landscape coverings (6 = heavy, > 90% covered; 1 = bare, < 10% covered), lead tracked onto carpeted entryway floor mats, and entryway floor dust lead loadings. Over 1 year, the intervention groups had significantly reduced acute hazard soil lead concentration (median change: RB, -478 ppm; NRB, -698 ppm; control, +52 ppm; Kruskal-Wallis, P = 0.02), enhanced landscape coverings (mean change in score: RB, +0.6; NRB, +1.5; control, -0.6; ANOVA, P < 0.001), and a 50% decrease in lead tracked onto the floor mats. The potential hazard soil lead concentration and the entryway floor dust lead loading did not change significantly. Techniques evaluated by this study are feasible for use by property owners but will require continued maintenance. The long-term sustainability of the method needs further examination.
Fitzgerald, John S; Johnson, LuAnn; Tomkinson, Grant; Stein, Jesse; Roemmich, James N
2018-05-01
Mechanography during the vertical jump may enhance screening and help determine mechanistic causes underlying changes in physical performance. The utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the test-retest reliability of eight jump execution variables assessed from mechanography. Thirty-two women (mean ± SD: age 20.8 ± 1.3 yr) and 16 men (age 22.1 ± 1.9 yr) attended a familiarization session and two testing sessions, all one week apart. Participants performed two variations of the squat jump, with squat depth either self-selected or controlled to 80° knee flexion using a goniometer. Test-retest reliability was quantified as the systematic error (using effect size between jumps), random error (using coefficients of variation), and test-retest correlations (using intra-class correlation coefficients). Overall, jump execution variables demonstrated acceptable reliability, evidenced by small systematic errors (mean ± 95% CI: 0.2 ± 0.07), moderate random errors (mean ± 95% CI: 17.8 ± 3.7%), and very strong test-retest correlations (range: 0.73-0.97). Differences in random errors between the controlled and self-selected protocols were negligible (mean ± 95% CI: 1.3 ± 2.3%). Jump execution variables demonstrated acceptable reliability, with no meaningful differences between the controlled and self-selected jump protocols. To simplify testing, a self-selected jump protocol can be used to assess force-time variables with negligible impact on measurement error.
NASA Astrophysics Data System (ADS)
Zhang, Y. K.; Liang, X.
2014-12-01
Effects of aquifer heterogeneity and of uncertainties in the source/sink and in the initial and boundary conditions of a groundwater flow model on the spatiotemporal variations of the groundwater level, h(x,t), were investigated. Analytical solutions for the variance and covariance of h(x,t) in an unconfined aquifer described by a linearized Boussinesq equation with a white-noise source/sink and a random transmissivity field were derived. It was found that in a typical aquifer the error in h(x,t) at early times is mainly caused by the random initial condition; this error decreases with time and approaches a constant error at later times. The period during which the effect of the random initial condition is significant may last a few hundred days in most aquifers. The constant error in groundwater level at later times is due to the combined effects of the uncertain source/sink and flux boundary: the closer to the flux boundary, the larger the error. The error caused by the uncertain head boundary is limited to a narrow zone near that boundary but remains more or less constant over time. The effect of heterogeneity is to increase the variation of the groundwater level, and the maximum effect occurs close to the constant-head boundary because of the linear mean hydraulic gradient. The correlation of the groundwater level decreases with the temporal interval and the spatial distance. In addition, heterogeneity enhances the correlation of the groundwater level, especially at larger time intervals and small spatial distances.
Tarrab, Leticia; Garcia, Carlos M.; Cantero, Mariano I.; Oberg, Kevin
2012-01-01
This work presents a systematic analysis quantifying the role of the presence of turbulence fluctuations on uncertainties (random errors) of acoustic Doppler current profiler (ADCP) discharge measurements from moving platforms. Data sets of three-dimensional flow velocities with high temporal and spatial resolution were generated from direct numerical simulation (DNS) of turbulent open channel flow. Dimensionless functions relating parameters quantifying the uncertainty in discharge measurements due to flow turbulence (relative variance and relative maximum random error) to sampling configuration were developed from the DNS simulations and then validated with field-scale discharge measurements. The validated functions were used to evaluate the role of the presence of flow turbulence fluctuations on uncertainties in ADCP discharge measurements. The results of this work indicate that random errors due to the flow turbulence are significant when: (a) a low number of transects is used for a discharge measurement, and (b) measurements are made in shallow rivers using high boat velocity (short time for the boat to cross a flow turbulence structure).
Enhanced growth, yield and physiological characteristics of rice under elevated carbon dioxide
NASA Astrophysics Data System (ADS)
Abzar, A.; Ahmad, Wan Juliana Wan; Said, Mohd Nizam Mohd; Doni, Febri; Zaidan, Mohd Waznul Adly Mohd; Fathurahman; Zain, Che Radziah Che Mohd
2018-04-01
Carbon dioxide (CO2) is rapidly increasing in the atmosphere. Because CO2 is an essential element for photosynthesis, how plants will perform under rising CO2 levels has attracted attention among scientists. Rice, one of the most important staple foods in the world, has been studied for its growth responses under elevated CO2. The present research was carried out to determine the growth and physiology of rice under elevated CO2. The experiment used a completely randomized design with elevated (800 ppm) and ambient CO2 treatments. Results showed that growth parameters such as plant height, tiller number and number of leaves per plant were increased by elevated CO2. The positive changes in plant physiology when exposed to the high CO2 concentration included significant increases (p < 0.05) in yield parameters such as panicle number, grain number per panicle, biomass and 1000-grain weight under the elevated CO2 of 800 ppm.
New Discoveries of REE and Radioactive Elements in Phosphogypsum from Romania
NASA Astrophysics Data System (ADS)
Maruta Iancu, Aurora; Georgeta Dumitras, Delia; Marincea, Stefan
2014-05-01
Phosphogypsum is a technogenic product remaining after the extraction of phosphoric acid from raw phosphate, mainly apatite. Some radioactive elements present in the original phosphate rock, consisting mainly of apatite (Ra-226, Th-232, U-238, Pb-210, Po-210, K-40), can also be found in the phosphogypsum. Determinations were carried out on phosphogypsum samples from Turnu Magurele (TM), Valea Calugareasca (VC), Navodari (N) and Bacau (B). The most important minor elements of phosphogypsum are Th and U. The isotope contents in the Bacau phosphogypsum samples are: U-238 (ppm) 40.50, 31.96, 17.49, 30.00, 31.00; Th-232 (ppm) 8.07, 6.07, 6.41, 7.80, 6.41. The radiometric analyses confirmed that the Bacau county phosphogypsum has higher concentrations of U, while the Th content is lower. For the Navodari samples the contents are U-238 (ppm) 37.00, 40.97, 10.84, 25.72 and Th-232 (ppm) 6.82, 7.04, 6.19, 7.55. For the Turnu Magurele samples: U-238 (ppm) 1.51, 21.92, 28.71, 6.92, 10.79, 11.00 and Th-232 (ppm) 3.87, 7.29, 10.65, 6.22, 6.77, 5.45. For the Valea Calugareasca samples: U-238 (ppm) 17.60, 22.35, 17.93, 18.78 and Th-232 (ppm) 5.98, 7.12, 7.85, 8.07. As for the phosphogypsum analyzed from Bacau, the radiometric results for the TM, VC and N zones also indicate a high content of U-238 and a lower content of Th-232. In conclusion, based on the analyses carried out on samples from the four areas, and on the higher U and lower Th contents, we are dealing with phosphogypsum that results from a sedimentary-type rock. Inductively coupled plasma atomic emission spectrometry (ICP-AES) analyses performed on selected samples of phosphogypsum from the four deposits showed that the contents of the main REE and associated elements (cerium, erbium, lanthanum, neodymium, thorium, ytterbium) are typical of phosphogypsum issued from the processing of sedimentary raw phosphates. The results are: Turnu Magurele — Ce (ppm) 29.1-663.1; Er (ppm) 0.9-11.7; La (ppm) 22.7-469.0; Nd (ppm) 21.1-260.5; Th (ppm) 0.3-20.8; Yb (ppm) 1.1-6.8. Valea Calugareasca — Ce (ppm) 30.2-454.2; Er (ppm) 0.8-7.3; La (ppm) 35.7-322.5; Nd (ppm) 22.3-188.2; Th (ppm) 0.0-12.8; Yb (ppm) 1.6-5.0. Navodari — Ce (ppm) 3.9-165.0; Er (ppm) 1.8-7.7; La (ppm) 14.5-135.6; Nd (ppm) 3.8-90.6; Th (ppm) 0.8-6.5; Yb (ppm) 1.8-6.1. Bacau — Ce (ppm) 19.3-174.8; Er (ppm) 13.1-18.8; La (ppm) 36.2-134.2; Nd (ppm) 24.5-104.5; Th (ppm) 1.7-5.2; Yb (ppm) 1.9-6.6.
The Hurst Phenomenon in Error Estimates Related to Atmospheric Turbulence
NASA Astrophysics Data System (ADS)
Dias, Nelson Luís; Crivellaro, Bianca Luhm; Chamecki, Marcelo
2018-05-01
The Hurst phenomenon is a well-known feature of long-range persistence first observed in hydrological and geophysical time series by E. Hurst in the 1950s. It has also been found in several cases in turbulence time series measured in the wind tunnel, the atmosphere, and in rivers. Here, we conduct a systematic investigation of the value of the Hurst coefficient H in atmospheric surface-layer data, and its impact on the estimation of random errors. We show that usually H > 0.5, which implies the non-existence (in the statistical sense) of the integral time scale. Since the integral time scale is present in the Lumley-Panofsky equation for the estimation of random errors, this has important practical consequences. We estimated H in two principal ways: (1) with an extension of the recently proposed filtering method to estimate the random error (H_p), and (2) with the classical rescaled range introduced by Hurst (H_R). Other estimators were tried but were found less able to capture the statistical behaviour of the large scales of turbulence. Using data from three micrometeorological campaigns we found that both first- and second-order turbulence statistics display the Hurst phenomenon. Usually, H_R is larger than H_p for the same dataset, raising the possibility that one, or even both, of these estimators may be biased. For the relative error, we found that the errors estimated with the approach adopted by us, which we call the relaxed filtering method and which takes into account the occurrence of the Hurst phenomenon, are larger than both the filtering method and the classical Lumley-Panofsky estimates. Finally, we found that there is no apparent relationship between H and the Obukhov stability parameter. The relative errors, however, do show stability dependence, particularly in the case of the error of the kinematic momentum flux in unstable conditions, and that of the kinematic sensible heat flux in stable conditions.
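For readers wanting to reproduce the classical H_R estimate, the sketch below implements a plain rescaled-range regression in Python; window handling and bias corrections are simplified relative to the estimators compared in the study.

```python
import numpy as np

def rescaled_range_hurst(x, min_win=8):
    # Classical R/S estimate of the Hurst coefficient H: for doubling window
    # sizes n, average R/S over non-overlapping segments, then fit the slope
    # of log(R/S) against log(n).
    x = np.asarray(x, dtype=float)
    N = len(x)
    sizes, rs_vals = [], []
    n = min_win
    while n <= N // 2:
        rs = []
        for start in range(0, N - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())    # mean-adjusted cumulative sum
            r = dev.max() - dev.min()            # range R
            s = seg.std()                        # standard deviation S
            if s > 0:
                rs.append(r / s)
        sizes.append(n)
        rs_vals.append(np.mean(rs))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope  # H: ~0.5 for white noise, >0.5 for long-range persistence

rng = np.random.default_rng(0)
print(rescaled_range_hurst(rng.standard_normal(4096)))  # ~0.5-0.6 (small-sample bias)
```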
Tridandapani, Srini; Ramamurthy, Senthil; Provenzale, James; Obuchowski, Nancy A; Evanoff, Michael G; Bhatti, Pamela
2014-08-01
To evaluate whether the presence of facial photographs obtained at the point-of-care of portable radiography leads to increased detection of wrong-patient errors. In this institutional review board-approved study, 166 radiograph-photograph combinations were obtained from 30 patients. Consecutive radiographs from the same patients resulted in 83 unique pairs (ie, a new radiograph and prior, comparison radiograph) for interpretation. To simulate wrong-patient errors, mismatched pairs were generated by pairing radiographs from different patients chosen randomly from the sample. Ninety radiologists each interpreted a unique randomly chosen set of 10 radiographic pairs, containing up to 10% mismatches (ie, error pairs). Radiologists were randomly assigned to interpret radiographs with or without photographs. The number of mismatches was identified, and interpretation times were recorded. Ninety radiologists with 21 ± 10 (mean ± standard deviation) years of experience were recruited to participate in this observer study. With the introduction of photographs, the proportion of errors detected increased from 31% (9 of 29) to 77% (23 of 30; P = .006). The odds ratio for detection of error with photographs to detection without photographs was 7.3 (95% confidence interval: 2.29-23.18). Observer qualifications, training, or practice in cardiothoracic radiology did not influence sensitivity for error detection. There is no significant difference in interpretation time for studies without photographs and those with photographs (60 ± 22 vs. 61 ± 25 seconds; P = .77). In this observer study, facial photographs obtained simultaneously with portable chest radiographs increased the identification of any wrong-patient errors, without substantial increase in interpretation time. This technique offers a potential means to increase patient safety through correct patient identification. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
A spatial error model with continuous random effects and an application to growth convergence
NASA Astrophysics Data System (ADS)
Laurini, Márcio Poletti
2017-10-01
We propose a spatial error model with continuous random effects based on Matérn covariance functions and apply this model to the analysis of income convergence processes (β-convergence). The use of a model with continuous random effects permits a clearer visualization and interpretation of the spatial dependency patterns, avoids the problems of defining neighborhoods in spatial econometrics models, and allows projecting the spatial effects for every possible location in the continuous space, circumventing the existing aggregations in discrete lattice representations. We apply this model approach to analyze the economic growth of Brazilian municipalities between 1991 and 2010 using unconditional and conditional formulations and a spatiotemporal model of convergence. The results indicate that the estimated spatial random effects are consistent with the existence of income convergence clubs for Brazilian municipalities in this period.
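As an illustration of the continuous random effect underlying such a model, the sketch below builds a Matérn covariance matrix over arbitrary locations and draws one realization of the spatial field. The parameterization (sigma2, rho, nu) is a common convention, not necessarily the paper's notation.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import gamma, kv  # kv: modified Bessel function of the second kind

def matern_cov(coords, sigma2=1.0, rho=1.0, nu=1.5):
    # Matérn covariance matrix over point locations; works for any set of
    # continuous coordinates, with no neighborhood structure required.
    d = cdist(coords, coords)
    d[d == 0] = 1e-10  # avoid the 0^nu * K_nu(0) indeterminacy on the diagonal
    scaled = np.sqrt(2 * nu) * d / rho
    cov = sigma2 * (2 ** (1 - nu) / gamma(nu)) * scaled ** nu * kv(nu, scaled)
    np.fill_diagonal(cov, sigma2)
    return cov

pts = np.random.default_rng(1).uniform(size=(50, 2))     # 50 random locations
K = matern_cov(pts, nu=1.5)              # nu=1.5: once-differentiable fields
field = np.linalg.cholesky(K + 1e-8 * np.eye(50)) @ \
        np.random.default_rng(2).standard_normal(50)     # one field realization
```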
Manure sampling procedures and nutrient estimation by the hydrometer method for gestation pigs.
Zhu, Jun; Ndegwa, Pius M; Zhang, Zhijian
2004-05-01
Three manure agitation procedures (vertical mixing, horizontal mixing, and no mixing) were examined in this study to determine their efficacy in producing a representative manure sample. The total solids content of manure from gestation pigs was found to be well correlated with the total nitrogen (TN) and total phosphorus (TP) concentrations in the manure, with highly significant correlation coefficients of 0.988 and 0.994, respectively. Linear correlations were observed between the TN and TP contents and the manure specific gravity (correlation coefficients: 0.991 and 0.987, respectively). Therefore, it may be inferred that the nutrients in pig manure can be estimated with reasonable accuracy by measuring the liquid manure specific gravity. A rapid testing method for manure nutrient contents (TN and TP) using a soil hydrometer was also evaluated. The results showed that the estimation error increased from ±10% to ±30% as the TN (from 1000 to 100 ppm) and TP (from 700 to 50 ppm) concentrations in the manure decreased. Data also showed that the hydrometer readings had to be taken within 10 s after mixing to avoid reading drift in specific gravity due to the settling of manure solids.
Larsson, William; Jalbert, Jocelyn; Gilbert, Roland; Cedergren, Anders
2003-03-15
The efficiency of azeotropic distillation and oven evaporation techniques for trace determination of water in oils has recently been questioned by the National Institute of Standards and Technology (NIST), on the basis of measurements of the residual water found after the extraction step. The results were obtained by volumetric Karl Fischer (KF) titration in a medium containing a large excess of chloroform (> or = 65%), a proposed prerequisite to ensure complete release of water from the oil matrix. In this work, the extent of this residual water was studied by means of a direct zero-current potentiometric technique using a KF medium containing more than 80% chloroform, which is well above the concentration recommended by NIST. A procedure is described that makes it possible to correct the results for dilution errors as well as for chemical interference effects caused by the oil matrix. The corrected values were found to be in the range of 0.6-1.5 ppm, which should be compared with the 12-34 ppm (uncorrected values) reported by NIST for the same oils. From this, it is concluded that the volumetric KF method used by NIST gives results that are much too high.
The Propagation of Errors in Experimental Data Analysis: A Comparison of Pre- and Post-Test Designs
ERIC Educational Resources Information Center
Gorard, Stephen
2013-01-01
Experimental designs involving the randomization of cases to treatment and control groups are powerful and under-used in many areas of social science and social policy. This paper reminds readers of the pre- and post-test, and the post-test only, designs, before explaining briefly how measurement errors propagate according to error theory. The…
Analysis of Errors Committed by Physics Students in Secondary Schools in Ilorin Metropolis, Nigeria
ERIC Educational Resources Information Center
Omosewo, Esther Ore; Akanbi, Abdulrasaq Oladimeji
2013-01-01
The study attempts to identify the types of errors committed by senior secondary school physics students in the Ilorin metropolis, and the influence of gender on the types of errors committed. Six (6) schools were purposively chosen for the study. One hundred and fifty-five students' scripts were randomly sampled for the study. Joint Mock physics essay questions…
Multiscale measurement error models for aggregated small area health data.
Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin
2016-08-01
Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in analyzing aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. © The Author(s) 2016.
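The attenuation reported above (underestimation of the regression coefficient when measurement error in the predictor is ignored) can be reproduced in a few lines. The simulation below is a deliberately simplified linear sketch of that mechanism, not the multiscale spatial model itself.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.standard_normal(n)              # "true" predictor at the finer level
y = 2.0 * x + rng.standard_normal(n)    # outcome with true coefficient 2.0
w = x + rng.normal(0.0, 1.0, n)         # error-prone (aggregated) predictor
beta_true = np.polyfit(x, y, 1)[0]      # ~2.0
beta_naive = np.polyfit(w, y, 1)[0]     # ~2.0 * var(x)/(var(x)+var(u)) = ~1.0
print(beta_true, beta_naive)            # naive slope is attenuated toward zero
```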
NASA Astrophysics Data System (ADS)
Jun, Brian; Giarra, Matthew; Golz, Brian; Main, Russell; Vlachos, Pavlos
2016-11-01
We present a methodology to mitigate the major sources of error associated with two-dimensional confocal laser scanning microscopy (CLSM) images of nanoparticles flowing through a microfluidic channel. The correlation-based velocity measurements from CLSM images are subject to random error due to the Brownian motion of nanometer-sized tracer particles, and a bias error due to the formation of images by raster scanning. Here, we develop a novel ensemble phase correlation with dynamic optimal filter that maximizes the correlation strength, which diminishes the random error. In addition, we introduce an analytical model of CLSM measurement bias error correction due to two-dimensional image scanning of tracer particles. We tested our technique using both synthetic and experimental images of nanoparticles flowing through a microfluidic channel. We observed that our technique reduced the error by up to a factor of ten compared to ensemble standard cross correlation (SCC) for the images tested in the present work. Subsequently, we will assess our framework further, by interrogating nanoscale flow in the cell culture environment (transport within the lacunar-canalicular system) to demonstrate our ability to accurately resolve flow measurements in a biological system.
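A bare-bones version of ensemble phase correlation (without the dynamic optimal filter or the CLSM scanning-bias correction described above) can be sketched as follows; the helper and the synthetic test are illustrative only.

```python
import numpy as np

def ensemble_phase_correlation(pairs):
    # Average the unit-magnitude cross-power spectra over an ensemble of image
    # pairs, then invert; the peak of the mean correlation plane gives the
    # common integer-pixel displacement.
    acc = None
    for a, b in pairs:
        Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
        cross = np.conj(Fa) * Fb
        cross /= np.abs(cross) + 1e-12          # keep phase information only
        acc = cross if acc is None else acc + cross
    corr = np.fft.ifft2(acc / len(pairs)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak indices to signed shifts
    return [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = np.roll(a, (3, 5), axis=(0, 1))             # displace by (3, 5) pixels
print(ensemble_phase_correlation([(a, b)]))     # -> [3, 5]
```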
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
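A minimal sketch of the composite idea (blending a forward forecast with a reverse-series backcast so the estimate meets the measured record at both ends of the gap) is given below. The linear weighting is an assumption for illustration, not necessarily the weighting scheme used in the study.

```python
import numpy as np

def composite_estimate(forecast, backcast):
    # Linearly blend a forward (TFN-style) forecast with a reverse-series
    # (ARIMA-style) backcast across an estimation gap, so the composite
    # transitions smoothly from the leading to the trailing measured flows.
    L = len(forecast)
    w = np.arange(1, L + 1) / (L + 1)       # weight shifts toward the backcast
    return (1 - w) * forecast + w * backcast

gap_forecast = np.array([10.2, 10.4, 10.9, 11.3])   # hypothetical log-flows
gap_backcast = np.array([10.0, 10.5, 11.0, 11.5])
print(composite_estimate(gap_forecast, gap_backcast))
```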
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst-error correction capability increases in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
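As a small illustration of the innermost layer of such a concatenated scheme, the sketch below implements a bitwise CRC-16 in the CCITT style and verifies the append-and-recheck property. The generator polynomial and preset are the common CCITT-FALSE values; they should be checked against the actual CCSDS recommendation before reuse.

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    # Bitwise CRC-16 with the CCITT generator polynomial x^16 + x^12 + x^5 + 1.
    crc = init
    for byte in data:
        crc ^= byte << 8                  # feed the next byte into the register
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry payload"
parity = crc16_ccitt(frame)
# Recomputing over data + appended parity yields zero when no errors occurred.
assert crc16_ccitt(frame + parity.to_bytes(2, "big")) == 0
```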
Chuchird, Niti; Rorkwiree, Phitsanu; Rairat, Tirawat
2015-01-01
A 90-day feeding trial was conducted to evaluate the effects of formic acid (FA) and astaxanthin (AX) on growth, survival, immune parameters, and tolerance to Vibrio infection in Pacific white shrimp. The study was divided into two experiments. In experiment 1, postlarvae-12 were randomly distributed into six groups and then fed four times daily with six experimental diets containing 0.3% FA, 0.6% FA, 50 ppm AX, 0.3% FA + 50 ppm AX, 0.6% FA + 50 ppm AX, or none of these supplements (control diet). After 60 days of the feeding trials, the body weight of all treatment groups was not significantly different from the control group, although shrimp fed formic acid had significantly lower body weight than shrimp fed 50 ppm AX. However, the 0.6% FA + 50 ppm AX group had a significantly higher survival rate (82.33 ± 8.32%) than the control group (64.33 ± 10.12%). In experiment 2, Vibrio parahaemolyticus was added to each tank to obtain a final concentration of 10(4) colony-forming units/mL. Each treatment group received the aforementioned diets for another 30 days. At the end of this experiment, there was no difference in weight gain among the experimental groups. However, the survival rate of shrimp whose diets included FA, AX, and their combination (in the range of 45.83-67.50%) was significantly higher than that of the control group (20.00 ± 17.32%). FA-fed shrimp also had significantly lower total intestinal bacteria and Vibrio spp. counts, while immune parameters [total hemocyte count (THC), phagocytosis activity, phenoloxidase (PO) activity, and superoxide dismutase (SOD) activity] of the AX-fed groups were significantly improved compared with the other groups. In conclusion, FA, AX, and their combination are useful in shrimp aquaculture.
Mourgeon, Eric; Puybasset, Louis; Law-Koune, Jean-Dominique; Lu, Qin; Abdennour, Lamine; Gallart, Lluis; Malassine, Patrick; Rao, GS Umamaheswara; Cluzel, Philippe; Bennani, Abdelhai; Coriat, Pierre; Rouby, Jean-Jacques
1997-01-01
Background: The aim of this prospective study was to assess whether the presence of septic shock could influence the dose response to inhaled nitric oxide (NO) in NO-responding patients with adult respiratory distress syndrome (ARDS). Results: Eight patients with ARDS and without septic shock (PaO2 = 95 ± 16 mmHg, PEEP = 0, FiO2 = 1.0), and eight patients with ARDS and septic shock (PaO2 = 88 ± 11 mmHg, PEEP = 0, FiO2 = 1.0) receiving exclusively norepinephrine were studied. All responded to 15 ppm inhaled NO with an increase in PaO2 of at least 40 mmHg, at FiO2 1.0 and PEEP 10 cmH2O. Inspiratory intratracheal NO concentrations were recorded continuously using a fast response time chemiluminescence apparatus. Seven inspiratory NO concentrations were randomly administered: 0.15, 0.45, 1.5, 4.5, 15, 45 and 150 ppm. In both groups, NO induced a dose-dependent decrease in mean pulmonary artery pressure (MPAP), pulmonary vascular resistance index (PVRI), and venous admixture (QVA/QT), and a dose-dependent increase in PaO2/FiO2 (P ≤ 0.012). Dose-response of MPAP and PVRI were similar in both groups with a plateau effect at 4.5 ppm. Dose-response of PaO2/FiO2 was influenced by the presence of septic shock. No plateau effect was observed in patients with septic shock and PaO2/FiO2 increased by 173 ± 37% at 150 ppm. In patients without septic shock, an 82 ± 26% increase in PaO2/FiO2 was observed with a plateau effect obtained at 15 ppm. In both groups, dose-response curves demonstrated a marked interindividual variability and in five patients pulmonary vascular effect and improvement in arterial oxygenation were dissociated. Conclusion: For similar NOinduced decreases in MPAP and PVRI in both groups, the increase in arterial oxygenation was more marked in patients with septic shock. PMID:11056694
Fundamental constants and high-resolution spectroscopy
NASA Astrophysics Data System (ADS)
Bonifacio, P.; Rahmani, H.; Whitmore, J. B.; Wendt, M.; Centurion, M.; Molaro, P.; Srianand, R.; Murphy, M. T.; Petitjean, P.; Agafonova, I. I.; D'Odorico, S.; Evans, T. M.; Levshakov, S. A.; Lopez, S.; Martins, C. J. A. P.; Reimers, D.; Vladilo, G.
2014-01-01
Absorption-line systems detected in high-resolution quasar spectra can be used to compare the values of dimensionless fundamental constants such as the fine-structure constant, α, and the proton-to-electron mass ratio, μ = m_p/m_e, as measured in remote regions of the Universe with their values today on Earth. In recent years, some evidence has emerged of small temporal and also spatial variations in α on cosmological scales which may reach a fractional level of ≈ 10 ppm (parts per million). We are conducting a Large Programme of observations with the Very Large Telescope's Ultraviolet and Visual Echelle Spectrograph (UVES), and are obtaining high-resolution (R ≈ 60,000) and high signal-to-noise ratio (S/N ≈ 100) spectra calibrated specifically to study the variations of the fundamental constants. We here provide a general overview of the Large Programme and report on the first results for these two constants, discussed in detail in Molaro et al. (2013) and Rahmani et al. (2013). A stringent bound for Δα/α is obtained for the absorber at z_abs = 1.6919 towards HE 2217-2818. The absorption profile is complex, with several very narrow features, and is modeled with 32 velocity components. The relative variation in α in this system is +1.3 ± 2.4_stat ± 1.0_sys ppm if Al II λ1670 Å and three Fe II transitions are used, and +1.1 ± 2.6_stat ppm in a slightly different analysis with only Fe II transitions used. This is one of the tightest bounds on α-variation from an individual absorber and reveals no evidence for variation in α at the 3-ppm precision level (1σ confidence). The expectation at this sky position from the recently reported dipolar variation of α is (3.2-5.4) ± 1.7 ppm depending on the dipole model used; at face value this constraint on Δα/α does not support that expectation, but it is not inconsistent with it at the 3σ level. For the proton-to-electron mass ratio, the analysis of the H_2 absorption lines of the z_abs ≈ 2.4018 damped Lyα system towards HE 0027-1836 provides Δμ/μ = (-7.6 ± 8.1_stat ± 6.3_sys) ppm, which is also consistent with a null variation. The cross-correlation analysis between individual exposures taken over three years, and comparison with almost simultaneous asteroid observations, revealed the presence of a possible wavelength-dependent velocity drift as well as of inter-order distortions, which probably dominate the systematic error and are a significant obstacle to achieving more accurate measurements. Based on observations obtained with UVES at the 8.2 m Kueyen ESO telescope, programme L185.A-0745.
Moerbeek, Mirjam; van Schie, Sander
2016-07-11
The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually measured, and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
Soil contamination by heavy metals in the city: a case study of Petach-Tikva, Israel
NASA Astrophysics Data System (ADS)
Sarah, Pariente; Zhevelev, Helena; Ido-Lichtman, Orna
2017-04-01
Heavy metals are among the most important pollutants produced by human activities. These pollutants affect both natural and urban ecosystems, and in the latter they bear on the health of residents. The general aim of the study is to investigate the spatial variability of soil heavy metals in the city of Petach-Tikva. We asked whether and to what extent the urban structure determines the spatial pattern of soil contamination. Urban structure in this study refers to the morphology of neighborhoods (density and height of buildings), the location of the industrial areas and the road system. The city includes three main industrial areas at its margins, is subject to heavy traffic, and contains different types of neighborhood morphology. To address this aim, a preliminary study was conducted in 2016. Soil sampling was carried out along a strip running from the northwest industrial region of the city to the residential region in the center. Soil samples were randomly taken from 0-5 cm depth in industrial areas, near high-traffic roads, and between buildings. Each was analyzed for three heavy metals (Pb, Zn, Cu) commonly associated with industry and traffic emissions. Preliminary results show that across all studied areas the ranges of Cu, Zn and Pb concentrations were 1800, 1270 and 150 ppm, respectively, indicating high spatial variability of the heavy metals. In the soil of the industrial area the average and maximum concentrations of Pb, Zn, and Cu were 76, 353 and 500 ppm and 153, 1286 and 1847 ppm, respectively. In the soil between buildings the averages were 20, 78 and 13 ppm and the maxima reached 38, 165 and 37 ppm for Pb, Zn, and Cu, respectively. In the soil near roads the averages were 39, 120 and 214 ppm, and the maxima were 153, 477 and 74 ppm for Pb, Zn, and Cu, respectively. These results indicate that the city's industry has the greatest effect on soil pollution. Within the city neighborhoods, the traffic effect on soil contamination was more pronounced in areas close to the roads than in areas far from them. Some soil sampling points showed heavy-metal contents higher than the values permitted by the guidelines of the Israeli EPM. These hot spots can be attributed to combined contamination factors, to the high intensity of a single human activity, or to low soil sheltering. The preliminary study leads to the conclusion that, given the industry and traffic prevailing in the city, soil contamination increases in the vicinity of the residential area. The effect of neighborhood morphology is still under analysis.
Stochastic characterization of phase detection algorithms in phase-shifting interferometry
Munteanu, Florin
2016-11-01
Phase-shifting interferometry (PSI) is the preferred non-contact method for profiling sub-nanometer surfaces. Based on monochromatic light interference, the method computes the surface profile from a set of interferograms collected at separate stepping positions. Errors in the estimated profile are introduced when these positions are not located correctly. In order to cope with this problem, various algorithms that minimize the effects of certain types of stepping errors (linear, sinusoidal, etc.) have been developed. Despite the relatively large number of algorithms suggested in the literature, there is no unified way of characterizing their performance when additional unaccounted random errors are present. Here, we suggest a procedure for quantifying the expected behavior of each algorithm in the presence of independent and identically distributed (i.i.d.) random stepping errors, which can occur in addition to the systematic errors for which the algorithm has been designed. As a result, the usefulness of this method derives from the fact that it can guide the selection of the best algorithm for specific measurement situations.
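For context, the sketch below shows the classic four-step phase-retrieval algorithm and how i.i.d. random stepping errors of the kind analyzed above enter through perturbed step positions. It is an illustration of one such algorithm, not the characterization procedure itself.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    # Classic four-step algorithm for nominal 90-degree steps:
    # I_k = A + B*cos(phi + k*pi/2)  =>  phi = atan2(I4 - I2, I1 - I3)
    return np.arctan2(I4 - I2, I1 - I3)

rng = np.random.default_rng(0)
phi_true = np.linspace(0.0, 2.0 * np.pi, 256)          # synthetic surface phase
nominal = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
steps = nominal + rng.normal(0.0, 0.02, 4)             # i.i.d. random stepping errors
I1, I2, I3, I4 = (1.0 + np.cos(phi_true + s) for s in steps)
residual = np.angle(np.exp(1j * (four_step_phase(I1, I2, I3, I4) - phi_true)))
print(np.sqrt(np.mean(residual ** 2)))   # RMS phase error grows with step jitter
```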
NASA Astrophysics Data System (ADS)
Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong
2013-04-01
Overpressure is one important cause of the domino effect in accidents involving chemical process equipment. Models considering the propagation probability and threshold values of the domino effect caused by overpressure have been proposed in previous studies. In order to test the rationality and validity of the models reported in the references, the two boundary values separating the three reported damage degrees were treated as random variables in the interval [0, 100%]. Based on the overpressure data for damage to the equipment and the damage states, and on the calculation method reported in the references, the mean square errors of the four categories of overpressure damage probability models were calculated with random boundary values, and a relationship of mean square error versus the two boundary values was obtained. At its minimum, the mean square error decreases by about 3% compared with the result of the present work. This error is within the acceptable range for engineering applications, so the models reported can be considered reasonable and valid.
75 FR 29441 - Novaluron; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-26
... amend existing tolerances of novaluron in or on poultry, fat from 0.40 ppm to 7.0 ppm; poultry, meat from 0.03 ppm to 0.40 ppm; poultry, meat byproducts from 0.04 ppm to 0.80 ppm; hog, fat from 0.05 ppm..., Environmental Science Center, 701 Mapes Rd., Ft. Meade, MD 20755-5350; telephone number: (410) 305-2905; e-mail...
Disclosure of Medical Errors: What Factors Influence How Patients Respond?
Mazor, Kathleen M; Reed, George W; Yood, Robert A; Fischer, Melissa A; Baril, Joann; Gurwitz, Jerry H
2006-01-01
BACKGROUND Disclosure of medical errors is encouraged, but research on how patients respond to specific practices is limited. OBJECTIVE This study sought to determine whether full disclosure, an existing positive physician-patient relationship, an offer to waive associated costs, and the severity of the clinical outcome influenced patients' responses to medical errors. PARTICIPANTS Four hundred and seven health plan members participated in a randomized experiment in which they viewed video depictions of medical error and disclosure. DESIGN Subjects were randomly assigned to experimental condition. Conditions varied in type of medication error, level of disclosure, reference to a prior positive physician-patient relationship, an offer to waive costs, and clinical outcome. MEASURES Self-reported likelihood of changing physicians and of seeking legal advice; satisfaction, trust, and emotional response. RESULTS Nondisclosure increased the likelihood of changing physicians, and reduced satisfaction and trust in both error conditions. Nondisclosure increased the likelihood of seeking legal advice and was associated with a more negative emotional response in the missed allergy error condition, but did not have a statistically significant impact on seeking legal advice or emotional response in the monitoring error condition. Neither the existence of a positive relationship nor an offer to waive costs had a statistically significant impact. CONCLUSIONS This study provides evidence that full disclosure is likely to have a positive effect or no effect on how patients respond to medical errors. The clinical outcome also influences patients' responses. The impact of an existing positive physician-patient relationship, or of waiving costs associated with the error remains uncertain. PMID:16808770
Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A
2018-04-15
For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
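The SIMEX idea generalizes beyond the survival setting of this paper. As a minimal sketch of the mechanics, the toy below applies SIMEX to the textbook case of a regression slope attenuated by classical covariate measurement error: error of known variance is repeatedly re-added at increasing multiples lambda, the degradation of the estimate is modeled with a quadratic, and the fit is extrapolated back to lambda = -1 (the error-free case). All names and numbers are illustrative; the paper's actual extension targets hazard ratio estimates from the Cox model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simex(estimator, w, other, sigma2, lambdas=(0.5, 1.0, 1.5, 2.0), b=50):
    """Generic SIMEX: re-add error of variance lambda*sigma2 to the
    error-prone variable w, track how the estimate degrades with lambda,
    and extrapolate a quadratic fit back to lambda = -1."""
    lams = np.array([0.0, *lambdas])
    means = []
    for lam in lams:
        if lam == 0.0:
            means.append(estimator(w, other))
            continue
        est = [estimator(w + rng.normal(0, np.sqrt(lam * sigma2), len(w)), other)
               for _ in range(b)]
        means.append(np.mean(est))
    coef = np.polyfit(lams, means, 2)      # quadratic extrapolant
    return np.polyval(coef, -1.0)          # evaluate at lambda = -1

# Demo: OLS slope attenuated by classical error in x, then SIMEX-corrected.
n, beta, sigma2 = 2000, 1.0, 0.5
x = rng.normal(0, 1, n)
y = beta * x + rng.normal(0, 0.3, n)
w = x + rng.normal(0, np.sqrt(sigma2), n)  # error-prone measurement of x

slope = lambda w, y: np.cov(w, y)[0, 1] / np.var(w, ddof=1)
print("naive:", slope(w, y))               # ~ beta / (1 + sigma2) = 0.67
print("simex:", simex(slope, w, y, sigma2))  # ~ beta = 1.0
```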
Umano, K; Hagi, Y; Nakahara, K; Shoji, A; Shibamoto, T
2000-08-01
Extracts from leaves of Japanese mugwort (Artemisia princeps Pamp.) were obtained using two methods: steam distillation under reduced pressure followed by dichloromethane extraction (DRP) and simultaneous purging and extraction (SPSE). A total of 192 volatile chemicals were identified in the extracts obtained by both methods using gas chromatography (GC) and gas chromatography-mass spectrometry (GC-MS). They included 47 monoterpenoids (oxygenated monoterpenes), 26 aromatic compounds, 19 aliphatic esters, 18 aliphatic alcohols, 17 monoterpenes (hydrocarbon monoterpenes), 17 sesquiterpenes (hydrocarbon sesquiterpenes), 13 sesquiterpenoids (oxygenated sesquiterpenes), 12 aliphatic aldehydes, 8 aliphatic hydrocarbons, 7 aliphatic ketones, and 9 miscellaneous compounds. The major volatile constituents of the extract by DRP were borneol (10.27 ppm), alpha-thujone (3.49 ppm), artemisia alcohol (2.17 ppm), verbenone (1.85 ppm), yomogi alcohol (1.50 ppm), and germacren-4-ol (1.43 ppm). The major volatile constituents of the extract by SPSE were 1,8-cineole (8.12 ppm), artemisia acetate (4.22 ppm), alpha-thujone (3.20 ppm), beta-caryophyllene (2.39 ppm), bornyl acetate (2.05 ppm), borneol (1.80 ppm), and trans-beta-farnesene (1.78 ppm).
Schemehorn, B R; DiMarino, J C; Movahed, N
2014-01-01
The objective of this in vitro study was to compare the fluoride uptake into incipient enamel lesions of a novel 970 ppm F- ion SnF2 over-the-counter (OTC) gel (Enamelon Preventive Treatment Gel) and a novel 1150 ppm F- ion OTC toothpaste (Enamelon), each delivering amorphous calcium phosphate (ACP), to the uptake from two different prescription-strength, 5000 ppm F- ion dentifrices containing tri-calcium phosphate (TCP) and a prescription 900 ppm F- ion paste containing casein phosphopeptide-amorphous calcium phosphate (CPP-ACP). The test procedure followed method #40 in the US-FDA Anticaries Drug Products for OTC Human Use, Final Monograph testing procedures. Eight sets of twelve incisor enamel cores were mounted in Plexiglas rods and the exposed surfaces were polished. The indigenous fluoride level of each specimen was determined prior to treatment. The treatments were performed using slurries of a negative control (water) and the following products applied to a set of sound enamel cores: 5000 ppm F- ion, sodium fluoride (NaF) prescription (Rx) dentifrice "A" containing TCP; 5000 ppm F- ion, NaF Rx dentifrice "B" containing TCP; 900 ppm F- ion, NaF Rx paste with CPP-ACP; 1150 ppm F- ion, NaF OTC toothpaste; 1150 ppm F- ion, stannous fluoride (SnF2) OTC toothpaste delivering ACP (Enamelon); 1100 ppm F- ion, SnF2 OTC toothpaste; and 970 ppm F- ion, SnF2 OTC gel delivering ACP (Enamelon Preventive Treatment Gel). The twelve specimens of each group were immersed in 25 ml of their assigned slurry with constant stirring (350 rpm) for 30 minutes. Following treatment, one layer of enamel was removed from each specimen and analyzed for fluoride and calcium. The pre-treatment (indigenous) fluoride level of each specimen was subtracted from the post-treatment value to determine the change in enamel fluoride due to the test treatment. The increase in average fluoride uptake for treated enamel cores was: 10,263 ± 295 ppm for the 970 ppm F- ion Enamelon Preventive Treatment Gel; 7,016 ± 353 ppm for the 1150 ppm F- ion Enamelon toothpaste; 4,138 ± 120 ppm for the 5000 ppm F- ion NaF prescription dentifrice "A" with TCP; 3,801 ± 121 ppm for the 5000 ppm F- ion NaF prescription dentifrice "B" with TCP; 2,647 ± 57 ppm for the 1100 ppm F- ion SnF2 OTC toothpaste; 1,470 ± 40 ppm for the 1150 ppm F- ion NaF OTC toothpaste; and 316 ± 9 ppm for the 900 ppm F- ion NaF paste with CPP-ACP. The differences among all the products tested were statistically significant (p < 0.05), except for the two 5000 ppm F- ion products with TCP, which were not statistically different from one another, and the 900 ppm F- ion NaF paste with CPP-ACP, which was not statistically different from the negative water control. The Enamelon products (970 and 1150 ppm F- ion SnF2 OTC dentifrices) delivering ACP provide statistically significantly more fluoride to incipient enamel lesions than the two prescription-strength 5000 ppm F- ion toothpastes containing TCP, the 900 ppm F- ion prescription paste containing CPP-ACP, and the other OTC toothpastes compared in this study.
Magnetic field errors tolerances of Nuclotron booster
NASA Astrophysics Data System (ADS)
Butenko, Andrey; Kazinova, Olha; Kostromin, Sergey; Mikhaylov, Vladimir; Tuzikov, Alexey; Khodzhibagiyan, Hamlet
2018-04-01
Generation of the magnetic field in the units of the booster synchrotron for the NICA project is one of the most important conditions for achieving the required beam parameters and high-quality accelerator operation. Research on the linear and nonlinear dynamics of the 197Au31+ ion beam in the booster has been carried out with the MADX program. Analytical estimates of the magnetic field error tolerances and numerical computations of the dynamic aperture of the booster DFO magnetic lattice are presented. Closed-orbit distortion due to random errors in the magnetic fields and in the layout of booster units was evaluated.
An extended Reed Solomon decoder design
NASA Technical Reports Server (NTRS)
Chen, J.; Owsley, P.; Purviance, J.
1991-01-01
It has previously been shown that Reed-Solomon (RS) codes can correct errors beyond the Singleton and Rieger bounds with an arbitrarily small probability of miscorrection. That is, an (n,k) RS code can correct more than (n-k)/2 errors. An implementation of such an RS decoder is presented in this paper. An existing RS decoder, the AHA4010, is utilized in this work. The decoder is especially useful for error patterns consisting of a long burst plus some random errors.
Hardware Implementation of Serially Concatenated PPM Decoder
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon; Barsoum, Maged; Cheng, Michael; Nakashima, Michael
2009-01-01
A prototype decoder for a serially concatenated pulse position modulation (SCPPM) code has been implemented in a field-programmable gate array (FPGA). At the time of this reporting, this is the first known hardware SCPPM decoder. The SCPPM coding scheme, conceived for free-space optical communications with both deep-space and terrestrial applications in mind, is an improvement of several dB over the conventional Reed-Solomon PPM scheme. The design of the FPGA SCPPM decoder is based on a turbo decoding algorithm that requires relatively low computational complexity while delivering error-rate performance within approximately 1 dB of channel capacity. The SCPPM encoder consists of an outer convolutional encoder, an interleaver, an accumulator, and an inner modulation encoder (more precisely, a mapping of bits to PPM symbols). Each code is describable by a trellis (a finite directed graph). The SCPPM decoder consists of an inner soft-in-soft-out (SISO) module, a de-interleaver, an outer SISO module, and an interleaver connected in a loop. Each SISO module applies the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm to compute a-posteriori bit log-likelihood ratios (LLRs) from a-priori LLRs by traversing the code trellis in the forward and backward directions. The SISO modules iteratively refine the LLRs by passing the estimates between one another, much like the working of a turbine engine. Extrinsic information (the difference between the a-posteriori and a-priori LLRs) is exchanged rather than the a-posteriori LLRs to minimize undesired feedback. All computations are performed in the logarithmic domain, wherein multiplications are translated into additions, thereby reducing complexity and sensitivity to fixed-point implementation roundoff errors. To lower the memory required for storing channel likelihood data and the amount of data transferred between the decoder and the receiver, one can discard the majority of channel likelihoods, using only the remainder in the operation of the decoder. This is accomplished in the receiver by transmitting only the subset of likelihoods that correspond to the time slots containing the largest numbers of observed photons during each PPM symbol period. The assumed number of observed photons in the remaining time slots is set to the mean of a noise slot. In low background noise, selecting a small subset in this manner results in only negligible loss. Other features of the decoder design that reduce complexity and increase speed include (1) quantization of metrics in an efficient procedure chosen to incur no more than a small performance loss and (2) the use of the max-star function, which allows a sum of exponentials to be computed by simple operations involving only an addition, a subtraction, and a table lookup. Another prominent feature of the design is a provision for access to interleaver and de-interleaver memory in a single clock cycle, eliminating the multiple-clock-cycle latency characteristic of prior interleaver and de-interleaver designs.
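The max-star trick mentioned above is easy to make concrete. The sketch below is an assumption-level illustration, not the flight decoder's fixed-point implementation: max*(a, b) = log(e^a + e^b) is computed as a max plus a correction term pulled from a small precomputed table, so only an addition, a subtraction, and a lookup are needed.

```python
import numpy as np

# Correction term log(1 + exp(-d)) precomputed on a coarse grid over [0, 8];
# in hardware this is the "table lookup" the decoder description refers to.
_D = np.linspace(0.0, 8.0, 65)
_TABLE = np.log1p(np.exp(-_D))

def max_star(a, b):
    """max*(a, b) = log(e^a + e^b) = max(a, b) + log(1 + e^-|a-b|)."""
    d = abs(a - b)                              # one subtraction
    corr = _TABLE[min(int(d / 8.0 * 64), 64)]   # one table lookup
    return max(a, b) + corr                     # one addition

# Compare against the exact log-sum-exp:
print(max_star(1.2, 0.7), np.logaddexp(1.2, 0.7))  # agree to table resolution
```

Chaining max_star over a list of metrics gives the log-domain sum the BCJR recursions need, without ever evaluating an exponential at run time.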
Cheng, Sen; Sabes, Philip N
2007-04-01
The sensorimotor calibration of visually guided reaching changes on a trial-to-trial basis in response to random shifts in the visual feedback of the hand. We show that a simple linear dynamical system is sufficient to model the dynamics of this adaptive process. In this model, an internal variable represents the current state of sensorimotor calibration. Changes in this state are driven by error feedback signals, which consist of the visually perceived reach error, the artificial shift in visual feedback, or both. Subjects correct for ≥20% of the error observed on each movement, despite being unaware of the visual shift. The state of adaptation is also driven by internal dynamics, consisting of a decay back to a baseline state and a "state noise" process. State noise includes any source of variability that directly affects the state of adaptation, such as variability in sensory feedback processing, the computations that drive learning, or the maintenance of the state. This noise is accumulated in the state across trials, creating temporal correlations in the sequence of reach errors. These correlations allow us to distinguish state noise from sensorimotor performance noise, which arises independently on each trial from random fluctuations in the sensorimotor pathway. We show that these two noise sources contribute comparably to the overall magnitude of movement variability. Finally, the dynamics of adaptation measured with random feedback shifts generalizes to the case of constant feedback shifts, allowing for a direct comparison of our results with more traditional blocked-exposure experiments.
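A minimal simulation of such a linear dynamical system is sketched below; the retention, learning-rate, and noise parameters are illustrative assumptions, not the authors' fitted values. State noise accumulates in the internal state, so the sequence of reach errors is temporally correlated, unlike i.i.d. performance noise alone, which is what lets the two noise sources be separated.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_adaptation(shifts, a=0.98, b=0.25, sd_state=0.3, sd_perf=0.8):
    """Trial-to-trial adaptation as a linear dynamical system.
    x: internal state of sensorimotor calibration
    e: visually perceived reach error driving learning
    State noise enters x and accumulates across trials; performance
    noise is drawn independently on every trial."""
    x, errors = 0.0, []
    for r in shifts:
        e = r - x + rng.normal(0, sd_perf)           # observed reach error
        errors.append(e)
        # retention + error-driven correction + state noise:
        x = a * x + b * e + rng.normal(0, sd_state)
    return np.array(errors)

shifts = rng.normal(0, 2.0, 500)      # random visual feedback shifts
err = simulate_adaptation(shifts)
# Nonzero lag-1 autocorrelation of reach errors reflects the state dynamics:
print(np.corrcoef(err[:-1], err[1:])[0, 1])
```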
ERIC Educational Resources Information Center
Micceri, Theodore; Parasher, Pradnya; Waugh, Gordon W.; Herreid, Charlene
2009-01-01
An extensive review of the research literature and a study comparing over 36,000 survey responses with archival true scores indicated that one should expect a minimum of at least three percent random error for the least ambiguous of self-report measures. The Gulliver Effect occurs when a small proportion of error in a sizable subpopulation exerts…
Effect of Error Augmentation on Brain Activation and Motor Learning of a Complex Locomotor Task
Marchal-Crespo, Laura; Michels, Lars; Jaeger, Lukas; López-Olóriz, Jorge; Riener, Robert
2017-01-01
To date, the functional gains obtained after robot-aided gait rehabilitation training are limited. Error-augmenting strategies have great potential to enhance motor learning of simple motor tasks. However, little is known about the effect of these error-modulating strategies on complex tasks, such as relearning to walk after a neurologic accident. Additionally, neuroimaging evaluation of brain regions involved in learning processes could provide valuable information on behavioral outcomes. We investigated the effect of robotic training strategies that augment errors (error amplification and random force disturbance) and training without perturbations on brain activation and motor learning of a complex locomotor task. Thirty-four healthy subjects performed the experiment with a robotic stepper (MARCOS) in a 1.5 T MR scanner. The task consisted in tracking a Lissajous figure presented on a display by coordinating the legs in a gait-like movement pattern. Behavioral results showed that training without perturbations enhanced motor learning in initially less skilled subjects, while error amplification benefited better-skilled subjects. Training with error amplification, however, hampered transfer of learning. Randomly disturbing forces induced learning and promoted transfer in all subjects, probably because the unexpected forces increased subjects' attention. Functional MRI revealed main effects of training strategy and skill level during training. A main effect of training strategy was seen in brain regions typically associated with motor control and learning, such as the basal ganglia, cerebellum, intraparietal sulcus (IPS), and angular gyrus. In particular, random disturbance and no perturbation led to stronger brain activation than error amplification in similar brain regions. Skill-level-related effects were observed in the IPS, in parts of the superior parietal lobe (SPL), i.e., the precuneus, and in the temporal cortex. These neuroimaging findings indicate that gait-like motor learning depends on interplay between subcortical, cerebellar, and fronto-parietal brain regions. An interesting observation was the low activation observed in the brain's reward system after training with error amplification compared to training without perturbations. Our results suggest that, to enhance learning of a locomotor task, errors should be augmented based on subjects' skill level. The impacts of these strategies on motor learning, brain activation, and motivation in neurological patients need further investigation. PMID:29021739
Nonconvergence of the Wang-Landau algorithms with multiple random walkers.
Belardinelli, R E; Pereyra, V D
2016-05-01
This paper discusses some convergence properties in the entropic sampling Monte Carlo methods with multiple random walkers, particularly in the Wang-Landau (WL) and 1/t algorithms. The classical algorithms are modified by the use of m independent random walkers in the energy landscape to calculate the density of states (DOS). The Ising model is used to show the convergence properties in the calculation of the DOS, as well as the critical temperature, while the calculation of the number π by multidimensional integration is used in the continuum approximation. In each case, the error is obtained separately for each walker at a fixed time, t; then, the average over m walkers is performed. It is observed that the error goes as 1/√m. However, if the number of walkers increases above a certain critical value m > m_x, the error reaches a constant value (i.e., it saturates). This occurs for both algorithms; however, it is shown that for a given system, the 1/t algorithm is more efficient and accurate than the similar version of the WL algorithm. It follows that it makes no sense to increase the number of walkers above the critical value m_x, since it does not reduce the error in the calculation. Therefore, the number of walkers does not guarantee convergence.
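The 1/√m averaging regime is easy to reproduce with a toy version of the π example. The sketch below uses independent hit-or-miss walkers at a fixed number of samples per walker (all parameters illustrative). Note that the saturation above m_x reported in the paper stems from correlated systematic errors of the WL estimator, which this i.i.d. toy deliberately lacks, so here the error keeps shrinking with m.

```python
import numpy as np

rng = np.random.default_rng(3)

def pi_walker(n_samples):
    # One independent "walker": hit-or-miss estimate of pi at fixed time t.
    pts = rng.uniform(-1, 1, (n_samples, 2))
    return 4.0 * np.mean(np.hypot(pts[:, 0], pts[:, 1]) <= 1.0)

t = 10_000                                  # samples per walker (fixed "time")
for m in (1, 4, 16, 64, 256):
    est = np.mean([pi_walker(t) for _ in range(m)])  # average over m walkers
    print(f"m={m:4d}  |error| = {abs(est - np.pi):.5f}")
```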
Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.
Sztepanacz, Jacqueline L; Blows, Mark W
2017-07-01
The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error as genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
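A quick numerical illustration of the overdispersion the paper starts from: for data with an identity population covariance, the eigenvalues of a sample covariance matrix spread out, with the largest biased upward and the smallest downward. The dimensions and sample size below are arbitrary, and the scaling of the largest eigenvalue to a Tracy-Widom variate is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)

p, n, reps = 10, 50, 2000
largest, smallest = np.empty(reps), np.empty(reps)
for r in range(reps):
    x = rng.normal(size=(n, p))                      # population covariance = I
    ev = np.linalg.eigvalsh(np.cov(x, rowvar=False))
    largest[r], smallest[r] = ev[-1], ev[0]

# Every population eigenvalue is 1, yet sampling error overdisperses
# the empirical spectrum:
print("mean largest eigenvalue: ", largest.mean())   # noticeably > 1
print("mean smallest eigenvalue:", smallest.mean())  # noticeably < 1
```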
CONTEXTUAL INTERFERENCE AND INTROVERSION/EXTRAVERSION IN MOTOR LEARNING.
Meira, Cassio M; Fairbrother, Jeffrey T; Perez, Carlos R
2015-10-01
The Introversion/Extraversion dimension may interact with contextual interference, as random and blocked practice schedules imply distinct levels of variation. This study investigated the effect of different practice schedules on the acquisition of a motor skill in extraverts and introverts. Forty male undergraduate students (M = 24.3 yr., SD = 5.6) were classified as extraverts (n = 20) or introverts (n = 20) by the Eysenck Personality Questionnaire and allocated to one of two practice schedules with different levels of contextual interference: blocked (low contextual interference) and random (high contextual interference). Half of each group was assigned to the blocked practice schedule, and the other half was assigned to the random practice schedule. The design had two phases: acquisition and transfer (5 min. and 24 hr.). The participants learned variations of a sequential timing key-pressing task. Each variation required the same sequence but different timing; three variations were used in acquisition, and one variation of intermediate length was used in transfer. Results for absolute error and overall timing error (root mean square error) indicated that the contextual interference effect was more pronounced for introverts. In addition, introverts who practiced according to the blocked schedule committed more errors during the 24-hr. transfer, suggesting that introverts did not appear to be challenged by a low contextual interference practice schedule.
ERIC Educational Resources Information Center
Weiss, Michael J.; Lockwood, J. R.; McCaffrey, Daniel F.
2016-01-01
In the "individually randomized group treatment" (IRGT) experimental design, individuals are first randomly assigned to a treatment arm or a control arm, but then within each arm, are grouped together (e.g., within classrooms/schools, through shared case managers, in group therapy sessions, through shared doctors, etc.) to receive…
A method to estimate the effect of deformable image registration uncertainties on daily dose mapping
Murphy, Martin J.; Salguero, Francisco J.; Siebers, Jeffrey V.; Staub, David; Vaman, Constantin
2012-01-01
Purpose: To develop a statistical sampling procedure for spatially-correlated uncertainties in deformable image registration and then use it to demonstrate their effect on daily dose mapping. Methods: Sequential daily CT studies are acquired to map anatomical variations prior to fractionated external beam radiotherapy. The CTs are deformably registered to the planning CT to obtain displacement vector fields (DVFs). The DVFs are used to accumulate the dose delivered each day onto the planning CT. Each DVF has spatially-correlated uncertainties associated with it. Principal components analysis (PCA) is applied to measured DVF error maps to produce decorrelated principal component modes of the errors. The modes are sampled independently and reconstructed to produce synthetic registration error maps. The synthetic error maps are convolved with dose mapped via deformable registration to model the resulting uncertainty in the dose mapping. The results are compared to the dose mapping uncertainty that would result from uncorrelated DVF errors that vary randomly from voxel to voxel. Results: The error sampling method is shown to produce synthetic DVF error maps that are statistically indistinguishable from the observed error maps. Spatially-correlated DVF uncertainties modeled by our procedure produce patterns of dose mapping error that are different from that due to randomly distributed uncertainties. Conclusions: Deformable image registration uncertainties have complex spatial distributions. The authors have developed and tested a method to decorrelate the spatial uncertainties and make statistical samples of highly correlated error maps. The sample error maps can be used to investigate the effect of DVF uncertainties on daily dose mapping via deformable image registration. An initial demonstration of this methodology shows that dose mapping uncertainties can be sensitive to spatial patterns in the DVF uncertainties. PMID:22320766
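A schematic version of this sampling procedure is sketched below, under the stated assumptions that the observed DVF error maps are arranged as rows of a matrix and that the decorrelated mode coefficients are Gaussian; it is an illustration of the PCA-sample-reconstruct idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

def synthetic_error_maps(observed, n_synth, n_modes=None):
    """PCA-based sampling of spatially-correlated registration errors.
    observed: (n_maps, n_voxels) array of measured DVF error maps."""
    mean = observed.mean(axis=0)
    centered = observed - mean
    # SVD yields the decorrelated principal component modes of the errors.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    if n_modes is None:
        n_modes = len(s)
    # Per-mode standard deviation of the decorrelated coefficients:
    sd = s[:n_modes] / np.sqrt(len(observed) - 1)
    # Sample each mode independently, then reconstruct synthetic maps.
    coeffs = rng.normal(0.0, sd, size=(n_synth, n_modes))
    return mean + coeffs @ vt[:n_modes]

# Toy usage: 20 hypothetical observed maps over 3 "voxels" with known
# spatial correlations, reproduced by the synthetic samples.
cov = [[1.0, 0.8, 0.5], [0.8, 1.0, 0.8], [0.5, 0.8, 1.0]]
obs = rng.multivariate_normal(np.zeros(3), cov, 20)
synth = synthetic_error_maps(obs, n_synth=1000)
print(np.cov(synth, rowvar=False).round(2))   # mirrors the observed covariance
```

Convolving dose distributions with maps drawn this way, versus maps of voxel-wise independent noise, is what exposes the sensitivity of dose-mapping error to spatial patterns in the DVF uncertainties.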
NASA Astrophysics Data System (ADS)
Davis, A. B.; Qu, Z.
2014-12-01
The main goal of NASA's OCO-2 mission is to perform XCO2 column measurements from space with an unprecedented (~1 ppm) precision and accuracy that will enable modelers to globally map CO2 sources and sinks. To achieve this goal, the mission is critically dependent on XCO2 product validation which, in turn, is highly dependent on successful use of OCO-2's "target mode" data acquisition. In target mode, OCO-2 rotates in such a way that, as long as it is above the horizon, it looks at a Total Carbon Column Observing Network (TCCON) station equipped with a powerful Fourier transform spectrometer. TCCON stations measure, among other things, XCO2 by looking straight at the Sun. This translates to a far simpler forward model for TCCON than for OCO-2. In the ideal world, OCO-2's spectroscopic signals result from the cumulative gaseous absorption for one direct transmission of sunlight to the ground (as for TCCON), followed by one diffuse reflection and one direct transmission to the instrument, at a variety of viewing angles in target mode. In the real world, all manner of multiple surface reflections and/or scatterings contribute to the signal. In the idealized world of the OCO-2 operational forward model (used in nadir, glint, and target modes), the horizontal variability of the scattering atmosphere and reflecting surface is ignored, leading to the adoption of a 1D vector radiative transfer (vRT) model. This is the source of forward model error that we are investigating, with a focus on target mode. In principle, atmospheric variability in the horizontal plane (largely due to clouds) can be avoided by careful screening. Also, it is straightforward to account for angular variability of the surface reflection model in the 1D vRT framework. But it is not clear how unavoidable horizontal variations of the surface reflectivity affect the OCO-2 signal, even if the reflection were isotropic (Lambertian). To characterize this OCO-2 "adjacency" effect, we use a simple surface variability model with a single spatial frequency in each direction and a single albedo contrast at a time, for realistic aerosol and gaseous profiles. This specific 3D RT error is compared with other documented forward model errors and translated into XCO2 error in ppm, for programmatic consideration and eventual mitigation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papantoni-Kazakos, P.; Paterakis, M.
1988-07-01
For many communication applications with time constraints (e.g., transmission of packetized voice messages), a critical performance measure is the percentage of messages transmitted within a given amount of time after their generation at the transmitting station. This report presents a random-access algorithm (RAA) suitable for time-constrained applications. Performance analysis demonstrates that significant message-delay improvement is attained at the expense of minimal traffic loss. Also considered is the case of noisy channels. The noise effect appears as erroneously observed channel feedback. Error sensitivity analysis shows that the proposed random-access algorithm is insensitive to feedback channel errors. Window random-access algorithms (RAAs) are considered next. These algorithms constitute an important subclass of multiple-access algorithms (MAAs); they are distributive, and they attain high throughput and low delays by controlling the number of simultaneously transmitting users.
NASA Astrophysics Data System (ADS)
Tanimoto, Jun
2016-11-01
Inspired by the commonly observed fact that people tend to behave in a somewhat random manner after reaching an interim equilibrium, in order to break a stalemate while seeking a higher payoff, we established two models of the spatial prisoner's dilemma. One presumes that an agent commits action errors, while the other assumes that an agent refers to a payoff matrix with added random noise instead of the original payoff matrix. A numerical simulation revealed that mechanisms based on the annealing of randomness, due to either the action error or the payoff noise, could significantly enhance the cooperation fraction. In this study, we explain the detailed enhancement mechanism behind the two models by referring to concepts that we previously presented, with respect to evolutionary dynamic processes, under the names of enduring and expanding periods.
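A minimal sketch of the second model (a noisy payoff matrix with an annealed noise amplitude) on a square lattice is given below. The payoff values, the best-neighbor imitation update, and the linear annealing schedule are all illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(9)

# Prisoner's dilemma payoffs for the row player (0 = cooperate, 1 = defect);
# values are illustrative, chosen so defection dominates a single round.
R, S, T, P = 1.0, -0.5, 1.5, 0.0
PAYOFF = np.array([[R, S], [T, P]])

L = 50
strat = rng.integers(0, 2, (L, L))            # random initial strategies

def lattice_payoffs(strat, noise_amp):
    """Total payoff of each site against its 4 von Neumann neighbors, using
    a payoff matrix perturbed by uniform noise of amplitude noise_amp."""
    pay = np.zeros(strat.shape)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(strat, shift, axis=(0, 1))
        noisy = PAYOFF + rng.uniform(-noise_amp, noise_amp, PAYOFF.shape)
        pay += noisy[strat, nb]
    return pay

for step in range(200):
    amp = 0.5 * (1 - step / 200)              # anneal noise amplitude to zero
    pay = lattice_payoffs(strat, amp)
    best_pay, best_strat = pay.copy(), strat.copy()
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb_pay = np.roll(pay, shift, axis=(0, 1))
        nb_strat = np.roll(strat, shift, axis=(0, 1))
        better = nb_pay > best_pay            # imitate best-scoring neighbor
        best_pay = np.where(better, nb_pay, best_pay)
        best_strat = np.where(better, nb_strat, best_strat)
    strat = best_strat

print("final cooperation fraction:", 1.0 - strat.mean())
```

Comparing runs with amp fixed at zero against runs with the annealed schedule is the kind of experiment that exposes the cooperation-enhancing role of decaying payoff noise.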
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Menard, Richard; Ortland, David; Einaudi, Franco (Technical Monitor)
2001-01-01
A new approach to the analysis of systematic and random observation errors is presented in which the error statistics are obtained using forecast data rather than observations from a different instrument type. The analysis is carried out at an intermediate retrieval level, instead of the more typical state variable space. This method is applied to measurements made by the High Resolution Doppler Imager (HRDI) on board the Upper Atmosphere Research Satellite (UARS). HRDI, a limb sounder, is the only satellite instrument measuring winds in the stratosphere, and the only instrument of any kind making global wind measurements in the upper atmosphere. HRDI measures Doppler shifts in two different O2 absorption bands (gamma and B), and the retrieved products are the tangent-point line-of-sight (LOS) wind component (level 2 retrieval) and u-v winds (level 3 retrieval). This analysis is carried out on a level 1.9 retrieval, in which the contributions from different points along the line of sight have not been removed. Biases are calculated from O-F (observed minus forecast) LOS wind components and are separated into a measurement parameter space consisting of 16 different values. The bias dependence on these parameters (plus an altitude dependence) is used to create a bias correction scheme applied to the level 1.9 retrieval. The random error component is analyzed by separating the gamma and B band observations and locating observation pairs where both bands are very nearly looking at the same location at the same time. It is shown that the two observation streams are uncorrelated and that this allows the forecast error variance to be estimated. The bias correction is found to cut the effective observation error variance in half.
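The paired-band trick for estimating forecast error variance can be illustrated with synthetic numbers: when two observation streams of the same truth have mutually uncorrelated errors, the covariance of their O - F residuals isolates the forecast error variance. All variances below are invented for the demonstration and have no connection to the actual HRDI error budget.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 50_000
truth = rng.normal(0, 5, n)               # true LOS wind component
forecast = truth + rng.normal(0, 2, n)    # forecast error, sd = 2
o_band1 = truth + rng.normal(0, 3, n)     # first-band obs error, sd = 3
o_band2 = truth + rng.normal(0, 4, n)     # second-band obs error, sd = 4

# With uncorrelated observation errors, the cross terms vanish and the
# covariance of the two O - F residual streams leaves only var(forecast err):
d1, d2 = o_band1 - forecast, o_band2 - forecast
print(np.cov(d1, d2)[0, 1])               # ~ 4 = forecast error variance
```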
Ownsworth, Tamara; Fleming, Jennifer; Tate, Robyn; Shum, David H K; Griffin, Janelle; Schmidt, Julia; Lane-Brown, Amanda; Kendall, Melissa; Chevignard, Mathilde
2013-11-05
Poor skills generalization poses a major barrier to successful outcomes of rehabilitation after traumatic brain injury (TBI). Error-based learning (EBL) is a relatively new intervention approach that aims to promote skills generalization by teaching people internal self-regulation skills, or how to anticipate, monitor and correct their own errors. This paper describes the protocol of a study that aims to compare the efficacy of EBL and errorless learning (ELL) for improving error self-regulation, behavioral competency, awareness of deficits and long-term outcomes after TBI. This randomized, controlled trial (RCT) has two arms (EBL and ELL); each arm entails 8 × 2 h training sessions conducted within the participants' homes. The first four sessions involve a meal preparation activity, and the final four sessions incorporate a multitasking errand activity. Based on a sample size estimate, 135 participants with severe TBI will be randomized into either the EBL or ELL condition. The primary outcome measure assesses error self-regulation skills on a task related to but distinct from training. Secondary outcomes include measures of self-monitoring and self-regulation, behavioral competency, awareness of deficits, role participation and supportive care needs. Assessments will be conducted at pre-intervention, post-intervention, and at 6-months post-intervention. This study seeks to determine the efficacy and long-term impact of EBL for training internal self-regulation strategies following severe TBI. In doing so, the study will advance theoretical understanding of the role of errors in task learning and skills generalization. EBL has the potential to reduce the length and costs of rehabilitation and lifestyle support because the techniques could enhance generalization success and lifelong application of strategies after TBI. ACTRN12613000585729.
Zhou, Tony; Dickson, Jennifer L; Geoffrey Chase, J
2018-01-01
Continuous glucose monitoring (CGM) devices have been effective in managing diabetes and offer potential benefits for use in the intensive care unit (ICU). Use of CGM devices in the ICU has been limited, primarily due to their higher point accuracy errors relative to the traditional intermittent blood glucose (BG) measures currently used. General models of CGM errors, including drift and random errors, are lacking, but would enable better design of protocols to utilize these devices. This article presents an autoregressive (AR) based modeling method that separately characterizes the drift and random noise of the GlySure CGM sensor (GlySure Limited, Oxfordshire, UK). Clinical sensor data (n = 33) and reference measurements were used to generate 2 AR models to describe sensor drift and noise. These models were used to generate 100 Monte Carlo simulations based on reference blood glucose measurements. These were then compared to the original CGM clinical data using mean absolute relative difference (MARD) and a Trend Compass. The point accuracy MARD was very similar between simulated and clinical data (9.6% vs 9.9%). A Trend Compass was used to assess trend accuracy, and found simulated and clinical sensor profiles were similar (simulated trend index 11.4° vs clinical trend index 10.9°). The model and method accurately represent cohort sensor behavior over patients, providing a general modeling approach for any such sensor by separately characterizing each type of error that can arise in the data. Overall, it enables better protocol design based on accurate expected CGM sensor behavior, as well as enabling the analysis of what level of each type of sensor error would be necessary to obtain desired glycemic control safety and performance with a given protocol.
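In the spirit of the method, though with invented parameters rather than the fitted GlySure models, the sketch below composes a slow AR(1) drift term and a faster AR(1) noise term on top of a reference glucose trace and scores the result by MARD; repeating the last block many times would give the Monte Carlo comparison described above.

```python
import numpy as np

rng = np.random.default_rng(7)

def ar1(n, phi, sigma, x0=0.0):
    # First-order autoregressive series: x[t] = phi * x[t-1] + eps[t].
    x, prev = np.empty(n), x0
    for t in range(n):
        prev = phi * prev + rng.normal(0, sigma)
        x[t] = prev
    return x

# Hypothetical reference BG trace (mg/dL), 5-minute samples over 24 h:
n = 288
reference = 120 + 30 * np.sin(np.linspace(0, 4 * np.pi, n))

# Separate AR(1) characterizations: slow drift vs. fast random noise.
drift = ar1(n, phi=0.995, sigma=0.8)   # persistent, slowly wandering bias
noise = ar1(n, phi=0.30, sigma=4.0)    # weakly correlated point noise
cgm = reference + drift + noise

mard = 100 * np.mean(np.abs(cgm - reference) / reference)
print(f"MARD of one simulated sensor trace: {mard:.1f}%")
```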
Robust Tomography using Randomized Benchmarking
NASA Astrophysics Data System (ADS)
Silva, Marcus; Kimmel, Shelby; Johnson, Blake; Ryan, Colm; Ohki, Thomas
2013-03-01
Conventional randomized benchmarking (RB) can be used to estimate the fidelity of Clifford operations in a manner that is robust against preparation and measurement errors, thus allowing for a more accurate and relevant characterization of the average error in Clifford gates compared to standard tomography protocols. Interleaved RB (IRB) extends this result to the extraction of error rates for individual Clifford gates. In this talk we will show how to combine multiple IRB experiments to extract all information about the unital part of any trace-preserving quantum process. Consequently, one can compute the average fidelity to any unitary, not just the Clifford group, with tighter bounds than IRB. Moreover, the additional information can be used to design improvements in control. MS, BJ, CR and TO acknowledge support from IARPA under contract W911NF-10-1-0324.
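For concreteness, here is a sketch of the standard (zeroth-order) interleaved-RB arithmetic: fit F(m) = A p^m + B to reference and interleaved survival curves and convert the ratio of decay parameters into a gate-error estimate (d = 2 for one qubit). The data below are synthetic, and the formula is the conventional IRB estimate, not the extended tomographic protocol the talk describes.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)

def decay(m, a, p, b):
    # RB survival-probability model: F(m) = A * p**m + B.
    return a * p ** m + b

# Hypothetical survival data for reference and interleaved RB experiments:
m = np.arange(1, 200, 10)
f_ref = decay(m, 0.5, 0.995, 0.5) + rng.normal(0, 0.005, len(m))
f_int = decay(m, 0.5, 0.985, 0.5) + rng.normal(0, 0.005, len(m))

(_, p_ref, _), _ = curve_fit(decay, m, f_ref, p0=(0.5, 0.99, 0.5))
(_, p_int, _), _ = curve_fit(decay, m, f_int, p0=(0.5, 0.99, 0.5))

# Interleaved-gate error estimate, r = (1 - p_int/p_ref) * (d - 1) / d:
r_gate = (1 - p_int / p_ref) / 2
print(f"estimated interleaved gate error: {r_gate:.4f}")
```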