Kong, Yong-Ku; Lee, Inseok; Jung, Myung-Chul; Song, Young-Woong
2011-05-01
This study evaluated the effects of age (20s and 60s), viewing distance (50 cm, 200 cm), display type (paper, monitor), font type (Gothic, Ming), colour contrast (black letters on white background, white letters on black background) and number of syllables (one, two) on the legibility of Korean characters, using four legibility measures (minimum letter size for 100% correctness, maximum letter size for 0% correctness, minimum letter size for the least discomfort and maximum letter size for the most discomfort). Ten subjects in each age group read the four letters presented on a slide (letter size varied from 80 pt to 2 pt). Subjects also rated the reading discomfort of the letters on a 4-point scale (1 = no discomfort, 4 = most discomfort). According to the ANOVA procedure, age, viewing distance and font type significantly affected all four dependent variables (p < 0.05), while the main effect of colour contrast was not statistically significant for any measure. Two-syllable words were legible at smaller letter sizes than one-syllable words on the two correctness measures. The younger group could read letters about two times smaller than the older group could, and letters legible at a 50 cm viewing distance were about three times smaller than those legible at 200 cm. Gothic fonts were legible at smaller sizes than Ming fonts, and letters on monitors were smaller than on paper for the correctness measures and for the maximum letter size for the most discomfort. A comparison of the correctness and discomfort results showed that people generally preferred letter sizes larger than those that they could read. The findings of this study may provide basic information for setting a global standard of letter size or font type to improve the legibility of characters written in Korean. STATEMENT OF RELEVANCE: Results obtained in this study will provide basic information and guidelines for setting standards of letter size and font type to improve the legibility of characters written in Korean.
The results might also offer useful information for people working on the design of visual displays.
Fu, Hai-Yan; Guo, Jun-Wei; Yu, Yong-Jie; Li, He-Dong; Cui, Hua-Peng; Liu, Ping-Ping; Wang, Bing; Wang, Sheng; Lu, Peng
2016-06-24
Peak detection is a critical step in chromatographic data analysis. In the present work, we developed a multi-scale Gaussian smoothing-based strategy for accurate peak extraction. The strategy consisted of three stages: background drift correction, peak detection, and peak filtration. Background drift correction was implemented using a moving window strategy. The new peak detection method is a variant of the approach used by the well-known MassSpecWavelet, i.e., chromatographic peaks are found at local maximum values under various smoothing window scales. Therefore, peaks can be detected through the ridge lines of maximum values under these window scales, and signals that increase or decrease monotonically around the peak position can be treated as part of the peak. Instrumental noise was estimated after peak elimination, and a peak filtration strategy was performed to remove peaks with signal-to-noise ratios smaller than 3. The performance of our method was evaluated using two complex datasets: essential oil samples for quality control obtained from gas chromatography, and tobacco plant samples for metabolic profiling analysis obtained from gas chromatography coupled with mass spectrometry. The results confirmed the validity of the developed method. Copyright © 2016 Elsevier B.V. All rights reserved.
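The ridge-line idea in the abstract above can be illustrated with a minimal sketch: smooth the chromatogram at several Gaussian scales, keep only the local maxima that persist across all scales, and filter candidates by a signal-to-noise threshold of 3. This is a simplified illustration of the multi-scale idea, not the authors' implementation; the function names and scale choices are ours.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # Discrete Gaussian kernel, normalized to unit sum.
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def detect_peaks(signal, sigmas=(1, 2, 4, 8), snr_min=3.0):
    """Find local maxima that persist across several Gaussian smoothing
    scales, then filter them by a signal-to-noise threshold."""
    smoothed = [np.convolve(signal, gaussian_kernel(s), mode="same")
                for s in sigmas]
    # A candidate peak is a local maximum at every smoothing scale.
    candidates = None
    for y in smoothed:
        maxima = set(np.flatnonzero(
            (y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])) + 1)
        candidates = maxima if candidates is None else candidates & maxima
    # Crude noise estimate: residual after the heaviest smoothing.
    noise = np.std(signal - smoothed[-1])
    return sorted(i for i in candidates
                  if signal[i] / max(noise, 1e-12) >= snr_min)
```

A usage sketch: a single Gaussian peak on a flat baseline is recovered at its true index, while sub-threshold wiggles are discarded by the SNR filter.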
COBE ground segment gyro calibration
NASA Technical Reports Server (NTRS)
Freedman, I.; Kumar, V. K.; Rae, A.; Venkataraman, R.; Patt, F. S.; Wright, E. L.
1991-01-01
Discussed here is the calibration of the scale factors and rate biases for the Cosmic Background Explorer (COBE) spacecraft gyroscopes, with the emphasis on the adaptation for COBE of an algorithm previously developed for the Solar Maximum Mission. The detailed choice of parameters, convergence, verification, and use of the algorithm in an environment where the reference attitudes are determined from Sun, Earth, and star observations (via the Diffuse Infrared Background Experiment (DIRBE)) are considered. Results of some recent experiments are given, including tests in which the gyro rate data are corrected for the effect of the gyro baseplate temperature on the spacecraft electronics.
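The generic model behind gyro scale-factor and rate-bias calibration can be sketched as a least-squares fit of omega_ref ≈ scale · omega_meas + bias. This is a simplified, single-axis illustration of the concept, not the actual COBE ground-segment algorithm.

```python
import numpy as np

def calibrate_gyro(measured_rate, reference_rate):
    """Least-squares fit of a per-axis scale factor and rate bias:
    reference_rate ~= scale * measured_rate + bias."""
    A = np.column_stack([measured_rate, np.ones_like(measured_rate)])
    (scale, bias), *_ = np.linalg.lstsq(A, reference_rate, rcond=None)
    return scale, bias
```

Given reference rates from attitude sensors (here synthetic), the scale factor and bias are recovered exactly in the noiseless case.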
Corrected Implicit Monte Carlo
Cleveland, Mathew Allen; Wollaber, Allan Benton
2018-01-02
In this work we develop a set of nonlinear correction equations to enforce a consistent time-implicit emission temperature for the original semi-implicit IMC equations. We present two possible forms of correction equations: one results in a set of non-linear, zero-dimensional, non-negative, explicit correction equations, and the other results in a non-linear, non-negative, Boltzmann transport correction equation. The zero-dimensional correction equations adhere to the maximum principle for the material temperature, regardless of frequency-dependence, but do not prevent maximum principle violation in the photon intensity, eventually leading to material overheating. The Boltzmann transport correction guarantees adherence to the maximum principle for frequency-independent simulations, at the cost of evaluating a reduced source non-linear Boltzmann equation. Finally, we present numerical evidence suggesting that the Boltzmann transport correction, in its current form, significantly improves time step limitations but does not guarantee adherence to the maximum principle for frequency-dependent simulations.
Corrected implicit Monte Carlo
NASA Astrophysics Data System (ADS)
Cleveland, M. A.; Wollaber, A. B.
2018-04-01
In this work we develop a set of nonlinear correction equations to enforce a consistent time-implicit emission temperature for the original semi-implicit IMC equations. We present two possible forms of correction equations: one results in a set of non-linear, zero-dimensional, non-negative, explicit correction equations, and the other results in a non-linear, non-negative, Boltzmann transport correction equation. The zero-dimensional correction equations adhere to the maximum principle for the material temperature, regardless of frequency-dependence, but do not prevent maximum principle violation in the photon intensity, eventually leading to material overheating. The Boltzmann transport correction guarantees adherence to the maximum principle for frequency-independent simulations, at the cost of evaluating a reduced source non-linear Boltzmann equation. We present numerical evidence suggesting that the Boltzmann transport correction, in its current form, significantly improves time step limitations but does not guarantee adherence to the maximum principle for frequency-dependent simulations.
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods, but few of these have been applied in LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its characteristic smoothness. A background correction simulation indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting and the model-free method. All of these background correction methods acquire larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquires a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods improve the quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776, whereas the values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
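A rough illustration of spline-based background estimation: interpolate a cubic spline through points assumed to be peak-free and subtract it from the spectrum. The knot selection below (the minimum of each fixed window) is our own simplifying assumption, not the paper's algorithm.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_background(spectrum, window=25):
    """Estimate a smooth continuous background by cubic-spline
    interpolation through per-window minima (assumed peak-free points),
    then subtract it from the spectrum."""
    n = len(spectrum)
    knots_x, knots_y = [], []
    for start in range(0, n, window):
        seg = spectrum[start:start + window]
        i = int(np.argmin(seg))          # lowest point in this window
        knots_x.append(start + i)
        knots_y.append(seg[i])
    spline = CubicSpline(knots_x, knots_y)
    background = spline(np.arange(n))
    return spectrum - background, background
```

On a synthetic spectrum (linear baseline plus one narrow peak), the corrected peak height matches the true amplitude and the corrected baseline sits near zero.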
Compton suppression gamma-counting: The effect of count rate
Millard, H.T.
1984-01-01
Past research has shown that anti-coincidence shielded Ge(Li) spectrometers enhance the signal-to-background ratios for gamma-photopeaks that are situated on high Compton backgrounds. Ordinarily, an anti- or non-coincidence spectrum (A) and a coincidence spectrum (C) are collected simultaneously with these systems. To be useful in neutron activation analysis (NAA), the fractions of the photopeak counts routed to the two spectra must be constant from sample to sample, or variations must be corrected quantitatively. Most Compton suppression counting has been done at low count rates, but in NAA applications, count rates may be much higher. To operate over the wider dynamic range, the effect of count rate on the ratio of the photopeak counts in the two spectra (A/C) was studied. It was found that as the count rate increases, A/C decreases for gammas not coincident with other gammas from the same decay. For gammas coincident with other gammas, A/C increases to a maximum and then decreases. These results suggest that calibration curves are required to correct photopeak areas so that quantitative data can be obtained at higher count rates.
The Shock Pulse Index and Its Application in the Fault Diagnosis of Rolling Element Bearings
Sun, Peng; Liao, Yuhe; Lin, Jin
2017-01-01
The properties of the time domain parameters of vibration signals have been extensively studied for the fault diagnosis of rolling element bearings (REBs). Parameters like kurtosis and Envelope Harmonic-to-Noise Ratio are the most widely applied in this field and some important progress has been made. However, since only one-sided information is contained in these parameters, problems still exist in practice when the signals collected have a complicated structure and/or are contaminated by strong background noise. A new parameter, named Shock Pulse Index (SPI), is proposed in this paper. It integrates the mutual advantages of both the parameters mentioned above and can help effectively identify fault-related impulse components under interference from strong background noise, unrelated harmonic components and random impulses. The SPI optimizes the parameters of Maximum Correlated Kurtosis Deconvolution (MCKD), which is used to filter the signals under consideration. Finally, the transient information of interest contained in the filtered signal can be highlighted through demodulation with the Teager Energy Operator (TEO). Fault-related impulse components can therefore be extracted accurately. Simulations show the SPI can correctly indicate the fault impulses under the influence of strong background noise, other harmonic components and aperiodic impulses, and experimental analyses verify the effectiveness and correctness of the proposed method. PMID:28282883
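The TEO demodulation step mentioned above has a compact discrete form. The sketch below is the standard operator only, not the SPI/MCKD pipeline itself.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager Energy Operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
    For a sampled tone A*cos(w*n), psi equals A^2 * sin(w)^2 exactly,
    so short impulses riding on a signal stand out sharply."""
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]  # replicate at the borders
    return psi
```

For a pure tone the operator returns the constant A² sin²(w), which is the identity that makes the fault impulses visible after MCKD filtering.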
Investigation of the halo-artifact in 68Ga-PSMA-11-PET/MRI.
Heußer, Thorsten; Mann, Philipp; Rank, Christopher M; Schäfer, Martin; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Hadaschik, Boris A; Kopka, Klaus; Bachert, Peter; Kachelrieß, Marc; Freitag, Martin T
2017-01-01
Combined positron emission tomography (PET) and magnetic resonance imaging (MRI) targeting the prostate-specific membrane antigen (PSMA) with a 68Ga-labelled PSMA-analog (68Ga-PSMA-11) is discussed as a promising diagnostic method for patients with suspicion or history of prostate cancer. One potential drawback of this method is severe photopenic (halo-) artifacts surrounding the bladder and the kidneys in the scatter-corrected PET images, which have been reported to occur frequently in clinical practice. The goal of this work was to investigate the occurrence and impact of these artifacts and, secondly, to evaluate variants of the standard scatter correction method with regard to halo-artifact suppression. Experiments using a dedicated pelvis phantom were conducted to investigate whether the halo-artifact is modality-, tracer-, and/or concentration-dependent. Furthermore, 31 patients with history of prostate cancer were selected from an ongoing 68Ga-PSMA-11-PET/MRI study. For each patient, PET raw data were reconstructed employing six different variants of PET scatter correction: absolute scatter scaling, relative scatter scaling, and relative scatter scaling combined with prompt gamma correction, each of which was combined with a maximum scatter fraction (MaxSF) of MaxSF = 75% or MaxSF = 40%. Evaluation of the reconstructed images with regard to halo-artifact suppression was performed both quantitatively using statistical analysis and qualitatively by two independent readers. The phantom experiments did not reveal any modality-dependency (PET/MRI vs. PET/CT) or tracer-dependency (68Ga vs. 18F-FDG). Patient- and phantom-based data indicated that halo-artifacts derive from high organ-to-background activity ratios (OBR) between bladder/kidneys and surrounding soft tissue, with a positive correlation between OBR and halo size.
Comparing different variants of scatter correction, reducing the maximum scatter fraction from the default value MaxSF = 75% to MaxSF = 40% was found to efficiently suppress halo-artifacts in both phantom and patient data. In 1 of 31 patients, reducing the maximum scatter fraction provided new PET-based information changing the patient's diagnosis. Halo-artifacts are particularly observed for 68Ga-PSMA-11-PET/MRI due to 1) the biodistribution of the PSMA-11-tracer resulting in large OBRs for bladder and kidneys and 2) inaccurate scatter correction methods currently used in clinical routine, which tend to overestimate the scatter contribution. If not compensated for, 68Ga-PSMA-11 uptake pathologies may be masked by halo-artifacts leading to false-negative diagnoses. Reducing the maximum scatter fraction was found to efficiently suppress halo-artifacts.
Investigation of the halo-artifact in 68Ga-PSMA-11-PET/MRI
Rank, Christopher M.; Schäfer, Martin; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Hadaschik, Boris A.; Kopka, Klaus; Bachert, Peter; Kachelrieß, Marc
2017-01-01
Objectives Combined positron emission tomography (PET) and magnetic resonance imaging (MRI) targeting the prostate-specific membrane antigen (PSMA) with a 68Ga-labelled PSMA-analog (68Ga-PSMA-11) is discussed as a promising diagnostic method for patients with suspicion or history of prostate cancer. One potential drawback of this method is severe photopenic (halo-) artifacts surrounding the bladder and the kidneys in the scatter-corrected PET images, which have been reported to occur frequently in clinical practice. The goal of this work was to investigate the occurrence and impact of these artifacts and, secondly, to evaluate variants of the standard scatter correction method with regard to halo-artifact suppression. Methods Experiments using a dedicated pelvis phantom were conducted to investigate whether the halo-artifact is modality-, tracer-, and/or concentration-dependent. Furthermore, 31 patients with history of prostate cancer were selected from an ongoing 68Ga-PSMA-11-PET/MRI study. For each patient, PET raw data were reconstructed employing six different variants of PET scatter correction: absolute scatter scaling, relative scatter scaling, and relative scatter scaling combined with prompt gamma correction, each of which was combined with a maximum scatter fraction (MaxSF) of MaxSF = 75% or MaxSF = 40%. Evaluation of the reconstructed images with regard to halo-artifact suppression was performed both quantitatively using statistical analysis and qualitatively by two independent readers. Results The phantom experiments did not reveal any modality-dependency (PET/MRI vs. PET/CT) or tracer-dependency (68Ga vs. 18F-FDG). Patient- and phantom-based data indicated that halo-artifacts derive from high organ-to-background activity ratios (OBR) between bladder/kidneys and surrounding soft tissue, with a positive correlation between OBR and halo size.
Comparing different variants of scatter correction, reducing the maximum scatter fraction from the default value MaxSF = 75% to MaxSF = 40% was found to efficiently suppress halo-artifacts in both phantom and patient data. In 1 of 31 patients, reducing the maximum scatter fraction provided new PET-based information changing the patient’s diagnosis. Conclusion Halo-artifacts are particularly observed for 68Ga-PSMA-11-PET/MRI due to 1) the biodistribution of the PSMA-11-tracer resulting in large OBRs for bladder and kidneys and 2) inaccurate scatter correction methods currently used in clinical routine, which tend to overestimate the scatter contribution. If not compensated for, 68Ga-PSMA-11 uptake pathologies may be masked by halo-artifacts leading to false-negative diagnoses. Reducing the maximum scatter fraction was found to efficiently suppress halo-artifacts. PMID:28817656
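The role of the maximum scatter fraction can be caricatured in a few lines: cap the total of a scatter estimate at MaxSF times the prompt data before it is subtracted, so that an overestimated scatter tail cannot carve a photopenic halo. This is a deliberately simplified, hypothetical sketch, not the scanner vendor's scatter-scaling implementation.

```python
import numpy as np

def cap_scatter(prompts, scatter_estimate, max_sf=0.40):
    """Rescale a scatter estimate so its total does not exceed a maximum
    scatter fraction (MaxSF) of the prompts, mimicking the idea of
    lowering MaxSF from 75% to 40% to avoid over-subtraction."""
    total_prompts = float(prompts.sum())
    total_scatter = float(scatter_estimate.sum())
    if total_scatter / total_prompts > max_sf:
        scatter_estimate = scatter_estimate * (
            max_sf * total_prompts / total_scatter)
    return scatter_estimate
```

An estimate already below the cap is returned unchanged; an excessive one is scaled down uniformly.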
Average luminosity distance in inhomogeneous universes
NASA Astrophysics Data System (ADS)
Kostov, Valentin Angelov
Using numerical ray tracing, this paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic averaging), so it is more directly applicable to our observations. Unlike previous studies, the averaging is exact, non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovae inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. For voids aligned in a certain direction, there is a cumulative gravitational lensing correction to the distance modulus that increases with redshift. That correction is present even for small voids and depends on the density contrast of the voids, not on their radius. Averaging over all directions destroys the cumulative correction even in a non-randomized simple cubic lattice of voids. Despite the well known argument for photon flux conservation, the average distance modulus correction at low redshifts is not zero, due to the peculiar velocities. A formula for the maximum possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that (1) have approximately constant densities in their interior and walls and (2) are not in a deep nonlinear regime. The actual average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximum. That damping is traced to cancellations between the corrections coming from the fronts and backs of different voids at the same redshift from the observer. The calculated correction at low redshifts allows one to readily predict the redshift at which the averaged fluctuation in the Hubble diagram is below a required precision, and suggests a method to extract the background Hubble constant from low redshift data without the need to correct for peculiar velocities.
NASA Astrophysics Data System (ADS)
Shakeel, Hira; Haq, S. U.; Aisha, Ghulam; Nadeem, Ali
2017-06-01
The quantitative analysis of a standard aluminum-silicon alloy has been performed using calibration-free laser induced breakdown spectroscopy (CF-LIBS). The plasma was produced using the fundamental harmonic (1064 nm) of an Nd:YAG laser and the emission spectra were recorded at a 3.5 μs detector gate delay. The qualitative analysis of the emission spectra confirms the presence of Mg, Al, Si, Ti, Mn, Fe, Ni, Cu, Zn, Sn, and Pb in the alloy. The background-subtracted and self-absorption-corrected emission spectra were used to estimate the plasma temperature as 10,100 ± 300 K. The plasma temperature and the self-absorption-corrected emission lines of each element were then used to determine the concentration of each species present in the alloy. The use of corrected emission intensities and an accurate evaluation of the plasma temperature yield a reliable quantitative analysis, with a maximum deviation of 2.2% from the reference sample concentration.
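The plasma-temperature step of CF-LIBS is commonly done with a Boltzmann plot: ln(I·λ/(g·A)) plotted against the upper-level energy E is a straight line of slope −1/(k_B·T). The sketch below shows that generic procedure, not the authors' exact workflow; the symbols (g, A, E) are the usual spectroscopic quantities per emission line.

```python
import numpy as np

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_plot_temperature(intensity, wavelength_nm, g_upper, a_ul,
                               e_upper_ev):
    """Estimate plasma temperature from a Boltzmann plot:
    ln(I * lambda / (g * A)) = const - E_upper / (k_B * T),
    so a least-squares slope m gives T = -1 / (k_B * m)."""
    y = np.log(intensity * wavelength_nm / (g_upper * a_ul))
    m, _ = np.polyfit(e_upper_ev, y, 1)
    return -1.0 / (K_B_EV * m)
```

Feeding back synthetic LTE line intensities generated at 10,100 K recovers that temperature from the fitted slope.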
Minamimoto, Ryogo; Mitsumoto, Takuya; Miyata, Yoko; Sunaoka, Fumio; Morooka, Miyako; Okasaki, Momoko; Iagaru, Andrei; Kubota, Kazuo
2016-02-01
This study evaluated the potential of the Q.Freeze algorithm for reducing motion artifacts, in comparison with ungated imaging (UG) and respiratory-gated imaging (RG). Twenty-nine patients with 53 lesions who had undergone respiratory-gated 18F-FDG PET/CT were included in this study. Using PET list mode data, five series of PET images [UG, RG, and Q.Freeze (QF) images with acquisition durations of 3 min (QF3), 5 min (QF5), and 10 min (QF10)] were reconstructed retrospectively. The image quality was evaluated first. Next, quantitative metrics [maximum standardized uptake value (SUVmax), mean standardized uptake value (SUVmean), SD, metabolic tumor volume, signal-to-noise ratio, and lesion-to-background ratio] were calculated for the liver, background, and each lesion, and the results were compared across the series. QF10 and QF5 showed better image quality than all other images. SUVmax in the liver, background, and lesions was lower with QF10 and QF5 than with the others, but there were no statistically significant differences in SUVmean or the lesion-to-background ratios. The SD with UG and RG was significantly higher than that with QF5 and QF10. The metabolic tumor volume in QF3 and QF5 was significantly lower than that in UG. The Q.Freeze algorithm can improve the quality of PET imaging compared with RG and UG.
Chen, Shan; Li, Xiao-ning; Liang, Yi-zeng; Zhang, Zhi-min; Liu, Zhao-xia; Zhang, Qi-ming; Ding, Li-xia; Ye, Fei
2010-08-01
During Raman spectroscopy analysis, organic molecules and contaminations can obscure or swamp Raman signals. The present study starts from Raman spectra of prednisone acetate tablets and glibenclamide tablets, acquired with a BWTek i-Raman spectrometer. The background is corrected with the R package baselineWavelet. Then principal component analysis and random forests are used to perform clustering analysis. By analyzing the Raman spectra of the two medicines, the accuracy and validity of this background-correction algorithm are checked and the influence of the fluorescence background on Raman spectra clustering analysis is discussed. It is concluded that correcting the fluorescence background is important for further analysis, and an effective background correction solution is provided for clustering or other analyses.
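The principal component analysis step described here can be sketched generically in a few lines; this stands in for the R workflow and is not the authors' code.

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """Project baseline-corrected spectra (one spectrum per row) onto
    their leading principal components, for clustering/visualization.
    Implemented via SVD of the mean-centered data matrix."""
    centered = spectra - spectra.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T
```

On toy data with two well-separated groups, the first principal component separates the groups into opposite signs.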
A comprehensive numerical analysis of background phase correction with V-SHARP.
Özbay, Pinar Senay; Deistung, Andreas; Feng, Xiang; Nanz, Daniel; Reichenbach, Jürgen Rainer; Schweser, Ferdinand
2017-04-01
Sophisticated harmonic artifact reduction for phase data (SHARP) is a method to remove background field contributions in MRI phase images, which is an essential processing step for quantitative susceptibility mapping (QSM). To perform SHARP, a spherical kernel radius and a regularization parameter need to be defined. In this study, we carried out an extensive analysis of the effect of these two parameters on the corrected phase images and on the reconstructed susceptibility maps. As a result of the dependence of the parameters on acquisition and processing characteristics, we propose a new SHARP scheme with generalized parameters. The new SHARP scheme uses a high-pass filtering approach to define the regularization parameter. We employed the variable-kernel SHARP (V-SHARP) approach, using different maximum radii (R_m) between 1 and 15 mm and varying regularization parameters (f) in a numerical brain model. The local root-mean-square error (RMSE) between the ground-truth, background-corrected field map and the results from SHARP decreased towards the center of the brain. The RMSE of susceptibility maps calculated with a spatial domain algorithm was smallest for R_m between 6 and 10 mm and f between 0 and 0.01 mm⁻¹, and for maps calculated with a Fourier domain algorithm, for R_m between 10 and 15 mm and f between 0 and 0.0091 mm⁻¹. We demonstrated and confirmed the new parameter scheme in vivo. The novel regularization scheme allows the use of the same regularization parameter irrespective of other imaging parameters, such as image resolution. Copyright © 2016 John Wiley & Sons, Ltd.
Object tracking algorithm based on the color histogram probability distribution
NASA Astrophysics Data System (ADS)
Li, Ning; Lu, Tongwei; Zhang, Yanduo
2018-04-01
To resolve tracking failures caused by occlusion of the target and by interference from background objects similar to the target, and to reduce the influence of changes in light intensity, this paper corrects the update center of the target using the HSV and YCbCr color channels and continuously updates an adaptive image threshold for target detection; clustering the initial obstacles into a rough range shortens the threshold range and maximizes target detection. To improve the accuracy of the detector, a Kalman filter is added to estimate the target state region. A direction predictor based on a Markov model is added to realize target state estimation under background color interference and to enhance the ability of the detector to identify similar objects. The experimental results show that the improved algorithm is more accurate and processes frames faster.
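The Kalman filtering step referred to above can be sketched with a generic constant-velocity model; the state layout and noise values below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter over 1-D positions: the kind of
    state estimator used to predict the target region when the color
    detector is distracted by similar background objects."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([measurements[0], 0.0])    # [position, velocity]
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)
```

For a target moving at constant speed, the filtered position converges to the true trajectory after a short transient.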
40 CFR 1065.650 - Emission calculations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... following sequence of preliminary calculations on recorded concentrations: (i) Correct all THC and CH4.... (iii) Calculate all THC and NMHC concentrations, including dilution air background concentrations, as... NMHC to background corrected mass of THC. If the background corrected mass of NMHC is greater than 0.98...
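The NMHC logic this excerpt truncates can be sketched roughly as follows. The CH4 response-factor handling and the 0.98 cap are paraphrased from the Part 1065 procedure as we understand it, so treat this as an illustration only and consult the regulation text for the authoritative equations.

```python
def nmhc_from_thc_ch4(m_thc, m_ch4, rf_ch4=1.0):
    """Illustrative sketch of the NMHC determination alluded to in the
    excerpt: NMHC is (background-corrected) THC minus the CH4
    contribution scaled by a CH4 response factor, capped at 0.98 times
    the THC mass. Simplified; not a substitute for Sec. 1065.660/667."""
    m_nmhc = m_thc - rf_ch4 * m_ch4
    if m_nmhc > 0.98 * m_thc:
        m_nmhc = 0.98 * m_thc
    return m_nmhc
```

For example, with 100 g THC and 1 g CH4 the uncapped value of 99 g exceeds 0.98 × THC, so the cap applies; with 10 g CH4 the direct subtraction stands.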
An empirical correction for moderate multiple scattering in super-heterodyne light scattering.
Botin, Denis; Mapa, Ludmila Marotta; Schweinfurth, Holger; Sieber, Bastian; Wittenberg, Christopher; Palberg, Thomas
2017-05-28
Frequency domain super-heterodyne laser light scattering is utilized in a low angle integral measurement configuration to determine flow and diffusion in charged sphere suspensions showing moderate to strong multiple scattering. We introduce an empirical correction to subtract the multiple scattering background and isolate the singly scattered light. We demonstrate the excellent feasibility of this simple approach for turbid suspensions of transmittance T ≥ 0.4. We study the particle concentration dependence of the electro-kinetic mobility in low salt aqueous suspension over an extended concentration regime and observe a maximum at intermediate concentrations. We further use our scheme for measurements of the self-diffusion coefficients in the fluid samples in the absence or presence of shear, as well as in polycrystalline samples during crystallization and coarsening. We discuss the scope and limits of our approach as well as possible future applications.
Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W
2012-09-07
A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one-dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique greatly reduced or eliminated the background artifacts as well and preserved the peak intensity better than AWLS. The loss in peak intensity by AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise, which led to detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
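A minimal sketch of SVD-based background reduction, under our reading of the general idea (remove the leading singular components, assumed to capture the smooth background shared across chromatograms); the paper's SVD-BC component-selection details are not reproduced here.

```python
import numpy as np

def svd_background_correct(data, n_bg=1):
    """Subtract the leading n_bg singular components of a data matrix,
    treating them as the highly correlated background common to the
    rows (e.g., successive chromatograms)."""
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    s_bg = s.copy()
    s_bg[n_bg:] = 0.0                 # keep only background components
    background = (u * s_bg) @ vt
    return data - background, background
```

For a matrix that is purely a rank-1 background, removing one component annihilates it entirely; real data would retain the peak-bearing residual.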
40 CFR 1065.667 - Dilution air background emission correction.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Dilution air background emission...
40 CFR 1065.667 - Dilution air background emission correction.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Dilution air background emission...
40 CFR 1065.667 - Dilution air background emission correction.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Dilution air background emission...
40 CFR 1065.667 - Dilution air background emission correction.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Dilution air background emission...
40 CFR 1065.667 - Dilution air background emission correction.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Dilution air background emission...
Ellipsoidal corrections for geoid undulation computations using gravity anomalies in a cap
NASA Technical Reports Server (NTRS)
Rapp, R. H.
1981-01-01
Ellipsoidal correction terms have been derived for geoid undulation computations when the Stokes equation using gravity anomalies in a cap is combined with potential coefficient information. The correction terms are of long wavelength and depend on the size of the cap within which the gravity anomalies are given. Using the regular Stokes equation, the maximum correction for a cap size of 20 deg is -33 cm, which reduces to -27 cm when the Stokes function is modified by subtracting the value of the Stokes function at the cap radius. Ellipsoidal correction terms were also derived for the well-known Marsh/Chang geoids. When no gravity was used, the correction could reach 101 cm, while for a cap size of 20 deg the maximum correction was -45 cm. Global correction maps are given for a number of different cases. For work requiring accurate geoid computations these correction terms should be applied.
Simple automatic strategy for background drift correction in chromatographic data analysis.
Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin
2016-06-03
Chromatographic background drift correction, which influences peak detection and time shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers in this vector, which belong to the chromatographic peaks, and to update the outliers in the baseline until convergence. The optimized baseline vector was finally expanded into the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight data from a metabolic study of Escherichia coli. The proposed method was comparable with three classical techniques: morphological weighted penalized least squares, the moving window minimum value strategy and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use. Copyright © 2016 Elsevier B.V. All rights reserved.
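The local-minima / interpolation pipeline described above can be sketched as follows. This is a hypothetical illustration of the idea, not the authors' code: the outlier rule (mean plus two standard deviations) and the function name are assumptions.

```python
import numpy as np

def baseline_correct(y, n_iter=20):
    """Sketch of an automatic baseline: take local minima as baseline
    anchors, iteratively pull down anchors that sit on peaks, then
    linearly interpolate a baseline across the chromatogram."""
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y))
    # local minima serve as initial baseline anchor points
    interior = [i for i in range(1, len(y) - 1)
                if y[i] <= y[i - 1] and y[i] <= y[i + 1]]
    idx = np.array([0] + interior + [len(y) - 1])
    vals = y[idx].copy()
    for _ in range(n_iter):
        # flag anchors that sit on peaks (well above the typical level)
        keep = vals <= vals.mean() + 2.0 * vals.std()
        if keep.all():
            break
        vals[~keep] = vals[keep].mean()   # replace outliers, iterate
    baseline = np.interp(x, idx, vals)    # expand anchors to full length
    return y - baseline
```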
Analytical electron microscopy in mineralogy; exsolved phases in pyroxenes
Nord, G.L.
1982-01-01
Analytical scanning transmission electron microscopy has been successfully used to characterize the structure and composition of lamellar exsolution products in pyroxenes. At operating voltages of 100 and 200 keV, microanalytical techniques of x-ray energy analysis, convergent-beam electron diffraction, and lattice imaging have been used to chemically and structurally characterize exsolution lamellae only a few unit cells wide. Quantitative X-ray energy analysis using ratios of peak intensities has been adopted for the U.S. Geological Survey AEM in order to study the compositions of exsolved phases and changes in compositional profiles as a function of time and temperature. The quantitative analysis procedure involves 1) removal of instrument-induced background, 2) reduction of contamination, and 3) measurement of correction factors obtained from a wide range of standard compositions. The peak-ratio technique requires that the specimen thickness at the point of analysis be thin enough to make absorption corrections unnecessary (i.e., to satisfy the "thin-foil criteria"). In pyroxenes, the calculated "maximum thicknesses" range from 130 to 1400 nm for the ratios Mg/Si, Fe/Si, and Ca/Si; these "maximum thicknesses" have been contoured in pyroxene composition space as a guide during analysis. Analytical spatial resolutions of 50-100 nm have been achieved in AEM at 200 keV from the composition-profile studies, and analytical reproducibility in AEM from homogeneous pyroxene standards is ± 1.5 mol% endmember. © 1982.
Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv
2012-12-11
Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) present a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would provide a better model of the signal density. Hence, the normal-exponential model may not be appropriate for Illumina data, and background corrections derived from this model may lead to incorrect estimates. We propose a more flexible model based on a gamma-distributed signal and normally distributed background noise and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity.
These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to higher sensitivity in differential analysis. Nevertheless, this realistic modeling paves the way for future investigations, in particular to examine the characteristics of pre-processing strategies.
Youssef, Joseph El; Engle, Julia M.; Massoud, Ryan G.; Ward, W. Kenneth
2010-01-01
Background: A cause of suboptimal accuracy in amperometric glucose sensors is the presence of a background current (current produced in the absence of glucose) that is not accounted for. We hypothesized that a mathematical correction for the estimated background current of a commercially available sensor would lead to greater accuracy compared to a situation in which we assumed the background current to be zero. We also tested whether increasing the frequency of sensor calibration would improve sensor accuracy. Methods: This report includes analysis of 20 sensor datasets from seven human subjects with type 1 diabetes. Data were divided into a training set for algorithm development and a validation set on which the algorithm was tested. A range of potential background currents was tested. Results: Use of a background current correction of 4 nA led to a substantial improvement in accuracy (improvement in absolute relative difference or absolute difference of 3.5–5.5 units). An increase in calibration frequency led to a modest accuracy improvement, with an optimum at every 4 h. Conclusions: Compared to no correction, a correction for the estimated background current of a commercially available glucose sensor led to greater accuracy and better detection of hypoglycemia and hyperglycemia. The accuracy-optimizing scheme presented here can be implemented in real time. PMID:20879968
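The effect of a background-current correction on a one-point calibration can be sketched in a few lines. This is illustrative only, using the 4 nA figure from the abstract; the study's actual algorithm is more elaborate, and the function name and calibration scheme here are assumptions.

```python
def glucose_from_current(i_sensor_nA, i_cal_nA, g_cal_mg_dl, i_bg_nA=4.0):
    """One-point calibration with a background-current correction:
    subtract the assumed background current from both the calibration
    and the measurement before converting current to glucose."""
    # sensitivity in nA per (mg/dL), from the calibration point
    sensitivity = (i_cal_nA - i_bg_nA) / g_cal_mg_dl
    return (i_sensor_nA - i_bg_nA) / sensitivity
```

Setting `i_bg_nA=0` reproduces the uncorrected case, which overestimates glucose at low currents, consistent with the accuracy gain reported above.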
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.
Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered as a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with a breast tissue equivalent phantom and a calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: the image without scatter correction, the image with scatter correction using the pinhole-array interpolation method, and the image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of the background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method.
The range of background DE calcification signals using scatter-uncorrected data was reduced by 58% with scatter-corrected data by the algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When applying the proposed algorithmic scatter correction to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method in scatter correction for DEDM; moreover, this method is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated with a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge effectiveness.
Yu, Yong-Jie; Wu, Hai-Long; Fu, Hai-Yan; Zhao, Juan; Li, Yuan-Na; Li, Shu-Fang; Kang, Chao; Yu, Ru-Qin
2013-08-09
Chromatographic background drift correction has been an important field of research in chromatographic analysis. In the present work, orthogonal spectral space projection for background drift correction of three-dimensional chromatographic data was described in detail and combined with parallel factor analysis (PARAFAC) to resolve overlapped chromatographic peaks and obtain the second-order advantage. This strategy was verified by simulated chromatographic data and afforded significant improvement in quantitative results. Finally, this strategy was successfully utilized to quantify eleven antibiotics in tap water samples. Compared with the traditional methodology of introducing excessive factors for the PARAFAC model to eliminate the effect of background drift, clear improvement in the quantitative performance of PARAFAC was observed after background drift correction by orthogonal spectral space projection. Copyright © 2013 Elsevier B.V. All rights reserved.
Ramsey, Elijah W.; Nelson, G.
2005-01-01
To maximize the spectral distinctiveness (information) of the canopy reflectance, an atmospheric correction strategy was implemented to provide accurate estimates of the intrinsic reflectance from the Earth Observing 1 (EO1) satellite Hyperion sensor signal. In rendering the canopy reflectance, an estimate of optical depth derived from a measurement of downwelling irradiance was used to drive a radiative transfer simulation of atmospheric scattering and attenuation. During the atmospheric model simulation, the input whole-terrain background reflectance estimate was changed to minimize the differences between the model-predicted and the observed canopy reflectance spectra at 34 sites. Lacking appropriate spectrally invariant scene targets, inclusion of the field and predicted comparison maximized the model accuracy and, thereby, the detail and precision in the canopy reflectance necessary to detect low percentage occurrences of invasive plants. After accounting for artifacts surrounding prominent absorption features from about 400 nm to 1000 nm, the atmospheric adjustment strategy correctly explained 99% of the observed canopy reflectance spectra variance. Separately, model simulation explained an average of 88% ± 9% of the observed variance in the visible and 98% ± 1% in the near-infrared wavelengths. In the 34 model simulations, maximum differences between the observed and predicted reflectances were typically less than ±1% in the visible; however, maximum reflectance differences higher than ±1.6% (−2.3%) at more than a few wavelengths were observed at three sites. In the near-infrared wavelengths, maximum reflectance differences remained less than ±3% for 68% of the comparisons (±1 standard deviation) and less than ±6% for 95% of the comparisons (±2 standard deviations). Higher reflectance differences in the visible and near-infrared wavelengths were most likely associated with problems in the comparison, not in the model generation. © 2005 US Government.
Ding, Liang-Hao; Xie, Yang; Park, Seongmi; Xiao, Guanghua; Story, Michael D.
2008-01-01
Despite the tremendous growth of microarray usage in scientific studies, there is a lack of standards for background correction methodologies, especially in single-color microarray platforms. Traditional background subtraction methods often generate negative signals and thus cause large amounts of data loss. Hence, some researchers prefer to avoid background corrections, which typically result in the underestimation of differential expression. Here, by utilizing nonspecific negative control features integrated into Illumina whole genome expression arrays, we have developed a method of model-based background correction for BeadArrays (MBCB). We compared the MBCB with a method adapted from the Affymetrix robust multi-array analysis algorithm and with no background subtraction, using a mouse acute myeloid leukemia (AML) dataset. We demonstrated that differential expression ratios obtained by using the MBCB had the best correlation with quantitative RT–PCR. MBCB also achieved better sensitivity in detecting differentially expressed genes with biological significance. For example, we demonstrated that the differential regulation of Tnfr2, Ikk and NF-kappaB, the death receptor pathway, in the AML samples, could only be detected by using data after MBCB implementation. We conclude that MBCB is a robust background correction method that will lead to more precise determination of gene expression and better biological interpretation of Illumina BeadArray data. PMID:18450815
NASA Astrophysics Data System (ADS)
Jentzen, Walter
2010-04-01
The use of recovery coefficients (RCs) in 124I PET lesion imaging is a simple method to correct the imaged activity concentration (AC) primarily for the partial-volume effect and, to a minor extent, for the prompt gamma coincidence effect. The aim of this phantom study was to experimentally investigate various factors affecting the 124I RCs. Three RC-based correction approaches were considered. These approaches differ with respect to the volume of interest (VOI) drawn, which determines the imaged AC and the RCs: a single-voxel VOI containing the maximum value (maximum RC), a spherical VOI with a diameter of the scanner resolution (resolution RC) and a VOI equaling the physical object volume (isovolume RC). Measurements were performed using mainly a stand-alone PET scanner (EXACT HR+) and a latest-generation PET/CT scanner (BIOGRAPH mCT). The RCs were determined using a cylindrical phantom containing spheres or rotational ellipsoids and were derived from images acquired with a reference acquisition protocol. For each type of RC, the influence of the following factors on the RC was assessed: object shape, background activity spill-in and iterative image reconstruction parameters. To evaluate the robustness of the RC-based correction approaches, the percentage deviation between RC-corrected and true ACs was determined from images acquired with a clinical acquisition protocol at different AC regimes. The observed results of the shape and spill-in effects were compared with simulation data derived from a convolution-based model. The study demonstrated that the shape effect was negligible and, therefore, was in agreement with theoretical expectations. In contradiction to the simulation results, the observed spill-in effect was unexpectedly small.
To avoid variations in the determination of RCs due to reconstruction parameter changes, image reconstruction with a pixel length of about one-third or less of the scanner resolution and an OSEM 1 × 32 algorithm, or one with a somewhat higher number of effective iterations, are recommended. Using the clinical acquisition protocol, the phantom study indicated that the resolution- or isovolume-based recovery-correction approaches appeared to be more appropriate for recovering the ACs from patient data; however, the application of the three RC-based correction approaches to small lesions containing low ACs was, in particular, associated with large underestimations. The phantom study had several limitations, which were discussed in detail.
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
Li, Roger W.; MacKeben, Manfred; Chat, Sandy W.; Kumar, Maya; Ngo, Charlie; Levi, Dennis M.
2010-01-01
Background: Much previous work on how normal aging affects visual enumeration has focused on the response time required to enumerate, with unlimited stimulus duration. There is a fundamental question, not yet addressed, of how many visual items the aging visual system can enumerate in a "single glance", without the confounding influence of eye movements. Methodology/Principal Findings: We recruited 104 observers with normal vision across the age span (age 21–85). They were briefly (200 ms) presented with a number of well-separated black dots against a gray background on a monitor screen, and were asked to judge the number of dots. By limiting the stimulus presentation time, we can determine the maximum number of visual items an observer can correctly enumerate at a criterion level of performance (counting threshold, defined as the number of visual items at which there is an ≈63% correct rate on a psychometric curve), without confounding by eye movements. Our findings reveal a 30% decrease in the mean counting threshold of the oldest group (age 61–85: ∼5 dots) when compared with the youngest group (age 21–40: 7 dots). Surprisingly, despite the decreased counting threshold, the average counting accuracy function (defined as the mean number of dots reported for each number tested) is largely unaffected by age, reflecting that the threshold loss can be primarily attributed to increased random errors. We further expanded this interesting finding to show that both young and old adults tend to over-count small numbers, but older observers over-count more. Conclusion/Significance: Here we show that age reduces the ability to correctly enumerate in a glance, but the accuracy (veridicality), on average, remains unchanged with advancing age. Control experiments indicate that the degraded performance cannot be explained by optical, retinal or other perceptual factors, but is cortical in origin. PMID:20976149
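The counting-threshold criterion can be illustrated with a simple interpolation on a descending psychometric curve. This is a hypothetical helper showing only the definition, not the authors' fitting procedure.

```python
import numpy as np

def counting_threshold(n_items, p_correct, criterion=0.632):
    """Counting threshold: the number of items at which the proportion
    of correct enumerations falls to ~63%, by linear interpolation on
    a psychometric curve that decreases with set size."""
    p = np.asarray(p_correct, dtype=float)
    n = np.asarray(n_items, dtype=float)
    # np.interp needs ascending x, so reverse the descending curve
    return float(np.interp(criterion, p[::-1], n[::-1]))
```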
Quan, H T
2014-06-01
We study the maximum efficiency of a heat engine based on a small system. It is revealed that, due to the finiteness of the system, irreversibility may arise when the working substance is in contact with a heat reservoir. As a result, there is a working-substance-dependent correction to the Carnot efficiency. We derive a general and simple expression for the maximum efficiency of a Carnot cycle heat engine in terms of the relative entropy. This maximum efficiency approaches the Carnot efficiency asymptotically when the size of the working substance increases to the thermodynamic limit. Our study extends Carnot's result of the maximum efficiency to an arbitrary working substance and elucidates the subtlety of thermodynamic laws in small systems.
Nonthermal production of dark matter from primordial black holes
NASA Astrophysics Data System (ADS)
Allahverdi, Rouzbeh; Dent, James; Osinski, Jacek
2018-03-01
We present a scenario for nonthermal production of dark matter from evaporation of primordial black holes. A period of very early matter domination leads to formation of black holes with a maximum mass of ≃2 × 10⁸ g, whose subsequent evaporation prior to big bang nucleosynthesis can produce all of the dark matter in the Universe. We show that the correct relic abundance can be obtained in this way for thermally underproduced dark matter in the 100 GeV–10 TeV mass range. To achieve this, the scalar power spectrum at small scales relevant for black hole formation should be enhanced by a factor of O(10⁵) relative to the scales accessible by cosmic microwave background experiments.
The Impacts of Heating Strategy on Soil Moisture Estimation Using Actively Heated Fiber Optics.
Dong, Jianzhi; Agliata, Rosa; Steele-Dunne, Susan; Hoes, Olivier; Bogaard, Thom; Greco, Roberto; van de Giesen, Nick
2017-09-13
Several recent studies have highlighted the potential of Actively Heated Fiber Optics (AHFO) for high resolution soil moisture mapping. In AHFO, the soil moisture can be calculated from the cumulative temperature (Tcum), the maximum temperature (Tmax), or the soil thermal conductivity determined from the cooling phase after heating (λ). This study investigates the performance of the Tcum, Tmax and λ methods for different heating strategies, i.e., differences in the duration and input power of the applied heat pulse. The aim is to compare the three approaches and to determine which is best suited to field applications where the power supply is limited. Results show that increasing the input power of the heat pulses makes it easier to differentiate between dry and wet soil conditions, which leads to an improved accuracy. Results suggest that if the power supply is limited, the heating strength is insufficient for the λ method to yield accurate estimates. Generally, the Tcum and Tmax methods have similar accuracy. If the input power is limited, increasing the heat pulse duration can improve the accuracy of the AHFO method for both of these techniques. In particular, extending the heating duration can significantly increase the sensitivity of Tcum to soil moisture. Hence, the Tcum method is recommended when the input power is limited. Finally, results also show that up to 50% of the cable temperature change during the heat pulse can be attributed to soil background temperature, i.e., soil temperature changed by the net solar radiation. A method is proposed to correct this background temperature change. Without correction, soil moisture information can be completely masked by the background temperature error.
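The Tcum measure with a background-temperature correction can be sketched as a time integral of the cable's temperature rise. This shows only the definition used above; the function name and the trapezoidal scheme are our choices, not the authors' code.

```python
import numpy as np

def cumulative_temperature(t_s, temp_c, temp_background_c):
    """T_cum: the time integral of the cable's temperature rise over the
    heat pulse, after subtracting the (possibly time-varying) background
    soil temperature sampled at the same instants."""
    rise = np.asarray(temp_c, dtype=float) - np.asarray(temp_background_c, dtype=float)
    t = np.asarray(t_s, dtype=float)
    # trapezoidal integration of the temperature rise over the pulse
    return float(np.sum(0.5 * (rise[1:] + rise[:-1]) * np.diff(t)))
```

Passing the measured background series rather than a constant implements the proposed correction for solar-radiation-driven background drift.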
Colorimetric calibration of wound photography with off-the-shelf devices
NASA Astrophysics Data System (ADS)
Bala, Subhankar; Sirazitdinova, Ekaterina; Deserno, Thomas M.
2017-03-01
Digital cameras are widely used for photographic documentation in the medical sciences. However, the color reproducibility of the same object suffers under different illuminations and lighting conditions. This variation in color representation is problematic when the images are used for segmentation and measurements based on color thresholds. In this paper, motivated by photographic follow-up of chronic wounds, we assess the impact of (i) gamma correction, (ii) white balancing, (iii) background unification, and (iv) reference card-based color correction. Automatic gamma correction and white balancing are applied to support the calibration procedure, where gamma correction is a nonlinear color transform. For unevenly illuminated images, non-uniform illumination correction is applied. In the last step, we apply colorimetric calibration using a reference color card of 24 patches with known colors. A lattice detection algorithm is used for locating the card. The least squares algorithm is applied for affine color calibration in the RGB model. We have tested the algorithm on images with seven different types of illumination: with and without flash, using three different off-the-shelf cameras including smartphones. We analyzed the spread of the resulting color values of a selected color patch before and after applying the calibration. Additionally, we checked the individual contribution of each step of the whole calibration process. Using all steps, we were able to achieve a maximum 81% reduction in the standard deviation of color patch values in the resulting images compared to the original images. This supports manual as well as automatic quantitative wound assessment with off-the-shelf devices.
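The affine least-squares calibration step can be sketched as follows: stack the measured patch colors with a constant column and solve for a 3×3 matrix plus offset that maps them to the card's known colors. A minimal sketch of the stated approach; the variable and function names are ours.

```python
import numpy as np

def fit_affine_color_map(measured, reference):
    """Least-squares affine calibration in RGB: find M (3x3) and b (3,)
    with reference ≈ measured @ M + b, from the card's patches.
    measured, reference: (n_patches, 3) arrays, e.g. the 24 card patches."""
    measured = np.asarray(measured, dtype=float)
    X = np.hstack([measured, np.ones((measured.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(X, np.asarray(reference, dtype=float), rcond=None)
    return coef[:3], coef[3]

def apply_color_map(rgb, M, b):
    """Apply the fitted affine map to pixels or patches (n, 3)."""
    return np.asarray(rgb, dtype=float) @ M + b
```

With 24 patches and 12 unknowns the system is overdetermined, so the fit averages out per-patch measurement noise.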
Drug exposure in register-based research—An expert-opinion based evaluation of methods
Taipale, Heidi; Koponen, Marjaana; Tolppanen, Anna-Maija; Hartikainen, Sirpa; Ahonen, Riitta; Tiihonen, Jari
2017-01-01
Background: In register-based pharmacoepidemiological studies, construction of drug exposure periods from drug purchases is a major methodological challenge. Various methods have been applied but their validity is rarely evaluated. Our objective was to conduct an expert-opinion based evaluation of the correctness of drug use periods produced by different methods. Methods: Drug use periods were calculated with three fixed methods: time windows, assumption of one Defined Daily Dose (DDD) per day and one tablet per day, and with PRE2DUP, which is based on modelling of individual drug purchasing behavior. The expert-opinion based evaluation was conducted with 200 randomly selected purchase histories of warfarin, bisoprolol, simvastatin, risperidone and mirtazapine in the MEDALZ-2005 cohort (28,093 persons with Alzheimer's disease). Two experts reviewed the purchase histories and judged which methods had joined correct purchases and gave the correct duration for each of 1000 drug exposure periods. Results: The evaluated correctness of drug use periods was 70–94% for PRE2DUP and, depending on grace periods and time window lengths, 0–73% for tablet methods, 0–41% for DDD methods and 0–11% for time window methods. The highest rates of evaluated correct solutions for each method class were observed for 1 tablet per day with a 180-day grace period (TAB_1_180, 43–73%), and 1 DDD per day with a 180-day grace period (1–41%). Time window methods produced at most only 11% correct solutions. The best performing fixed method, TAB_1_180, reached its highest correctness for simvastatin at 73% (95% CI 65–81%), whereas 89% (95% CI 84–94%) of PRE2DUP periods were judged as correct. Conclusions: This study shows the inaccuracy of fixed methods and the urgent need for new data-driven methods. In the expert-opinion based evaluation, the lowest error rates were observed with the data-driven method PRE2DUP. PMID:28886089
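A fixed "one tablet per day" construction with a grace period, in the spirit of the TAB_1_180 method above, can be sketched in a few lines. This is an illustrative sketch of the fixed method only, not PRE2DUP; the function name and data layout are assumptions.

```python
from datetime import date, timedelta

def tablet_method_periods(purchases, grace_days=180):
    """Fixed 'one tablet per day' method: each purchase covers one day
    per tablet dispensed; consecutive purchases whose gap to the running
    period is within the grace period are merged into one use period.
    purchases: iterable of (purchase_date, n_tablets)."""
    periods = []
    for buy_date, n_tablets in sorted(purchases):
        end = buy_date + timedelta(days=n_tablets)
        if periods and (buy_date - periods[-1][1]).days <= grace_days:
            periods[-1][1] = max(periods[-1][1], end)   # extend current period
        else:
            periods.append([buy_date, end])             # start a new period
    return [tuple(p) for p in periods]
```

The rigidity criticized in the abstract is visible here: dose changes, stockpiling and irregular refills all violate the one-tablet-per-day assumption.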
Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley
2013-12-15
The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies the adverse events studied are rare, and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate two bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than both the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations remain when the risk period in the SCCS design is short relative to the entire observation period.
76 FR 74720 - Inflation Adjustment of Civil Monetary Penalties; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-01
... of Civil Monetary Penalties; Correction AGENCY: Federal Maritime Commission. ACTION: Correcting... maximum amount of each statutory civil penalty subject to Federal Maritime Commission jurisdiction, in accordance with the requirements of the Federal Civil Penalties Inflation Adjustment Act of 1990, as amended...
Observation-Corrected Precipitation Estimates in GEOS-5
NASA Technical Reports Server (NTRS)
Reichle, Rolf H.; Liu, Qing
2014-01-01
Several GEOS-5 applications, including the GEOS-5 seasonal forecasting system and the MERRA-Land data product, rely on global precipitation data that have been corrected with satellite- and/or gauge-based precipitation observations. This document describes the methodology used to generate the corrected precipitation estimates and their use in GEOS-5 applications. The corrected precipitation estimates are derived by disaggregating publicly available, observationally based, global precipitation products from daily or pentad totals to hourly accumulations using background precipitation estimates from the GEOS-5 atmospheric data assimilation system. Depending on the specific combination of the observational precipitation product and the GEOS-5 background estimates, the observational product may also be downscaled in space. The resulting corrected precipitation data product is at the finer temporal and spatial resolution of the GEOS-5 background and matches the observed precipitation at the coarser scale of the observational product, separately for each day (or pentad) and each grid cell.
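The disaggregation step described above, matching an observed daily total while keeping the background's sub-daily timing, can be sketched as follows. This is a minimal illustration, not the GEOS-5 code; the function name and the uniform fallback for dry background days are assumptions.

```python
import numpy as np

def disaggregate_daily(obs_daily_total, background_hourly):
    """Rescale the model's hourly background precipitation so that it sums
    to the observed daily total, preserving the background's sub-daily
    timing; if the background is entirely dry, spread the observed total
    uniformly over the day."""
    bg = np.asarray(background_hourly, dtype=float)
    total = bg.sum()
    if total > 0.0:
        return bg * (obs_daily_total / total)
    return np.full(bg.shape, obs_daily_total / bg.size)

# 5 mm observed for the day, timed by the model's afternoon shower
background = np.array([0.0] * 18 + [1.0, 3.0, 4.0, 2.0, 0.0, 0.0])
hourly = disaggregate_daily(5.0, background)
```

By construction, the corrected hourly series sums to the coarse-scale observation for each day (or pentad) and each grid cell, exactly the constraint the abstract states.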
High-energy electrons from the muon decay in orbit: Radiative corrections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szafron, Robert; Czarnecki, Andrzej
2015-12-07
We determine the O(α) correction to the energy spectrum of electrons produced in the decay of muons bound in atoms. We focus on the high-energy end of the spectrum, which constitutes a background for muon-electron conversion and will be precisely measured by the upcoming experiments Mu2e and COMET. We find that the correction suppresses the background by about 20%.
Ohlenforst, Barbara; Zekveld, Adriana A; Lunner, Thomas; Wendt, Dorothea; Naylor, Graham; Wang, Yang; Versfeld, Niek J; Kramer, Sophia E
2017-08-01
Previous research has reported effects of masker type and signal-to-noise ratio (SNR) on listening effort, as indicated by the peak pupil dilation (PPD) relative to baseline during speech recognition. At about 50% correct sentence recognition performance, increasing SNRs generally results in declining PPDs, indicating reduced effort. However, the decline in PPD over SNRs has been observed to be less pronounced for hearing-impaired (HI) compared to normal-hearing (NH) listeners. The presence of a competing talker during speech recognition generally resulted in larger PPDs as compared to the presence of a fluctuating or stationary background noise. The aim of the present study was to examine the interplay between hearing-status, a broad range of SNRs corresponding to sentence recognition performance varying from 0 to 100% correct, and different masker types (stationary noise and single-talker masker) on the PPD during speech perception. Twenty-five HI and 32 age-matched NH participants listened to sentences across a broad range of SNRs, masked with speech from a single talker (-25 dB to +15 dB SNR) or with stationary noise (-12 dB to +16 dB). Correct sentence recognition scores and pupil responses were recorded during stimulus presentation. With a stationary masker, NH listeners show maximum PPD across a relatively narrow range of low SNRs, while HI listeners show relatively large PPD across a wide range of ecological SNRs. With the single-talker masker, maximum PPD was observed in the mid-range of SNRs around 50% correct sentence recognition performance, while smaller PPDs were observed at lower and higher SNRs. Mixed-model ANOVAs revealed significant interactions between hearing-status and SNR on the PPD for both masker types. Our data show a different pattern of PPDs across SNRs between groups, which indicates that listening and the allocation of effort during listening in daily life environments may be different for NH and HI listeners. 
Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
A post-reconstruction method to correct cupping artifacts in cone beam breast computed tomography
Altunbas, M. C.; Shaw, C. C.; Chen, L.; Lai, C.; Liu, X.; Han, T.; Wang, T.
2007-01-01
In cone beam breast computed tomography (CT), scattered radiation leads to nonuniform biasing of CT numbers known as a cupping artifact. Besides being visual distractions, cupping artifacts appear as background nonuniformities, which impair efficient gray scale windowing and pose a problem in threshold-based volume visualization/segmentation. To overcome this problem, we have developed a background nonuniformity correction method specifically designed for cone beam breast CT. With this technique, the cupping artifact is modeled as an additive background signal profile in the reconstructed breast images. Due to the largely circularly symmetric shape of a typical breast, the additive background signal profile was also assumed to be circularly symmetric. The radial variation of the background signals was estimated by measuring the spatial variation of adipose tissue signals in front-view breast images. To extract adipose tissue signals in an automated manner, a signal sampling scheme in polar coordinates and a background trend fitting algorithm were implemented. The background fits were then compared with a target adipose tissue signal value (constant throughout the breast volume) to obtain an additive correction value for each tissue voxel. To test the accuracy, we applied the technique to cone beam CT images of mastectomy specimens. After correction, the images demonstrated significantly improved signal uniformity in both front and side view slices. The reduction of both intra-slice and inter-slice variations in adipose tissue CT numbers supported our observations. PMID:17822018
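A minimal sketch of the additive, circularly symmetric correction idea: fit a low-order polynomial to adipose-tissue signal versus radius, then add back the difference to a target value at every voxel. The polynomial order, the use of NumPy's `polyfit`, and the synthetic example are assumptions; the paper's polar-coordinate sampling and trend-fitting details are not reproduced.

```python
import numpy as np

def correct_cupping(image, adipose_mask, target):
    """Fit a radial polynomial to adipose-tissue signals and add
    (target - fit) everywhere so the adipose background becomes flat."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(yy - (ny - 1) / 2.0, xx - (nx - 1) / 2.0)
    coeffs = np.polyfit(r[adipose_mask].ravel(), image[adipose_mask].ravel(), deg=2)
    background = np.polyval(coeffs, r)          # circularly symmetric trend
    return image + (target - background)        # additive correction

# Synthetic slice: uniform adipose at 100 with a quadratic cupping dip
ny = nx = 64
yy, xx = np.mgrid[0:ny, 0:nx]
r = np.hypot(yy - 31.5, xx - 31.5)
cupped = 100.0 - 0.02 * r ** 2
flat = correct_cupping(cupped, np.ones_like(cupped, dtype=bool), target=100.0)
```

Because the correction is additive rather than multiplicative, contrast between tissue types is preserved while the radial bias is removed.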
MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard
2016-01-01
Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
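Of the three schemes, local bias correction (LBC) is the simplest to illustrate: subtract the mean apparent velocity measured in nearby static tissue before integrating flow over the lumen. The sketch below uses hypothetical numbers and a mean-offset model; the study's LPC and WBPC polynomial fits are not shown.

```python
import numpy as np

def vfr_with_lbc(velocity, vessel_mask, background_mask, pixel_area):
    """Local bias correction (LBC) sketch: estimate the residual
    eddy-current phase offset as the mean apparent velocity in static
    background tissue, subtract it, then integrate over the vessel lumen
    to obtain the volume flow rate (VFR)."""
    offset = velocity[background_mask].mean()   # static tissue should read ~0
    corrected = velocity - offset
    return corrected[vessel_mask].sum() * pixel_area, offset

# Synthetic map: 0.05 cm/s offset everywhere, plus 10 cm/s inside the vessel
vel = np.full((32, 32), 0.05)
vessel = np.zeros((32, 32), dtype=bool)
vessel[14:18, 14:18] = True
vel[vessel] += 10.0
vfr, offset = vfr_with_lbc(vel, vessel, ~vessel, pixel_area=0.01)  # cm^2
```

In this toy example the offset is uniform, so LBC removes it exactly; the study's finding was that in real brain protocols the residual offsets are small enough that vessel VFRs barely change with or without such correction.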
Minor Distortions with Major Consequences: Correcting Distortions in Imaging Spectrographs
Esmonde-White, Francis W. L.; Esmonde-White, Karen A.; Morris, Michael D.
2010-01-01
Projective transformation is a mathematical correction (implemented in software) used in the remote imaging field to produce distortion-free images. We present the application of projective transformation to correct minor alignment and astigmatism distortions that are inherent in dispersive spectrographs. Patterned white-light images and neon emission spectra were used to produce registration points for the transformation. Raman transects collected on microscopy and fiber-optic systems were corrected using established methods and compared with the same transects corrected using the projective transformation. Even minor distortions have a significant effect on reproducibility and apparent fluorescence background complexity. Simulated Raman spectra were used to optimize the projective transformation algorithm. We demonstrate that the projective transformation reduced the apparent fluorescent background complexity and improved reproducibility of measured parameters of Raman spectra. Distortion correction using a projective transformation provides a major advantage in reducing the background fluorescence complexity even in instrumentation where slit-image distortions and camera rotation were minimized using manual or mechanical means. We expect these advantages should be readily applicable to other spectroscopic modalities using dispersive imaging spectrographs. PMID:21211158
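A projective transformation between registration points can be estimated with the standard direct linear transform (DLT). The sketch below is a generic implementation, not the authors' software, and the coordinates are hypothetical stand-ins for the white-light/neon registration marks.

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares projective transformation (DLT) from >= 4 point pairs:
    stack two linear constraints per correspondence and take the SVD null
    vector as the 3x3 homography."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def apply_homography(H, pts):
    """Map 2D points through H in homogeneous coordinates."""
    pts = np.hstack([np.asarray(pts, dtype=float), np.ones((len(pts), 1))])
    out = pts @ H.T
    return out[:, :2] / out[:, 2:3]

# A known distortion (slight shear, shift, and keystone) and its recovery
H_true = np.array([[1.02, 0.03, 2.0],
                   [0.01, 1.05, 1.0],
                   [1e-4, 0.00, 1.0]])
src = [(0, 0), (100, 0), (100, 100), (0, 100), (50, 50)]
dst = apply_homography(H_true, src)
H = fit_homography(src, dst)
mapped = apply_homography(H, src)
```

Once `H` is estimated from the registration points, every detector pixel can be remapped through it, which is how slit-image curvature and camera rotation are removed in one step.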
Acquisition and processing of data for isotope-ratio-monitoring mass spectrometry
NASA Technical Reports Server (NTRS)
Ricci, M. P.; Merritt, D. A.; Freeman, K. H.; Hayes, J. M.
1994-01-01
Methods are described for continuous monitoring of signals required for precise analyses of 13C, 18O, and 15N in gas streams containing varying quantities of CO2 and N2. The quantitative resolution (i.e. maximum performance in the absence of random errors) of these methods is adequate for determination of isotope ratios with an uncertainty of one part in 10^5; the precision actually obtained is often better than one part in 10^4. This report describes data-processing operations including definition of beginning and ending points of chromatographic peaks and quantitation of background levels, allowance for effects of chromatographic separation of isotopically substituted species, integration of signals related to specific masses, correction for effects of mass discrimination, recognition of drifts in mass spectrometer performance, and calculation of isotopic delta values. Characteristics of a system allowing off-line revision of parameters used in data reduction are described and an algorithm for identification of background levels in complex chromatograms is outlined. Effects of imperfect chromatographic resolution are demonstrated and discussed, and an approach to deconvolution of signals from coeluting substances is described.
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
NASA Astrophysics Data System (ADS)
Itoh, Naoki; Nozawa, Satoshi; Kohyama, Yasuharu
2000-04-01
We extend the formalism of relativistic thermal and kinematic Sunyaev-Zeldovich effects and include the polarization of the cosmic microwave background photons. We consider the situation of a cluster of galaxies moving with a velocity β≡v/c with respect to the cosmic microwave background radiation. In the present formalism, polarization of the scattered cosmic microwave background radiation caused by the proper motion of a cluster of galaxies is naturally derived as a special case of the kinematic Sunyaev-Zeldovich effect. The relativistic corrections are also included in a natural way. Our results are in complete agreement with the recent results of relativistic corrections obtained by Challinor, Ford, & Lasenby with an entirely different method, as well as the nonrelativistic limit obtained by Sunyaev & Zeldovich. The relativistic correction becomes significant in the Wien region.
McInnes, E F; Scudamore, C L
2014-08-17
Pathological evaluation of lesions caused directly by xenobiotic treatment must always take into account the recognition of background (incidental) findings. Background lesions can be congenital or hereditary, histological variations, changes related to trauma or normal aging and physiologic or hormonal changes. This review focuses on the importance and correct approach to recording of background changes and includes discussion on sources of variability in background changes, the correct use of terminology, the concept of thresholds, historical control data, diagnostic drift, blind reading of slides, scoring and artifacts. The review is illustrated with background lesions in Sprague Dawley and Wistar rats. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Borsody, J.
1976-01-01
Mathematical equations are derived by using the Maximum Principle to obtain the maximum payload capability of a reusable tug for planetary missions. The mathematical formulation includes correction for nodal precession of the space shuttle orbit. The tug performs this nodal correction in returning to this precessed orbit. The sample case analyzed represents an inner planet mission as defined by the declination (fixed) and right ascension of the outgoing asymptote and the mission energy. Payload capability is derived for a typical cryogenic tug and the sample case with and without perigee propulsion. Optimal trajectory profiles and some important orbital elements are also discussed.
Shinozaki, Kazuma; Zack, Jason W.; Richards, Ryan M.; ...
2015-07-22
The rotating disk electrode (RDE) technique is being extensively used as a screening tool to estimate the activity of novel PEMFC electrocatalysts synthesized in lab-scale (mg) quantities. Discrepancies in measured activity attributable to glassware and electrolyte impurity levels, as well as to conditioning protocols and corrections, are prevalent in the literature. Moreover, the electrochemical response to a broad spectrum of commercially sourced perchloric acid and the effect of acid molarity on impurity levels and solution resistance were also assessed. Our findings reveal that an area-specific activity (SA) exceeding 2.0 mA/cm^2 (20 mV/s, 25°C, 100 kPa, 0.1 M HClO4) for polished poly-Pt is an indicator of impurity levels that do not impede the accurate measurement of the ORR activity of Pt-based catalysts. After exploring various conditioning protocols to approach maximum utilization of the electrochemical area (ECA) and peak ORR activity without introducing catalyst degradation, an investigation of measurement protocols for ECA and ORR activity was conducted. Down-selected protocols were based on the criteria of reproducibility, duration of experiments, impurity effects and magnitude of the pseudo-capacitive background correction. In sum, statistical reproducibility of ORR activity for poly-Pt and Pt supported on high-surface-area carbon was demonstrated.
Zhang, Yuzhong; Zhang, Yan
2016-07-01
In an optical measurement and analysis system based on a CCD, optical and natural vignetting produce photometric distortion, in which the intensity falls off away from the image center; this severely affects subsequent processing and measurement precision. To deal with this problem, a simple and straightforward method for photometric distortion correction is presented in this paper. The method introduces a polynomial fitting model of the photometric distortion function and employs a particle swarm optimization algorithm to estimate the model parameters by minimizing the eight-neighborhood gray gradient. Compared with conventional calibration methods, this method can obtain the profile of the photometric distortion from a single common image captured by the CCD-based system, with no need for a uniform-luminance area source as a standard reference or for prior knowledge of the relevant optical and geometric parameters. To illustrate its applicability, numerical simulations and photometric distortions with different lens parameters are evaluated using this method. Moreover, an application example, temperature field correction for casting billets, demonstrates the effectiveness of the method. The experimental results show that the proposed method achieves a maximum absolute error for vignetting estimation of 0.0765 and a relative error for vignetting estimation from different background images of 3.86%.
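The correction itself is just a division by the modeled gain once the polynomial parameters are known. The sketch below assumes an even-order radial polynomial and hypothetical fitted coefficients; the particle swarm optimization step that the paper uses to estimate the parameters (by minimizing the eight-neighborhood gray gradient) is not implemented here.

```python
import numpy as np

def vignetting_gain(shape, params):
    """Even-order radial polynomial vignetting model
    g(r) = 1 + a*r^2 + b*r^4 + c*r^6, with r normalized by image size."""
    a, b, c = params
    ny, nx = shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(yy - (ny - 1) / 2.0, xx - (nx - 1) / 2.0) / max(ny, nx)
    return 1.0 + a * r ** 2 + b * r ** 4 + c * r ** 6

def correct_vignetting(image, params):
    """Divide out the modeled intensity falloff."""
    return image / vignetting_gain(image.shape, params)

# Flat 200-count scene darkened by a known falloff, then restored
params = (-0.9, 0.2, -0.05)          # hypothetical fitted coefficients
scene = np.full((48, 64), 200.0)
observed = scene * vignetting_gain(scene.shape, params)
restored = correct_vignetting(observed, params)
```

In the paper's pipeline, `params` would come from the optimizer applied to an ordinary image rather than from a uniform-luminance reference; the division step is the same either way.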
NASA Astrophysics Data System (ADS)
Bezur, L.; Marshall, J.; Ottaway, J. M.
A square-wave wavelength modulation system, based on a rotating quartz chopper with four quadrants of different thicknesses, has been developed and evaluated as a method for automatic background correction in carbon furnace atomic emission spectrometry. Accurate background correction is achieved for the residual black body radiation (Rayleigh scatter) from the tube wall and Mie scatter from particles generated by a sample matrix and formed by condensation of atoms in the optical path. Intensity modulation caused by overlap at the edges of the quartz plates and by the divergence of the optical beam at the position of the modulation chopper has been investigated and is likely to be small.
49 CFR 325.79 - Application of correction factors.
Code of Federal Regulations, 2011 CFR
2011-10-01
... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...
49 CFR 325.79 - Application of correction factors.
Code of Federal Regulations, 2010 CFR
2010-10-01
... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...
The indirect effects on the computation of geoid undulations
NASA Technical Reports Server (NTRS)
Wichiencharoen, C.
1982-01-01
The indirect effects on the geoid computation due to the second method of Helmert's condensation were studied. When Helmert's anomalies are used in Stokes's equation, there are three types of corrections to the free air geoid. The first correction, the indirect effect on geoid undulation due to the potential change in Helmert's reduction, had a maximum value of 0.51 meters in the test area covering the United States. The second correction, the attraction change effect on geoid undulation, had a maximum value of 9.50 meters when the 10 deg cap was used in Stokes's equation. The last correction, the secondary indirect effect on geoid undulation, was found negligible in the test area. The corrections were applied to uncorrected free air geoid undulations at 65 Doppler stations in the test area and compared with the Doppler undulations. Based on the assumption that the Doppler coordinate system has a z shift of 4 meters with respect to the geocenter, these comparisons showed that the corrections presented in this study yielded improved values of gravimetric undulations.
Traumeel S® for pain relief following hallux valgus surgery: a randomized controlled trial
2010-01-01
Background In spite of recent advances in post-operative pain relief, pain following orthopedic surgery remains an ongoing challenge for clinicians. We examined whether a well known and frequently prescribed homeopathic preparation could mitigate post-operative pain. Method We performed a randomized, double blind, placebo-controlled trial to evaluate the efficacy of the homeopathic preparation Traumeel S® in minimizing post-operative pain and analgesic consumption following surgical correction of hallux valgus. Eighty consecutive patients were randomized to receive either Traumeel tablets or an indistinguishable placebo, and took primary and rescue oral analgesics as needed. Maximum numerical pain scores at rest and consumption of oral analgesics were recorded on day of surgery and for 13 days following surgery. Results Traumeel was not found superior to placebo in minimizing pain or analgesic consumption over the 14 days of the trial, however a transient reduction in the daily maximum post-operative pain score favoring the Traumeel arm was observed on the day of surgery, a finding supported by a treatment-time interaction test (p = 0.04). Conclusions Traumeel was not superior to placebo in minimizing pain or analgesic consumption over the 14 days of the trial. A transient reduction in the daily maximum post-operative pain score on the day of surgery is of questionable clinical importance. Trial Registration This study was registered at ClinicalTrials.gov. # NCT00279513 PMID:20380750
Malmberg, Catarina; Ripa, Rasmus S; Johnbeck, Camilla B; Knigge, Ulrich; Langer, Seppo W; Mortensen, Jann; Oturai, Peter; Loft, Annika; Hag, Anne Mette; Kjær, Andreas
2015-12-01
The somatostatin receptor subtype 2 is expressed on macrophages, an abundant cell type in the atherosclerotic plaque. Visualization of somatostatin receptor subtype 2, for oncologic purposes, is frequently made using the DOTA-derived somatostatin analogs DOTATOC or DOTATATE for PET. We aimed to compare the uptake of the PET tracers (68)Ga-DOTATOC and (64)Cu-DOTATATE in large arteries, in the assessment of atherosclerosis by noninvasive imaging technique, combining PET and CT. Further, the correlation of uptake and cardiovascular risk factors was investigated. Sixty consecutive patients with neuroendocrine tumors underwent both (68)Ga-DOTATOC and (64)Cu-DOTATATE PET/CT scans, in random order. For each scan, the maximum and mean standardized uptake values (SUVs) were calculated in 5 arterial segments. In addition, the blood-pool-corrected target-to-background ratio was calculated. Uptake of the tracers was correlated with cardiovascular risk factors collected from medical records. We found detectable uptake of both tracers in all arterial segments studied. Uptake of (64)Cu-DOTATATE was significantly higher than (68)Ga-DOTATOC in the vascular regions both when calculated as maximum and mean uptake. There was a significant association between Framingham risk score and the overall maximum uptake of (64)Cu-DOTATATE using SUV (r = 0.4; P = 0.004) as well as target-to-background ratio (r = 0.3; P = 0.04), whereas no association was found with (68)Ga-DOTATOC. The association of risk factors and maximum SUV of (64)Cu-DOTATATE was found driven by body mass index, smoking, diabetes, and coronary calcium score (P < 0.001, P = 0.01, P = 0.005, and P = 0.03, respectively). In a series of oncologic patients, vascular uptake of (68)Ga-DOTATOC and (64)Cu-DOTATATE was found, with highest uptake of the latter. 
Uptake of (64)Cu-DOTATATE, but not of (68)Ga-DOTATOC, was correlated with cardiovascular risk factors, suggesting a potential role for (64)Cu-DOTATATE in the assessment of atherosclerosis. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
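The blood-pool-corrected target-to-background ratio used above is a simple quotient of arterial uptake over blood-pool activity; a sketch with hypothetical SUV values:

```python
def target_to_background_ratio(suv_artery, suv_blood_pool):
    """Blood-pool-corrected target-to-background ratio (TBR):
    arterial uptake normalized by blood-pool activity."""
    return suv_artery / suv_blood_pool

def overall_max_tbr(segment_suv_max, suv_blood_pool):
    """Overall maximum uptake across arterial segments, expressed as a TBR."""
    return max(target_to_background_ratio(v, suv_blood_pool)
               for v in segment_suv_max.values())

# Hypothetical SUVmax values in five arterial segments of one patient
segments = {"carotid": 2.1, "aortic_arch": 2.6, "ascending": 2.4,
            "descending": 2.2, "abdominal": 2.8}
tbr_max = overall_max_tbr(segments, suv_blood_pool=1.4)
```

Normalizing by blood-pool activity compensates for circulating (unbound) tracer, which is why the study reports TBR alongside raw SUVs when correlating uptake with the Framingham risk score.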
NASA Technical Reports Server (NTRS)
Markey, Melvin F.
1959-01-01
A theory is derived for determining the loads and motions of a deeply immersed prismatic body. The method makes use of a two-dimensional water-mass variation and an aspect-ratio correction for three-dimensional flow. The equations of motion are generalized by using a mean value of the aspect-ratio correction and by assuming a variation of the two-dimensional water mass for the deeply immersed body. These equations lead to impact coefficients that depend on an approach parameter which, in turn, depends upon the initial trim and flight-path angles. Comparison of experiment with theory is shown at maximum load and maximum penetration for the flat-bottom (0 deg dead-rise angle) model with beam-loading coefficients from 36.5 to 133.7 over a wide range of initial conditions. A dead-rise angle correction is applied and maximum-load data are compared with theory for the case of a model with 30 deg dead-rise angle and beam-loading coefficients from 208 to 530.
Regional geoid computation by least squares modified Hotine's formula with additive corrections
NASA Astrophysics Data System (ADS)
Märdla, Silja; Ellmann, Artu; Ågren, Jonas; Sjöberg, Lars E.
2018-03-01
Geoid and quasigeoid modelling from gravity anomalies by the method of least squares modification of Stokes's formula with additive corrections is adapted for the usage with gravity disturbances and Hotine's formula. The biased, unbiased and optimum versions of least squares modification are considered. Equations are presented for the four additive corrections that account for the combined (direct plus indirect) effect of downward continuation (DWC), topographic, atmospheric and ellipsoidal corrections in geoid or quasigeoid modelling. The geoid or quasigeoid modelling scheme by the least squares modified Hotine formula is numerically verified, analysed and compared to the Stokes counterpart in a heterogeneous study area. The resulting geoid models and the additive corrections computed both for use with Stokes's or Hotine's formula differ most in high topography areas. Over the study area (reaching almost 2 km in altitude), the approximate geoid models (before the additive corrections) differ by 7 mm on average with a 3 mm standard deviation (SD) and a maximum of 1.3 cm. The additive corrections, out of which only the DWC correction has a numerically significant difference, improve the agreement between respective geoid or quasigeoid models to an average difference of 5 mm with a 1 mm SD and a maximum of 8 mm.
García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M
2018-01-01
Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors ([Formula: see text]) have been recommended not only to correct the total scatter factors but also to correct the tissue maximum and off-axis ratios. However, the application of [Formula: see text] to in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors, tissue maximum, and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data for in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated with simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. The results of this work presented important conclusions for the use of detector-specific beam correction factors ([Formula: see text] in a treatment planning system. The use of [Formula: see text] for total scatter factors has an important impact on monitor unit calculation. 
In contrast, the use of [Formula: see text] for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution calculated by the treatment planning system. This conclusion is only valid for the combination of treatment planning system, detector, and correction factors used in this work; however, the technique can be applied to other treatment planning systems, detectors, and correction factors.
Erny, Guillaume L; Acunha, Tanize; Simó, Carolina; Cifuentes, Alejandro; Alves, Arminda
2017-04-07
Separation techniques hyphenated with high-resolution mass spectrometry have been a true revolution in analytical separations. Such instruments not only provide unmatched resolution, but they also allow measurement of peaks' accurate masses, which permits identification of monoisotopic formulae. However, data files can be large, with a major contribution from background noise and background ions. Such unnecessary contributions to the overall signal can hide important features as well as decrease the accuracy of centroid determination, especially for minor features. Thus, noise and baseline correction can be a valuable pre-processing step. The methodology described here, unlike other approaches, corrects the original dataset with the MS scans recorded as profile spectra. Using urine metabolic studies as examples, we demonstrate that this thorough correction reduces the data complexity by more than 90%. Such correction not only permits improved visualisation of secondary peaks in the chromatographic domain, but also facilitates the complete assignment of each MS scan, which is invaluable for detecting possible comigrating/coeluting species. Copyright © 2017 Elsevier B.V. All rights reserved.
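The abstract does not detail the correction algorithm; as a loose illustration of per-scan baseline and noise removal on profile-mode data, one crude approach is a running-minimum baseline followed by a noise threshold. The function name, window size, and threshold are assumptions, not the authors' method.

```python
import numpy as np

def correct_scan(intensities, window=51, noise_floor=5.0):
    """Crude per-scan baseline and noise correction for a profile-mode
    spectrum: estimate the baseline as a running minimum over `window`
    points, subtract it, and zero intensities below `noise_floor`."""
    x = np.asarray(intensities, dtype=float)
    half = window // 2
    baseline = np.array([x[max(0, i - half):i + half + 1].min()
                         for i in range(len(x))])
    corrected = x - baseline
    corrected[corrected < noise_floor] = 0.0
    return corrected

# Flat chemical background of 100 counts with one 1000-count peak
mz_axis = np.arange(401)
scan = 100.0 + 1000.0 * np.exp(-0.5 * ((mz_axis - 200) / 3.0) ** 2)
clean = correct_scan(scan)
```

Zeroing the background in every scan is what drives the large file-size reduction the authors report: most profile points in a raw file carry only noise and background-ion signal.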
The beam stop array method to measure object scatter in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook
2014-03-01
Scattered radiation is inevitably generated in the object. The distribution of the scattered radiation is influenced by object thickness, field size, object-to-detector distance, and primary energy. One way to measure scatter intensities is to measure the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured by a BSA includes not only the radiation scattered within the object (object scatter), but also scatter from external sources, including the X-ray tube, detector, collimator, X-ray filter, and the BSA itself. Excluding this background scattered radiation allows the method to be applied to different scanner geometries by simple parameter adjustments, without prior knowledge of the scanned object. In this study, a BSA-based method was used to differentiate scatter arising in the phantom (object scatter) from the external background, and this method was incorporated into the BSA algorithm to correct for object scatter. To confirm the background scattered radiation, we obtained scatter profiles and scatter fraction (SF) profiles in the direction perpendicular to the chest wall edge (CWE), with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall, indicating that the scatter measured by the BSA included background scatter. Moreover, the BSA algorithm with the proposed method corrected the object scatter, because the total radiation profiles after object scatter correction corresponded to the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method for measuring object scatter can be used to remove background scatter, and it can be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective for correcting object scatter.
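The background-subtraction bookkeeping described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; all array values and names are invented.

```python
import numpy as np

def object_scatter(scatter_with_object, scatter_background):
    # Background scatter (tube, collimator, detector, BSA itself) is present
    # in both acquisitions; the difference isolates scatter from the object.
    return scatter_with_object - scatter_background

def scatter_fraction(object_scatter_profile, total_profile):
    # SF profile, e.g. along the direction perpendicular to the chest wall.
    return object_scatter_profile / total_profile

total = np.array([1000.0, 900.0, 800.0])          # total signal T = P + S
s_obj_plus_bkg = np.array([250.0, 200.0, 150.0])  # under-disc (shadow) signal
s_bkg = np.array([50.0, 50.0, 50.0])              # no-phantom acquisition

sf = scatter_fraction(object_scatter(s_obj_plus_bkg, s_bkg), total)
```

Dividing object scatter by the total signal, rather than by the primary alone, follows the usual scatter-fraction convention SF = S/(P + S).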
NASA Astrophysics Data System (ADS)
Kruger, Pamela C.; Parsons, Patrick J.
2007-03-01
Excessive exposure to aluminum (Al) can produce serious health consequences in people with impaired renal function, especially those undergoing hemodialysis. Al can accumulate in the brain and in bone, causing dialysis-related encephalopathy and renal osteodystrophy. Thus, dialysis patients are routinely monitored for Al overload, through measurement of their serum Al. Electrothermal atomic absorption spectrometry (ETAAS) is widely used for serum Al determination. Here, we assess the analytical performance of three ETAAS instruments, equipped with different background correction systems and heating arrangements, for the determination of serum Al. Specifically, we compare (1) a Perkin Elmer (PE) Model 3110 AAS, equipped with a longitudinally (end) heated graphite atomizer (HGA) and continuum-source (deuterium) background correction, with (2) a PE Model 4100ZL AAS equipped with a transversely heated graphite atomizer (THGA) and longitudinal Zeeman background correction, and (3) a PE Model Z5100 AAS equipped with a HGA and transverse Zeeman background correction. We were able to transfer the method for serum Al previously established for the Z5100 and 4100ZL instruments to the 3110, with only minor modifications. As with the Zeeman instruments, matrix-matched calibration was not required for the 3110 and, thus, aqueous calibration standards were used. However, the 309.3-nm line was chosen for analysis on the 3110 due to failure of the continuum background correction system at the 396.2-nm line. A small, seemingly insignificant overcorrection error was observed in the background channel on the 3110 instrument at the 309.3-nm line. On the 4100ZL, signal oscillation was observed in the atomization profile. The sensitivity, or characteristic mass (m0), for Al at the 309.3-nm line on the 3110 AAS was found to be 12.1 ± 0.6 pg, compared to 16.1 ± 0.7 pg for the Z5100, and 23.3 ± 1.3 pg for the 4100ZL at the 396.2-nm line.
However, the instrumental detection limits (3 SD) for Al were very similar: 3.0, 3.2, and 4.1 μg L−1 for the Z5100, 4100ZL, and 3110, respectively. Serum Al method detection limits (3 SD) were 9.8, 6.9, and 7.3 μg L−1, respectively. Accuracy was assessed using archived serum (and plasma) reference materials from various external quality assessment schemes (EQAS). Values found with all three instruments were within the acceptable EQAS ranges. The data indicate that relatively modest ETAAS instrumentation equipped with continuum background correction is adequate for routine serum Al monitoring.
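The "3 SD" detection-limit convention used above can be sketched directly: three times the standard deviation of replicate blank (or low-level) measurements, converted to concentration via the calibration slope. All numbers below are invented for illustration.

```python
import statistics

def detection_limit_3sd(blank_signals, calibration_slope):
    # 3 x SD of replicate blank readings, expressed in concentration units.
    return 3.0 * statistics.stdev(blank_signals) / calibration_slope

blanks = [0.010, 0.012, 0.011, 0.009, 0.013, 0.011]  # absorbance, hypothetical
slope = 0.001  # absorbance per (ug/L), hypothetical calibration slope

dl = detection_limit_3sd(blanks, slope)  # ug/L
```

Note that `statistics.stdev` is the sample standard deviation; with only a handful of blanks that is the appropriate estimator.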
Holographic corrections to the Veneziano amplitude
NASA Astrophysics Data System (ADS)
Armoni, Adi; Ireson, Edwin
2017-08-01
We propose a holographic computation of the 2 → 2 meson scattering in a curved string background, dual to a QCD-like theory. We recover the Veneziano amplitude and compute a perturbative correction due to the background curvature. The result implies a small deviation from a linear trajectory, which is a requirement of the UV regime of QCD.
NASA Technical Reports Server (NTRS)
Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.
1984-01-01
Observations of raw image data, raw radiometric calibration data, and background measurements extracted from the raw data streams on high density tape reveal major shortcomings in a technique proposed by the Canadian Center for Remote Sensing in 1982 for the radiometric correction of TM data. Results are presented which correlate measurements of the DC background with variations in both image data background and calibration samples. The effect on both raw data and data corrected using the earlier proposed technique is explained and the correction required for these factors as a function of individual scan line number for each detector is described. How the revised technique can be incorporated into an operational environment is demonstrated.
Elementary review of electron microprobe techniques and correction requirements
NASA Technical Reports Server (NTRS)
Hart, R. K.
1968-01-01
Report contains requirements for correction of instrumented data on the chemical composition of a specimen, obtained by electron microprobe analysis. A condensed review of electron microprobe techniques is presented, including background material for obtaining X ray intensity data corrections and absorption, atomic number, and fluorescence corrections.
NASA Technical Reports Server (NTRS)
Parker, L. Neergaard; Zank, G. P.
2013-01-01
Successful forecasting of energetic particle events in space weather models requires algorithms for correctly predicting the spectrum of ions accelerated from a background population of charged particles. We present preliminary results from a model that diffusively accelerates particles at multiple shocks. Our basic approach is related to box models in which a distribution of particles is diffusively accelerated inside the box while simultaneously experiencing decompression through adiabatic expansion and losses from the convection and diffusion of particles outside the box. We adiabatically decompress the accelerated particle distribution between each shock by either the method explored in Melrose and Pope (1993) and Pope and Melrose (1994) or by the approach set forth in Zank et al. (2000), where we solve the transport equation by a method analogous to operator splitting. The second method incorporates the additional loss terms of convection and diffusion and allows for the use of a variable time between shocks. We use a maximum injection energy (E_max) appropriate for quasi-parallel and quasi-perpendicular shocks and provide a preliminary application of the diffusive acceleration of particles by multiple shocks with frequencies appropriate for solar maximum (i.e., a non-Markovian process).
Navigator alignment using radar scan
Doerry, Armin W.; Marquette, Brandeis
2016-04-05
The various technologies presented herein relate to the determination and correction of the heading error of a platform. Knowledge of at least one of a maximum Doppler frequency or a minimum Doppler bandwidth pertaining to a plurality of radar echoes can be utilized to facilitate correction of the heading error. Heading error can occur as a result of component drift. In an ideal situation, the boresight direction of an antenna or the front of an aircraft will have associated therewith at least one of a maximum Doppler frequency or a minimum Doppler bandwidth. As the boresight direction of the antenna strays from the direction of travel, at least one of the maximum Doppler frequency or minimum Doppler bandwidth will shift away, either left or right, from the ideal situation.
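One way to picture the maximum-Doppler criterion above: the ground-echo Doppler is greatest along the direction of travel, so the azimuth offset of the Doppler peak from boresight estimates the heading error. The sketch below is illustrative only, not the patented method; the cosine Doppler model and all numbers are assumptions.

```python
import math

def peak_azimuth(azimuths_deg, doppler_hz):
    # Parabolic interpolation around the sampled maximum: the azimuth of
    # maximum Doppler marks the direction of travel.
    i = max(range(len(doppler_hz)), key=doppler_hz.__getitem__)
    i = min(max(i, 1), len(doppler_hz) - 2)
    y0, y1, y2 = doppler_hz[i - 1], doppler_hz[i], doppler_hz[i + 1]
    step = azimuths_deg[1] - azimuths_deg[0]
    offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return azimuths_deg[i] + offset * step

true_error = 1.5   # deg: the boresight is mispointed by this much (assumed)
f_max = 10000.0    # Hz: hypothetical maximum Doppler frequency
az = [-4.0 + 0.5 * k for k in range(17)]  # scan -4..+4 deg about boresight
dop = [f_max * math.cos(math.radians(a - true_error)) for a in az]

estimated_error = peak_azimuth(az, dop)  # recovers ~1.5 deg
```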
Time-of-day Corrections to Aircraft Noise Metrics
NASA Technical Reports Server (NTRS)
Clevenson, S. (Editor); Shepherd, W. T. (Editor)
1980-01-01
The historical and background aspects of time-of-day corrections, as well as the evidence supporting these corrections, are discussed. Health, welfare, and economic impacts; needs and criteria; and government policy and regulation are also reported.
NASA Astrophysics Data System (ADS)
He, L.-C.; Diao, L.-J.; Sun, B.-H.; Zhu, L.-H.; Zhao, J.-W.; Wang, M.; Wang, K.
2018-02-01
A Monte Carlo method based on the GEANT4 toolkit has been developed to correct the full-energy peak (FEP) efficiencies of a high-purity germanium (HPGe) detector equipped with a low-background shielding system, and the correction has been evaluated numerically using summing peaks. It is found that the FEP efficiencies of 60Co, 133Ba and 152Eu can be improved by up to 18% by taking the calculated true summing coincidence factors (TSCFs) into account. Counts of summing coincidence γ peaks in the spectrum of 152Eu can be well reproduced using the corrected efficiency curve within an accuracy of 3%.
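Applying a TSCF correction is a simple multiplication once the factors are calculated: summing-out removes counts from the full-energy peak, so the apparent efficiency is scaled up by the factor. A minimal sketch with invented efficiencies and factors (not the paper's values):

```python
# Hypothetical apparent FEP efficiencies and calculated TSCFs per line (keV).
def correct_fep_efficiency(apparent_eff, tscf):
    # Summing-out biases the measured FEP efficiency low; multiplying by
    # the calculated TSCF recovers the corrected efficiency.
    return {e: apparent_eff[e] * tscf[e] for e in apparent_eff}

apparent = {1173.2: 0.0105, 1332.5: 0.0095}  # 60Co lines, invented values
tscf = {1173.2: 1.12, 1332.5: 1.10}          # invented correction factors

corrected = correct_fep_efficiency(apparent, tscf)
```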
Correcting for sequencing error in maximum likelihood phylogeny inference.
Kuhner, Mary K; McGill, James
2014-11-04
Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
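A common way to fold a sequencing error rate into maximum likelihood inference (hedged sketch, not necessarily the authors' exact implementation) is at the tips of the tree: the observed base equals the true base with probability 1 − ε and is any particular wrong base with probability ε/3. The resulting tip partial-likelihood vector replaces the usual 0/1 indicator vector in Felsenstein's pruning algorithm.

```python
BASES = "ACGT"

def tip_likelihoods(observed_base, eps):
    # P(observe this base | true base b), for each candidate true base b.
    return [1.0 - eps if b == observed_base else eps / 3.0 for b in BASES]

clean = tip_likelihoods("A", 0.0)   # classic indicator vector
noisy = tip_likelihoods("A", 0.01)  # mass spread over the wrong bases
```

With ε = 0 the vector reduces to the standard error-free case, which is why the correction degrades gracefully when the assumed rate is small.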
Pechmann, J.C.; Nava, S.J.; Terra, F.M.; Bernier, J.C.
2007-01-01
The University of Utah Seismograph Stations (UUSS) earthquake catalogs for the Utah and Yellowstone National Park regions contain two types of size measurements: local magnitude (ML) and coda magnitude (MC), which is calibrated against ML. From 1962 through 1993, UUSS calculated ML values for southern and central Intermountain Seismic Belt earthquakes using maximum peak-to-peak (p-p) amplitudes on paper records from one to five Wood-Anderson (W-A) seismographs in Utah. For ML determinations of earthquakes since 1994, UUSS has utilized synthetic W-A seismograms from U.S. National Seismic Network and UUSS broadband digital telemetry stations in the region, which numbered 23 by the end of our study period on 30 June 2002. This change has greatly increased the percentage of earthquakes for which ML can be determined. It is now possible to determine ML for all M ???3 earthquakes in the Utah and Yellowstone regions and earthquakes as small as M <1 in some areas. To maintain continuity in the magnitudes in the UUSS earthquake catalogs, we determined empirical ML station corrections that minimize differences between MLs calculated from paper and synthetic W-A records. Application of these station corrections, in combination with distance corrections from Richter (1958) which have been in use at UUSS since 1962, produces ML values that do not show any significant distance dependence. ML determinations for the Utah and Yellowstone regions for 1981-2002 using our station corrections and Richter's distance corrections have provided a reliable data set for recalibrating the MC scales for these regions. Our revised ML values are consistent with available moment magnitude determinations for Intermountain Seismic Belt earthquakes. To facilitate automatic ML measurements, we analyzed the distribution of the times of maximum p-p amplitudes in synthetic W-A records. 
A 30-sec time window for maximum amplitudes, beginning 5 sec before the predicted Sg time, encompasses 95% of the maximum p-p amplitudes. In our judgment, this time window represents a good compromise between maximizing the chances of capturing the maximum amplitude and minimizing the risk of including other seismic events.
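The station-corrected local magnitude described above has a simple form: ML = log10(A) + distance correction + station correction, with A the Wood-Anderson amplitude in mm. The sketch below is illustrative; the −log10 A0 table holds approximate values in the spirit of Richter (1958) and the station correction is invented, so treat all numbers as placeholders.

```python
import math

# Approximate -log10 A0(distance) entries for Wood-Anderson amplitudes in mm
# (placeholder values; Richter's published table should be used in practice).
LOG_A0 = {50: 2.6, 100: 3.0, 200: 3.5, 400: 4.3}

def local_magnitude(amplitude_mm, distance_km, station_correction=0.0):
    # Nearest tabulated distance; a real implementation would interpolate.
    d = min(LOG_A0, key=lambda k: abs(k - distance_km))
    return math.log10(amplitude_mm) + LOG_A0[d] + station_correction

# 1 mm W-A amplitude at 100 km defines ML 3.0; add a +0.05 station term.
ml = local_magnitude(1.0, 100.0, station_correction=0.05)
```

The empirical station corrections in the study play exactly this additive role: they absorb site effects so that paper-record and synthetic-record magnitudes agree.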
NASA Astrophysics Data System (ADS)
Hogan, Matthew John
A positron emission tomography system designed to perform high-resolution imaging of small volumes has been characterized. Two large-area planar detectors, used to detect the annihilation gamma rays, formed a large-aperture stationary positron camera. The detectors were multiwire proportional chambers coupled to high-density lead stack converters. Detector efficiency was 8%. The coincidence resolving time was 500 nsec. The maximum system sensitivity was 60 cps/μCi for a solid angle of acceptance of 0.74π sr. The maximum useful coincidence count rate was 1500 cps and was limited by electronic dead time. Image reconstruction was done by performing a 3-dimensional deconvolution using Fourier transform methods. Noise propagation during reconstruction was minimized by choosing a 'minimum norm' reconstructed image. In the stationary detector system (with a limited angle of acceptance for coincident events), statistical uncertainty in the data limited reconstruction in the direction normal to the detector surfaces. Data from a rotated phantom showed that detector rotation will correct this problem. Resolution was 4 mm in planes parallel to the detectors and ~15 mm in the normal direction. Compton scattering of gamma rays within a source distribution was investigated using both simulated and measured data. Attenuation due to scatter was as high as 60%. For small-volume imaging the Compton background was identified and an approximate correction was performed. A semiquantitative blood flow measurement to bone in the leg of a cat using the 18F− ion was performed. The results were comparable to investigations using more conventional techniques. Qualitative scans using 18F-labelled deoxy-D-glucose to assess brain glucose metabolism in a rhesus monkey were also performed.
Model-based aberration correction in a closed-loop wavefront-sensor-less adaptive optics system.
Song, H; Fraanje, R; Schitter, G; Kroese, H; Vdovin, G; Verhaegen, M
2010-11-08
In many scientific and medical applications, such as laser systems and microscopes, wavefront-sensor-less (WFSless) adaptive optics (AO) systems are used to improve the laser beam quality or the image resolution by correcting the wavefront aberration in the optical path. The lack of direct wavefront measurement in WFSless AO systems imposes a challenge to achieve efficient aberration correction. This paper presents an aberration correction approach for WFSless AO systems based on a model of the WFSless AO system and a small number of intensity measurements, where the model is identified from input-output data of the WFSless AO system by black-box identification. This approach is validated in an experimental setup with 20 static aberrations having Kolmogorov spatial distributions. By correcting N=9 Zernike modes (N is the number of aberration modes), an intensity improvement from 49% of the maximum value to 89% was achieved on average based on N+5=14 intensity measurements. With the worst initial intensity, an improvement from 17% of the maximum value to 86% was achieved based on N+4=13 intensity measurements.
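The paper identifies its model by black-box identification; the sketch below is not that algorithm, but it conveys the underlying model-based idea of why roughly N plus a few intensity measurements suffice. Under a (locally) quadratic dependence of the intensity metric on each aberration-mode coefficient, three probe measurements per mode determine a parabola whose vertex gives the correction. All values are invented.

```python
def parabola_vertex(a_vals, i_vals):
    # Vertex of the parabola through three equally spaced probe points.
    (x0, x1, x2), (y0, y1, y2) = a_vals, i_vals
    h = x1 - x0
    return x1 + 0.5 * h * (y0 - y2) / (y0 - 2.0 * y1 + y2)

true_coeff = 0.3                                 # unknown mode coefficient
metric = lambda a: 1.0 - (a - true_coeff) ** 2   # assumed quadratic model
probes = [-0.5, 0.0, 0.5]                        # three intensity probes

estimate = parabola_vertex(probes, [metric(a) for a in probes])  # ~0.3
```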
ERIC Educational Resources Information Center
California State Board of Corrections, Sacramento.
This package consists of an information booklet for job candidates preparing to take California's Corrections Officer Examination and a user's manual intended for those who will administer the examination. The candidate information booklet provides background information about the development of the Corrections Officer Examination, describes its…
Exposed and Embedded Corrections in Aphasia Therapy: Issues of Voice and Identity
ERIC Educational Resources Information Center
Simmons-Mackie, Nina; Damico, Jack S.
2008-01-01
Background: Because communication after the onset of aphasia can be fraught with errors, therapist corrections are pervasive in therapy for aphasia. Although corrections are designed to improve the accuracy of communication, some corrections can have social and emotional consequences during interactions. That is, exposure of errors can potentially…
Lyman alpha SMM/UVSP absolute calibration and geocoronal correction
NASA Technical Reports Server (NTRS)
Fontenla, Juan M.; Reichmann, Edwin J.
1987-01-01
Lyman alpha observations from the Ultraviolet Spectrometer Polarimeter (UVSP) instrument of the Solar Maximum Mission (SMM) spacecraft were analyzed and provide instrumental calibration details. Specific values of the instrument quantum efficiency, Lyman alpha absolute intensity, and correction for geocoronal absorption are presented.
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.
2016-01-01
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then finding the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite-temperature maximum entropy decoding can give slightly better bit-error rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore, we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311
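The two decoding rules contrasted above can be made concrete on a toy Ising chain small enough to enumerate (the paper, of course, used annealing hardware rather than enumeration). Maximum-likelihood decoding keeps only the ground state; maximum-entropy decoding takes each bit's sign under the Boltzmann distribution, so excited states also contribute. All fields and couplings below are invented.

```python
import itertools
import math

def energy(s, h, J):
    # Ising chain in a field: E = -sum h_i s_i - sum J_i s_i s_{i+1}
    return -sum(h[i] * s[i] for i in range(len(s))) \
           - sum(J[i] * s[i] * s[i + 1] for i in range(len(s) - 1))

def decode(h, J, beta):
    states = list(itertools.product([-1, 1], repeat=len(h)))
    w = [math.exp(-beta * energy(s, h, J)) for s in states]
    z = sum(w)
    # Boltzmann-weighted magnetisation of each bit, then its sign.
    mags = [sum(wk * s[i] for wk, s in zip(w, states)) / z
            for i in range(len(h))]
    ground = min(states, key=lambda s: energy(s, h, J))   # ML decode
    maxent = [1 if m >= 0 else -1 for m in mags]          # MaxEnt decode
    return ground, maxent

h = [0.2, -0.1, 0.3, 0.1]   # noisy local evidence per bit (invented)
J = [0.5, 0.5, 0.5]         # ferromagnetic couplings (invented)

ground, maxent_bits = decode(h, J, beta=1.0)
```

On this small instance both rules agree; the paper's point is that on larger, noisier instances the finite-temperature marginals can flip borderline bits correctly where the ground state alone does not.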
Analytic Scattering and Refraction Models for Exoplanet Transit Spectra
NASA Astrophysics Data System (ADS)
Robinson, Tyler D.; Fortney, Jonathan J.; Hubbard, William B.
2017-12-01
Observations of exoplanet transit spectra are essential to understanding the physics and chemistry of distant worlds. The effects of opacity sources and many physical processes combine to set the shape of a transit spectrum. Two such key processes—refraction and cloud and/or haze forward-scattering—have seen substantial recent study. However, models of these processes are typically complex, which prevents their incorporation into observational analyses and standard transit spectrum tools. In this work, we develop analytic expressions that allow for the efficient parameterization of forward-scattering and refraction effects in transit spectra. We derive an effective slant optical depth that includes a correction for forward-scattered light, and present an analytic form of this correction. We validate our correction against a full-physics transit spectrum model that includes scattering, and we explore the extent to which the omission of forward-scattering effects may bias models. Also, we verify a common analytic expression for the location of a refractive boundary, which we express in terms of the maximum pressure probed in a transit spectrum. This expression is designed to be easily incorporated into existing tools, and we discuss how the detection of a refractive boundary could help indicate the background atmospheric composition by constraining the bulk refractivity of the atmosphere. Finally, we show that opacity from Rayleigh scattering and collision-induced absorption will outweigh the effects of refraction for Jupiter-like atmospheres whose equilibrium temperatures are above 400-500 K.
Atmospheric correction analysis on LANDSAT data over the Amazon region. [Manaus, Brazil
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Dias, L. A. V.; Dossantos, J. R.; Formaggio, A. R.
1983-01-01
The natural resources of the Amazon Region were studied in two ways and the results compared. A LANDSAT scene and its attributes were selected, and a maximum likelihood classification was made. The scene was then atmospherically corrected, taking into account Amazonian peculiarities revealed by ground truth for the same area, and the classification was repeated. Comparison shows that the classification improves with atmospherically corrected images.
Parallel Low-Loss Measurement of Multiple Atomic Qubits
NASA Astrophysics Data System (ADS)
Kwon, Minho; Ebert, Matthew F.; Walker, Thad G.; Saffman, M.
2017-11-01
We demonstrate low-loss measurement of the hyperfine ground state of rubidium atoms by state dependent fluorescence detection in a dipole trap array of five sites. The presence of atoms and their internal states are minimally altered by utilizing circularly polarized probe light and a strictly controlled quantization axis. We achieve mean state detection fidelity of 97% without correcting for imperfect state preparation or background losses, and 98.7% when corrected. After state detection and correction for background losses, the probability of atom loss due to the state measurement is <2 % and the initial hyperfine state is preserved with >98 % probability.
14 CFR 29.1043 - Cooling tests.
Code of Federal Regulations, 2012 CFR
2012-01-01
... be of the minimum grade approved for the engines, and the mixture settings must be those used in... factor (except cylinder barrels). Unless a more rational correction applies, temperatures of engine..., must be corrected by adding to them the difference between the maximum ambient atmospheric temperature...
14 CFR 29.1043 - Cooling tests.
Code of Federal Regulations, 2011 CFR
2011-01-01
... be of the minimum grade approved for the engines, and the mixture settings must be those used in... factor (except cylinder barrels). Unless a more rational correction applies, temperatures of engine..., must be corrected by adding to them the difference between the maximum ambient atmospheric temperature...
Ariel 6 measurements of ultra-heavy cosmic ray fluxes in the region Z ≥ 48
NASA Technical Reports Server (NTRS)
Fowler, P. H.; Masheder, M. R. W.; Moses, R. T.; Walker, R. N. F.; Worley, A.; Gay, A. M.
1985-01-01
For this re-analysis of the Ariel VI data, the contribution of non-Z² effects to the restricted energy loss and to Cerenkov radiation in the Bristol sphere has been evaluated using the Mott cross section ratios and the non-relativistic Bloch correction. Results obtained were similar in form to those derived for HEAO3, but with maximum deviations of approximately 10% rather than 15% for the Mott term, corresponding to a thinner detector. Because of the large uncertainties in the parameters involved, no relativistic Bloch term was included. In addition, the experiments on the HEAO detector make the application of a correction to the Cerenkov response of doubtful justification, and none was applied in this analysis. An energy-dependent correction was made using an effective energy calculated from the vertical cut-off for a given event. The maximum value of this correction was about 0.6% in Z for low cut-offs, declining to approximately zero by 10 GV.
Seismic hazard analysis for Jayapura city, Papua
NASA Astrophysics Data System (ADS)
Robiana, R.; Cipta, A.
2015-04-01
Jayapura city experienced a destructive earthquake on June 25, 1976, with a maximum intensity of VII on the MMI scale. Probabilistic methods are used to determine the earthquake hazard by considering all possible earthquakes that can occur in this region. Three types of earthquake source model are used: a subduction model, from the New Guinea Trench subduction zone (North Papuan Thrust); fault models, derived from the Yapen, Tarera-Aiduna, Wamena, Memberamo, Waipago, Jayapura, and Jayawijaya faults; and 7 background models to accommodate unknown earthquakes. Amplification factors derived from a geomorphological approach are corrected with measurement data related to rock type and depth of soft soil. Site classes in Jayapura city can be grouped into classes B, C, D and E, with amplification factors between 0.5 and 6. Hazard maps are presented for a 10% probability of earthquake occurrence within a period of 500 years for the dominant periods of 0.0, 0.2, and 1.0 seconds.
Some Small Sample Results for Maximum Likelihood Estimation in Multidimensional Scaling.
ERIC Educational Resources Information Center
Ramsay, J. O.
1980-01-01
Some aspects of the small sample behavior of maximum likelihood estimates in multidimensional scaling are investigated with Monte Carlo techniques. In particular, the chi square test for dimensionality is examined and a correction for bias is proposed and evaluated. (Author/JKS)
Saur, Sigrun; Frengen, Jomar
2008-07-01
Film dosimetry using radiochromic EBT film in combination with a flatbed charge coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film is dependent on the film's orientation with respect to the scanner, correction matrices for both landscape oriented and portrait oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16 x 16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results. 
For the purpose of dosimetric verification, the calculated dose distribution can be compared with the film-measured dose distribution using a dose constraint of 4% (relative to the measured dose) for doses between 1 and 3 Gy. At lower doses, the dose constraint must be relaxed.
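The dose-dependent background correction described above amounts to subtracting a correction matrix interpolated in dose between the calibrated levels. The sketch below is a minimal illustration with invented shapes and values, not the authors' processing chain.

```python
import numpy as np

cal_doses = np.array([0.5, 1.0, 2.0])  # Gy; a few of the nine levels
# Correction matrices: pixel-wise deviation from the film mean at each
# calibrated dose level (invented; here the nonuniformity grows with dose).
cal_matrices = np.stack([d * np.array([[0.00, 0.02], [0.01, -0.01]])
                         for d in cal_doses])

def background_correct(dose_map, nominal_dose):
    # Linearly interpolate the correction matrix in dose, pixel by pixel,
    # then subtract it from the measured dose map.
    m = np.empty_like(cal_matrices[0])
    for idx in np.ndindex(m.shape):
        per_level = cal_matrices[(slice(None),) + idx]
        m[idx] = np.interp(nominal_dose, cal_doses, per_level)
    return dose_map - m

measured = np.array([[1.50, 1.53], [1.515, 1.485]])  # scanner-biased map, Gy
corrected = background_correct(measured, 1.5)        # uniform 1.5 Gy field
```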
NASA Astrophysics Data System (ADS)
Vale, Maria Goreti R.; Welz, Bernhard
2002-12-01
The literature on the determination of Tl in environmental samples using electrothermal atomization (ETA) and vaporization (ETV) techniques has been reviewed with special attention devoted to potential interferences and their control. Chloride interference, which is due to the formation of the volatile monochloride in the condensed phase, is the most frequently observed problem. Due to its high dissociation energy (88 kcal/mol), TlCl is difficult to dissociate in the gas phase and is easily lost. The best means of controlling this interference in ETA is atomization under isothermal conditions according to the stabilized temperature platform furnace concept, and the use of reduced palladium as a modifier. An alternative approach appears to be the 'fast furnace' concept, wherein both the use of a modifier and the pyrolysis stage are omitted. This concept requires an efficient background correction system, and high-resolution continuum-source atomic absorption spectrometry (HR-CS AAS) appears to offer the best results. This chloride interference can also cause significant problems when ETV techniques are used. Among the spectral interferences found in the determination of thallium are those due to Pd, the most efficient modifier, and Fe, which is frequently found at high concentrations in environmental samples. Both interferences are due to nearby atomic lines, and are observed only when deuterium background correction and relatively high atomization temperatures are used. A more serious spectral interference is that due to the molecular absorption spectrum of SO2, which has a maximum around the Tl line and exhibits a pronounced rotational fine structure. HR-CS AAS again showed the best performance in coping with this interference.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brady, Samuel L., E-mail: samuel.brady@stjude.org; Shulkin, Barry L.
2015-02-15
Purpose: To develop ultralow-dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultralow doses (10–35 mA s). CT quantitation: noise, low-contrast resolution, and CT numbers for 11 tissue substitutes were analyzed in-phantom. CT quantitation was analyzed down to a 90% reduction in volume computed tomography dose index (0.39/3.64 mGy) from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body-weight standardized uptake value (SUV_bw) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative dose reduction and noise control. Results: CT numbers were constant to within 10% of the non-dose-reduced CTAC image down to a 90% dose reduction. No change in SUV_bw, background percent uniformity, or spatial resolution was found for PET images reconstructed with CTAC protocols down to a 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62% and 86% (3.2/8.3–0.9/6.2). Noise magnitude in dose-reduced patient images increased but was not statistically different from pre-dose-reduced patient images. Conclusions: Using ASiR allowed for aggressive reduction in CT dose with no change in PET reconstructed images while maintaining sufficient image quality for colocalization of hybrid CT anatomy and PET radioisotope uptake.
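The SUV_bw metric used for PET quantitation above is simple to state. A minimal sketch under the usual convention that 1 g of tissue ≈ 1 mL; all patient numbers are invented.

```python
def suv_bw(activity_conc_bq_per_ml, injected_bq, body_weight_g):
    # SUV_bw = tissue activity concentration / (injected activity per gram
    # of body weight), with 1 g of tissue taken as 1 mL.
    return activity_conc_bq_per_ml / (injected_bq / body_weight_g)

# 70 kg patient, 350 MBq injected, region at 10 kBq/mL:
suv = suv_bw(10000.0, 3.5e8, 70000.0)  # -> 2.0
```

Because SUV_bw divides out the injected dose, it is sensitive to the attenuation correction applied to the PET data, which is why the study checks it against each reduced-dose CTAC protocol.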
NASA Astrophysics Data System (ADS)
Pietrzyk, Mariusz W.; Manning, David; Donovan, Tim; Dix, Alan
2010-02-01
Aim: To investigate the impact of image-based properties of background locations in dwelled regions where the first overt decision was made on visual sampling strategy and pulmonary nodule recognition. Background: Recent studies in mammography show that the first overt decision (TP or FP) influences further image reading, including the correctness of the following decisions. Furthermore, a correlation between the spatial frequency properties of the local background at decision sites and the correctness of the first decision has been reported. Methods: Subjects with different levels of radiological experience were eye-tracked during detection of pulmonary nodules in PA chest radiographs. The number of outcomes and the overall quality of performance were analysed for cases where correct or incorrect decisions were made, using JAFROC methodology. The spatial frequency properties of local backgrounds related to certain decisions were studied. ANOVA was used to compare the logarithmic values of energy carried by non-redundant stationary wavelet packet coefficients. Results: A strong correlation was found between the number of TPs as a first decision and the JAFROC score (r = 0.74). The number of FPs as a first decision was negatively correlated with JAFROC (r = -0.75). Moreover, the differential spatial frequency profiles of the outcomes depend on the correctness of the first choice.
Wagner, John H; Miskelly, Gordon M
2003-05-01
The combination of photographs taken at wavelengths at and bracketing the peak of a narrow absorbance band can lead to enhanced visualization of the substance causing the narrow absorbance band. This concept can be used to detect putative bloodstains by dividing a linear photographic image taken at or near 415 nm by an image obtained by averaging linear photographs taken at or near 395 and 435 nm. Nonlinear images can also be background corrected by substituting subtraction for the division. This paper details experimental applications and limitations of this technique, including wavelength selection of the illuminant and at the camera. Characterization of a digital camera to be used in such a study is also detailed. Detection limits for blood using the three-wavelength correction method under optimum conditions have been determined to be as low as a 1 in 900 dilution, although on strongly patterned substrates blood diluted more than twenty-fold is difficult to detect. Use of only the 435 nm photograph to estimate the background in the 415 nm image led to a twofold improvement in detection limit on unpatterned substrates compared with the three-wavelength method with the particular camera and lighting system used, but it gave poorer background correction on patterned substrates.
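The three-wavelength correction described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation; the function names and the divide-by-zero guard are our own assumptions.

```python
import numpy as np

def three_wavelength_ratio(img_415, img_395, img_435):
    """For linear images: divide the 415 nm image by the average of
    the bracketing 395 nm and 435 nm images. Pixels dominated by the
    narrow haem absorbance band come out with ratios below 1."""
    background = (img_395 + img_435) / 2.0
    return img_415 / np.maximum(background, 1e-9)  # guard against /0

def three_wavelength_difference(img_415, img_395, img_435):
    """For nonlinear images, substitute subtraction for division,
    as the abstract notes."""
    return img_415 - (img_395 + img_435) / 2.0
```

On a uniform background the ratio image sits near 1 everywhere, so stained pixels (strong 415 nm absorbance) stand out as values well below 1.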
Comparison of wheat classification accuracy using different classifiers of the image-100 system
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.
1981-01-01
Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. Conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentages of correct classification and error of commission; supervised classification approaches are better than K-means clustering; Gaussian distribution maximum likelihood classifier is better than Single-cell and Multi-cell Signature Acquisition Options of the Image-100 system; and in order to obtain a high classification accuracy in a large and heterogeneous crop area, using Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
A background correction algorithm for Van Allen Probes MagEIS electron flux measurements
Claudepierre, S. G.; O'Brien, T. P.; Blake, J. B.; ...
2015-07-14
We describe an automated computer algorithm designed to remove background contamination from the Van Allen Probes Magnetic Electron Ion Spectrometer (MagEIS) electron flux measurements. We provide a detailed description of the algorithm with illustrative examples from on-orbit data. We find two primary sources of background contamination in the MagEIS electron data: inner zone protons and bremsstrahlung X-rays generated by energetic electrons interacting with the spacecraft material. Bremsstrahlung X-rays primarily produce contamination in the lower energy MagEIS electron channels (~30–500 keV) and in regions of geospace where multi-MeV electrons are present. Inner zone protons produce contamination in all MagEIS energy channels at roughly L < 2.5. The background-corrected MagEIS electron data produce a more accurate measurement of the electron radiation belts, as most earlier measurements suffer from unquantifiable and uncorrectable contamination in this harsh region of the near-Earth space environment. These background-corrected data will also be useful for spacecraft engineering purposes, providing ground truth for the near-Earth electron environment and informing the next generation of spacecraft design models (e.g., AE9).
Demonstration of electronic design automation flow for massively parallel e-beam lithography
NASA Astrophysics Data System (ADS)
Brandt, Pieter; Belledent, Jérôme; Tranquillin, Céline; Figueiro, Thiago; Meunier, Stéfanie; Bayle, Sébastien; Fay, Aurélien; Milléquant, Matthieu; Icard, Beatrice; Wieland, Marco
2014-07-01
For proximity effect correction in 5 keV e-beam lithography, three elementary building blocks exist: dose modulation, geometry (size) modulation, and background dose addition. Combinations of these three methods are quantitatively compared in terms of throughput impact and process window (PW). In addition, overexposure in combination with negative bias results in PW enhancement at the cost of throughput. In proximity effect correction by overexposure (PEC-OE), the entire layout is set to a fixed dose and geometry sizes are adjusted. In PEC-dose-to-size (PEC-DTS), both dose and geometry sizes are locally optimized. In PEC-background (PEC-BG), a background dose is added to correct the long-range part of the point spread function. In single e-beam tools (Gaussian or shaped-beam), throughput depends heavily on the number of shots. In raster scan tools such as MAPPER Lithography's FLX 1200 (MATRIX platform) this is not the case; instead of pattern density, the maximum local dose on the wafer limits throughput. The smallest considered half-pitch is 28 nm, which may be considered the 14-nm node for the Metal-1 layer and the 10-nm node for the Via-1 layer, achieved in a single exposure with e-beam lithography. For typical 28-nm-hp Metal-1 layouts, it was shown that dose latitudes (size of process window) of around 10% are realizable with available PEC methods. For 28-nm-hp Via-1 layouts this is even higher, at 14% and up. When the layouts do not reach the highest densities (up to 10:1 in this study), PEC-BG and PEC-OE provide the capability to trade throughput for dose latitude. At the highest densities, PEC-DTS is required for proximity correction, as this method adjusts both geometry edges and doses and will reduce the dose in the densest areas. For 28-nm-hp line critical dimension (CD), hole & dot CD, and line-end (edge placement error) measurements, the data path errors are typically 0.9, 1.0 and 0.7 nm (3σ) and below, respectively.
There is no clear data path performance difference between the investigated PEC methods. After the simulations, the methods were successfully validated in exposures on a MAPPER pre-alpha tool. 28-nm half-pitch Metal-1 and Via-1 layouts show good performance in resist that coincides with the simulation results. Exposures of soft-edge stitched layouts show that beam-to-beam position errors up to ±7 nm, as specified for the FLX 1200, have no noticeable impact on CD. The research leading to these results has been performed in the frame of the industrial collaborative consortium IMAGINE.
Publisher Correction: Cluster richness-mass calibration with cosmic microwave background lensing
NASA Astrophysics Data System (ADS)
Geach, James E.; Peacock, John A.
2018-03-01
Owing to a technical error, the `Additional information' section of the originally published PDF version of this Letter incorrectly gave J.A.P. as the corresponding author; it should have read J.E.G. This has now been corrected. The HTML version is correct.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winter, L. M.; Balasubramaniam, K. S., E-mail: lwinter@aer.com
We present an alternate method of determining the progression of the solar cycle through an analysis of the solar X-ray background. Our results are based on the NOAA Geostationary Operational Environmental Satellites (GOES) X-ray data in the 1-8 Å band from 1986 to the present, covering solar cycles 22, 23, and 24. The X-ray background level tracks the progression of the solar cycle through its maximum and minimum. Using the X-ray data, we can therefore make estimates of the solar cycle progression and the date of solar maximum. Based upon our analysis, we conclude that the Sun reached its hemisphere-averaged maximum in solar cycle 24 in late 2013. This is within six months of the NOAA prediction of a maximum in spring 2013.
New type of hill-top inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barvinsky, A.O.; Department of Physics, Tomsk State University, Lenin Ave. 36, Tomsk 634050; Department of Physics and Astronomy, Pacific Institute for Theoretical Physics, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1
2016-01-20
We suggest a new type of hill-top inflation originating from the initial conditions in the form of the microcanonical density matrix for the cosmological model with a large number of quantum fields conformally coupled to gravity. Initial conditions for inflation are set up by cosmological instantons describing underbarrier oscillations in the vicinity of the inflaton potential maximum. These periodic oscillations of the inflaton field and cosmological scale factor are obtained within the approximation of two coupled oscillators subject to the slow roll regime in the Euclidean time. This regime is characterized by rapid oscillations of the scale factor on the background of a slowly varying inflaton, which guarantees smallness of the slow roll parameters ϵ and η of the following inflation stage. A hill-like shape of the inflaton potential is shown to be generated by logarithmic loop corrections to the tree-level asymptotically shift-invariant potential in the non-minimal Higgs inflation model and R²-gravity. The solution to the problem of hierarchy between the Planckian scale and the inflation scale is discussed within the concept of conformal higher spin fields, which also suggests the mechanism bringing the model below the gravitational cutoff and, thus, protecting it from large graviton loop corrections.
New type of hill-top inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barvinsky, A.O.; Nesterov, D.V.; Kamenshchik, A.Yu., E-mail: barvin@td.lpi.ru, E-mail: Alexander.Kamenshchik@bo.infn.it, E-mail: nesterov@td.lpi.ru
2016-01-01
We suggest a new type of hill-top inflation originating from the initial conditions in the form of the microcanonical density matrix for the cosmological model with a large number of quantum fields conformally coupled to gravity. Initial conditions for inflation are set up by cosmological instantons describing underbarrier oscillations in the vicinity of the inflaton potential maximum. These periodic oscillations of the inflaton field and cosmological scale factor are obtained within the approximation of two coupled oscillators subject to the slow roll regime in the Euclidean time. This regime is characterized by rapid oscillations of the scale factor on the background of a slowly varying inflaton, which guarantees smallness of the slow roll parameters ε and η of the following inflation stage. A hill-like shape of the inflaton potential is shown to be generated by logarithmic loop corrections to the tree-level asymptotically shift-invariant potential in the non-minimal Higgs inflation model and R²-gravity. The solution to the problem of hierarchy between the Planckian scale and the inflation scale is discussed within the concept of conformal higher spin fields, which also suggests the mechanism bringing the model below the gravitational cutoff and, thus, protecting it from large graviton loop corrections.
On the Limitations of Variational Bias Correction
NASA Technical Reports Server (NTRS)
Moradi, Isaac; Mccarty, Will; Gelaro, Ronald
2018-01-01
Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation-minus-background departures to estimate the observation bias. This technique does not distinguish between the background error, the forward operator error, and the observation error, so all these errors are summed together and counted as observation error. We identify some sources of observation errors (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
76 FR 56949 - Biomass Crop Assistance Program; Corrections
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-15
.... ACTION: Interim rule; correction. SUMMARY: The Commodity Credit Corporation (CCC) is amending the Biomass... funds in favor of the ``project area'' portion of BCAP. CCC is also correcting errors in the regulation... INFORMATION: Background CCC published a final rule on October 27, 2010 (75 FR 66202-66243) implementing BCAP...
NASA Technical Reports Server (NTRS)
Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.
1984-01-01
A technique for the radiometric correction of LANDSAT-4 Thematic Mapper data was proposed by the Canada Centre for Remote Sensing. Subsequent detailed observations of raw image data, raw radiometric calibration data, and background measurements extracted from the raw data stream on High Density Tape highlighted major shortcomings in the proposed method which, if left uncorrected, can cause severe radiometric striping in the output product. Results are presented which correlate measurements of the DC background with variations in both image data background and calibration samples. The effect on both raw data and on data corrected using the earlier proposed technique is explained, and the correction required for these factors as a function of individual scan line number for each detector is described. It is shown how the revised technique can be incorporated into an operational environment.
Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M
2010-03-15
A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, considerably facilitating application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
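The point-to-point matching step described above can be sketched as follows. This is a minimal illustrative reconstruction (a least-squares point-to-point comparison over a user-chosen spectral window), not the authors' actual algorithm; the names and the distance metric are our own assumptions.

```python
import numpy as np

def select_reference(sample, references, window):
    """Pick the reference spectrum whose absorbances best match the
    sample within `window` (point-to-point comparison), then return
    the background-corrected sample and the chosen reference index."""
    s = sample[window]
    distances = [float(np.sum((ref[window] - s) ** 2)) for ref in references]
    best = int(np.argmin(distances))
    return sample - references[best], best
```

In the on-line LC-FTIR setting, `references` would hold spectra from a previously recorded blank gradient run, so the selected spectrum reflects the eluent composition at that point in the gradient.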
Correlation of Descriptive Analysis and Instrumental Puncture Testing of Watermelon Cultivars.
Shiu, J W; Slaughter, D C; Boyden, L E; Barrett, D M
2016-06-01
The textural properties of 5 seedless watermelon cultivars were assessed by descriptive analysis and the standard puncture test using a hollow probe with increased shearing properties. The use of descriptive analysis methodology was an effective means of quantifying watermelon sensory texture profiles and characterizing specific cultivars' characteristics. Of the 10 cultivars screened, 71% of the variation in the sensory attributes was explained by the first two principal components. Pairwise correlation of the hollow puncture probe and sensory parameters determined that initial slope, maximum force, and work-after-maximum-force measurements all correlated well with the sensory attributes crisp and firm. These findings confirm that maximum force correlates well not only with firmness in watermelon, but with crispness as well. The initial slope parameter also captures the sensory crispness of watermelon, but is not as practical to measure in the field as maximum force. The work-after-maximum-force parameter is thought to reflect cellular arrangement and membrane integrity, which in turn impact sensory firmness and crispness. Watermelon cultivar types were correctly predicted by puncture test measurements in heart tissue 87% of the time, whereas descriptive analysis was correct 54% of the time. © 2016 Institute of Food Technologists®
Dispersion durations of P-wave and QT interval in children treated with a ketogenic diet.
Doksöz, Önder; Güzel, Orkide; Yılmaz, Ünsal; Işgüder, Rana; Çeleğen, Kübra; Meşe, Timur
2014-04-01
Limited data are available on the effects of a ketogenic diet on dispersion duration of P-wave and QT-interval measures in children. We searched for the changes in these measures with serial electrocardiograms in patients treated with a ketogenic diet. Twenty-five drug-resistant patients with epilepsy treated with a ketogenic diet were enrolled in this study. Electrocardiography was performed in all patients before the beginning and at the sixth month after implementation of the ketogenic diet. Heart rate, maximum and minimum P-wave duration, P-wave dispersion, and maximum and minimum corrected QT interval and QT dispersion were manually measured from the 12-lead surface electrocardiogram. Minimum and maximum corrected QT and QT dispersion measurements showed nonsignificant increase at month 6 compared with baseline values. Other previously mentioned electrocardiogram parameters also showed no significant changes. A ketogenic diet of 6 months' duration has no significant effect on electrocardiogram parameters in children. Further studies with larger samples and longer duration of follow-up are needed to clarify the effects of ketogenic diet on P-wave dispersion and corrected QT and QT dispersion. Copyright © 2014 Elsevier Inc. All rights reserved.
Electron fluence correction factors for various materials in clinical electron beams.
Olivares, M; DeBlois, F; Podgorsak, E B; Seuntjens, J P
2001-08-01
Relative to solid water, electron fluence correction factors at the depth of dose maximum in bone, lung, aluminum, and copper for nominal electron beam energies of 9 MeV and 15 MeV of the Clinac 18 accelerator have been determined experimentally and by Monte Carlo calculation. Thermoluminescent dosimeters were used to measure depth doses in these materials. The measured relative dose at dmax in the various materials versus that of solid water, when irradiated with the same number of monitor units, has been used to calculate the ratio of electron fluence for the various materials to that of solid water. The beams of the Clinac 18 were fully characterized using the EGS4/BEAM system. EGSnrc with the relativistic spin option turned on was used to optimize the primary electron energy at the exit window, and to calculate depth doses in the five phantom materials using the optimized phase-space data. With all depth doses normalized to the dose maximum in solid water, stopping-power-ratio-corrected measured depth doses and calculated depth doses differ by less than +/- 1% at the depth of dose maximum and by less than 4% elsewhere. Monte Carlo calculated ratios of doses in each material to dose in LiF were used to convert the TLD measurements at the dose maximum into dose at the center of the TLD in the phantom material. Fluence perturbation correction factors for a LiF TLD at the depth of dose maximum deduced from these calculations amount to less than 1% for 0.15 mm thick TLDs in low-Z materials and are between 1% and 3% for TLDs in Al and Cu phantoms. Electron fluence ratios of the studied materials relative to solid water vary between 0.83+/-0.01 and 1.55+/-0.02 for materials varying in density from 0.27 g/cm3 (lung) to 8.96 g/cm3 (Cu). The difference in electron fluence ratios derived from measurements and calculations ranges from -1.6% to +0.2% at 9 MeV and from -1.9% to +0.2% at 15 MeV and is not significant at the 1-sigma level.
Excluding the data for Cu, electron fluence correction factors for open electron beams are approximately proportional to the electron density of the phantom material and only weakly dependent on electron beam energy.
Effect of Background Pressure on the Performance and Plume of the HiVHAc Hall Thruster
NASA Technical Reports Server (NTRS)
Huang, Wensheng; Kamhawi, Hani; Haag, Thomas
2013-01-01
During the Single String Integration Test of the NASA HiVHAc Hall thruster, a number of plasma diagnostics were implemented to study the effect of varying facility background pressure on thruster operation. These diagnostics include thrust stand, Faraday probe, ExB probe, and retarding potential analyzer. The test results indicated a rise in thrust and discharge current with background pressure. There was also a decrease in ion energy per charge, an increase in multiply-charged species production, a decrease in plume divergence, and a decrease in ion beam current with increasing background pressure. A simplified ingestion model was applied to determine the maximum acceptable background pressure for thrust measurement. The maximum acceptable ingestion percentage was found to be around 1%. Examination of the diagnostics results suggest the ionization and acceleration zones of the thruster were shifting upstream with increasing background pressure.
Annoyance caused by propeller airplane flyover noise
NASA Technical Reports Server (NTRS)
Mccurdy, D. A.; Powell, C. A.
1984-01-01
Laboratory experiments were conducted to provide information on quantifying the annoyance response of people to propeller airplane noise. The items of interest were current noise metrics, tone corrections, duration corrections, critical band corrections, and the effects of engine type, operation type, maximum takeoff weight, blade passage frequency, and blade tip speed. In each experiment, 64 subjects judged the annoyance of recordings of propeller and jet airplane operations presented at D-weighted sound pressure levels of 70, 80, and 90 dB in a testing room which simulates the outdoor acoustic environment. The first experiment examined 11 propeller airplanes with maximum takeoff weights greater than or equal to 5700 kg. The second experiment examined 14 propeller airplanes weighing 5700 kg or less. Five jet airplanes were included in each experiment. For both the heavy and light propeller airplanes, perceived noise level and perceived level (Stevens Mark VII procedure) predicted annoyance better than other current noise metrics.
Efficiency at maximum power of a chemical engine.
Hooyberghs, Hans; Cleuren, Bart; Salazar, Alberto; Indekeu, Joseph O; Van den Broeck, Christian
2013-10-07
A cyclically operating chemical engine is considered that converts chemical energy into mechanical work. The working fluid is a gas of finite-sized spherical particles interacting through elastic hard collisions. For a generic transport law for particle uptake and release, the efficiency at maximum power η(mp) [corrected] takes the form 1/2 + cΔμ + O(Δμ²), with 1/2 a universal constant and Δμ the chemical potential difference between the particle reservoirs. The linear coefficient c is zero for engines featuring a so-called left/right symmetry or particle fluxes that are antisymmetric in the applied chemical potential difference. Remarkably, the leading constant in η(mp) [corrected] is non-universal with respect to an exceptional modification of the transport law. For a nonlinear transport model, we obtain η(mp) = 1/(θ + 1) [corrected], with θ > 0 the power of Δμ in the transport equation.
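The two quoted expressions for the efficiency at maximum power can be written out directly. The following is a trivial illustrative evaluation; the parameter values used in it are arbitrary examples, not results from the paper.

```python
def eta_mp_linear(c, dmu):
    """Leading-order expansion 1/2 + c*dmu (the O(dmu^2) terms
    are dropped in this sketch)."""
    return 0.5 + c * dmu

def eta_mp_nonlinear(theta):
    """Nonlinear transport model result: eta_mp = 1/(theta + 1),
    with theta > 0 the power of dmu in the transport equation."""
    return 1.0 / (theta + 1.0)
```

Note that for theta = 1 the nonlinear result reduces to 1/2, consistent with the universal leading constant of the generic expansion.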
Real-Time Microfluidic Blood-Counting System for PET and SPECT Preclinical Pharmacokinetic Studies.
Convert, Laurence; Lebel, Réjean; Gascon, Suzanne; Fontaine, Réjean; Pratte, Jean-François; Charette, Paul; Aimez, Vincent; Lecomte, Roger
2016-09-01
Small-animal nuclear imaging modalities have become essential tools in the development process of new drugs, diagnostic procedures, and therapies. Quantification of metabolic or physiologic parameters is based on pharmacokinetic modeling of radiotracer biodistribution, which requires the blood input function in addition to tissue images. Such measurements are challenging in small animals because of their small blood volume. In this work, we propose a microfluidic counting system to monitor rodent blood radioactivity in real time, with high efficiency and small detection volume (∼1 μL). A microfluidic channel is built directly above unpackaged p-i-n photodiodes to detect β-particles with maximum efficiency. The device is embedded in a compact system comprising dedicated electronics, shielding, and pumping unit controlled by custom firmware to enable measurements next to small-animal scanners. Data corrections required to use the input function in pharmacokinetic models were established using calibrated solutions of the most common PET and SPECT radiotracers. Sensitivity, dead time, propagation delay, dispersion, background sensitivity, and the effect of sample temperature were characterized. The system was tested for pharmacokinetic studies in mice by quantifying myocardial perfusion and oxygen consumption with (11)C-acetate (PET) and by measuring the arterial input function using (99m)TcO4 (-) (SPECT). Sensitivity for PET isotopes reached 20%-47%, a 2- to 10-fold improvement relative to conventional catheter-based geometries. Furthermore, the system detected (99m)Tc-based SPECT tracers with an efficiency of 4%, an outcome not possible through a catheter. Correction for dead time was found to be unnecessary for small-animal experiments, whereas propagation delay and dispersion within the microfluidic channel were accurately corrected. Background activity and sample temperature were shown to have no influence on measurements. 
Finally, the system was successfully used in animal studies. A fully operational microfluidic blood-counting system for preclinical pharmacokinetic studies was developed. Microfluidics enabled reliable and high-efficiency measurement of the blood concentration of most common PET and SPECT radiotracers with high temporal resolution in small blood volume. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
NASA Technical Reports Server (NTRS)
Wallner, Lewis E.; Saari, Martin J.
1948-01-01
As part of an investigation of the performance and operational characteristics of the axial-flow gas turbine-propeller engine, conducted in the Cleveland altitude wind tunnel, the performance characteristics of the compressor and the turbine were obtained. The data presented were obtained at a compressor-inlet ram-pressure ratio of 1.00 for altitudes from 5000 to 35,000 feet, engine speeds from 8000 to 13,000 rpm, and turbine-inlet temperatures from 1400 to 2100 R. The highest compressor pressure ratio obtained was 6.15 at a corrected air flow of 23.7 pounds per second and a corrected turbine-inlet temperature of 2475 R. Peak adiabatic compressor efficiencies of about 77 percent were obtained near the value of corrected air flow corresponding to a corrected engine speed of 13,000 rpm. This maximum efficiency may be somewhat low, however, because of dirt accumulations on the compressor blades. A maximum adiabatic turbine efficiency of 81.5 percent was obtained at rated engine speed for all altitudes and turbine-inlet temperatures investigated.
NASA Technical Reports Server (NTRS)
Wallner, Lewis E.; Saari, Martin J.
1947-01-01
As part of an investigation of the performance and operational characteristics of the TG-100A gas turbine-propeller engine, conducted in the Cleveland altitude wind tunnel, the performance characteristics of the compressor and the turbine were obtained. The data presented were obtained at a compressor-inlet ram-pressure ratio of 1.00 for altitudes from 5000 to 35,000 feet, engine speeds from 8000 to 13,000 rpm, and turbine-inlet temperatures from 1400 to 2100R. The highest compressor pressure ratio was 6.15 at a corrected air flow of 23.7 pounds per second and a corrected turbine-inlet temperature of 2475R. Peak adiabatic compressor efficiencies of about 77 percent were obtained near the value of corrected air flow corresponding to a corrected engine speed of 13,000 rpm. This maximum efficiency may be somewhat low, however, because of dirt accumulations on the compressor blades. A maximum adiabatic turbine efficiency of 81.5 percent was obtained at rated engine speed for all altitudes and turbine-inlet temperatures investigated.
Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.
2017-01-01
Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335
An evaluation of percentile and maximum likelihood estimators of Weibull parameters
Stanley J. Zarnoch; Tommy R. Dell
1985-01-01
Two methods of estimating the three-parameter Weibull distribution were evaluated by computer simulation and field data comparison. Maximum likelihood estimators (MLE) with bias correction were calculated with the computer routine FITTER (Bailey 1974); percentile estimators (PCT) were those proposed by Zanakis (1979). The MLE estimators had superior (smaller) bias and...
Knowledge of appropriate acetaminophen doses and potential toxicities in an adult clinic population.
Stumpf, Janice L; Skyles, Amy J; Alaniz, Cesar; Erickson, Steven R
2007-01-01
To evaluate the knowledge of appropriate doses and potential toxicities of acetaminophen and assess the ability to recognize products containing acetaminophen in an adult outpatient setting. Cross-sectional, prospective study. University adult general internal medicine (AGIM) clinic. 104 adult patients presenting to the clinic over consecutive weekdays in December 2003. Three-page, written questionnaire. Ability of patients to identify maximum daily doses and potential toxicities of acetaminophen and recognize products that contain acetaminophen. A large percentage of participants (68.3%) reported pain on a daily or weekly basis, and 78.9% reported use of acetaminophen in the past 6 months. Only 2 patients correctly identified the maximum daily dose of regular acetaminophen, and just 3 correctly identified the maximum dose of extra-strength acetaminophen. Furthermore, 28 patients were unsure of the maximum dose of either product. Approximately 63% of participants either had not received or were unsure whether information on the possible danger of high doses of acetaminophen had been previously provided to them. When asked to identify potential problems associated with high doses of acetaminophen, 43.3% of patients noted the liver would be affected. The majority of the patients (71.2%) recognized Tylenol as containing acetaminophen, but fewer than 15% correctly identified Vicodin, Darvocet, Tylox, Percocet, and Lorcet as containing acetaminophen. Although nearly 80% of this AGIM population reported recent acetaminophen use, their knowledge of the maximum daily acetaminophen doses and potential toxicities associated with higher doses was poor and appeared to be independent of education level, age, and race. This indicates a need for educational efforts to all patients receiving acetaminophen-containing products, especially since the ability to recognize multi-ingredient products containing acetaminophen was likewise poor.
Intensity inhomogeneity correction for magnetic resonance imaging of human brain at 7T.
Uwano, Ikuko; Kudo, Kohsuke; Yamashita, Fumio; Goodwin, Jonathan; Higuchi, Satomi; Ito, Kenji; Harada, Taisuke; Ogawa, Akira; Sasaki, Makoto
2014-02-01
To evaluate the performance and efficacy of intensity inhomogeneity correction for various sequences of the human brain in 7T MRI using the extended version of the unified segmentation algorithm. Ten healthy volunteers were scanned with four different sequences (2D spin echo [SE], 3D fast SE, 2D fast spoiled gradient echo, and 3D time-of-flight) by using a 7T MRI system. Intensity inhomogeneity correction was performed using the "New Segment" module in SPM8 with four different values (120, 90, 60, and 30 mm) of full width at half maximum (FWHM) in Gaussian smoothness. The uniformity in signals in the entire white matter was evaluated using the coefficient of variation (CV); mean signal intensities between the subcortical and deep white matter were compared, and contrast between subcortical white matter and gray matter was measured. The length of the lenticulostriate arteries (LSA) was measured on maximum intensity projection (MIP) images in the original and corrected images. In all sequences, the CV decreased as the FWHM value decreased. The differences of mean signal intensities between subcortical and deep white matter also decreased with smaller FWHM values. The contrast between white and gray matter was maintained at all FWHM values. LSA length was significantly greater in corrected MIP than in the original MIP images. Intensity inhomogeneity in 7T MRI can be successfully corrected using SPM8 for various scan sequences.
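The uniformity metric used above, the coefficient of variation, is straightforward to reproduce. As a hedged illustration (this is the standard definition, not the authors' SPM8 pipeline, and the sample values are invented), a population CV over white-matter signal samples can be computed as:

```python
import math

def coefficient_of_variation(values):
    # CV = standard deviation / mean; a smaller CV indicates a more
    # uniform signal, as used above to judge inhomogeneity correction.
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(variance) / mean

print(coefficient_of_variation([2, 4, 4, 4, 5, 5, 7, 9]))  # 0.4
```

A decreasing CV across FWHM settings would then indicate flatter white-matter signal after correction.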
Dai, Huanping; Micheyl, Christophe
2015-05-01
Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
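For the common special case of two-interval forced choice (2AFC) with equal-variance Gaussian noise, the maximum Pc has the closed form Φ(d′/√2). The sketch below (in Python rather than the authors' MATLAB, and showing only this special case rather than their general formula) checks the analytic value against a Monte Carlo simulation of the maximum-likelihood observer:

```python
import math
import random

def pc_max_2afc_gaussian(dprime):
    # Analytic maximum proportion correct for 2AFC with equal-variance
    # Gaussian noise: Pc = Phi(d'/sqrt(2)) = 0.5 * (1 + erf(d'/2)).
    return 0.5 * (1.0 + math.erf(dprime / 2.0))

def pc_max_2afc_monte_carlo(dprime, trials=200_000, seed=1):
    # The ML observer picks the interval with the larger observation;
    # count how often the signal interval (mean d', sd 1) beats noise.
    rng = random.Random(seed)
    hits = sum(
        rng.gauss(dprime, 1.0) > rng.gauss(0.0, 1.0) for _ in range(trials)
    )
    return hits / trials
```

For d′ = 1 the analytic value is about 0.760, and the simulated estimate should agree to within sampling error.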
Zeynalov, Reşad; Ağır, İsmail; Akgülle, Ahmet Hamdi; Kocaoğlu, Barış; Yalçın, Mithat Selim
2015-07-01
The aim of this study was to evaluate the holding strength of a cannulated screw with multiple holes on the threaded area, supported with PMMA, in the femoral head. A total of 48 human femoral heads were divided into two groups after mineral density measurement with Q-CT. Seven-millimeter cannulated screws with multiple holes on the threaded area supported with PMMA were used in the study group, while standard 7-mm cannulated screws were used in the control group. Each group was divided into three subgroups of eight femoral heads, with equal mineral density across subgroups. Groups were compared in terms of pull-out, maximum extraction torque and cut-out. In the pull-out group, maximum holding strength (N) was measured while axial pull-out at 0.5 mm/sec was applied with an Instron. Results showed a significant difference (p < 0.011) between the two groups. In the cut-out group, femoral heads were placed into the Instron and loading was started from 5 N at 2 mm per minute and continued until implant failure of at least 5 mm was observed. Results showed a significant difference (p < 0.05) between the two groups. In the maximum extraction group, a reverse torque (Nm) of 4° per second was applied with a torque meter. The highest torque value was measured during extraction, and results showed a highly significant difference (p < 0.001) between the two groups. The results of our new design of cannulated screw augmented with PMMA provide background data for clinical application.
An evaluation of atmospheric corrections to advanced very high resolution radiometer data
Meyer, David; Hood, Joy J.
1993-01-01
A data set compiled to analyze vegetation indices is used to evaluate the effect of atmospheric corrections to AVHRR measurements in the solar spectrum. Such corrections include cloud screening and "clear sky" corrections. We used the "clouds from AVHRR" (CLAVR) method for cloud detection and evaluated its performance over vegetated targets. Clear sky corrections, designed to reduce the effects of molecular scattering and absorption due to ozone, water vapor, carbon dioxide, and molecular oxygen, were applied to data values determined to be cloud free. Generally, it was found that the screening and correction of the AVHRR data did not affect the maximum NDVI compositing process adversely, while at the same time improving estimates of the land-surface radiances over a compositing period.
Francescon, P; Kilby, W; Noll, J M; Masi, L; Satariano, N; Russo, S
2017-02-07
Monte Carlo simulation was used to calculate correction factors for output factor (OF), percentage depth-dose (PDD), and off-axis ratio (OAR) measurements with the CyberKnife M6 System. These include the first such data for the InCise MLC. Simulated detectors include diodes, air-filled microchambers, a synthetic microdiamond detector, and point scintillator. Individual perturbation factors were also evaluated. OF corrections show similar trends to previous studies. With a 5 mm fixed collimator the diode correction to convert a measured OF to the corresponding point dose ratio varies between -6.1% and -3.5% for the diode models evaluated, while in a 7.6 mm × 7.7 mm MLC field these are -4.5% to -1.8%. The corresponding microchamber corrections are +9.9% to +10.7% and +3.5% to +4.0%. The microdiamond corrections have a maximum of -1.4% for the 7.5 mm and 10 mm collimators. The scintillator corrections are <1% in all beams. Measured OF showed uncorrected inter-detector differences >15%, reducing to <3% after correction. PDD corrections at d > dmax were <2% for all detectors except the IBA Razor, where a maximum 4% correction was observed at 300 mm depth. OAR corrections were smaller inside the field than outside. At the beam edge, microchamber OAR corrections were up to 15%, mainly caused by density perturbations, which blur the measured penumbra. With larger beams and depths, PTW and IBA diode corrections outside the beam were up to 20%, while the Edge detector needed smaller corrections, although these did vary with orientation. These effects are most noticeable for large field size and depth, where they are dominated by fluence and stopping power perturbations. The microdiamond OAR corrections were <3% outside the beam. This paper provides OF corrections that can be used for commissioning new CyberKnife M6 Systems and retrospectively checking estimated corrections used previously.
We recommend that the PDD and OAR corrections be used to guide detector selection and inform the evaluation of results rather than to explicitly correct measurements.
Mandibular kinematic changes after unilateral cross-bite with lateral shift correction.
Venancio, F; Alarcon, J A; Lenguas, L; Kassem, M; Martin, C
2014-10-01
The aim of this randomised prospective study was to evaluate the effects of slow maxillary expansion with expansion plates and Hyrax expanders on the kinematics of the mandible after cross-bite correction. Thirty children (15 boys and 15 girls), aged 7.1-11.8, with unilateral cross-bite and functional shift were divided into two groups: expansion plate (n = 15) and Hyrax expander (n = 15). Thirty children with normal occlusion (14 boys and 16 girls, aged 7.3-11.6) served as control group. The maximum vertical opening, lateral mandibular shift (from maximum vertical opening to maximum intercuspation, from rest position to maximum intercuspation and from maximum vertical opening to rest position) and lateral excursions were recorded before and 4 months after treatment. After treatment, the expansion plate group showed a greater lateral shift from rest position to maximum intercuspation than did the control group. The expansion plate patients also presented greater left/contralateral excursion than did the control group. Comparisons of changes after treatment in the cross-bite groups showed significant decreases in the lateral shift from the maximum vertical opening to maximum intercuspation and from the maximum vertical opening to rest position, a significant increase in the homolateral excursion and a significant decrease in the contralateral excursion in the Hyrax expander group, whereas no significant differences were found in the expansion plate group. In conclusion, the Hyrax expander showed better results than did the expansion plate. The Hyrax expander with acrylic occlusal covering significantly improved the mandibular lateral shift and normalised the range of lateral excursion. © 2014 John Wiley & Sons Ltd.
Holographic corrections to meson scattering amplitudes
NASA Astrophysics Data System (ADS)
Armoni, Adi; Ireson, Edwin
2017-06-01
We compute meson scattering amplitudes using the holographic duality between confining gauge theories and string theory, in order to consider holographic corrections to the Veneziano amplitude and associated higher-point functions. The generic nature of such computations is explained, thanks to the well-understood nature of confining string backgrounds, and two different examples of the calculation in given backgrounds are used to illustrate the details. The effect we discover, whilst only qualitative, is re-obtainable in many such examples, in four-point as well as higher-point amplitudes.
Calibration of entrance dose measurement for an in vivo dosimetry programme.
Ding, W; Patterson, W; Tremethick, L; Joseph, D
1995-11-01
An increasing number of cancer treatment centres are using in vivo dosimetry as a quality assurance tool for verifying the dose at either the entrance or exit surface of the patient undergoing external beam radiotherapy. Equipment is usually limited to either thermoluminescent dosimeters (TLD) or semiconductor detectors such as p-type diodes. The semiconductor detector is more popular than the TLD due to the major advantage of real-time analysis of the actual dose delivered. If a discrepancy is observed between the calculated and the measured entrance dose, it is possible to eliminate several likely sources of error by immediately verifying all treatment parameters. Five Scanditronix EDP-10 p-type diodes were investigated to determine their calibration and relevant correction factors for entrance dose measurements using a Victoreen White Water-RW3 tissue equivalent phantom and a 6 MV photon beam from a Varian Clinac 2100C linear accelerator. Correction factors were determined for individual diodes for the following parameters: source to surface distance (SSD), collimator size, wedge, plate (tray) and temperature. The directional dependence of diode response was also investigated. The SSD correction factor (CSSD) was found to increase by approximately 3% over the range of SSD from 80 to 130 cm. The correction factor for collimator size (Cfield) also varied by approximately 3% between 5 × 5 and 40 × 40 cm². The wedge correction factor (Cwedge) and plate correction factor (Cplate) were found to be a function of collimator size. Over the range of measurement, these factors varied by a maximum of 1 and 1.5%, respectively. The Cplate variation between the solid and the drilled plates under the same irradiation conditions was a maximum of 2.4%. The diode sensitivity demonstrated an increase with temperature. A maximum of 2.5% variation in the directional dependence of diode response was observed for angles of ±60°.
In conclusion, in vivo dosimetry is an important and reliable method for checking the dose delivered to the patient. Preclinical calibration and determination of the relevant correction factors for each diode are essential in order to achieve a high accuracy of dose delivered to the patient.
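The per-parameter correction factors characterized above are conventionally applied multiplicatively to the raw diode reading. A minimal sketch of that scheme (the function name, the single-product form, and the example numbers are illustrative assumptions, not the authors' software):

```python
def entrance_dose(reading, f_cal, c_ssd=1.0, c_field=1.0,
                  c_wedge=1.0, c_plate=1.0, c_temp=1.0):
    # Entrance dose = raw diode reading x calibration factor x the product
    # of the per-parameter correction factors (SSD, collimator size,
    # wedge, plate/tray, temperature), as characterized above.
    return reading * f_cal * c_ssd * c_field * c_wedge * c_plate * c_temp

# e.g. a hypothetical 3% SSD correction on a reading of 100 units with
# an assumed calibration factor of 0.02 Gy per unit:
print(entrance_dose(100.0, 0.02, c_ssd=1.03))
```

Each factor defaults to unity, so parameters that match the calibration conditions drop out of the product.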
ACT Reporting Category Interpretation Guide: Version 1.0. ACT Working Paper 2016 (05)
ERIC Educational Resources Information Center
Powers, Sonya; Li, Dongmei; Suh, Hongwook; Harris, Deborah J.
2016-01-01
ACT reporting categories and ACT Readiness Ranges are new features added to the ACT score reports starting in fall 2016. For each reporting category, the number correct score, the maximum points possible, the percent correct, and the ACT Readiness Range, along with an indicator of whether the reporting category score falls within the Readiness…
An Investigation of the Sample Performance of Two Nonnormality Corrections for RMSEA
ERIC Educational Resources Information Center
Brosseau-Liard, Patricia E.; Savalei, Victoria; Li, Libo
2012-01-01
The root mean square error of approximation (RMSEA) is a popular fit index in structural equation modeling (SEM). Typically, RMSEA is computed using the normal theory maximum likelihood (ML) fit function. Under nonnormality, the uncorrected sample estimate of the ML RMSEA tends to be inflated. Two robust corrections to the sample ML RMSEA have…
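The uncorrected sample RMSEA referred to above is computed from the ML chi-square statistic. A minimal sketch of the standard point-estimate formula (not the robust corrections the paper investigates):

```python
import math

def rmsea(chi2, df, n):
    # Sample RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    # Truncation at zero keeps the estimate real when chi2 < df
    # (i.e., when the model fits better than expected by chance).
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
```

For example, chi2 = 120 with df = 60 and n = 200 gives roughly 0.071; under nonnormality this sample value tends to overstate misfit, which motivates the corrections studied.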
75 FR 27925 - Use of Turkey Shackle in Bar-Type Cut Operations; Correcting Amendment
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-19
...-0045] Use of Turkey Shackle in Bar-Type Cut Operations; Correcting Amendment AGENCY: Food Safety and... the required shackle width for Bar-type cut turkey operations that use J-type cut maximum line speeds... provides that turkey slaughter establishments that open turkey carcasses with Bar-type cuts may operate at...
Automatic Detection of Preposition Errors in Learner Writing
ERIC Educational Resources Information Center
De Felice, Rachele; Pulman, Stephen
2009-01-01
In this article, we present an approach to the automatic correction of preposition errors in L2 English. Our system, based on a maximum entropy classifier, achieves average precision of 42% and recall of 35% on this task. The discussion of results obtained on correct and incorrect data aims to establish what characteristics of L2 writing prove…
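The classifier type named above, maximum entropy (equivalently, multinomial logistic regression), can be sketched in a few lines. The toy features, labels, and training loop below are invented for illustration and are far simpler than the system's real feature set:

```python
import math
from collections import defaultdict

# Invented toy data: (context feature set, correct preposition).
TRAIN = [
    ({"head=arrive", "next=Monday"}, "on"),
    ({"head=arrive", "next=noon"}, "at"),
    ({"head=interested", "next=science"}, "in"),
    ({"head=depend", "next=you"}, "on"),
]
LABELS = sorted({label for _, label in TRAIN})

def softmax(scores):
    m = max(scores.values())
    exps = {y: math.exp(s - m) for y, s in scores.items()}
    z = sum(exps.values())
    return {y: e / z for y, e in exps.items()}

def probabilities(weights, feats):
    # Linear score per label over active features, normalized by softmax.
    return softmax({y: sum(weights[(y, f)] for f in feats) for y in LABELS})

def train_maxent(epochs=200, lr=0.5):
    # Plain gradient ascent on the conditional log-likelihood.
    weights = defaultdict(float)
    for _ in range(epochs):
        for feats, gold in TRAIN:
            probs = probabilities(weights, feats)
            for y in LABELS:
                gradient = (1.0 if y == gold else 0.0) - probs[y]
                for f in feats:
                    weights[(y, f)] += lr * gradient
    return weights

def predict(weights, feats):
    probs = probabilities(weights, feats)
    return max(probs, key=probs.get)

w = train_maxent()
print(predict(w, {"head=interested", "next=music"}))
```

An unseen feature like "next=music" contributes zero to every label's score, so the prediction falls back on the remaining context features, which is the generalization behavior such classifiers rely on.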
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saur, Sigrun; Frengen, Jomar; Department of Oncology and Radiotherapy, St. Olavs University Hospital, N-7006 Trondheim
Film dosimetry using radiochromic EBT film in combination with a flatbed charge coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film is dependent on the film's orientation with respect to the scanner, correction matrices for both landscape oriented and portrait oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16x16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results.
For the purpose of dosimetric verification, the calculated dose distribution can be compared with the film-measured dose distribution using a dose constraint of 4% (relative to the measured dose) for doses between 1 and 3 Gy. At lower doses, the dose constraint must be relaxed.
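The core operation of the background-correction method described above, subtracting a scanner-response correction matrix from the scanned film response, can be sketched as follows (pure-Python nested lists; the matrix names and shapes are assumptions, and the real method additionally selects the correction matrix by dose level and scan orientation):

```python
def apply_background_correction(scan, correction):
    # Element-wise subtraction of the correction matrix from the scanned
    # response, flattening the scanner's position-dependent nonuniformity.
    # Both arguments are equally sized 2D lists (rows of pixel values).
    return [
        [s - c for s, c in zip(scan_row, corr_row)]
        for scan_row, corr_row in zip(scan, correction)
    ]
```

In practice the correction matrix would be interpolated between the nine measured dose levels before subtraction; that selection step is omitted here.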
Complete NLO corrections to W+W+ scattering and its irreducible background at the LHC
NASA Astrophysics Data System (ADS)
Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu
2017-10-01
The process pp → μ+νμe+νejj receives several contributions of different orders in the strong and electroweak coupling constants. Using appropriate event selections, this process is dominated by vector-boson scattering (VBS) and has recently been measured at the LHC. It is thus of prime importance to estimate precisely each contribution. In this article we compute for the first time the full NLO QCD and electroweak corrections to VBS and its irreducible background processes with realistic experimental cuts. We do not rely on approximations but use complete amplitudes involving two different orders at tree level and three different orders at one-loop level. Since we take into account all interferences, at NLO level the corrections to the VBS process and to the QCD-induced irreducible background process contribute at the same orders. Hence the two processes cannot be unambiguously distinguished, and all contributions to the μ+νμe+νejj final state should preferably be measured together.
NASA Astrophysics Data System (ADS)
Grozdanov, Tasko P.; Solov'ev, Evgeni A.
2018-04-01
Within the framework of the dynamical adiabatic approach, the hidden crossing theory of inelastic transitions is applied to charge exchange in H+ + He+(1s) collisions over a wide range of center-of-mass collision energies, Ecm = 1.6-70 keV. Good agreement with experiment and with molecular close-coupling calculations is obtained. At low energies our 4-state results are closest to the experiment and correctly reproduce the shoulder in the energy dependence of the cross section around Ecm = 6 keV. The 2-state results correctly predict the position of the maximum of the cross section at Ecm ≈ 40 keV, whereas the 4-state results fail to correctly describe the region around the maximum. The reason for this is that the adiabatic approximation for a given two-state hidden crossing is applicable for values of the Stueckelberg parameter > 1, but with increasing principal quantum number N the Stueckelberg parameter decreases as N^-3. That is why the 4-state approach involving higher excited states fails at smaller collision energies, Ecm ≈ 15 keV, while the 2-state approximation, which involves low-lying states, can be extended to higher collision energies.
Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data
Agnese, R.
2015-03-30
We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from Pb-210 decay-chain events, while using independent calibration data to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we also perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. Finally, we confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.
Shellan, Jeffrey B
2004-08-01
The propagation of an optical beam through atmospheric turbulence produces wave-front aberrations that can reduce the power incident on an illuminated target or degrade the image of a distant target. The purpose of the work described here was to determine by computer simulation the statistical properties of the normalized on-axis intensity, defined as (D/r0)^2 SR, as a function of D/r0 and the level of adaptive optics (AO) correction, where D is the telescope diameter, r0 is the Fried coherence diameter, and SR is the Strehl ratio. Plots were generated of (D/r0)^2 <SR> and sigma_SR/<SR>, where <SR> and sigma_SR are the mean and standard deviation, respectively, of the SR, versus D/r0 for a wide range of both modal and zonal AO correction. The level of modal correction was characterized by the number of Zernike radial modes that were corrected. The amount of zonal AO correction was quantified by the number of actuators on the deformable mirror and the resolution of the Hartmann wave-front sensor. These results can be used to determine the optimum telescope diameter, in units of r0, as a function of the AO design. For the zonal AO model, we found that maximum on-axis intensity was achieved when the telescope diameter was sized so that the actuator spacing was equal to approximately 2r0. For modal correction, we found that the optimum value of D/r0 (maximum mean on-axis intensity) was equal to 1.79 Nr + 2.86, where Nr is the highest Zernike radial mode corrected.
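The fitted rule quoted above for modal correction can be expressed directly. A trivial sketch using the abstract's coefficients (illustrative only, and valid just over the range of radial orders the simulations covered):

```python
def optimum_d_over_r0(n_radial):
    # Optimum telescope diameter in units of r0 (maximizing mean on-axis
    # intensity) when Zernike radial orders up to n_radial are corrected,
    # per the fit reported above: D/r0 = 1.79 * Nr + 2.86.
    return 1.79 * n_radial + 2.86

# e.g. correcting through radial order 4:
print(optimum_d_over_r0(4))
```

The linear form matches the intuition that correcting more radial orders lets a larger aperture (in coherence-diameter units) be used before residual aberrations dominate.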
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iwai, P; Lins, L Nadler
Purpose: There is a lack of studies with significant cohort data about patients using a pacemaker (PM), implanted cardioverter defibrillator (ICD) or cardiac resynchronization therapy (CRT) device undergoing radiotherapy. There is no literature comparing the cumulative doses delivered to those cardiac implanted electronic devices (CIED) calculated by different algorithms, nor studies comparing doses with and without heterogeneity correction. The aim of this study was to evaluate the influence of the algorithms Pencil Beam Convolution (PBC), Analytical Anisotropic Algorithm (AAA) and Acuros XB (AXB) as well as heterogeneity correction on risk categorization of patients. Methods: A retrospective analysis of 19 3DCRT or IMRT plans of 17 patients was conducted, calculating the dose delivered to the CIED using three different calculation algorithms. Doses were evaluated with and without heterogeneity correction for comparison. Risk categorization of the patients was based on their CIED dependency and the cumulative dose in the devices. Results: Total estimated doses at the CIED calculated by AAA or AXB were higher than those calculated by PBC in 56% of the cases. On average, the doses at the CIED calculated by AAA and AXB were higher than those calculated by PBC (29% and 4% higher, respectively). The maximum difference of doses calculated by each algorithm was about 1 Gy, whether using heterogeneity correction or not. Values of maximum dose calculated with heterogeneity correction showed that the dose at the CIED was at least equal or higher in 84% of the cases with PBC, 77% with AAA and 67% with AXB than the dose obtained with no heterogeneity correction. Conclusion: The dose calculation algorithm and heterogeneity correction did not change the risk categorization.
Since higher estimated doses delivered to the CIED do not compromise the treatment precautions to be taken, it is recommended that the most sophisticated algorithm available be used to predict the dose at the CIED, using heterogeneity correction.
Arima, Hideyuki; Yamato, Yu; Hasegawa, Tomohiko; Kobayashi, Sho; Yoshida, Go; Yasuda, Tatsuya; Banno, Tomohiro; Oe, Shin; Mihara, Yuki; Togawa, Daisuke; Matsuyama, Yukihiro
2017-10-01
Longitudinal cohort. The present study aimed to document changes in posture and lower extremity kinematics during gait in patients with adult spinal deformity (ASD) after extensive corrective surgery. Standing radiographic parameters are typically used to evaluate patients with ASD, and a discrepancy between preoperative walking and standing posture has previously been reported in these patients, but without comparison of gait before and after surgery. We therefore considered that pre- and postoperative evaluations of patients with ASD should include gait analysis. Thirty-nine patients with ASD (5 men, 34 women; mean age, 71.0 ± 6.1 years) who underwent posterior corrective fixation surgeries from the thoracic spine to the pelvis were included. A 4-m walk was recorded and analyzed. Sagittal balance while walking was calculated as the angle between the plumb line on the side and the line connecting the greater trochanter and pinna while walking (i.e., the gait-trunk tilt angle [GTA]). We measured the maximum knee extension angle during one gait cycle, step length (cm), and walking speed (m/min). Radiographic parameters were also measured. The mean GTA and the mean maximum knee extension angle significantly improved from 13.4° to 6.4° and from -13.3° to -9.4° (P < 0.001 and P = 0.006), respectively. The mean step length improved from 40.4 to 43.1 cm (P = 0.049), but there was no significant change in walking speed (38.4 to 41.5 m/min, P = 0.105). Postoperative GTA, maximum knee extension angle and step length correlated with postoperative pelvic incidence minus lumbar lordosis (r = 0.324, P = 0.044; r = -0.317, P = 0.049; r = -0.416, P = 0.008, respectively). Our results suggest that postoperative posture, maximum knee extension angle, and step length during gait in patients with ASD improved in proportion to how much correction of the sagittal spinal deformity was achieved. Level of Evidence: 3.
NASA Astrophysics Data System (ADS)
Camgoz, Nilgun; Yener, Cengiz
2002-06-01
In order to investigate preference responses for foreground-background color relationships, 85 university undergraduates in Ankara, Turkey, viewed 6 background colors (red, yellow, green, cyan, blue, and magenta) on which color squares of differing hues, saturations, and brightnesses were presented. All the background colors had maximum brightness (100%) and maximum saturation (100%). Subjects were asked to show the color square they preferred on the presented background color viewed through a computer monitor. The experimental setup consisted of a computer monitor located in a windowless room, illuminated with cove lighting. The findings of the experiment show that the brightness 100%-saturation 100% range is significantly preferred the most (p-value < 0.03). Thus, color squares that are most saturated and brightest are preferred on backgrounds of the most saturated and brightest colors. Regardless of the background colors viewed, the subjects preferred blue the most (p-value < 0.01). Findings of the study are also discussed in relation to pertinent research in the field. Through this analysis, an understanding of foreground-background color relationships in terms of preference is sought.
Adaptive optics for peripheral vision
NASA Astrophysics Data System (ADS)
Rosén, R.; Lundström, L.; Unsbo, P.
2012-07-01
Understanding peripheral optical errors and their impact on vision is important for various applications, e.g. research on myopia development and optical correction of patients with central visual field loss. In this study, we investigated whether correction of higher order aberrations with adaptive optics (AO) improves resolution beyond what is achieved with the best peripheral refractive correction. A laboratory AO system was constructed for correcting peripheral aberrations. Peripheral low contrast grating resolution acuity in the 20° nasal visual field of the right eye was evaluated for 12 subjects using three types of correction: refractive correction of sphere and cylinder, static closed-loop AO correction, and continuous closed-loop AO correction. Running AO in continuous closed loop improved acuity compared to refractive correction for most subjects (maximum benefit 0.15 logMAR). The visual improvement from aberration correction was highly correlated with the subject's initial amount of higher order aberrations (p = 0.001, R² = 0.72). There was, however, no acuity improvement from static AO correction. In conclusion, correction of peripheral higher order aberrations can improve low contrast resolution, provided refractive errors are corrected and the system runs in continuous closed loop.
Progress toward accurate high spatial resolution actinide analysis by EPMA
NASA Astrophysics Data System (ADS)
Jercinovic, M. J.; Allaz, J. M.; Williams, M. L.
2010-12-01
High precision, high spatial resolution EPMA of actinides is a significant issue for geochronology, resource geochemistry, and studies involving the nuclear fuel cycle. Particular interest focuses on understanding the behavior of Th and U in the growth and breakdown reactions relevant to actinide-bearing phases (monazite, zircon, thorite, allanite, etc.), and on geochemical fractionation processes involving Th and U in fluid interactions. Unfortunately, the measurement of minor and trace concentrations of U in the presence of major concentrations of Th and/or REEs is particularly problematic, especially in complexly zoned phases with large compositional variation on the micro- or nanoscale - spatial resolutions now accessible with modern instruments. Sub-micron, high precision compositional analysis of minor components is feasible in very high Z phases, where scattering is limited at lower accelerating voltage (15 kV or less) and where the beam diameter can be kept below 400 nm at high current (e.g. 200-500 nA). High collection efficiency spectrometers and high performance electron optics in EPMA now allow the use of lower overvoltage through an exceptional range in beam current, facilitating higher spatial resolution quantitative analysis. The U LIII edge at 17.2 keV precludes L-series analysis at low kV (high spatial resolution), requiring careful measurement of the actinide M series. Also, U La detection (wavelength = 0.9 Å) requires LiF (220) or (420) crystals, not generally available on most instruments. Strong peak overlaps of Th on U make highly accurate interference correction mandatory, with problems compounded by the Th MIV and Th MV absorption edges affecting peak, background, and interference calibration measurements (especially the interference of the Th M line family on U Mβ).
Complex REE-bearing phases such as monazite, zircon, and allanite have particularly complex interference issues due to multiple peak and background overlaps from elements present in the activation volume, as well as interferences from fluorescence at a distance from adjacent phases or from distinct compositional domains in the same phase. Interference corrections for elements detected through boundary fluorescence are further complicated by X-ray focusing geometry considerations. Additional complications arise from the high current densities required for high spatial resolution and high count precision, such as fluctuations in internal charge distribution and peak shape changes as satellite production efficiency varies from calibration to analysis. No flawless method has yet emerged. Extreme care in interference corrections, especially where multiple and sometimes mutual overlaps are present, and maximum care (and precision) in background characterization to account for interferences and curvature (e.g., WDS scan or multipoint regression), are crucial developments. Calibration curves from multiple peak and interference calibration measurements at different concentrations, iterative software methodologies for incorporating absorption edge effects, and treatment of non-linearities in interference corrections due to peak shape changes and off-axis X-ray defocusing during boundary fluorescence at a distance are directions with significant potential.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-07
... fatigue-related skin cracks and corrosion of the skin panel lap joints in the fuselage upper lobe, and... of corrosion, and related investigative and corrective actions. This AD reduces the maximum interval... and correct fatigue cracking and corrosion in the fuselage upper lobe skin lap joints, which could...
A New Online Calibration Method Based on Lord's Bias-Correction.
He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei
2017-09-01
Online calibration techniques have been widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has recently attracted considerable attention from researchers. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂s (obtained by maximum likelihood estimation [MLE]) as their true values θs; the deviation of the estimated θ̂s from the true values can therefore yield inaccurate item calibration when it is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂s, which may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and that MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.
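The core idea, treating the deviation of θ̂ from θ as a bias to be estimated and subtracted, can be illustrated with a generic simulation-based correction under the Rasch model. This is a sketch only: the paper's MLE-LBCI procedure uses Lord's analytic bias function with iteration, whereas the model choice, item parameters, and bootstrap-style loop below are our assumptions.

```python
import numpy as np

def p_correct(theta, b):
    # Rasch model: probability of a correct response to items with difficulty b
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def mle_theta(x, b, iters=50):
    # Newton-Raphson maximum likelihood estimate of ability theta
    theta = 0.0
    for _ in range(iters):
        p = p_correct(theta, b)
        theta += np.sum(x - p) / max(np.sum(p * (1.0 - p)), 1e-8)
        theta = float(np.clip(theta, -6.0, 6.0))  # guard extreme patterns
    return theta

def bias_corrected_theta(x, b, n_sim=2000, seed=0):
    # Estimate the bias E[theta_hat] - theta at the MLE by simulation and
    # subtract it (theta_hat stands in for the unknown true value).
    rng = np.random.default_rng(seed)
    theta_hat = mle_theta(x, b)
    sims = (rng.random((n_sim, len(b))) < p_correct(theta_hat, b)).astype(float)
    ests = [mle_theta(s, b) for s in sims if 0 < s.sum() < len(b)]
    return theta_hat - (np.mean(ests) - theta_hat)
```

For items with equal difficulty 0 and 7 correct out of 10, the MLE is log(7/3) ≈ 0.847; the corrected value pulls this slightly back toward zero, mirroring the outward bias of the ML ability estimate.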
Gruendling, Till; Guilhaus, Michael; Barner-Kowollik, Christopher
2008-09-15
We report on the successful application of size exclusion chromatography (SEC) combined with electrospray ionization mass spectrometry (ESI-MS) and refractive index (RI) detection for the determination of accurate molecular weight distributions of synthetic polymers, corrected for chromatographic band broadening. The presented method makes use of the ability of ESI-MS to accurately depict the peak profiles and retention volumes of individual oligomers eluting from the SEC column, whereas quantitative information on the absolute concentration of oligomers is obtained from the RI detector only. A computational algorithm based on the maximum entropy principle is used to process the data gained by both detectors, yielding a molecular weight distribution corrected for chromatographic band broadening. Poly(methyl methacrylate) standards with molecular weights up to 10 kDa serve as model compounds. Molecular weight distributions (MWDs) obtained by the maximum entropy procedure are compared to MWDs calculated by a conventional calibration of the SEC retention time axis with peak retention data obtained from the mass spectrometer. The comparison showed that, for the employed chromatographic system, distributions below 7 kDa were only weakly influenced by chromatographic band broadening. However, the maximum entropy algorithm could successfully correct the MWD of a 10 kDa standard for band broadening effects. Molecular weight averages were between 5 and 14% lower than the manufacturer-stated data obtained by classical means of calibration. The presented method demonstrates a consistent approach for analyzing data obtained by coupling mass spectrometric detectors and concentration-sensitive detectors to polymer liquid chromatography.
Cordero, Eliana; Korinth, Florian; Stiebing, Clara; Krafft, Christoph; Schie, Iwan W; Popp, Jürgen
2017-07-27
Raman spectroscopy provides label-free biochemical information from tissue samples without complicated sample preparation. The clinical capability of Raman spectroscopy has been demonstrated in a wide range of in vitro and in vivo applications. However, a challenge for in vivo applications is the simultaneous excitation of auto-fluorescence in the majority of tissues of interest, such as liver, bladder, brain, and others. Raman bands are then superimposed on a fluorescence background, which can be several orders of magnitude larger than the Raman signal. To eliminate the disturbing fluorescence background, several approaches are available. Among instrumentational methods shifted excitation Raman difference spectroscopy (SERDS) has been widely applied and studied. Similarly, computational techniques, for instance extended multiplicative scatter correction (EMSC), have also been employed to remove undesired background contributions. Here, we present a theoretical and experimental evaluation and comparison of fluorescence background removal approaches for Raman spectra based on SERDS and EMSC.
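The SERDS principle compared above can be sketched on synthetic data: two spectra acquired at slightly shifted excitation wavelengths share the same fluorescence background, so their difference retains only derivative-like Raman features, which a simple integration then recovers. The band positions, shift size, and line shapes below are invented for illustration, not taken from the paper.

```python
import numpy as np

wn = np.linspace(500.0, 1800.0, 1301)   # Raman shift axis (cm^-1), 1 cm^-1 steps
delta = 10.0                            # excitation shift between spectra (cm^-1)

def gauss(x, mu, sigma=8.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def raman(shift):
    # Two invented Raman bands; they move when the excitation wavelength shifts
    return gauss(wn, 1004.0 + shift) + 0.6 * gauss(wn, 1450.0 + shift)

fluorescence = 50.0 * np.exp(-(((wn - 1200.0) / 900.0) ** 2))  # broad, static

s1 = fluorescence + raman(0.0)     # spectrum at excitation wavelength 1
s2 = fluorescence + raman(delta)   # spectrum at excitation wavelength 2

diff = s1 - s2                     # fluorescence cancels in the difference
reconstructed = np.cumsum(diff)    # crude integration-based reconstruction
```

In `diff` the 50-count fluorescence hump vanishes entirely, while `reconstructed` shows bumps near the two band positions; real SERDS processing replaces the plain cumulative sum with more robust reconstruction schemes.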
Cordero, Eliana; Korinth, Florian; Stiebing, Clara; Krafft, Christoph; Schie, Iwan W.; Popp, Jürgen
2017-01-01
Raman spectroscopy provides label-free biochemical information from tissue samples without complicated sample preparation. The clinical capability of Raman spectroscopy has been demonstrated in a wide range of in vitro and in vivo applications. However, a challenge for in vivo applications is the simultaneous excitation of auto-fluorescence in the majority of tissues of interest, such as liver, bladder, brain, and others. Raman bands are then superimposed on a fluorescence background, which can be several orders of magnitude larger than the Raman signal. To eliminate the disturbing fluorescence background, several approaches are available. Among instrumentational methods shifted excitation Raman difference spectroscopy (SERDS) has been widely applied and studied. Similarly, computational techniques, for instance extended multiplicative scatter correction (EMSC), have also been employed to remove undesired background contributions. Here, we present a theoretical and experimental evaluation and comparison of fluorescence background removal approaches for Raman spectra based on SERDS and EMSC. PMID:28749450
Hughes, C E; Cendón, D I; Harrison, J J; Hankin, S I; Johansen, M P; Payne, T E; Vine, M; Collins, R N; Hoffmann, E L; Loosz, T
2011-10-01
Between 1960 and 1968, low-level radioactive waste was buried in a series of shallow trenches near the Lucas Heights facility, south of Sydney, Australia. Groundwater monitoring carried out since the mid 1970s indicates that, with the exception of tritium, no radioactivity above typical background levels has been detected outside the immediate vicinity of the trenches. The maximum tritium level detected in groundwater was 390 kBq/L and the median value was 5400 Bq/L, decay corrected to the time of disposal. Since 1968, a plume of tritiated water has migrated from the disposal trenches and extends at least 100 m from the source area. Tritium in rainfall is negligible; however, leachate from an adjacent landfill represents a significant additional tritium source. Study data indicate variation in concentration levels and plume distribution in response to wet and dry climatic periods and have been used to determine pathways for tritium migration through the subsurface.
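Decay-correcting a measured tritium activity back to the disposal date only requires the half-life; a minimal sketch (the 37-year example below is illustrative arithmetic, not a value from the study):

```python
T_HALF_H3 = 12.32  # tritium half-life in years

def decay_correct(activity_bq_l, years_since_disposal):
    """Back-correct a measured tritium activity (Bq/L) to the disposal date."""
    return activity_bq_l * 2.0 ** (years_since_disposal / T_HALF_H3)

# e.g. 1000 Bq/L measured three half-lives (~37 years) after disposal:
print(round(decay_correct(1000.0, 3 * T_HALF_H3)))  # → 8000
```

Each elapsed half-life doubles the back-corrected value, which is why decay correction matters for waste buried four to five tritium half-lives ago.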
NASA Astrophysics Data System (ADS)
Barbarossa, S.; Farina, A.
A novel scheme for detecting moving targets with synthetic aperture radar (SAR) is presented. The proposed approach is based on the use of the Wigner-Ville distribution (WVD) for simultaneously detecting moving targets and estimating their kinematic motion parameters. The estimation plays a key role in focusing the target and correctly locating it with respect to the stationary background. The method has a number of advantages: (i) the detection is efficiently performed on the samples in the time-frequency domain provided by the WVD, without resorting to a bank of filters, each matched to possible values of the unknown target motion parameters; (ii) the estimation of the target motion parameters can be done in the same time-frequency domain by locating the line along which the maximum energy of the WVD is concentrated. A validation of the approach is given by both analytical and simulation means. In addition, the estimation of the target kinematic parameters and the corresponding image focusing are also demonstrated.
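A minimal discrete Wigner-Ville distribution shows the time-frequency samples the detection operates on. This is one common textbook discretization for an analytic signal, not the authors' radar implementation; the lag kernel and FFT convention below are our choices.

```python
import numpy as np

def wvd(x):
    """Discrete Wigner-Ville distribution of a complex (analytic) signal.
    Row t is the FFT over the lag variable of x[t+tau] * conj(x[t-tau]);
    a tone of normalized frequency f peaks at FFT bin 2 * f * len(x)."""
    n = len(x)
    W = np.zeros((n, n))
    for t in range(n):
        taumax = min(t, n - 1 - t)
        tau = np.arange(-taumax, taumax + 1)
        kernel = np.zeros(n, dtype=complex)
        kernel[tau % n] = x[t + tau] * np.conj(x[t - tau])
        W[t] = np.fft.fft(kernel).real
    return W

# A pure tone at normalized frequency 0.125 concentrates its energy
# along the line bin = 2 * 0.125 * n = n / 4:
n = 128
tone = np.exp(2j * np.pi * 0.125 * np.arange(n))
W = wvd(tone)
print(int(np.argmax(W[n // 2])))  # → 32
```

For a uniformly accelerating target the peak bin drifts linearly with t, and fitting that maximum-energy line yields the chirp rate, i.e. the motion parameter estimate described in the abstract.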
A straightforward experimental method to evaluate the Lamb-Mössbauer factor of a 57Co/Rh source
NASA Astrophysics Data System (ADS)
Spina, G.; Lantieri, M.
2014-01-01
In analyzing Mössbauer spectra by means of the integral transmission function, a correct evaluation of the recoilless factor fs of the source at the position of the sample is needed. A novel method to evaluate fs for a 57Co source is proposed. The method uses the standard transmission experimental setup and requires no measurements beyond those that are mandatory in order to center the Mössbauer line and to calibrate the Mössbauer transducer. First, the background counts are evaluated by collecting a standard Multi Channel Scaling (MCS) spectrum of a thick metallic iron foil absorber and two Pulse Height Analysis (PHA) spectra with the same live time, setting the maximum velocity of the transducer to the same value as for the MCS spectrum. Second, fs is evaluated by fitting the collected MCS spectrum through the integral transmission approach. A test of the suitability of the technique is also presented.
Identification of simple objects in image sequences
NASA Astrophysics Data System (ADS)
Geiselmann, Christoph; Hahn, Michael
1994-08-01
We present an investigation into the identification and location of simple objects in color image sequences. As an example, the identification of traffic signs is discussed. Three aspects are of special interest. First, regions have to be detected which may contain the object. The separation of those regions from the background can be based on color, motion, and contours. In the experiments all three possibilities are investigated. The second aspect focuses on the extraction of suitable features for the identification of the objects. For that purpose the border line of the region of interest is used. For planar objects, a sufficient approximation of perspective projection is affine mapping. In consequence, it is natural to extract affine-invariant features from the border line. The investigation includes invariant features based on Fourier descriptors and moments. Finally, the object is identified by maximum likelihood classification. In the experiments all three basic object types are correctly identified, with misclassification probabilities below 1%.
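Full affine invariance as used by the authors is more involved, but the simpler similarity-invariant case (translation, scale, rotation) shows how Fourier descriptors of a border line work. This is a sketch: the contour, normalization choices, and function name are ours.

```python
import numpy as np

def fourier_descriptors(contour, n_desc=8):
    """Similarity-invariant shape descriptors of a closed contour.
    `contour` is a complex array of border points z = x + iy."""
    c = np.fft.fft(contour)
    c[0] = 0.0                          # drop DC term  -> translation invariant
    mags = np.abs(c) / np.abs(c[1])     # normalize     -> scale invariant
    return mags[1:n_desc + 1]           # magnitudes    -> rotation invariant

theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
ellipse = 2.0 * np.cos(theta) + 1j * np.sin(theta)
moved = 3.0 * np.exp(0.7j) * ellipse + (5.0 + 2.0j)  # scaled, rotated, shifted
```

The descriptors of `ellipse` and `moved` agree to machine precision, so a maximum likelihood classifier downstream can match the shapes regardless of pose.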
Seismic hazard analysis for Jayapura city, Papua
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robiana, R., E-mail: robiana-geo104@yahoo.com; Cipta, A.
Jayapura city experienced a destructive earthquake on June 25, 1976, with a maximum intensity of VII on the MMI scale. Probabilistic methods are used to determine the earthquake hazard by considering all possible earthquakes that can occur in this region. Three types of earthquake source models are used: a subduction model for the New Guinea Trench subduction zone (North Papuan Thrust); fault models for the Yapen, Tarera-Aiduna, Wamena, Memberamo, Waipago, Jayapura, and Jayawijaya faults; and 7 background models to accommodate unknown earthquakes. Amplification factors based on a geomorphological approach are corrected with measurement data related to rock type and depth of soft soil. Site classes in Jayapura city can be grouped into classes B, C, D, and E, with amplification between 0.5 and 6. Hazard maps are presented with a 10% probability of earthquake occurrence within a period of 500 years for the dominant periods of 0.0, 0.2, and 1.0 seconds.
Correcting Borehole Temperature Profiles for the Effects of Postglacial Warming
NASA Astrophysics Data System (ADS)
Rath, V.; Gonzalez-Rouco, J. F.
2010-09-01
Though the investigation of observed borehole temperatures has proved to be a valuable tool for the reconstruction of ground surface temperature histories, there are many open questions concerning the significance and accuracy of the reconstructions from these data. In particular, the temperature signal of the warming after the Last Glacial Maximum (LGM) is still present in borehole temperature profiles. It also influences the relatively shallow boreholes used in current paleoclimate inversions to estimate temperature changes in the last centuries. This is shown using Monte Carlo experiments on past surface temperature change, using plausible distributions for the most important parameters, i.e., the amplitude and timing of the glacial-interglacial transition, the prior average temperature, and petrophysical properties. It has been argued that the signature of the last glacial-interglacial transition could be responsible for the high amplitudes of millennial temperature reconstructions. However, in shallow boreholes the additional effect of past climate can be reasonably approximated by a linear variation of temperature with depth, and thus be accommodated by a "biased" background heat flow. This is good news for borehole climatology. A simple correction based on subtracting an appropriate prior surface temperature history shows promising results, reducing these errors considerably, in particular for deeper boreholes, where the warming signal in heat flow can no longer be approximated linearly. We show examples from North America and Eurasia, comparing temperatures reduced with the proposed algorithm with AOGCM modeling results.
Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen
2016-06-01
High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT), because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, 3-D discrete cosine transform is adopted to filter the intermediate results. And then, a sparsity constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous background are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results, the detail preservation, and noise suppression of L1 regularization.
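The alternation described, a data-fit step followed by DCT-domain filtering of the intermediate result and an L1 shrinkage, can be sketched in one dimension. This is a toy sketch: a Landweber gradient step stands in for the paper's Tikhonov iteration, the problem is 1-D rather than 3-D, and the step size, cutoff, and threshold are assumptions.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis as an n x n matrix
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def reconstruct(A, y, n_iter=300, keep=10, l1=0.01):
    """Iterative reconstruction with two correction steps per pass:
    DCT low-pass filtering of the intermediate image, then L1 shrinkage."""
    n = A.shape[1]
    C = dct_matrix(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + step * (A.T @ (y - A @ x))   # gradient step on ||Ax - y||^2
        coeff = C @ x
        coeff[keep:] = 0.0                   # filter out high DCT frequencies
        x = C.T @ coeff
        x = soft_threshold(x, l1 * step)     # sparsity constraint (cf. ISTA)
    return x
```

On a smooth target observed through a random measurement matrix, the filtered iteration drives the data residual down while suppressing high-frequency background structure, mirroring the role of the DCT filtering and L1 steps in the abstract.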
Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15
ERIC Educational Resources Information Center
Zhang, Jinming
2005-01-01
Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…
49 CFR 325.73 - Microphone distance correction factors. 1
Code of Federal Regulations, 2011 CFR
2011-10-01
... factors. 1 1 Table 1, in § 325.7 is a tabulation of the maximum allowable sound level readings taking into... target point is other than 50 feet (15.2 m), the maximum observed sound level reading generated by the... observed sound level readings generated by the motor vehicle in accordance with § 325.59 of this part shall...
49 CFR 325.73 - Microphone distance correction factors. 1
Code of Federal Regulations, 2010 CFR
2010-10-01
... factors. 1 1 Table 1, in § 325.7 is a tabulation of the maximum allowable sound level readings taking into... target point is other than 50 feet (15.2 m), the maximum observed sound level reading generated by the... observed sound level readings generated by the motor vehicle in accordance with § 325.59 of this part shall...
Perimeter security for Minnesota correctional facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crist, D.; Spencer, D.D.
1996-12-31
For the past few years, the Minnesota Department of Corrections, assisted by Sandia National Laboratories, has developed a set of standards for perimeter security at medium, close, and maximum custody correctional facilities in the state. During this process, the threat to perimeter security was examined and concepts about correctional perimeter security were developed. This presentation and paper will review the outcomes of this effort, some of the lessons learned, and the concepts developed during this process and in the course of working with architects, engineers and construction firms as the state upgraded perimeter security at some facilities and planned new construction at other facilities.
NASA Astrophysics Data System (ADS)
Cai, Zhijian; Zou, Wenlong; Wu, Jianhong
2017-10-01
Raman spectroscopy has been extensively used in biochemical testing, explosives detection, and the analysis of food additives and environmental pollutants. However, fluorescence interference poses a serious problem for portable Raman spectrometers. Baseline correction and shifted-excitation Raman difference spectroscopy (SERDS) are currently the most prevalent fluorescence suppression methods. In this paper, we compared the performance of baseline correction and SERDS, both experimentally and in simulation. The comparison demonstrates that baseline correction can produce an acceptable fluorescence-removed Raman spectrum if the original Raman signal has a good signal-to-noise ratio, but it cannot recover small Raman signals from a large noise background. By using the SERDS method, Raman signals can be clearly extracted even when they are very weak compared to the fluorescence intensity and noise level, and the fluorescence background can be completely rejected. The Raman spectrum recovered by SERDS has a good signal-to-noise ratio. This shows that baseline correction is more suitable for large bench-top Raman systems with better signal-to-noise ratio, while SERDS is more suitable for noisy devices, especially portable Raman spectrometers.
NASA Astrophysics Data System (ADS)
Bates, Alan
2017-12-01
The measurement of the speed of sound in air with the resonance tube is a popular experiment that often yields accurate results. One approach is to hold a vibrating tuning fork over an air column that is partially immersed in water. The column is raised and lowered in the water until the generated standing wave produces resonance: this occurs at the point where sound is perceived to have maximum loudness, or at the point where the amplitude of the standing wave has maximum value, namely an antinode. An antinode coincides with the position of the tuning fork, beyond the end of the air column, which consequently introduces an end correction. One way to minimize this end correction is to measure the distance between consecutive antinodes.
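Measuring consecutive resonance positions makes the end correction drop out, because the positions differ by exactly half a wavelength regardless of where the antinode sits relative to the tube opening. A quick sketch (the 512 Hz fork and resonance positions are invented example numbers):

```python
def speed_of_sound(freq_hz, resonance_positions_m):
    """Estimate v from a resonance tube: consecutive resonance positions
    are half a wavelength apart, so the end correction cancels out."""
    positions = resonance_positions_m
    spacings = [b - a for a, b in zip(positions, positions[1:])]
    half_wavelength = sum(spacings) / len(spacings)
    return freq_hz * 2.0 * half_wavelength

# 512 Hz tuning fork; resonances at 16.0 cm and 49.5 cm down the tube:
print(round(speed_of_sound(512.0, [0.160, 0.495]), 1))  # → 343.0
```

Using only the first resonance position instead would require adding the end correction (roughly 0.3 tube diameters) to the measured length.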
SimArray: a user-friendly and user-configurable microarray design tool
Auburn, Richard P; Russell, Roslin R; Fischer, Bettina; Meadows, Lisa A; Sevillano Matilla, Santiago; Russell, Steven
2006-01-01
Background Microarrays were first developed to assess gene expression but are now also used to map protein-binding sites and to assess allelic variation between individuals. Regardless of the intended application, efficient production and appropriate array design are key determinants of experimental success. Inefficient production can make larger-scale studies prohibitively expensive, whereas poor array design makes normalisation and data analysis problematic. Results We have developed a user-friendly tool, SimArray, which generates a randomised spot layout, computes a maximum meta-grid area, and estimates the print time, in response to user-specified design decisions. Selected parameters include: the number of probes to be printed; the microtitre plate format; the printing pin configuration, and the achievable spot density. SimArray is compatible with all current robotic spotters that employ 96-, 384- or 1536-well microtitre plates, and can be configured to reflect most production environments. Print time and maximum meta-grid area estimates facilitate evaluation of each array design for its suitability. Randomisation of the spot layout facilitates correction of systematic biases by normalisation. Conclusion SimArray is intended to help both established researchers and those new to the microarray field to develop microarray designs with randomised spot layouts that are compatible with their specific production environment. SimArray is an open-source program and is available from . PMID:16509966
Error analysis of speed of sound reconstruction in ultrasound limited angle transmission tomography.
Jintamethasawat, Rungroj; Lee, Won-Mean; Carson, Paul L; Hooi, Fong Ming; Fowlkes, J Brian; Goodsitt, Mitchell M; Sampson, Richard; Wenisch, Thomas F; Wei, Siyuan; Zhou, Jian; Chakrabarti, Chaitali; Kripfgans, Oliver D
2018-04-07
We have investigated limited angle transmission tomography to estimate speed of sound (SOS) distributions for breast cancer detection. That requires both accurate delineations of major tissues, in this case by segmentation of prior B-mode images, and calibration of the relative positions of the opposed transducers. Experimental sensitivity evaluation of the reconstructions with respect to segmentation and calibration errors is difficult with our current system. Therefore, parametric studies of SOS errors in our bent-ray reconstructions were simulated. They included mis-segmentation of an object of interest or a nearby object, and miscalibration of relative transducer positions in 3D. Close correspondence of reconstruction accuracy was verified in the simplest case, a cylindrical object in homogeneous background with induced segmentation and calibration inaccuracies. Simulated mis-segmentation in object size and lateral location produced maximum SOS errors of 6.3% within 10 mm diameter change and 9.1% within 5 mm shift, respectively. Modest errors in assumed transducer separation produced the maximum SOS error from miscalibrations (57.3% within 5 mm shift), still, correction of this type of error can easily be achieved in the clinic. This study should aid in designing adequate transducer mounts and calibration procedures, and in specification of B-mode image quality and segmentation algorithms for limited angle transmission tomography relying on ray tracing algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Parker, L. N.; Zank, G. P.
2013-12-01
Successful forecasting of energetic particle events in space weather models requires algorithms for correctly predicting the spectrum of ions accelerated from a background population of charged particles. We present preliminary results from a model that diffusively accelerates particles at multiple shocks. Our basic approach is related to box models (Protheroe and Stanev, 1998; Moraal and Axford, 1983; Ball and Kirk, 1992; Drury et al., 1999) in which a distribution of particles is diffusively accelerated inside the box while simultaneously experiencing decompression through adiabatic expansion and losses from the convection and diffusion of particles out of the box (Melrose and Pope, 1993; Zank et al., 2000). We adiabatically decompress the accelerated particle distribution between each shock either by the method explored in Melrose and Pope (1993) and Pope and Melrose (1994) or by the approach set forth in Zank et al. (2000), in which we solve the transport equation by a method analogous to operator splitting. The second method incorporates the additional loss terms of convection and diffusion and allows for a variable time between shocks. We use a maximum injection energy (Emax) appropriate for quasi-parallel and quasi-perpendicular shocks (Zank et al., 2000, 2006; Dosch and Shalchi, 2010) and provide a preliminary application of the diffusive acceleration of particles by multiple shocks with frequencies appropriate for solar maximum (i.e., a non-Markovian process).
Local sample thickness determination via scanning transmission electron microscopy defocus series.
Beyer, A; Straubinger, R; Belz, J; Volz, K
2016-05-01
The usable aperture sizes in (scanning) transmission electron microscopy ((S)TEM) have significantly increased in the past decade due to the introduction of aberration correction. With the consequent increase in convergence angle, the depth of focus has decreased severely, and optical sectioning in the STEM has become feasible. Here we apply STEM defocus series to derive the local sample thickness of a TEM sample. To this end, experimental as well as simulated defocus series of thin Si foils were acquired. The systematic blurring of high-resolution high angle annular dark field images is quantified by evaluating the standard deviation of the image intensity for each image of a defocus series. The derived dependencies exhibit a pronounced maximum at the optimum defocus and drop to a background value for higher or lower values. The full width at half maximum (FWHM) of the curve is equal to the sample thickness above a minimum thickness given by the size of the used aperture and the chromatic aberration of the microscope. The thicknesses obtained from experimental defocus series applying the proposed method are in good agreement with the values derived from other established methods. The key advantages of this method compared to others are its high spatial resolution and that it does not involve any time-consuming simulations. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
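The thickness readout described, the FWHM of the image-intensity standard deviation as a function of defocus after subtracting the background level, can be sketched as follows. The synthetic Gaussian focus curve in the usage is an assumption for illustration; real curves come from the measured defocus series.

```python
import numpy as np

def fwhm(defocus, spread):
    """Full width at half maximum of a focus-metric curve, after
    subtracting the constant background, with linear interpolation
    between samples at the two half-maximum crossings."""
    s = spread - spread.min()            # remove the background value
    half = s.max() / 2.0
    above = np.where(s >= half)[0]
    i, j = above[0], above[-1]
    # interpolate the rising and falling crossings of the half maximum
    left = np.interp(half, [s[i - 1], s[i]], [defocus[i - 1], defocus[i]])
    right = np.interp(half, [s[j + 1], s[j]], [defocus[j + 1], defocus[j]])
    return right - left
```

For a Gaussian-shaped curve of width sigma the function returns approximately 2.355 * sigma, so a curve with sigma = 20 nm corresponds to a local thickness of about 47 nm.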
Calculation of background effects on the VESUVIO eV neutron spectrometer
NASA Astrophysics Data System (ADS)
Mayers, J.
2011-01-01
The VESUVIO spectrometer at the ISIS pulsed neutron source measures the momentum distribution n(p) of atoms by 'neutron Compton scattering' (NCS). Measurements of n(p) provide a unique window into the quantum behaviour of atomic nuclei in condensed matter systems. The VESUVIO 6Li-doped neutron detectors at forward scattering angles were replaced in February 2008 by yttrium aluminium perovskite (YAP)-doped γ-ray detectors. This paper compares the performance of the two detection systems. It is shown that the YAP detectors provide a much superior resolution and general performance, but suffer from a sample-dependent gamma background. This report details how this background can be calculated and data corrected. Calculation is compared with data for two different instrument geometries. Corrected and uncorrected data are also compared for the current instrument geometry. Some indications of how the gamma background can be reduced are also given.
Flight-determined correction terms for angle of attack and sideslip
NASA Technical Reports Server (NTRS)
Shafer, M. F.
1982-01-01
The effects of local flow, upwash, and sidewash on angle of attack and sideslip (measured with boom-mounted vanes) were determined for subsonic, transonic, and supersonic flight using a maximum likelihood estimator. The correction terms accounting for these effects were determined using a series of maneuvers flown at a large number of flight conditions in both augmented and unaugmented control modes. The correction terms provide improved angle-of-attack and sideslip values for use in the estimation of stability and control derivatives. In addition to detailing the procedure used to determine these correction terms, this paper discusses various effects, such as those related to Mach number, on the correction terms. The use of maneuvers flown in augmented and unaugmented control modes is also discussed.
Fantoni, Frédéric; Hervé, Lionel; Poher, Vincent; Gioux, Sylvain; Mars, Jérôme I; Dinten, Jean-Marc
2015-10-01
Intraoperative fluorescence imaging in reflectance geometry is an attractive imaging modality, as it allows noninvasive monitoring of fluorescence-targeted tumors located below the tissue surface. Drawbacks of this technique include background fluorescence, which decreases contrast, and absorption heterogeneities, which lead to misinterpretation of fluorescence concentrations. We propose a correction technique based on a laser-line scanning illumination scheme. We scan the medium with the laser line and acquire, at each position of the line, both fluorescence and excitation images. We then use the finding that there is a relationship between the excitation intensity profile and the background fluorescence profile to predict the amount of signal to subtract from the fluorescence images, yielding better contrast. As light absorption information is contained in both the fluorescence and excitation images, this method also allows us to correct for the effects of absorption heterogeneities. The technique has been validated in simulations and experimentally. Fluorescent inclusions are observed in several configurations at depths ranging from 1 mm to 1 cm. Results obtained with this technique are compared with those obtained with a classical wide-field detection scheme for contrast enhancement, and with a fluorescence-to-excitation ratio approach for absorption correction.
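The core background-correction idea, a single scale factor relating the excitation profile to the background fluorescence profile, can be sketched as follows (illustrative names; the factor is fit by least squares over a region assumed free of targeted fluorophore, which is a simplification of the paper's procedure):

```python
import numpy as np

def subtract_line_background(fluor, excit, bg_mask):
    """Estimate background fluorescence as k * excitation profile, with k
    fit by least squares over `bg_mask` (pixels assumed background-only),
    then subtract it from the fluorescence line profile."""
    k = np.dot(fluor[bg_mask], excit[bg_mask]) / np.dot(excit[bg_mask], excit[bg_mask])
    return fluor - k * excit, k
```

On a synthetic line profile where the background really is proportional to the excitation, the inclusion signal is recovered exactly.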
Improved electron probe microanalysis of trace elements in quartz
Donovan, John J.; Lowers, Heather; Rusk, Brian G.
2011-01-01
Quartz occurs in a wide range of geologic environments throughout the Earth's crust. The concentration and distribution of trace elements in quartz provide information such as temperature and other physical conditions of formation. Trace element analyses with modern electron-probe microanalysis (EPMA) instruments can achieve 99% confidence detection of ~100 ppm with fairly minimal effort for many elements in samples of low to moderate average atomic number such as many common oxides and silicates. However, trace element measurements below 100 ppm in many materials are limited, not only by the precision of the background measurement, but also by the accuracy with which background levels are determined. A new "blank" correction algorithm has been developed and tested on both Cameca and JEOL instruments, which applies a quantitative correction to the emitted X-ray intensities during the iteration of the sample matrix correction based on a zero level (or known trace) abundance calibration standard. This iterated blank correction, when combined with improved background fit models, and an "aggregate" intensity calculation utilizing multiple spectrometer intensities in software for greater geometric efficiency, yields a detection limit of 2 to 3 ppm for Ti and 6 to 7 ppm for Al in quartz at 99% t-test confidence with similar levels for absolute accuracy.
Method for auto-alignment of digital optical phase conjugation systems based on digital propagation
Jang, Mooseok; Ruan, Haowen; Zhou, Haojiang; Judkewitz, Benjamin; Yang, Changhuei
2014-01-01
Optical phase conjugation (OPC) has enabled many optical applications such as aberration correction and image transmission through fiber. In recent years, implementation of digital optical phase conjugation (DOPC) has opened up the possibility of its use in biomedical optics (e.g. deep-tissue optical focusing) due to its ability to provide greater-than-unity OPC reflectivity (the power ratio of the phase conjugated beam and input beam to the OPC system) and its flexibility to accommodate additional wavefront manipulations. However, the requirement for precise (pixel-to-pixel matching) alignment of the wavefront sensor and the spatial light modulator (SLM) limits the practical usability of DOPC systems. Here, we report a method for auto-alignment of a DOPC system by which the misalignment between the sensor and the SLM is auto-corrected through digital light propagation. With this method, we were able to accomplish OPC playback with a DOPC system with gross sensor-SLM misalignment by an axial displacement of up to ~1.5 cm, rotation and tip/tilt of ~5°, and in-plane displacement of ~5 mm (dependent on the physical size of the sensor and the SLM). Our auto-alignment method robustly achieved a DOPC playback peak-to-background ratio (PBR) corresponding to more than ~30% of the theoretical maximum. As an additional advantage, the auto-alignment procedure can be easily performed at will and, as such, allows us to correct for small mechanical drifts within the DOPC systems, thus overcoming a previously major DOPC system vulnerability. We believe that this reported method for implementing robust DOPC systems will broaden the practical utility of DOPC systems. PMID:24977504
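The digital-propagation step at the heart of such auto-alignment can be sketched with a paraxial (Fresnel) transfer-function propagator; the search below recovers an axial sensor-SLM offset by back-propagating the sensor field over candidate distances (illustrative names and correlation metric, not the paper's exact procedure):

```python
import numpy as np

def propagate(field, wavelength, dx, z):
    """Digitally propagate a complex field by distance z using the
    paraxial (Fresnel) transfer function."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)[None, :]
    fy = np.fft.fftfreq(ny, d=dx)[:, None]
    kernel = np.exp(-1j * np.pi * wavelength * z * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def find_axial_misalignment(sensor_field, slm_field, wavelength, dx, z_grid):
    """Pick the candidate distance whose back-propagated sensor field best
    correlates with the field expected at the SLM plane."""
    def corr(z):
        back = propagate(sensor_field, wavelength, dx, -z)
        return abs(np.vdot(slm_field, back)) / (
            np.linalg.norm(slm_field) * np.linalg.norm(back))
    return max(z_grid, key=corr)
```

Because back-propagation by the true distance exactly inverts the (unitary) transfer function, a speckle-like field gives a sharp correlation peak at the correct offset.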
The scaling of maximum and basal metabolic rates of mammals and birds
NASA Astrophysics Data System (ADS)
Barbosa, Lauro A.; Garcia, Guilherme J. M.; da Silva, Jafferson K. L.
2006-01-01
Allometric scaling is one of the most pervasive laws in biology. Its origin, however, is still a matter of dispute. Recent studies have established that maximum metabolic rate scales with an exponent larger than that found for basal metabolism. This unpredicted result sets a challenge that can decide which of the concurrent hypotheses is the correct theory. Here, we show that both scaling laws can be deduced from a single network model. Besides the 3/4-law for basal metabolism, the model predicts that maximum metabolic rate scales as M, maximum heart rate as M, and muscular capillary density as M, in agreement with data.
NASA Astrophysics Data System (ADS)
Alizadeh, M.; Schuh, H.; Schmidt, M. G.
2012-12-01
In the last decades the Global Navigation Satellite System (GNSS) has turned into a promising tool for probing the ionosphere. The classical input data for developing Global Ionosphere Maps (GIM) are obtained from dual-frequency GNSS observations. Simultaneous observations of GNSS code or carrier phase at each frequency are used to form a geometry-free linear combination which contains only the ionospheric refraction term and the differential inter-frequency hardware delays. To relate the ionospheric observable to the electron density, a model is used that represents an altitude-dependent distribution of the electron density. This study aims at developing a global multi-dimensional model of the electron density using simulated GNSS observations from about 150 International GNSS Service (IGS) ground stations. Because IGS stations are inhomogeneously distributed around the world, and the accuracy and reliability of the developed models are considerably lower in areas not well covered by IGS ground stations, the International Reference Ionosphere (IRI) model has been used as a background model. The correction term is estimated by applying a spherical harmonics expansion to the GNSS ionospheric observable. Within this study this observable is related to the electron density using different functions for the bottom-side and top-side ionosphere: the bottom-side ionosphere is represented by an alpha-Chapman function and the top-side ionosphere by the newly proposed Vary-Chap function. [Figures: maximum electron density, IRI background model (elec/m3), day 202 of 2010, 0 UT; height of maximum electron density, IRI background model (km), day 202 of 2010, 0 UT.]
Schaufele, Fred
2013-01-01
Förster resonance energy transfer (FRET) between fluorescent proteins (FPs) provides insights into the proximities and orientations of FPs as surrogates of the biochemical interactions and structures of the factors to which the FPs are genetically fused. As powerful as FRET methods are, technical issues have impeded their broad adoption in the biologic sciences. One hurdle to accurate and reproducible FRET microscopy measurement stems from variable fluorescence backgrounds both within a field and between different fields. Those variations introduce errors into the precise quantification of fluorescence levels on which the quantitative accuracy of FRET measurement is highly dependent. This measurement error is particularly problematic for screening campaigns since minimal well-to-well variation is necessary to faithfully identify wells with altered values. High content screening depends also upon maximizing the numbers of cells imaged, which is best achieved by low magnification high throughput microscopy. But, low magnification introduces flat-field correction issues that degrade the accuracy of background correction to cause poor reproducibility in FRET measurement. For live cell imaging, fluorescence of cell culture media in the fluorescence collection channels for the FPs commonly used for FRET analysis is a high source of background error. These signal-to-noise problems are compounded by the desire to express proteins at biologically meaningful levels that may only be marginally above the strong fluorescence background. Here, techniques are presented that correct for background fluctuations. Accurate calculation of FRET is realized even from images in which a non-flat background is 10-fold higher than the signal. PMID:23927839
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high energy photons however render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs as well as (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast to noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent.
Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
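The TEW scatter correction compared above is conventionally a trapezoid estimate under the photopeak, built from the count densities of two narrow windows flanking the peak; a minimal sketch (illustrative function name, example window widths):

```python
def tew_scatter(c_low, c_high, w_low, w_high, w_peak):
    """Triple-energy-window (TEW) estimate of the scatter counts inside a
    photopeak window of width w_peak (keV): the trapezoid spanned by the
    count densities (counts/keV) of the two flanking narrow windows."""
    return (c_low / w_low + c_high / w_high) * w_peak / 2.0
```

For a spectrally flat scatter contribution of s counts/keV, the trapezoid reduces to s·w_peak, so subtracting the estimate from the photopeak counts returns the primary counts exactly.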
Fourier-space combination of Planck and Herschel images
NASA Astrophysics Data System (ADS)
Abreu-Vicente, J.; Stutz, A.; Henning, Th.; Keto, E.; Ballesteros-Paredes, J.; Robitaille, T.
2017-08-01
Context. Herschel has revolutionized our ability to measure column densities (NH) and temperatures (T) of molecular clouds thanks to its far-infrared multiwavelength coverage. However, the lack of a well defined background intensity level in the Herschel data limits the accuracy of the NH and T maps. Aims: We aim to provide a method that corrects the missing Herschel background intensity levels using the Planck model for foreground Galactic thermal dust emission. For the Herschel/PACS data, both the constant offset as well as the spatial dependence of the missing background must be addressed. For the Herschel/SPIRE data, the constant-offset correction has already been applied to the archival data, so we are primarily concerned with the spatial dependence, which is most important at 250 μm. Methods: We present a Fourier method that combines the publicly available Planck model on large angular scales with the Herschel images on smaller angular scales. Results: We have applied our method to two regions spanning a range of Galactic environments: Perseus and the Galactic plane region around l = 11° (HiGal-11). We post-processed the combined dust continuum emission images to generate column density and temperature maps. We compared these to previously adopted constant-offset corrections. We find significant differences (≳20%) over significant (~15%) areas of the maps, at low column densities (NH ≲ 10^22 cm^-2) and relatively high temperatures (T ≳ 20 K). We have also applied our method to synthetic observations of a simulated molecular cloud to validate our method. Conclusions: Our method successfully corrects the Herschel images, including both the constant-offset intensity level and the scale-dependent background variations measured by Planck. Our method improves on the previous constant-offset corrections, which did not account for variations in the background emission levels.
The image FITS files used in this paper are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/604/A65
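A minimal sketch of such a Fourier-space combination, using a Gaussian crossover between the low-frequency (Planck-like) and high-frequency (Herschel-like) maps; parameter names are illustrative and this is not the authors' exact weighting:

```python
import numpy as np

def fourier_feather(low_map, high_map, dx, sigma_f):
    """Combine two maps in Fourier space: spatial frequencies well below
    sigma_f come from low_map (correct background level), frequencies well
    above it from high_map (correct small-scale structure)."""
    ny, nx = low_map.shape
    fx = np.fft.fftfreq(nx, d=dx)[None, :]
    fy = np.fft.fftfreq(ny, d=dx)[:, None]
    w_high = 1.0 - np.exp(-(fx**2 + fy**2) / (2.0 * sigma_f**2))  # 0 at f = 0
    combined = np.fft.ifft2(np.fft.fft2(low_map) * (1.0 - w_high)
                            + np.fft.fft2(high_map) * w_high)
    return combined.real
```

Because the high-pass weight vanishes at zero frequency, the combined map inherits its absolute background level from the low-resolution map while keeping the high-resolution detail.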
Prominent ears and their correction: a forty-year experience.
Georgiade, G S; Riefkohl, R; Georgiade, N G
1995-01-01
The technique described in this article correcting the protruding ear deformity has evolved over 40 years. The original procedures and our subsequent modifications are described, including 20-year followup results. The possible pitfalls in carrying out this procedure and how to avoid them are also described. A relatively standardized short procedure with minimal morbidity and maximum long-term results yields an aesthetically satisfactory looking ear.
Giżyńska, Marta K.; Kukołowicz, Paweł F.; Kordowski, Paweł
2014-01-01
Aim The aim of this work is to present a method of beam weight and wedge angle optimization for patients with prostate cancer. Background 3D-CRT is usually realized with forward planning based on trial and error. Several authors have published methods of beam weight optimization applicable to 3D-CRT; still, none of these methods is in common use. Materials and methods Optimization is based on the assumption that the best plan is achieved if the dose gradient at the ICRU point is equal to zero. Our optimization algorithm requires the beam quality index, depth of maximum dose, profiles of wedged fields and the maximum dose to the femoral heads. The method was tested for 10 patients with prostate cancer treated with the 3-field technique. Optimized plans were compared with plans prepared by 12 experienced planners. Dose standard deviation in the target volume, and minimum and maximum doses were analyzed. Results The quality of plans obtained with the proposed optimization algorithm was comparable to that of plans prepared by experienced planners. The mean difference in target dose standard deviation was 0.1% in favor of the plans prepared by planners for optimization of beam weights and wedge angles. Introducing a correction factor for the patient body outline into the dose gradient at the ICRU point improved dose distribution homogeneity; on average, a 0.1% lower standard deviation was achieved with the optimization algorithm. No significant difference in the mean dose–volume histogram for the rectum was observed. Conclusions Optimization greatly shortens planning time: the average planning time was 5 min for forward planning and less than a minute for computer optimization. PMID:25337411
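The "zero dose gradient at the ICRU point" criterion can be illustrated with a toy two-beam model (simple exponential depth dose, no wedges or beam-quality terms, so this is only a sketch of the idea, not the paper's algorithm):

```python
import math

def opposed_beam_weights(mu, sep, x_ref):
    """Relative weights (summing to 1) of two parallel-opposed beams with
    exponential depth dose exp(-mu*d) that make the total dose gradient
    vanish at depth x_ref from the first beam's entry surface."""
    g1 = math.exp(-mu * x_ref)          # beam 1 dose falls along +x
    g2 = math.exp(-mu * (sep - x_ref))  # beam 2 dose rises along +x
    # gradient zero requires w1*g1 == w2*g2
    return g2 / (g1 + g2), g1 / (g1 + g2)
```

With these weights the two beams' opposing dose gradients cancel exactly at the reference point, which is the optimization condition stated in the abstract.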
Surgical correction of pectus arcuatum
Ershova, Ksenia; Adamyan, Ruben
2016-01-01
Background Pectus arcuatum is a rare congenital chest wall deformity, and methods of surgical correction are debated. Methods Surgical correction of pectus arcuatum always includes one or more horizontal sternal osteotomies, resection of deformed rib cartilages and, finally, anterior chest wall stabilization. The study was approved by the institutional ethics committee and informed consent was obtained from every patient. Results In this video we show our modification of pectus arcuatum correction with only a partial sternal osteotomy and further stabilization by vertical parallel titanium plates. Conclusions The reported method is a feasible option for surgical correction of pectus arcuatum. PMID:29078483
77 FR 18914 - National Motor Vehicle Title Information System (NMVTIS): Technical Corrections
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-29
... 1121-AA79 National Motor Vehicle Title Information System (NMVTIS): Technical Corrections AGENCY... (OJP) is promulgating this direct final rule for its National Motor Vehicle Title Information System... INFORMATION CONTACT paragraph. II. Background The National Motor Vehicle Title Information System was...
Effects of ocular aberrations on contrast detection in noise.
Liang, Bo; Liu, Rong; Dai, Yun; Zhou, Jiawei; Zhou, Yifeng; Zhang, Yudong
2012-08-06
We use adaptive optics (AO) techniques to manipulate ocular aberrations and elucidate their effects on contrast detection in a noisy background. The detectability of sine-wave gratings at spatial frequencies of 4, 8, and 16 cycles per degree (cpd) was measured in a standard two-interval forced-choice staircase procedure against backgrounds of various levels of white noise. The observer's ocular aberrations were either corrected with AO or left uncorrected. In low levels of external noise, contrast detection thresholds are always lowered by AO correction, whereas in high levels of external noise they are generally elevated by AO correction. Higher levels of external noise are required to make this threshold elevation observable as signal spatial frequency increases from 4 to 16 cpd. The linear-amplifier-model fit shows that sampling efficiency and equivalent noise both decrease with AO correction. Our findings indicate that ocular aberrations can be beneficial for contrast detection in high levels of noise. The implications of these findings are discussed.
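The linear-amplifier-model fit mentioned above reduces to a straight line in squared threshold versus external noise power; a sketch assuming the standard LAM form c_t^2 = (N_ext + N_eq)/eta (function name illustrative):

```python
import numpy as np

def fit_linear_amplifier_model(n_ext, thresholds):
    """Linear amplifier model: c_t^2 = (N_ext + N_eq) / eta.
    A straight-line fit of squared thresholds against external noise power
    gives sampling efficiency eta = 1/slope and equivalent noise
    N_eq = intercept/slope."""
    slope, intercept = np.polyfit(np.asarray(n_ext, dtype=float),
                                  np.asarray(thresholds, dtype=float) ** 2, 1)
    return 1.0 / slope, intercept / slope
```

A decrease of the fitted eta or N_eq between conditions is exactly the kind of change the abstract reports for AO correction.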
Comparison of Fixed-Stabilizer, Adjustable-Stabilizer and All-Moveable Horizontal Tails
1945-10-01
α angle between the thrust axis and wind direction at infinity, degrees; primed to indicate that α is corrected for ground-interference effects. δ angular deflection of control surface, degrees. i_t max maximum angular deflection of stabilizer measured with reference to thrust axis, degrees. δ_e max maximum negative angular deflection of elevator, degrees. ε downwash angle at tail, degrees; primed to indicate that ε is
McMahon, Ryan; Papiez, Lech; Rangaraj, Dharanipathy
2007-08-01
An algorithm is presented that allows for the control of multileaf collimation (MLC) leaves based entirely on real-time calculations of the intensity delivered over the target. The algorithm is capable of efficiently correcting generalized delivery errors without requiring the interruption of delivery (self-correcting trajectories), where a generalized delivery error represents anything that causes a discrepancy between the delivered and intended intensity profiles. The intensity actually delivered over the target is continually compared to its intended value. For each pair of leaves, these comparisons are used to guide the control of the following leaf and keep this discrepancy below a user-specified value. To demonstrate the basic principles of the algorithm, results of corrected delivery are shown for a leading leaf positional error during dynamic-MLC (DMLC) IMRT delivery over a rigid moving target. It is then shown that, with slight modifications, the algorithm can be used to track moving targets in real time. The primary results of this article indicate that the algorithm is capable of accurately delivering DMLC IMRT over a rigid moving target whose motion is (1) completely unknown prior to delivery and (2) not faster than the maximum MLC leaf velocity over extended periods of time. These capabilities are demonstrated for clinically derived intensity profiles and actual tumor motion data, including situations when the target moves in some instances faster than the maximum admissible MLC leaf velocity. The results show that using the algorithm while calculating the delivered intensity every 50 ms will provide a good level of accuracy when delivering IMRT over a rigid moving target translating along the direction of MLC leaf travel. When the maximum velocities of the MLC leaves and target were 4 and 4.2 cm/s, respectively, the resulting error in the two intensity profiles used was 0.1 +/- 3.1% and -0.5 +/- 2.8% relative to the maximum of the intensity profiles. 
For the same target motion, the error was shown to increase rapidly as (1) the maximum MLC leaf velocity was reduced below 75% of the maximum target velocity and (2) the system response time was increased.
Electrovacuum solutions in nonlocal gravity
NASA Astrophysics Data System (ADS)
Fernandes, Karan; Mitra, Arpita
2018-05-01
We consider the coupling of the electromagnetic field to a nonlocal gravity theory comprising the Einstein-Hilbert action in addition to a nonlocal R□^{-2}R term associated with a mass scale m. We demonstrate that in the case of the minimally coupled electromagnetic field, real corrections about the Reissner-Nordström background only exist between the inner Cauchy horizon and the event horizon of the black hole. This motivates us to consider the modified coupling of electromagnetism to this theory via the Kaluza ansatz. The Kaluza reduction introduces nonlocal terms involving the electromagnetic field to the pure gravitational nonlocal theory. An iterative approach is provided to perturbatively solve the equations of motion to arbitrary order in m^2 about any known solution of general relativity. We derive the first-order corrections and demonstrate that the higher order corrections are real and perturbative about the external background of a Reissner-Nordström black hole. We also discuss how the Kaluza-reduced action, through the inclusion of nonlocal electromagnetic fields, could also be relevant for quantum effects on curved backgrounds with horizons.
40 CFR 1065.805 - Sampling system.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Sampling system. 1065.805 Section 1065.805 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS... background samples for correcting dilution air for background concentrations of alcohols and carbonyls. (c...
40 CFR 1065.805 - Sampling system.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Sampling system. 1065.805 Section 1065.805 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS... background samples for correcting dilution air for background concentrations of alcohols and carbonyls. (c...
40 CFR 1065.805 - Sampling system.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Sampling system. 1065.805 Section 1065.805 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS... background samples for correcting dilution air for background concentrations of alcohols and carbonyls. (c...
A Low-Noise Germanium Ionization Spectrometer for Low-Background Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aalseth, Craig E.; Colaresi, Jim; Collar, Juan I.
2016-12-01
Recent progress on the development of very low energy threshold high purity germanium ionization spectrometers has produced an instrument of 1.2 kg mass and excellent noise performance. The detector was installed in a low-background cryostat intended for use in a low mass, WIMP dark matter direct detection search. The integrated detector and low background cryostat achieved noise performance of 98 eV full-width half-maximum of an input electronic pulse generator peak and gamma-ray energy resolution of 1.9 keV full-width half-maximum at the 60Co gamma-ray energy of 1332 keV. This Transaction reports the thermal characterization of the low-background cryostat, specifications of the newly prepared 1.2 kg p-type point contact germanium detector, and the ionization spectroscopy – energy resolution and energy threshold – performance of the integrated system.
Dosimetric verification of IMRT treatment planning using Monte Carlo simulations for prostate cancer
NASA Astrophysics Data System (ADS)
Yang, J.; Li, J.; Chen, L.; Price, R.; McNeeley, S.; Qin, L.; Wang, L.; Xiong, W.; Ma, C.-M.
2005-03-01
The purpose of this work is to investigate the accuracy of dose calculation of a commercial treatment planning system (Corvus, Normos Corp., Sewickley, PA). In this study, 30 prostate intensity-modulated radiotherapy (IMRT) treatment plans from the commercial treatment planning system were recalculated using the Monte Carlo method. Dose-volume histograms and isodose distributions were compared. Other quantities such as minimum dose to the target (Dmin), the dose received by 98% of the target volume (D98), dose at the isocentre (Diso), mean target dose (Dmean) and the maximum critical structure dose (Dmax) were also evaluated based on our clinical criteria. For coplanar plans, the dose differences between Monte Carlo and the commercial treatment planning system with and without heterogeneity correction were not significant. The differences in the isocentre dose between the commercial treatment planning system and Monte Carlo simulations were less than 3% for all coplanar cases. The differences on D98 were less than 2% on average. The differences in the mean dose to the target between the commercial system and Monte Carlo results were within 3%. The differences in the maximum bladder dose were within 3% for most cases. The maximum dose differences for the rectum were less than 4% for all the cases. For non-coplanar plans, the difference in the minimum target dose between the treatment planning system and Monte Carlo calculations was up to 9% if the heterogeneity correction was not applied in Corvus. This was caused by the excessive attenuation of the non-coplanar beams by the femurs. When the heterogeneity correction was applied in Corvus, the differences were reduced significantly. These results suggest that heterogeneity correction should be used in dose calculation for prostate cancer with non-coplanar beam arrangements.
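The plan metrics compared above (D98, Dmean, Dmin, Dmax) can be computed directly from voxel doses; a minimal sketch with an illustrative function name, taking D98 as the 2nd percentile of the voxel-dose distribution (i.e. the dose received by at least 98% of the volume):

```python
import numpy as np

def dvh_point_metrics(voxel_doses):
    """Target-dose metrics from a flat array of voxel doses."""
    d = np.asarray(voxel_doses, dtype=float)
    return {"D98": float(np.percentile(d, 2)),  # dose covering 98% of volume
            "Dmean": float(d.mean()),
            "Dmin": float(d.min()),
            "Dmax": float(d.max())}
```

Relative differences between two dose engines (e.g. a planning system and Monte Carlo) are then simple elementwise comparisons of these dictionaries.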
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John O.
2017-01-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency, f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
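The "data filter that adds noise processes" idea can be illustrated with the standard fractional-integration (Hosking) recursion for power-law noise; a combined white-plus-power-law filter is then simply the white-noise impulse plus a scaled copy of these coefficients (a sketch of the standard recursion, not the author's code):

```python
def powerlaw_filter(alpha, n):
    """First n impulse-response coefficients of the fractional-integration
    filter that turns unit white noise into noise with a 1/f^alpha power
    spectrum (Hosking recursion). alpha=0 gives white noise (identity
    filter); alpha=2 gives a random walk (running sum); alpha=1 is flicker
    noise."""
    h = [1.0]
    for k in range(1, n):
        h.append(h[-1] * (k - 1.0 + alpha / 2.0) / k)
    return h
```

Driving this filter with white noise of the appropriate amplitude, and adding an independent white term, produces the combined noise model directly in the time domain, which is what keeps the covariance construction simple even with missing data.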
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberson, G P; Logan, C M
We have estimated interference from external background radiation for a computed tomography (CT) scanner. Our intention is to estimate the interference that would be expected for the high-resolution SkyScan 1072 desk-top x-ray microtomography system. The SkyScan system uses a Microfocus x-ray source capable of a 10-μm focal spot at a maximum current of 0.1 mA and a maximum energy of 130 kVp. All predictions made in this report assume using the x-ray source at the smallest spot size, maximum energy, and maximum current. The system's basic geometry used for these estimates is: (1) Source-to-detector distance: 250 mm, (2) Minimum object-to-detector distance: 40 mm, and (3) Maximum object-to-detector distance: 230 mm. This is a first-order, rough estimate of the quantity of interference expected at the system detector caused by background radiation. The amount of interference is expressed by using the ratio of exposure expected at the detector of the CT system. The exposure values for the SkyScan system are determined by scaling the measured values of an x-ray source and the background radiation, adjusting for the difference in source-to-detector distance and current. The x-ray source that was used for these measurements was not the SkyScan Microfocus x-ray tube. Measurements were made using an x-ray source that was operated at the same applied voltage but higher current for better statistics.
Normal-faulting slip maxima and stress-drop variability: a geological perspective
Hecker, S.; Dawson, T.E.; Schwartz, D.P.
2010-01-01
We present an empirical estimate of maximum slip in continental normal-faulting earthquakes and present evidence that stress drop in intraplate extensional environments is dependent on fault maturity. A survey of reported slip in historical earthquakes globally and in latest Quaternary paleoearthquakes in the Western Cordillera of the United States indicates maximum vertical displacements as large as 6–6.5 m. A difference in the ratio of maximum-to-mean displacements between data sets of prehistoric and historical earthquakes, together with constraints on bias in estimates of mean paleodisplacement, suggests that applying a correction factor of 1.4±0.3 to the largest observed displacement along a paleorupture may provide a reasonable estimate of the maximum displacement. Adjusting the largest paleodisplacements in our regional data set (~6 m) by a factor of 1.4 yields a possible upper-bound vertical displacement for the Western Cordillera of about 8.4 m, although a smaller correction factor may be more appropriate for the longest ruptures. Because maximum slip is highly localized along strike, if such large displacements occur, they are extremely rare. Static stress drop in surface-rupturing earthquakes in the Western Cordillera, as represented by maximum reported displacement as a fraction of modeled rupture length, appears to be larger on normal faults with low cumulative geologic displacement (<2 km) and larger in regions such as the Rocky Mountains, where immature, low-throw faults are concentrated. This conclusion is consistent with a growing recognition that structural development influences stress drop and indicates that this influence is significant enough to be evident among faults within a single intraplate environment.
PET attenuation correction for rigid MR Tx/Rx coils from 176Lu background activity
NASA Astrophysics Data System (ADS)
Lerche, Christoph W.; Kaltsas, Theodoris; Caldeira, Liliana; Scheins, Jürgen; Rota Kops, Elena; Tellmann, Lutz; Pietrzyk, Uwe; Herzog, Hans; Shah, N. Jon
2018-02-01
One challenge for PET-MR hybrid imaging is the correction for attenuation of the 511 keV annihilation radiation by the required RF transmit and/or RF receive coils. Although there are strategies for building PET transparent Tx/Rx coils, such optimised coils still cause significant attenuation of the annihilation radiation leading to artefacts and biases in the reconstructed activity concentrations. We present a straightforward method to measure the attenuation of Tx/Rx coils in simultaneous MR-PET imaging based on the natural 176Lu background contained in the scintillator of the PET detector without the requirement of an external CT scanner or PET scanner with transmission source. The method was evaluated on a prototype 3T MR-BrainPET produced by Siemens Healthcare GmbH, both with phantom studies and with true emission images from patient/volunteer examinations. Furthermore, the count rate stability of the PET scanner and the x-ray properties of the Tx/Rx head coil were investigated. Even without energy extrapolation from the two dominant γ energies of 176Lu to 511 keV, the presented method for attenuation correction, based on the measurement of 176Lu background attenuation, shows slightly better performance than the coil attenuation correction currently used. The coil attenuation correction currently used is based on an external transmission scan with rotating 68Ge sources acquired on a Siemens ECAT HR + PET scanner. However, the main advantage of the presented approach is its straightforwardness and ready availability without the need for additional accessories.
Corrections for the geometric distortion of the tube detectors on SANS instruments at ORNL
He, Lilin; Do, Changwoo; Qian, Shuo; ...
2014-11-25
Small-angle neutron scattering instruments at the Oak Ridge National Laboratory's High Flux Isotope Reactor were upgraded from the large, single-volume crossed-wire area detectors originally installed to staggered arrays of linear position-sensitive detectors (LPSDs). The specific geometry of the LPSD array requires that traditional approaches to data reduction be modified. Here, two methods for correcting the geometric distortion produced by the LPSD array are presented and compared. The first method applies a correction derived from a detector sensitivity measurement performed in the same configuration in which the samples are measured. In the second method, a solid angle correction is derived that can be applied to data collected in any instrument configuration during the data reduction process, in conjunction with a detector sensitivity measurement collected at a sufficiently long camera length where the geometric distortions are negligible. Both methods produce consistent results and yield a maximum deviation of corrected data from isotropic scattering samples of less than 5% for scattering angles up to a maximum of 35°. The results are broadly applicable to any SANS instrument employing LPSD array detectors, which will be increasingly common as instruments having higher incident flux are constructed at various neutron scattering facilities around the world.
DEM interpolation weight calculation modulus based on maximum entropy
NASA Astrophysics Data System (ADS)
Chen, Tian-wei; Yang, Xia
2015-12-01
Traditional interpolation of gridded DEMs can produce negative weights. In this article, the principle of maximum entropy is used to analyze a model system that depends on the spatial weight modulus. The negative-weight problem of DEM interpolation is addressed by building a maximum entropy model; by adding nonnegativity and first- and second-order moment constraints, the negative-weight problem is solved. The correctness and accuracy of the method were validated with a genetic algorithm implemented in a MATLAB program. The method is compared with the Yang Chizhong interpolation method and quadratic programming. The comparison shows that the maximum entropy weights fit the spatial relations and that the accuracy is superior to the latter two methods.
Quantitation of tumor uptake with molecular breast imaging.
Bache, Steven T; Kappadath, S Cheenu
2017-09-01
We developed scatter and attenuation-correction techniques for quantifying images obtained with Molecular Breast Imaging (MBI) systems. To investigate scatter correction, energy spectra of a 99mTc point source were acquired with 0-7-cm-thick acrylic to simulate scatter between the detector heads. The system-specific scatter correction factor, k, was calculated as a function of thickness using a dual energy window (DEW) technique. To investigate attenuation correction, a 7-cm-thick rectangular phantom containing 99mTc-water simulating breast tissue and fillable spheres simulating tumors was imaged. Six spheres 10-27 mm in diameter were imaged with sphere-to-background ratios (SBRs) of 3.5, 2.6, and 1.7 and located at depths of 0.5, 1.5, and 2.5 cm from the center of the water bath for 54 unique tumor scenarios (3 SBRs × 6 sphere sizes × 3 depths). Phantom images were also acquired in-air under scatter- and attenuation-free conditions, which provided ground-truth counts. To estimate the true counts, T, from each tumor, the geometric mean (GM) of the counts within a prescribed region of interest (ROI) from the two projection images was calculated as T = √(C1·C2)·e^(μt/2)·F, where C1 and C2 are counts within the square ROI circumscribing each sphere on detectors 1 and 2, μ is the linear attenuation coefficient of water, t is the detector separation, and the factor F accounts for background activity. Four unique F definitions (standard GM, background-subtraction GM, MIRD Primer 16 GM, and a novel "volumetric GM") were investigated. Error in T was calculated as the percentage difference with respect to in-air counts. Quantitative accuracy using the different GM definitions was calculated as a function of SBR, depth, and sphere size. The sensitivity of quantitative accuracy to ROI size was investigated. We also developed an MBI simulation to investigate the robustness of our corrections for various ellipsoidal tumor shapes and detector separations.
Scatter correction factor k varied slightly (0.80-0.95) over a compressed breast thickness range of 6-9 cm. Corrected energy spectra recovered the general characteristics of scatter-free spectra. Quantitatively, photopeak counts were recovered to <10% compared to in-air conditions after scatter correction. After GM attenuation correction, mean errors (95% confidence interval, CI) for all 54 imaging scenarios were 149% (-154% to +455%), -14.0% (-38.4% to +10.4%), 16.8% (-14.7% to +48.2%), and 2.0% (-14.3% to +18.3%) for the standard GM, background-subtraction GM, MIRD 16 GM, and volumetric GM, respectively. The volumetric GM was less sensitive to SBR and sphere size, while all GM methods were insensitive to sphere depth. Simulation results showed that the volumetric GM method produced a mean error within 5% over all compressed breast thicknesses (3-14 cm), and that the use of an estimated radius for nonspherical tumors increases the 95% CI to at most ±23%, compared with ±16% for spherical tumors. Using the DEW scatter-correction and our volumetric GM attenuation-correction methodology yielded accurate estimates of tumor counts in MBI over various tumor sizes, shapes, depths, background uptakes, and compressed breast thicknesses. Accurate tumor uptake can be converted to radiotracer uptake concentration, allowing three patient-specific metrics to be calculated for quantifying absolute uptake and relative uptake change for assessment of treatment response. © 2017 American Association of Physicists in Medicine.
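The geometric-mean correction described above has the useful property that √(C1·C2) is independent of tumor depth. A minimal sketch, assuming the standard GM form with exponent μt/2 and illustrative values for μ, the detector separation, and the counts (none taken from the paper):

```python
import math

MU_WATER = 0.151   # cm^-1, approx. attenuation coefficient of water at 140 keV (assumed)
T_SEP = 7.0        # cm, detector separation, i.e. compressed thickness (assumed)

def gm_corrected_counts(c1, c2, mu=MU_WATER, t=T_SEP, f=1.0):
    """Recover true tumor counts T from the two projection counts.

    Since sqrt(C1*C2) = T * exp(-mu*t/2) regardless of tumor depth,
    T = sqrt(C1*C2) * exp(mu*t/2) * F, with F a background factor.
    """
    return math.sqrt(c1 * c2) * math.exp(mu * t / 2.0) * f

# Forward-simulate a tumor with true counts T at depth d from detector 1:
T_true, d = 10000.0, 2.0
c1 = T_true * math.exp(-MU_WATER * d)
c2 = T_true * math.exp(-MU_WATER * (T_SEP - d))
print(gm_corrected_counts(c1, c2))  # recovers ~10000 for any depth d
```

The forward simulation confirms the depth independence: changing d moves counts between the two detectors but leaves the corrected estimate unchanged.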
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, Antonio L., E-mail: adamato@lroc.harvard.edu; Viswanathan, Akila N.; Don, Sarah M.
2014-10-15
Purpose: To investigate the use of a system using electromagnetic tracking (EMT), post-processing and an error-detection algorithm for detecting errors and resolving uncertainties in high-dose-rate brachytherapy catheter digitization for treatment planning. Methods: EMT was used to localize 15 catheters inserted into a phantom using a stepwise acquisition technique. Five distinct acquisition experiments were performed. Noise associated with the acquisition was calculated. The dwell location configuration was extracted from the EMT data. A CT scan of the phantom was performed, and five distinct catheter digitization sessions were performed. No a priori registration of the CT scan coordinate system with the EMT coordinate system was performed. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT), and rigid registration was performed between EMT and CT dwell positions. EMT registration error was characterized in terms of the mean and maximum distance between corresponding EMT and CT dwell positions per catheter. An algorithm for error detection and identification was presented. Three types of errors were systematically simulated: swap of two catheter numbers, partial swap of catheter number identification for parts of the catheters (mix), and catheter-tip shift. Error-detection sensitivity (number of simulated scenarios correctly identified as containing an error/number of simulated scenarios containing an error) and specificity (number of scenarios correctly identified as not containing errors/number of correct scenarios) were calculated. Catheter identification sensitivity (number of catheters correctly identified as erroneous across all scenarios/number of erroneous catheters across all scenarios) and specificity (number of catheters correctly identified as correct across all scenarios/number of correct catheters across all scenarios) were calculated. The mean detected and identified shift was calculated.
Results: The maximum noise ±1 standard deviation associated with the EMT acquisitions was 1.0 ± 0.1 mm, and the mean noise was 0.6 ± 0.1 mm. Registration of all the EMT and CT dwell positions was associated with a mean catheter error of 0.6 ± 0.2 mm, a maximum catheter error of 0.9 ± 0.4 mm, a mean dwell error of 1.0 ± 0.3 mm, and a maximum dwell error of 1.3 ± 0.7 mm. Error detection and catheter identification sensitivity and specificity of 100% were observed for swap, mix and shift (≥2.6 mm for error detection; ≥2.7 mm for catheter identification) errors. A mean detected shift of 1.8 ± 0.4 mm and a mean identified shift of 1.9 ± 0.4 mm were observed. Conclusions: Registration of the EMT dwell positions to the CT dwell positions was possible with a residual mean error per catheter of 0.6 ± 0.2 mm and a maximum error for any dwell of 1.3 ± 0.7 mm. These low residual registration errors show that quality assurance of the general characteristics of the catheters and of possible errors affecting one specific dwell position is possible. The sensitivity and specificity of the catheter digitization verification algorithm were 100% for swap and mix errors and for shifts ≥2.6 mm. On average, shifts ≥1.8 mm were detected, and shifts ≥1.9 mm were detected and identified.
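The detection metrics defined in the Methods section reduce to standard sensitivity and specificity over the simulated scenarios. A minimal sketch (the example counts are invented for illustration, not taken from the study):

```python
def sensitivity(true_pos, false_neg):
    """Fraction of error-containing scenarios correctly flagged as erroneous."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of error-free scenarios correctly identified as correct."""
    return true_neg / (true_neg + false_pos)

# Hypothetical tally: 45 simulated error scenarios, all detected;
# 15 correct scenarios, none falsely flagged -> 100% / 100%.
print(sensitivity(45, 0), specificity(15, 0))  # 1.0 1.0
```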
14 CFR 29.143 - Controllability and maneuverability.
Code of Federal Regulations, 2010 CFR
2010-01-01
... occurs with maximum continuous power and critical weight. No corrective action time delay for any... pilot reaction time (whichever is greater); and (ii) For any other condition, normal pilot reaction time...
Komiskey, Matthew J.; Stuntebeck, Todd D.; Cox, Amanda L.; Frame, Dennis R.
2013-01-01
The effects of longitudinal slope on the estimation of discharge in a 0.762-meter (m) (depth at flume entrance) H flume were tested under controlled conditions with slopes from −8 to +8 percent and discharges from 1.2 to 323 liters per second. Compared to the stage-discharge rating for a longitudinal flume slope of zero, computed discharges were negatively biased (maximum −31 percent) when the flume was sloped downward from the front (entrance) to the back (exit), and positively biased (maximum 44 percent) when the flume was sloped upward. Biases increased with greater flume slopes and with lower discharges. A linear empirical relation was developed to compute a corrected reference stage for a 0.762-m H flume using measured stage and flume slope. The reference stage was then used to determine a corrected discharge from the stage-discharge rating. A dimensionally homogeneous correction equation also was developed, which could theoretically be used for all standard H-flume sizes. Use of the corrected discharge computation method for a sloped H flume was determined to have errors ranging from −2.2 to 4.6 percent compared to the H-flume measured discharge at a level position. These results emphasize the importance of the measurement of and the correction for flume slope during an edge-of-field study if the most accurate discharge estimates are desired.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priel, Nadav; Landsman, Hagar; Manfredini, Alessandro
We propose a safeguard procedure for statistical inference that provides universal protection against mismodeling of the background. The method quantifies and incorporates the signal-like residuals of the background model into the likelihood function, using information available in a calibration dataset. This prevents possible false discovery claims that may arise through unknown mismodeling, and corrects the bias in limit setting created by overestimated or underestimated background. We demonstrate how the method removes the bias created by an incomplete background model using three realistic case studies.
The location and recognition of anti-counterfeiting code image with complex background
NASA Astrophysics Data System (ADS)
Ni, Jing; Liu, Quan; Lou, Ping; Han, Ping
2017-07-01
The order of the cigarette market is a key issue in the tobacco business system. The anti-counterfeiting code, as a kind of effective anti-counterfeiting technology, can identify counterfeit goods and effectively maintain the normal order of the market and consumers' rights and interests. The anti-counterfeiting code images obtained by the tobacco recognizer suffer from complex backgrounds, light interference and other problems. To solve these problems, this paper proposes a locating method based on the SUSAN operator, combined with a sliding window and line scanning. To reduce the interference of background and noise, we extract the red component of the image and convert the color image into a grayscale image. For confusable characters, a recognition-result correction based on template matching is adopted to improve the recognition rate. With this method, the anti-counterfeiting code can be located and recognized correctly in images with complex backgrounds. The experimental results show the effectiveness and feasibility of the approach.
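The template-matching correction step can be sketched as picking the stored template closest to a segmented glyph. A toy example (the 3×3 "glyphs" and labels below are invented for illustration; a real system would match against rendered character templates):

```python
import numpy as np

# Invented miniature templates for two easily confused characters.
TEMPLATES = {
    "0": np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]], dtype=float),
    "8": np.array([[1, 1, 1],
                   [1, 1, 1],
                   [1, 1, 1]], dtype=float),
}

def best_match(glyph, templates=TEMPLATES):
    """Return the template label with the smallest sum of squared differences."""
    scores = {label: float(((glyph - t) ** 2).sum())
              for label, t in templates.items()}
    return min(scores, key=scores.get)

# A noisy "0" (center pixel not fully background) is still corrected to "0".
noisy_zero = np.array([[1, 1, 1],
                       [1, 0.2, 1],
                       [1, 1, 1]])
print(best_match(noisy_zero))  # 0
```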
Mrochen, Michael; Schelling, Urs; Wuellner, Christian; Donitzky, Christof
2009-02-01
To investigate the effect of temporal and spatial distributions of laser spots (scan sequences) on the corneal surface quality after ablation and the maximum ablation of a given refractive correction after photoablation with a high-repetition-rate scanning-spot laser. IROC AG, Zurich, Switzerland, and WaveLight AG, Erlangen, Germany. Bovine corneas and poly(methyl methacrylate) (PMMA) plates were photoablated using a 1050 Hz excimer laser prototype for corneal laser surgery. Four temporal and spatial spot distributions (scan sequences) with different temporal overlapping factors were created for 3 myopic, 3 hyperopic, and 3 phototherapeutic keratectomy ablation profiles. Surface quality and maximum ablation depth were measured using a surface profiling system. The surface quality factor increased (rough surfaces) as the amount of temporal overlapping in the scan sequence and the amount of correction increased. The rise in surface quality factor was less for bovine corneas than for PMMA. The scan sequence might cause systematic substructures at the surface of the ablated material depending on the overlapping factor. The maximum ablation varied within the scan sequence. The temporal and spatial distribution of the laser spots (scan sequence) during a corneal laser procedure affected the surface quality and maximum ablation depth of the ablation profile. Corneal laser surgery could theoretically benefit from smaller spot sizes and higher repetition rates. The temporal and spatial spot distributions are relevant to achieving these aims.
14 CFR 27.143 - Controllability and maneuverability.
Code of Federal Regulations, 2010 CFR
2010-01-01
... failure occurs with maximum continuous power and critical weight. No corrective action time delay for any... pilot reaction time (whichever is greater); and (ii) For any other condition, normal pilot reaction time...
Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville; ...
2016-03-03
Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise, and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50% after performing this background correction and subsequent reduction in the threshold. Furthermore, the reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.
Quantum corrections for spinning particles in de Sitter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fröb, Markus B.; Verdaguer, Enric, E-mail: mbf503@york.ac.uk, E-mail: enric.verdaguer@ub.edu
We compute the one-loop quantum corrections to the gravitational potentials of a spinning point particle in a de Sitter background, due to the vacuum polarisation induced by conformal fields in an effective field theory approach. We consider arbitrary conformal field theories, assuming only that the theory contains a large number N of fields in order to separate their contribution from the one induced by virtual gravitons. The corrections are described in a gauge-invariant way, classifying the induced metric perturbations around the de Sitter background according to their behaviour under transformations on equal-time hypersurfaces. There are six gauge-invariant modes: two scalar Bardeen potentials, one transverse vector and one transverse traceless tensor, of which one scalar and the vector couple to the spinning particle. The quantum corrections consist of three different parts: a generalisation of the flat-space correction, which is only significant at distances of the order of the Planck length; a constant correction depending on the undetermined parameters of the renormalised effective action; and a term which grows logarithmically with the distance from the particle. This last term is the most interesting, and when resummed gives a modified power law, enhancing the gravitational force at large distances. As a check on the accuracy of our calculation, we recover the linearised Kerr-de Sitter metric in the classical limit and the flat-space quantum correction in the limit of vanishing Hubble constant.
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of the color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method, which uses a median filter to estimate the background illumination, showed the lowest coefficients of variation in the red component. The quotient-based and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast-limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy. The CLAHE technique has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using a median filter to estimate the background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation.
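The "dividing method" above can be sketched as estimating the slowly varying illumination with a large median filter and dividing it out. A minimal NumPy sketch (the naive filter, kernel size, and synthetic ramp image are illustrative assumptions, not the study's implementation):

```python
import numpy as np

def median_filter(img, k):
    """Naive k x k median filter with edge padding (for illustration only)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def divide_correct(channel, k=15):
    """Divide a colour channel by its median-filtered background estimate."""
    background = median_filter(channel.astype(float), k)
    return channel / np.maximum(background, 1e-6)

# Synthetic example: uniform tissue under a left-to-right illumination ramp.
ramp = np.linspace(0.5, 1.5, 64)
img = np.tile(ramp, (64, 1)) * 100.0
corrected = divide_correct(img)
# The coefficient of variation (the evaluation metric used in the study)
# drops after correction because the illumination gradient is removed.
print(corrected.std() / corrected.mean() < img.std() / img.mean())  # True
```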
ERIC Educational Resources Information Center
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siman, W.; Mikell, J. K.; Kappadath, S. C.
Purpose: To develop a practical background compensation (BC) technique to improve quantitative 90Y-bremsstrahlung single-photon emission computed tomography (SPECT)/computed tomography (CT) using a commercially available imaging system. Methods: All images were acquired using medium-energy collimation in six energy windows (EWs), ranging from 70 to 410 keV. The EWs were determined based on the signal-to-background ratio in planar images of an acrylic phantom of different thicknesses (2–16 cm) positioned below a 90Y source and set at different distances (15–35 cm) from a gamma camera. The authors adapted the widely used EW-based scatter-correction technique by modeling the BC as scaled images. The BC EW was determined empirically in SPECT/CT studies using an IEC phantom based on the sphere activity recovery and residual activity in the cold lung insert. The scaling factor was calculated from 20 clinical planar 90Y images. Reconstruction parameters were optimized in the same SPECT images for improved image quantification and contrast. A count-to-activity calibration factor was calculated from 30 clinical 90Y images. Results: The authors found that the most appropriate imaging EW range was 90–125 keV. BC was modeled as 0.53× images in the EW of 310–410 keV. The background-compensated clinical images had higher image contrast than uncompensated images. The maximum deviation of their SPECT calibration in clinical studies was lowest (<10%) for SPECT with attenuation correction (AC) and SPECT with AC + BC. Using the proposed SPECT-with-AC + BC reconstruction protocol, the authors found that the recovery coefficient of a 37-mm sphere (in a 10-mm volume of interest) increased from 39% to 90% and that the residual activity in the lung insert decreased from 44% to 14% over that of SPECT images with AC alone. Conclusions: The proposed EW-based BC model was developed for 90Y bremsstrahlung imaging.
SPECT with AC + BC gave improved lesion detectability and activity quantification compared to SPECT with AC only. The proposed methodology can readily be used to tailor 90Y SPECT/CT acquisition and reconstruction protocols with different SPECT/CT systems for quantification and improved image quality in clinical settings.
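The background compensation described above amounts to subtracting a scaled copy of the 310–410 keV window image from the 90–125 keV imaging window, with the 0.53 scaling factor reported in the abstract. A minimal sketch (the pixel values and the zero-clipping choice are illustrative assumptions):

```python
import numpy as np

def background_compensate(imaging_counts, bc_counts, k=0.53):
    """Subtract the scaled background-window image, clipping at zero counts."""
    return np.clip(imaging_counts - k * bc_counts, 0.0, None)

# Invented per-pixel counts: imaging window vs. background (310-410 keV) window.
imaging = np.array([120.0, 80.0, 10.0])
background = np.array([100.0, 100.0, 100.0])
print(background_compensate(imaging, background))  # [67. 27.  0.]
```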
47 CFR 74.705 - TV broadcast analog station protection.
Code of Federal Regulations, 2013 CFR
2013-10-01
... from the authorized maximum radiated power (without depression angle correction), the horizontal... application for a new UHF low power TV or TV translator construction permit, a change of channel, or a major...
48 CFR 552.219-73 - Goals for Subcontracting Plan.
Code of Federal Regulations, 2012 CFR
2012-10-01
...: Goals for Subcontracting Plan (JUN 2005) (a) Maximum practicable utilization of small, HUBZone small... correct deficiencies in a plan within the time specified by the Contracting Officer shall make the offeror...
Cosmic Strings Stabilized by Quantum Fluctuations
NASA Astrophysics Data System (ADS)
Weigel, H.
2017-03-01
Fermion quantum corrections to the energy of cosmic strings are computed. A number of rather technical tools are needed to formulate this correction, and isospin and gauge invariance are employed to verify their consistency. These corrections must also be included when computing the energy of strings that are charged by populating fermion bound states in their background. It is found that charged strings are dynamically stabilized in theories similar to the standard model of particle physics.
Observational constraints on loop quantum cosmology.
Bojowald, Martin; Calcagni, Gianluca; Tsujikawa, Shinji
2011-11-18
In the inflationary scenario of loop quantum cosmology in the presence of inverse-volume corrections, we give analytic formulas for the power spectra of scalar and tensor perturbations convenient to compare with observations. Since inverse-volume corrections can provide strong contributions to the running spectral indices, inclusion of terms higher than the second-order runnings in the power spectra is crucially important. Using the recent data of cosmic microwave background and other cosmological experiments, we place bounds on the quantum corrections.
Environmental corrections of a dual-induction logging while drilling tool in vertical wells
NASA Astrophysics Data System (ADS)
Kang, Zhengming; Ke, Shizhen; Jiang, Ming; Yin, Chengfang; Li, Anzong; Li, Junjian
2018-04-01
With the development of logging-while-drilling (LWD) technology, dual-induction LWD logging is widely applied not only in deviated and horizontal wells but also in vertical wells. Accordingly, it is necessary to simulate the response of LWD tools in vertical wells for logging interpretation. In this paper, the investigation characteristics of a dual-induction LWD tool and the effects of the tool structure, the skin effect and the drilling environment are simulated by the three-dimensional (3D) finite element method (FEM). To closely simulate the actual situation, the real structure of the tool is taken into account. The results demonstrate that the influence of the background value of the tool structure can be eliminated: values obtained after deducting the tool-structure background agree quantitatively with the analytical solution in homogeneous formations. The effect of measurement frequency can be effectively eliminated by a skin-effect correction chart. In addition, the measurement environment (borehole size, mud resistivity, shoulder beds, layer thickness and invasion) affects the apparent resistivity. To eliminate these effects, borehole correction charts, shoulder-bed correction charts and tornado charts are computed based on the real tool structure. Using these correction charts, well logging data can be corrected automatically by a suitable interpolation method, which is convenient and fast. Verified against actual logging data in vertical wells, this method can recover the true formation resistivity.
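A chart-based correction of the kind described above amounts to interpolating a tabulated correction factor at the logged conditions. A minimal one-dimensional sketch with NumPy; the chart values here are entirely hypothetical (real borehole-correction charts also depend on mud resistivity, tool standoff, and other variables).

```python
import numpy as np

# Hypothetical borehole-correction chart: a multiplicative correction
# factor tabulated against borehole diameter (inches).
chart_diameter = np.array([6.0, 8.0, 10.0, 12.0, 14.0])
chart_factor = np.array([1.00, 1.03, 1.08, 1.15, 1.25])

def borehole_correct(apparent_res, diameter):
    """Interpolate the chart at the logged borehole diameter and
    apply the factor to the apparent resistivity reading."""
    factor = np.interp(diameter, chart_diameter, chart_factor)
    return apparent_res * factor

# Apparent resistivity of 20 ohm-m logged in a 9-inch hole:
corrected = borehole_correct(20.0, 9.0)
```

Multi-dimensional charts generalize this with bilinear or spline interpolation over all chart axes.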
NASA Astrophysics Data System (ADS)
Liu, Xingchen; Hu, Zhiyong; He, Qingbo; Zhang, Shangbin; Zhu, Jun
2017-10-01
Doppler distortion and background noise can reduce the effectiveness of wayside acoustic monitoring and fault diagnosis of train bearings. This paper proposes a method that combines a microphone array with a matching pursuit algorithm to overcome these difficulties. First, a dictionary is constructed based on the characteristics and mechanism of a far-field assumption. Then, the angle of arrival of the train bearing is acquired by applying matching pursuit to the acoustic array signals. Finally, after the resampling time series is obtained, the Doppler distortion can be corrected, which facilitates further diagnostic work. Compared with traditional single-microphone Doppler correction methods, the advantages of the presented array method are its robustness to background noise and the fact that it requires almost no pre-measured parameters. Simulation and experimental studies show that the proposed method is effective for wayside acoustic bearing fault diagnosis.
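The matching pursuit step named above is a greedy sparse decomposition over a fixed dictionary. A generic sketch follows; the sinusoidal dictionary and test signal are hypothetical stand-ins for the far-field acoustic atoms used in the paper.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy matching pursuit: at each step pick the unit-norm atom
    with the largest inner product against the residual, subtract its
    projection, and accumulate the coefficient."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(n_iter):
        inner = dictionary @ residual
        k = np.argmax(np.abs(inner))
        coeffs[k] += inner[k]
        residual -= inner[k] * dictionary[k]
    return coeffs, residual

# Hypothetical dictionary of unit-norm sinusoidal atoms.
t = np.linspace(0, 1, 256, endpoint=False)
atoms = np.array([np.sin(2 * np.pi * f * t) for f in (5, 9, 13)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

sig = 3.0 * atoms[0] + 1.5 * atoms[2]
coeffs, residual = matching_pursuit(sig, atoms)
```

In the wayside application the atoms would encode candidate angles of arrival, and the selected atom gives the bearing's direction at each instant.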
Nanowire growth kinetics in aberration corrected environmental transmission electron microscopy
Chou, Yi-Chia; Panciera, Federico; Reuter, Mark C.; ...
2016-03-15
Here, we visualize atomic level dynamics during Si nanowire growth using aberration corrected environmental transmission electron microscopy, and compare with lower pressure results from ultra-high vacuum microscopy. We discuss the importance of higher pressure observations for understanding growth mechanisms and describe protocols to minimize effects of the higher pressure background gas.
ERIC Educational Resources Information Center
Swank, Jacqueline M.; Gagnon, Joseph C.
2017-01-01
Background: Mental health screening and assessment is crucial within juvenile correctional facilities (JC). However, limited information is available about the current screening and assessment procedures specifically within JC. Objective: The purpose of the current study was to obtain information about the mental health screening and assessment…
Impact of Next-to-Leading Order Contributions to Cosmic Microwave Background Lensing.
Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth
2017-05-26
In this Letter we study the impact on cosmological parameter estimation, from present and future surveys, due to lensing corrections on cosmic microwave background temperature and polarization anisotropies beyond leading order. In particular, we show how post-Born corrections, large-scale structure effects, and the correction due to the change in the polarization direction between the emission at the source and the detection at the observer are non-negligible in the determination of the polarization spectra. They have to be taken into account for an accurate estimation of cosmological parameters sensitive to or even based on these spectra. We study in detail the impact of higher order lensing on the determination of the tensor-to-scalar ratio r and on the estimation of the effective number of relativistic species N_{eff}. We find that neglecting higher order lensing terms can lead to misinterpreting these corrections as a primordial tensor-to-scalar ratio of order 10^{-3}. Furthermore, it leads to a shift of the parameter N_{eff} by nearly 2σ considering the level of accuracy aimed at by future S4 surveys.
NASA Technical Reports Server (NTRS)
Mullally, Fergal
2017-01-01
We present an automated method of identifying background eclipsing binaries masquerading as planet candidates in the Kepler planet candidate catalogs. We codify the manual vetting process for Kepler Objects of Interest (KOIs) described in Bryson et al. (2013) with a series of measurements and tests that can be performed algorithmically. We compare our automated results with a sample of manually vetted KOIs from the catalog of Burke et al. (2014) and find excellent agreement. We test the performance on a set of simulated transits and find our algorithm correctly identifies simulated false positives approximately 50% of the time and correctly identifies 99% of simulated planet candidates.
Quantum gravitational contributions to the cosmic microwave background anisotropy spectrum.
Kiefer, Claus; Krämer, Manuel
2012-01-13
We derive the primordial power spectrum of density fluctuations in the framework of quantum cosmology. For this purpose we perform a Born-Oppenheimer approximation to the Wheeler-DeWitt equation for an inflationary universe with a scalar field. In this way, we first recover the scale-invariant power spectrum that is found as an approximation in the simplest inflationary models. We then obtain quantum gravitational corrections to this spectrum and discuss whether they lead to measurable signatures in the cosmic microwave background anisotropy spectrum. The nonobservation so far of such corrections translates into an upper bound on the energy scale of inflation.
Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György
2018-01-01
Background: Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods: Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision, and Method 2 is based on the Microsoft Excel formula NORMINV, including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results: Method 2 gives the correct results, with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion: The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
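The mechanics of the calculation can be sketched with the standard normal CDF in place of Excel's NORMINV. This is a simplified illustration, assuming reference limits at mean ± 1.96 SD and expressing bias and imprecision in units of the reference SD; it does not reproduce the paper's exact 4.4% specification.

```python
from math import erf, sqrt

def norm_cdf(x, mu=0.0, sigma=1.0):
    """Gaussian cumulative distribution function via erf."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def fraction_outside(bias, imprecision, z=1.96):
    """Fraction of results falling outside reference limits set at
    mean +/- z*SD of the reference distribution, when analytical bias
    (in SD units) shifts the mean and analytical imprecision (in SD
    units) widens the spread to sqrt(1 + imprecision**2)."""
    s = sqrt(1.0 + imprecision ** 2)
    lower = norm_cdf(-z, mu=bias, sigma=s)
    upper = 1.0 - norm_cdf(z, mu=bias, sigma=s)
    return lower + upper

baseline = fraction_outside(0.0, 0.0)   # no analytical error
degraded = fraction_outside(0.5, 0.5)   # bias and imprecision added
```

Sweeping `bias` and `imprecision` while holding `fraction_outside` at a target probability traces out the maximum allowable combinations, which is the shape of the analysis described above.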
Wagner, John H; Miskelly, Gordon M
2003-05-01
The combination of photographs taken at two or three wavelengths at and bracketing an absorbance peak indicative of a particular compound can lead to an image with enhanced visualization of the compound. This procedure works best for compounds with absorbance bands that are narrow compared with "average" chromophores. If necessary, the photographs can be taken with different exposure times to ensure that sufficient light from the substrate is detected at all three wavelengths. The combination of images is readily performed if the images are obtained with a digital camera and are then processed using an image processing program. Best results are obtained if linear images at the peak maximum, at a slightly shorter wavelength, and at a slightly longer wavelength are used. However, acceptable results can also be obtained under many conditions if non-linear photographs are used or if only two wavelengths (one of which is at the peak maximum) are combined. These latter conditions are more achievable by many "mid-range" digital cameras. Wavelength selection can either be by controlling the illumination (e.g., by using an alternate light source) or by use of narrow bandpass filters. The technique is illustrated using blood as the target analyte, using bands of light centered at 395, 415, and 435 nm. The extension of the method to detection of blood by fluorescence quenching is also described.
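One simple way to realize the multi-wavelength combination described above is to subtract the mean of the bracketing (off-peak) images from the on-peak image; the exact combination used in practice may differ, and the pixel values below are hypothetical.

```python
import numpy as np

def enhance_absorber(img_peak, img_short, img_long):
    """Three-wavelength combination: subtract the mean of the two
    off-peak (bracketing) images from the on-peak image. Pixels whose
    absorbance peaks at the on-peak wavelength (e.g. blood near
    415 nm) stand out against a spectrally flat substrate."""
    baseline = 0.5 * (img_short.astype(float) + img_long.astype(float))
    return img_peak.astype(float) - baseline

# Hypothetical linear images: the substrate reflects equally at all
# three wavelengths; a stain absorbs strongly only at the peak.
substrate = np.full((8, 8), 200.0)
peak = substrate.copy();  peak[2:4, 2:4] = 80.0    # strong absorption at 415 nm
short = substrate.copy(); short[2:4, 2:4] = 180.0  # weak absorption at 395 nm
long_ = substrate.copy(); long_[2:4, 2:4] = 185.0  # weak absorption at 435 nm

diff = enhance_absorber(peak, short, long_)
```

Substrate pixels cancel to zero while the narrow-band absorber remains strongly negative, which is why the technique favors compounds with narrow absorbance bands.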
HST/WFC3: Understanding and Mitigating Radiation Damage Effects in the CCD Detectors
NASA Astrophysics Data System (ADS)
Baggett, S.; Anderson, J.; Sosey, M.; MacKenty, J.; Gosmeyer, C.; Noeske, K.; Gunning, H.; Bourque, M.
2015-09-01
At the heart of the Hubble Space Telescope Wide Field Camera 3 (HST/WFC3) UVIS channel resides a 4096x4096 pixel e2v CCD array. While these detectors are performing extremely well after more than 5 years in low-earth orbit, the cumulative effects of radiation damage cause a continual growth in the hot pixel population and a progressive loss in charge transfer efficiency (CTE) over time. The decline in CTE has two effects: (1) it reduces the detected source flux as the defects trap charge during readout and (2) it systematically shifts source centroids as the trapped charge is later released. The flux losses can be significant, particularly for faint sources in low background images. Several mitigation options exist, including target placement within the field of view, empirical stellar photometric corrections, post-flash mode and an empirical pixel-based CTE correction. The application of a post-flash has been remarkably effective in WFC3 at reducing CTE losses in low background images for a relatively small noise penalty. Currently all WFC3 observers are encouraged to post-flash images with low backgrounds. Another powerful option in mitigating CTE losses is the pixel-based CTE correction. Analagous to the CTE correction software currently in use in the HST Advanced Camera for Surveys (ACS) pipeline, the algorithm employs an empirical observationally-constrained model of how much charge is captured and released in order to reconstruct the image. Applied to images (with or without post-flash) after they are acquired, the software is currently available as a standalone routine. The correction will be incorporated into the standard WFC3 calibration pipeline.
Proceedings of the SMRM Degradation Study Workshop
NASA Technical Reports Server (NTRS)
1985-01-01
The proceedings of the Solar Maximum Repair Mission Degradation Study Workshop, held at the Goddard Space Flight Center in Greenbelt, Maryland on May 9 to 10, 1985 are contained. The results of tests and studies of the returned Solar Maximum Mission hardware and materials are reported. Specifically, the workshop was concerned with the effects of four years' exposure to a low-Earth orbit environment. To provide a background for the reported findings, the summary includes a short description of the Solar Maximum Mission and the Solar Maximum Repair Mission.
Igniter adapter-to-igniter chamber deflection test
NASA Technical Reports Server (NTRS)
Cook, M.
1990-01-01
Testing was performed to determine the maximum RSRM igniter adapter-to-igniter chamber joint deflection at the crown of the inner joint primary seal. The deflection data were gathered to support igniter inner joint gasket resiliency predictions, which led to launch commit criteria temperature determinations. The proximity (deflection) gage holes for the first test (Test No. 1) were incorrectly located; therefore, the test was declared a non-test. Prior to Test No. 2, the test article configuration was modified with the correct proximity gage locations. Deflection data were successfully acquired during Test No. 2. However, the proximity gage deflection measurements were adversely affected by temperature increases, and deflections measured after the temperature rise at the proximity gages were considered unreliable. An analysis was performed to predict the maximum deflections based on the reliable data measured before the detectable temperature rise. Deflections at the primary seal crown location were adjusted to correspond to the time of maximum expected operating pressure (2,159 psi), to account for proximity gage bias, and to account for maximum attach and special bolt relaxation. The maximum joint deflection for the igniter inner joint at the crown of the primary seal, accounting for all significant correction factors, was 0.0031 in. (3.1 mil). Since the predicted (0.003 in.) and tested maximum deflection values were sufficiently close, the launch commit criteria were not changed as a result of this test. Data from this test should be used to determine whether the igniter inner joint gasket seals are capable of maintaining sealing capability at a joint displacement of 1.4 × 0.0031 in. = 0.00434 in. Additional testing should be performed to increase the database on igniter deflections and address launch commit criteria temperatures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marques da Silva, A; Narciso, L
Purpose: Commercial workstations usually have their own software to calculate dynamic renal functions. However, they usually offer little flexibility and rely on subjective delimitation of kidney and background areas. The aim of this paper is to present a public-domain software tool, called RenalQuant, capable of semi-automatically drawing regions of interest on dynamic renal scintigraphies, extracting data and generating renal function quantification parameters. Methods: The software was developed in Java and written as an ImageJ-based plugin. The preprocessing and segmentation steps include the user's selection of one time frame with higher activity in the kidney regions than in the background and low activity in the liver. Next, the chosen time frame is smoothed using a Gaussian low-pass spatial filter (σ = 3) for noise reduction and better delimitation of the kidneys. The maximum entropy thresholding method is used for segmentation. A background area is automatically placed below each kidney, and the user confirms that these regions are correctly segmented and positioned. Quantitative data are extracted, and each renogram and relative renal function (RRF) value is calculated and displayed. Results: The RenalQuant plugin was validated using 20 retrospective patients' 99mTc-DTPA exams and compared with results produced by commercial workstation software, referred to as the reference. The renogram intraclass correlation coefficients (ICC) were calculated, and false-negative and false-positive RRF values were analyzed. ICC values between the RenalQuant plugin and the reference software for both kidneys' renograms were higher than 0.75, showing excellent reliability. Conclusion: Our results indicated that the RenalQuant plugin can be used with confidence to generate renograms, using DICOM dynamic renal scintigraphy exams as input. It is user friendly, and user interaction occurs at a minimum level.
Further studies should investigate how to increase RRF accuracy and how to overcome limitations in the segmentation step, mainly when the background region has higher activity than the kidneys. Financial support by CAPES.
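The maximum entropy thresholding step named in the abstract can be sketched directly from the gray-level histogram (Kapur's method). A minimal NumPy version, with a synthetic bimodal frame standing in for a renal scintigraphy time frame:

```python
import numpy as np

def max_entropy_threshold(image, nbins=256):
    """Kapur maximum-entropy thresholding: choose the gray level that
    maximizes the summed Shannon entropies of the background and
    foreground histogram partitions."""
    hist, edges = np.histogram(image, bins=nbins)
    p = hist.astype(float) / hist.sum()
    P = np.cumsum(p)                       # cumulative mass below each bin
    eps = 1e-12
    best_t, best_h = 0, -np.inf
    for t in range(1, nbins - 1):
        pb, pf = P[t], 1.0 - P[t]
        if pb < eps or pf < eps:
            continue
        b = p[:t + 1] / pb                 # renormalized background part
        f = p[t + 1:] / pf                 # renormalized foreground part
        hb = -np.sum(b[b > eps] * np.log(b[b > eps]))
        hf = -np.sum(f[f > eps] * np.log(f[f > eps]))
        if hb + hf > best_h:
            best_h, best_t = hb + hf, t
    return edges[best_t + 1]               # threshold as a gray value

# Hypothetical bimodal frame: dim background vs. a bright kidney region.
rng = np.random.default_rng(0)
img = rng.normal(30.0, 5.0, (64, 64))
img[20:40, 20:40] = rng.normal(200.0, 10.0, (20, 20))
thr = max_entropy_threshold(img)
```

The real plugin applies this after Gaussian smoothing of the selected time frame, which sharpens the histogram's two modes before the entropy search.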
49 CFR Appendix E to Part 227 - Use of Insert Earphones for Audiometric Testing
Code of Federal Regulations, 2010 CFR
2010-10-01
... RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION OCCUPATIONAL NOISE EXPOSURE Pt. 227, App. E Appendix.... B. Technicians who conduct audiometric tests must be trained to insert the earphones correctly into... audiometer. IV. Background Noise Levels Testing shall be conducted in a room where the background ambient...
Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika
2017-10-01
In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC to scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. 
Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET/MR brain imaging. The SSS algorithm was not affected significantly by MRAC. The performance of the MC-SSS algorithm is comparable but not superior to TF-SSS, warranting further investigation of algorithm optimization and performance with different radiotracers and time-of-flight imaging. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Lühr, B; Scheller, J; Meyer, P; Kramer, W
1998-02-01
We have analysed the correction of defined mismatches in wild-type and msh2, msh3, msh6 and msh3 msh6 mutants of Saccharomyces cerevisiae in two different yeast strain backgrounds by transformation with plasmid heteroduplex DNA constructs. Ten different base/base mismatches, two single-nucleotide loops and a 38-nucleotide loop were tested. Repair of all types of mismatches was severely impaired in msh2 and msh3 msh6 mutants. In msh6 mutants, repair efficiency of most base/base mismatches was reduced to a similar extent as in msh3 msh6 double mutants. G/T and A/C mismatches, however, displayed residual repair in msh6 mutants in one strain background, implying a role for Msh3p in recognition of base/base mismatches. Furthermore, the efficiency of repair of base/base mismatches was considerably reduced in msh3 mutants in one strain background, indicating a requirement for MSH3 for fully efficient mismatch correction. Also the efficiency of repair of the 38-nucleotide loop was reduced in msh3 mutants, and to a lesser extent in msh6 mutants. The single-nucleotide loop with an unpaired A was less efficiently repaired in msh3 mutants and that with an unpaired T was less efficiently corrected in msh6 mutants, indicating non-redundant functions for the two proteins in the recognition of single-nucleotide loops.
Smith, Kenneth J
2011-02-01
Lambert and Hogan (2010) examined the relations between work-family conflict, role stress, and other noted predictors, on reported emotional exhaustion among a sample of 272 correctional staff at a maximum security prison. Using an ordinary least squares (OLS) regression model, the authors found work-on-family conflict, perceived dangerousness of the job, and role strain to have positive relations with emotional exhaustion. However, contrary to expectation they found that custody officers reported lower exhaustion than did their noncustody staff counterparts. Suggestions are provided for follow-up efforts designed to extend this line of research and correct methodological issues.
Volterra Transfer Functions from Pulse Tests for Mildly Nonlinear Channels.
1983-07-01
[OCR residue of a Fortran input-option listing; the recoverable settings are: a printing option (0 = minimal printing, 1 = maximum printing), an IFIX noise-correction option (0 = no correction, 1 = do noise correction), IPL1 and IPL2 plot options, and READ statements for run parameters (N1, ISKIP, IBIAS, DELTA, VAR, DFAC, TITLE).]
NASA Technical Reports Server (NTRS)
Borsody, J.
1976-01-01
Equations are derived by using the maximum principle to maximize the payload of a reusable tug for planetary missions. The analysis includes a correction for precession of the space shuttle orbit. The tug returns to this precessed orbit (within a specified time) and makes the required nodal correction. A sample case is analyzed that represents an inner planet mission as specified by a fixed declination and right ascension of the outgoing asymptote and the mission energy. The reusable stage performance corresponds to that of a typical cryogenic tug. Effects of space shuttle orbital inclination, several trajectory parameters, and tug thrust on payload are also investigated.
Roosevelt Hot Springs, Utah FORGE Stress Logging Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLennan, John
This spreadsheet consists of data and graphs from stress testing of deep well 58-32 over the 6900-7500 ft depth interval. Measured stress data were used to correct logging predictions of in situ stress. The stress plots show pore pressure (measured during the injection testing), the total vertical in situ stress (determined from the density logging) and the total maximum and minimum horizontal stresses. The horizontal stresses were determined from the DSI (Dipole Sonic Imager) and corrected to match the direct measurements.
Reconstruction of Laser-Induced Surface Topography from Electron Backscatter Diffraction Patterns.
Callahan, Patrick G; Echlin, McLean P; Pollock, Tresa M; De Graef, Marc
2017-08-01
We demonstrate that the surface topography of a sample can be reconstructed from electron backscatter diffraction (EBSD) patterns collected with a commercial EBSD system. This technique combines the location of the maximum background intensity with a correction from Monte Carlo simulations to determine the local surface normals at each point in an EBSD scan. A surface height map is then reconstructed from the local surface normals. In this study, a Ni sample was machined with a femtosecond laser, which causes the formation of a laser-induced periodic surface structure (LIPSS). The topography of the LIPSS was analyzed using atomic force microscopy (AFM) and reconstructions from EBSD patterns collected at 5 and 20 kV. The LIPSS consisted of a combination of low frequency waviness due to curtaining and high frequency ridges. The morphology of the reconstructed low frequency waviness and high frequency ridges matched the AFM data. The reconstruction technique does not require any modification to existing EBSD systems and so can be particularly useful for measuring topography and its evolution during in situ experiments.
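Reconstructing a height map from per-pixel surface gradients is a standard integration problem. One common least-squares approach, Frankot-Chellappa integration in the Fourier domain (which may differ from the authors' exact method), can be sketched with NumPy on a synthetic periodic surface:

```python
import numpy as np

def integrate_normals(p, q, spacing=1.0):
    """Frankot-Chellappa integration: recover a height map (up to a
    constant offset) from gradient fields p = dz/dx, q = dz/dy by
    solving the least-squares Poisson problem in the Fourier domain."""
    ny, nx = p.shape
    wx = 2.0 * np.pi * np.fft.fftfreq(nx, d=spacing)[None, :]
    wy = 2.0 * np.pi * np.fft.fftfreq(ny, d=spacing)[:, None]
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = wx**2 + wy**2
    denom[0, 0] = 1.0                      # avoid dividing by zero at DC
    Z = (-1j * wx * P - 1j * wy * Q) / denom
    Z[0, 0] = 0.0                          # height offset is arbitrary
    return np.real(np.fft.ifft2(Z))

# Synthetic periodic surface and its analytic gradients.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x)
z_true = np.sin(X) + np.cos(Y)
p, q = np.cos(X), -np.sin(Y)               # dz/dx, dz/dy
z_rec = integrate_normals(p, q, spacing=2 * np.pi / n)
```

In the EBSD application the gradient fields would come from the local surface normals estimated at each scan point, with the Monte Carlo correction applied beforehand.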
Audiologic and subjective evaluation of Baha® Attract device.
Pérez-Carbonell, Tomàs; Pla-Gil, Ignacio; Redondo-Martínez, Jaume; Morant-Ventura, Antonio; García-Callejo, Francisco Javier; Marco-Algarra, Jaime
We included 9 patients implanted with the Baha® Attract. All patients were evaluated by free-field tonal audiometry, free-field verbal audiometry and free-field verbal audiometry with background noise; all tests were performed with and without the device. To evaluate the subjective component of the implantation, we used the Glasgow Benefit Inventory (GBI) and the Abbreviated Profile of Hearing Aid Benefit (APHAB). The auditory assessment with the device showed average auditory thresholds of 35.8 dB, an improvement of 25.8 dB over the previous situation. Speech reception thresholds were 37 dB with the Baha® Attract, an improvement of 23 dB. Maximum discrimination thresholds showed an average gain of 60 dB with the device. The Baha® Attract achieves auditory improvements in patients for whom it is correctly indicated, with a consequent positive subjective evaluation. This study shows the attenuation effect of transcutaneous transmission, which prevents the device from achieving greater improvements. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.
Improving Earth/Prediction Models to Improve Network Processing
NASA Astrophysics Data System (ADS)
Wagner, G. S.
2017-12-01
The United States Atomic Energy Detection System (USAEDS) primary seismic network consists of a relatively small number of arrays and three-component stations. The small number of stations in the USAEDS primary network makes it both necessary and feasible to optimize both station and network processing. Station processing improvements include detector tuning efforts that use Receiver Operating Characteristic (ROC) curves to help judiciously set acceptable Type 1 (false alarm) vs. Type 2 (miss) error rates. Other station processing improvements include the use of empirical/historical observations and continuous background noise measurements to compute time-varying, maximum-likelihood probability-of-detection thresholds. The USAEDS network processing software makes extensive use of the azimuth and slowness information provided by frequency-wavenumber analysis at array sites and polarization analysis at three-component sites. Most of the improvements in USAEDS network processing are due to improvements in the models used to predict azimuth, slowness, and probability of detection. Kriged travel-time, azimuth, and slowness corrections, and their associated uncertainties, are computed using a ground-truth database. Improvements in station processing and the use of improved models for azimuth, slowness, and probability of detection have led to significant improvements in USAEDS network processing.
Finite temperature corrections to tachyon mass in intersecting D-branes
NASA Astrophysics Data System (ADS)
Sethi, Varun; Chowdhury, Sudipto Paul; Sarkar, Swarnendu
2017-04-01
We continue the analysis of finite temperature corrections to the tachyon mass in intersecting branes initiated in [1]. In this paper we extend the computation to intersecting D3 branes by considering a setup of two intersecting branes in a flat-space background. A holographic model dual to a BCS superconductor, consisting of intersecting D8 branes in a D4 brane background, was proposed in [2]. The background considered here is a simplified configuration of this dual model. We compute the one-loop tachyon amplitude in the Yang-Mills approximation and show that the result is finite. Analyzing the amplitudes further, we numerically compute the transition temperature at which the tachyon becomes massless. The analytic expressions for the one-loop amplitudes obtained here reduce to those for intersecting D1 branes obtained in [1] as well as those for intersecting D2 branes.
Berglund, Lars; Garmo, Hans; Lindbäck, Johan; Svärdsudd, Kurt; Zethelius, Björn
2008-09-30
The least-squares estimator of the slope in a simple linear regression model is biased towards zero when the predictor is measured with random error. A corrected slope may be estimated by adding data from a reliability study, which comprises a subset of subjects from the main study. The precision of this corrected slope depends on the design of the reliability study and on the choice of estimator. Previous work has assumed that the reliability study constitutes a random sample from the main study. A more efficient design is to use subjects with extreme values on their first measurement. Previously, we published a variance formula for the corrected slope, when the correction factor is the slope in the regression of the second measurement on the first. In this paper we show that both designs are improved by maximum likelihood estimation (MLE). The precision gain is explained by the inclusion of data from all subjects for estimation of the predictor's variance and by the use of the second measurement for estimation of the covariance between response and predictor. The gain from MLE increases with a stronger true relationship between response and predictor and with lower precision in the predictor measurements. We present a real-data example on the relationship between fasting insulin, a surrogate marker, and true insulin sensitivity measured by a gold-standard euglycaemic insulin clamp, together with simulations in which the behavior of profile-likelihood-based confidence intervals is examined. MLE was shown to be a robust estimator for non-normal distributions and efficient for small-sample situations. Copyright (c) 2008 John Wiley & Sons, Ltd.
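The attenuation effect and the variance-ratio correction it motivates can be illustrated with a short simulation. This is a textbook sketch that assumes the measurement-error variance is known; the paper's reliability-study designs and MLE are more elaborate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
beta = 2.0                                   # true slope
x_true = rng.normal(0.0, 1.0, n)
y = beta * x_true + rng.normal(0.0, 1.0, n)

sigma_u = 0.8                                # SD of predictor measurement error
x_obs = x_true + rng.normal(0.0, sigma_u, n) # error-contaminated predictor

naive = np.polyfit(x_obs, y, 1)[0]           # least-squares slope, attenuated
reliability = 1.0 - sigma_u**2 / np.var(x_obs)
corrected = naive / reliability              # classic disattenuation
```

With these settings the naive slope shrinks towards roughly beta times the reliability ratio, and dividing by that ratio recovers the true slope up to sampling noise.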
Rips, L J
2001-03-01
According to one view of reasoning, people can evaluate arguments in at least two qualitatively different ways: in terms of their deductive correctness and in terms of their inductive strength. According to a second view, assessments of both correctness and strength are a function of an argument's position on a single psychological continuum (e.g., subjective conditional probability). A deductively correct argument is one with the maximum value on this continuum; a strong argument is one with a high value. The present experiment tested these theories by asking participants to evaluate the same set of arguments for correctness and strength. The results produced an interaction between type of argument and instructions: In some conditions, participants judged one argument deductively correct more often than a second, but judged the second argument inductively strong more often than the first. This finding supports the view that people have distinct ways to evaluate arguments.
Single molecule sequencing-guided scaffolding and correction of draft assemblies.
Zhu, Shenglong; Chen, Danny Z; Emrich, Scott J
2017-12-06
Although single molecule sequencing is still improving, the lengths of the generated sequences are a clear advantage in genome assembly. Prior work that utilizes long reads for genome assembly has mostly focused on correcting sequencing errors and improving the contiguity of de novo assemblies. We propose a disassembling-reassembling approach for both correcting structural errors in the draft assembly and scaffolding a target assembly based on error-corrected single molecule sequences. To achieve this goal, we formulate a maximum alternating path cover problem. We prove that this problem is NP-hard, and solve it with a 2-approximation algorithm. Our experimental results show that our approach can improve the structural correctness of target assemblies at the cost of some contiguity, even with smaller amounts of long reads. In addition, our reassembling process can also serve as a competitive scaffolder relative to well-established assembly benchmarks.
Maximum drag reduction simulation using rodlike polymers.
Gillissen, J J J
2012-10-01
Simulations of maximum drag reduction (MDR) in channel flow using constitutive equations for suspensions of noninteracting rods predict a few-fold larger turbulent kinetic energy than in experiments using rodlike polymers. These differences are attributed to the neglect of interactions between polymers in the simulations. Despite these inconsistencies the simulations correctly reproduce the essential features of MDR, with universal profiles of the mean flow and the shear stress budgets that do not depend on the polymer concentration.
The Industrial Energy Consumers of America (IECA) joins the U.S. Chamber of Commerce in its request for correction of information developed by the Environmental Protection Agency (EPA) in a background technical support document titled Greenhouse Gas Emissions Reporting from the Petroleum and Natural Gas Industry
Practitioner Review: Use of Antiepileptic Drugs in Children
ERIC Educational Resources Information Center
Guerrini, Renzo; Parmeggiani, Lucio
2006-01-01
Background: The aim in treating epilepsy is to minimise or control seizures with full respect of quality-of-life issues, especially of cognitive functions. Optimal treatment first demands a correct recognition of the major type of seizures, followed by a correct diagnosis of the type of epilepsy or of the specific syndrome. Methods: Review of data…
75 FR 44901 - Extended Carryback of Losses to or From a Consolidated Group; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-30
.... 7805 * * * 0 Par. 2. Section 1.1502-21T(b)(3)(v) is amended by revising paragraphs (B), (C)(1), (C)(2...: Grid Glyer, (202) 622-7930 (not a toll-free number). SUPPLEMENTARY INFORMATION: Background The final... in 26 CFR Part 1 Income taxes, Reporting and recordkeeping requirements. Correction of Publication 0...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-07
... correction is effective July 7, 2010. FOR FURTHER INFORMATION CONTACT: Dr. Lisa Rotterman (907-271-1692), lisa[email protected] . SUPPLEMENTARY INFORMATION: Background On June 29, 2010, NMFS published a... lion (75 FR 37385). NMFS inadvertently gave incorrect e-mail and fax information. The correct email is...
ERIC Educational Resources Information Center
McCray, Erica D.; Ribuffo, Cecelia; Lane, Holly; Murphy, Kristin M.; Gagnon, Joseph C.; Houchins, David E.; Lambert, Richard G.
2018-01-01
Background: The well-documented statistics regarding the academic struggles of incarcerated youth are disconcerting, and efforts to improve reading performance among this population are greatly needed. There is a dearth of research that provides rich and detailed accounts of reading intervention implementation in the juvenile corrections setting.…
Savalei, Victoria
2018-01-01
A new type of nonnormality correction to the RMSEA has recently been developed, which has several advantages over existing corrections. In particular, the new correction adjusts the sample estimate of the RMSEA for the inflation due to nonnormality, while leaving its population value unchanged, so that established cutoff criteria can still be used to judge the degree of approximate fit. A confidence interval (CI) for the new robust RMSEA based on the mean-corrected ("Satorra-Bentler") test statistic has also been proposed. Follow-up work has provided the same type of nonnormality correction for the CFI (Brosseau-Liard & Savalei, 2014). These developments have recently been implemented in lavaan. This note has three goals: (a) to show how to compute the new robust RMSEA and CFI from the mean-and-variance corrected test statistic; (b) to offer a new CI for the robust RMSEA based on the mean-and-variance corrected test statistic; and (c) to caution that the logic of the new nonnormality corrections to RMSEA and CFI is most appropriate for the maximum likelihood (ML) estimator, and cannot easily be generalized to the most commonly used categorical data estimators.
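For orientation, the conventional RMSEA point estimate from a chi-square-type statistic follows the standard formula sketched below, with hypothetical numbers. The robust corrections discussed in this note additionally rescale the statistic for nonnormality, which is not reproduced here:

```python
import math

def rmsea(T, df, n):
    """Sample RMSEA from a chi-square test statistic T with df degrees
    of freedom and sample size n (the ML-based definition)."""
    return math.sqrt(max((T - df) / (df * (n - 1)), 0.0))

# Hypothetical model fit: T = 85.4, df = 24, n = 300.
point = rmsea(85.4, 24, 300)
perfect = rmsea(20.0, 24, 300)   # T below df truncates to zero
```

A robust version would feed a nonnormality-rescaled statistic through the same formula while keeping the population RMSEA, and hence the usual cutoffs, unchanged.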
Lindhardt, T B; Hesse, B; Gadsbøll, N
1997-01-01
The purpose of this study was to determine the accuracy of determinations of left ventricular ejection fraction (LVEF) by a nonimaging miniature nuclear detector system (Cardioscint) and to evaluate the feasibility of long-term LVEF monitoring in patients admitted to the coronary care unit, with special reference to the blood-labeling technique. Cardioscint LVEF values were compared with measurements of LVEF by conventional gamma camera radionuclide ventriculography in 33 patients with a wide range of LVEF values. In 21 of the 33 patients, long-term monitoring was carried out for 1 to 4 hours (mean 186 minutes), with three different kits: one for in vivo and two for in vitro red blood cell labeling. The stability of the labeling was assessed by determination of the activity of blood samples taken during the first 24 hours after blood labeling. The agreement between Cardioscint LVEF and gamma camera LVEF was good with automatic background correction (r = 0.82; regression equation y = 1.04x + 3.88) but poor with manual background correction (r = 0.50; y = 0.88x - 0.55). The agreement was highest in patients without wall motion abnormalities. The long-term monitoring showed no difference between morning and afternoon Cardioscint LVEF values. Short-lasting fluctuations in LVEFs greater than 10 EF units were observed in the majority of the patients. After 24 hours, the mean reduction in the physical decay-corrected count rate of the blood samples was most pronounced for the two in vitro blood-labeling kits (57% +/- 9% and 41% +/- 3%) and less for the in vivo blood-labeling kit (32% +/- 26%). This "biologic decay" had a marked influence on the Cardioscint monitoring results, demanding frequent background correction. A fairly accurate estimate of LVEF can be obtained with the nonimaging Cardioscint system, and continuous bedside LVEF monitoring can proceed for hours with little inconvenience to the patients. 
Instability of the red blood cell labeling during long-term monitoring necessitates frequent background correction.
NASA Technical Reports Server (NTRS)
Mccallister, R. D.; Crawford, J. J.
1981-01-01
It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 Gbps. To guarantee acceptable data quality during periods of signal attenuation it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.
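A maximum-likelihood convolutional decoder is, at its core, the Viterbi algorithm. The toy rate-1/2, constraint-length-3 code below (generators 7 and 5 octal) is a minimal illustration of the principle, far simpler than the single-chip MCD discussed above:

```python
G = (0b111, 0b101)   # rate-1/2 generators (7, 5 octal), constraint length 3

def parity(x):
    return bin(x).count("1") % 2

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                 # [input, prev, prev-prev]
        out += [parity(reg & g) for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received):
    """Maximum-likelihood (minimum Hamming distance) decoding."""
    metrics, paths = {0: 0}, {0: []}
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_m, new_p = {}, {}
        for state, m in metrics.items():
            for b in (0, 1):
                reg = (b << 2) | state
                expected = [parity(reg & g) for g in G]
                cost = m + sum(e != x for e, x in zip(expected, r))
                nxt = reg >> 1
                if nxt not in new_m or cost < new_m[nxt]:
                    new_m[nxt], new_p[nxt] = cost, paths[state] + [b]
        metrics, paths = new_m, new_p
    return paths[min(metrics, key=metrics.get)]

msg = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]   # trailing zeros flush the encoder
code = encode(msg)
code[3] ^= 1                           # inject one channel bit error
decoded = viterbi_decode(code)
```

This code has free distance 5, so a single channel error is always corrected; a hardware MCD applies the same survivor-path recursion at much higher rates.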
Analysis of different models for atmospheric correction of meteosat infrared images. A new approach
NASA Astrophysics Data System (ADS)
Pérez, A. M.; Illera, P.; Casanova, J. L.
A comparative study of several atmospheric correction models has been carried out. As primary data, atmospheric profiles of temperature and humidity obtained from radiosoundings on cloud-free days were used. Special attention has been paid to the model used operationally at the European Space Operations Centre (ESOC) for sea-temperature calculations. The atmospheric correction results are expressed in terms of the increase in brightness temperature and surface temperature. Differences of up to 1.4 degrees between the corrections obtained from the studied models have been observed. The radiances calculated by the models are also compared with those obtained directly from the satellite; the temperature corrections from the latter are greater than the former in practically every case. As a result, the operational calibration coefficients should first be recalculated if an atmospheric correction model is to be applied to the satellite data. Finally, a new simplified calculation scheme which may be introduced into any model is proposed.
Caliber Corrected Markov Modeling (C2M2): Correcting Equilibrium Markov Models.
Dixit, Purushottam D; Dill, Ken A
2018-02-13
Rate processes are often modeled using Markov State Models (MSMs). Suppose you know a prior MSM and then learn that your prediction of some particular observable rate is wrong. What is the best way to correct the whole MSM? For example, molecular dynamics simulations of protein folding may sample many microstates, possibly giving correct pathways through them while also giving the wrong overall folding rate when compared to experiment. Here, we describe Caliber Corrected Markov Modeling (C2M2), an approach based on the principle of maximum entropy for updating a Markov model by imposing state- and trajectory-based constraints. We show that such corrections are equivalent to asserting position-dependent diffusion coefficients in continuous-time continuous-space Markov processes modeled by a Smoluchowski equation. We derive the functional form of the diffusion coefficient explicitly in terms of the trajectory-based constraints. We illustrate with examples of 2D particle diffusion and an overdamped harmonic oscillator.
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating coefficients of variation. The dividing method, using a median filter to estimate background illumination, showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, also presented good results based on their low coefficients of variation. Contrast-limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy, and had a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using a median filter to estimate background, the quotient-based method and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation. PMID:25709940
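The dividing method and the coefficient-of-variation score can be sketched as follows. The synthetic image is an assumption, and a least-squares planar fit stands in here for the paper's median-filter background estimate:

```python
import numpy as np

def cv(img):
    """Coefficient of variation: the flatness score used to compare
    illumination-correction methods."""
    return float(np.std(img) / np.mean(img))

# Synthetic red channel: uniform tissue under a smooth illumination ramp.
rng = np.random.default_rng(2)
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w].astype(float)
illumination = 1.0 + 0.5 * xx / w               # brighter towards the right
img = (100.0 + rng.normal(0.0, 2.0, (h, w))) * illumination

# Dividing method: estimate the background illumination, then divide it out.
A = np.column_stack([np.ones(img.size), xx.ravel(), yy.ravel()])
coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
background = (A @ coef).reshape(h, w)
corrected = img / background
```

Dividing out the estimated background flattens the ramp, so the coefficient of variation of the corrected image drops well below that of the raw image.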
Evaluation of the Klobuchar model in TaiWan
NASA Astrophysics Data System (ADS)
Li, Jinghua; Wan, Qingtao; Ma, Guanyi; Zhang, Jie; Wang, Xiaolan; Fan, Jiangtao
2017-09-01
Ionospheric delay is the main error source in Global Navigation Satellite Systems (GNSS). Ionospheric models are one way to correct for the ionospheric delay: single-frequency GNSS users correct the delay using parameters broadcast by the satellites. The Klobuchar model is widely used in the Global Positioning System (GPS) and COMPASS because it is simple and convenient for real-time calculation. The model was established on observations mainly from Europe and the USA, and it does not describe the equatorial anomaly region. The south of China is located near the northern crest of the equatorial anomaly, where the ionosphere has complex spatial and temporal variation. Assessing the validity of the Klobuchar model in this area is therefore important for improving the model. Eleven years (2003-2014) of data from a GPS receiver located at Taoyuan, Taiwan (121°E, 25°N) are used to assess the validity of the Klobuchar model in Taiwan. Total electron content (TEC) calculated from the dual-frequency GPS observations is used as the reference, and TEC from the Klobuchar model is compared against it. The residual is defined as the difference between the Klobuchar-model TEC and the reference; it reflects the absolute correction of the model. The RMS correction percentage expresses the validity of the model relative to the observations. The residuals' long-term variation, the RMS correction percentage, and their changes with latitude are analyzed to assess the model. In some months the RMS correction did not reach the goal of 50% proposed by Klobuchar, especially in the winters of low-solar-activity years and at nighttime. The RMS correction depended neither on the 11-year solar activity nor on latitude. Unlike the RMS correction, the residuals changed with solar activity, similarly to the variation of TEC.
The residuals were large in the daytime, during the equinox seasons and in high-solar-activity years; they were small at night, during the solstice seasons, and in low-activity years. During 1300-1500 BJT in the high-activity years, the mean bias was negative, implying the model underestimated TEC on average. The maximum mean bias was 33 TECU in April 2014, and the maximum underestimation reached 97 TECU in October 2011. During 0000-0200 BJT, the residuals had a small mean bias, a small variation range and a small standard deviation, suggesting that the model describes the nighttime ionosphere better than the daytime. Besides varying with solar activity, the residuals also vary with latitude. The mean bias reached its maximum at 20-22°N, corresponding to the northern crest of the equatorial anomaly. At this latitude, the maximum mean bias was 47 TECU below the observation in the high-activity years, and 12 TECU below in the low-activity years. The minimum variation range appeared at 30-32°N in both high- and low-activity years, but the minimum mean bias was at different latitudes: at 30-32°N in the high-activity years and at 24-26°N in the low-activity years. For an ideal model, the residuals should have a small mean bias and a small variation range. Further study is needed to learn the distribution of the residuals and to improve the model.
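A heavily simplified sketch of the Klobuchar algorithm conveys why it struggles near the equatorial anomaly: the vertical delay is just a half-cosine daytime bump over a 5 ns night-time floor, parameterized by eight broadcast coefficients. The coefficients below are illustrative, and the obliquity factor and ionospheric pierce-point geometry of the full algorithm are omitted:

```python
import math

def klobuchar_delay_seconds(alpha, beta, phi_m, local_time_s):
    """Simplified Klobuchar vertical ionospheric delay at L1, in seconds.
    alpha/beta are the broadcast coefficient sets, phi_m the geomagnetic
    latitude in semicircles, local_time_s the local time in seconds."""
    amp = max(sum(a * phi_m**n for n, a in enumerate(alpha)), 0.0)
    per = max(sum(b * phi_m**n for n, b in enumerate(beta)), 72000.0)
    x = 2.0 * math.pi * (local_time_s - 50400.0) / per
    if abs(x) < math.pi / 2:
        return 5e-9 + amp * math.cos(x)   # daytime half-cosine bump
    return 5e-9                           # night-time constant floor

# Broadcast-style coefficients (illustrative values, not a real ephemeris).
alpha = (1.2e-8, 1.5e-8, -6.0e-8, -6.0e-8)
beta = (96000.0, 90000.0, -6.6e4, -4.6e5)
day = klobuchar_delay_seconds(alpha, beta, 0.25, 50400.0)   # local 14:00
night = klobuchar_delay_seconds(alpha, beta, 0.25, 7200.0)  # local 02:00
```

The fixed night-time floor and the single smooth daytime bump are exactly the features that cannot track the sharp equatorial-anomaly crest described above.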
Working on Extremum Problems with the Help of Dynamic Geometry Systems
ERIC Educational Resources Information Center
Gortcheva, Iordanka
2013-01-01
Two problems from high school mathematics on finding minimum or maximum are discussed. The focus is on students' approaches and difficulties in identifying a correct solution and how dynamic geometry systems can help.
E-ELT M5 field stabilisation unit scale 1 demonstrator design and performances evaluation
NASA Astrophysics Data System (ADS)
Casalta, J. M.; Barriga, J.; Ariño, J.; Mercader, J.; San Andrés, M.; Serra, J.; Kjelberg, I.; Hubin, N.; Jochum, L.; Vernet, E.; Dimmler, M.; Müller, M.
2010-07-01
The M5 Field Stabilisation Unit (M5FU) for the European Extremely Large Telescope (E-ELT) is a fast correcting optical system that shall provide tip-tilt corrections for the telescope's dynamic pointing errors and for the effects of atmospheric tip-tilt and wind disturbances. An M5FU scale 1 demonstrator (M5FU1D) is being built to assess the feasibility of the key elements (actuators, sensors, mirror, mirror interfaces) and the real-time control algorithm. The strict constraints (e.g. tip-tilt control frequency range 100 Hz, 3 m elliptical mirror size, mirror first eigenfrequency 300 Hz, maximum tip/tilt range +/- 30 arcsec, maximum tip-tilt error < 40 marcsec) have been a big challenge in developing the M5FU conceptual design and its scale 1 demonstrator. The paper summarises the proposed design for the final unit and demonstrator and the measured performance compared to the applicable specifications.
Kasaragod, Deepa; Makita, Shuichi; Hong, Young-Joo; Yasuno, Yoshiaki
2017-01-01
This paper presents a noise-stochastic corrected maximum a posteriori estimator for birefringence imaging using Jones matrix optical coherence tomography. The estimator described in this paper is based on the relationship between probability distribution functions of the measured birefringence and the effective signal to noise ratio (ESNR) as well as the true birefringence and the true ESNR. The Monte Carlo method is used to numerically describe this relationship and adaptive 2D kernel density estimation provides the likelihood for a posteriori estimation of the true birefringence. Improved estimation is shown for the new estimator with stochastic model of ESNR in comparison to the old estimator, both based on the Jones matrix noise model. A comparison with the mean estimator is also done. Numerical simulation validates the superiority of the new estimator. The superior performance of the new estimator was also shown by in vivo measurement of optic nerve head. PMID:28270974
Rohrman, Brittany; Richards-Kortum, Rebecca
2015-02-03
Recombinase polymerase amplification (RPA) may be used to detect a variety of pathogens, often after minimal sample preparation. However, previous work has shown that whole blood inhibits RPA. In this paper, we show that the concentrations of background DNA found in whole blood prevent the amplification of target DNA by RPA. First, using an HIV-1 RPA assay with known concentrations of nonspecific background DNA, we show that RPA tolerates more background DNA when higher HIV-1 target concentrations are present. Then, using three additional assays, we demonstrate that the maximum amount of background DNA that may be tolerated in RPA reactions depends on the DNA sequences used in the assay. We also show that changing the RPA reaction conditions, such as incubation time and primer concentration, has little effect on the ability of RPA to function when high concentrations of background DNA are present. Finally, we develop and characterize a lateral flow-based method for enriching the target DNA concentration relative to the background DNA concentration. This sample processing method enables RPA of 10^4 copies of HIV-1 DNA in a background of 0-14 μg of background DNA. Without lateral flow sample enrichment, the maximum amount of background DNA tolerated is 2 μg when 10^6 copies of HIV-1 DNA are present. This method requires no heating or other external equipment, may be integrated with upstream DNA extraction and purification processes, is compatible with the components of lysed blood, and has the potential to detect HIV-1 DNA in infant whole blood with high proviral loads.
Air core detectors for Cerenkov-free scintillation dosimetry of brachytherapy β-sources.
Eichmann, Marion; Thomann, Benedikt
2017-09-01
Plastic scintillation detectors are used for dosimetry in small radiation fields with high dose gradients, e.g., provided by β-emitting sources like 106Ru/106Rh eye plaques. A drawback is a background signal caused by Cerenkov radiation generated by electrons passing the optical fibers (light guides) of this dosimetry system. Common approaches to correct for the Cerenkov signal are influenced by uncertainties resulting from detector positioning and calibration procedures. A different approach to avoid any correction procedure is to suppress the Cerenkov signal by replacing the solid core optical fiber with an air core light guide, previously shown for external beam therapy. In this study, the air core concept is modified and applied to the requirements of dosimetry in brachytherapy, proving its usability for measuring water energy doses in small radiation fields. Three air core detectors with different air core lengths are constructed and their performance in dosimetry for brachytherapy β-sources is compared with a standard two-fiber system, which uses a second fiber for Cerenkov correction. The detector systems are calibrated with a 90Sr/90Y secondary standard and tested for their angular dependence as well as their performance in depth dose measurements of 106Ru/106Rh sources. The signal loss relative to the standard detector increases with increasing air core length to a maximum value of 58.3%. At the same time, however, the percentage of Cerenkov light in the total signal is reduced from at least 12.1% to below 1.1%. There is a linear correlation between induced dose and measured signal current. The air core detectors determine the dose rates for 106Ru/106Rh sources without any form of correction for the Cerenkov signal. The air core detectors show advantages over the standard two-fiber system, especially when measuring in radiation fields with high dose gradients. They can be used as simple one-fiber systems and allow for an almost Cerenkov-free scintillation dosimetry of brachytherapy β-sources. © 2017 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Nery, Jean Paul; Allen, Philip B.
2016-09-01
We develop a simple method to study the zero-point and thermally renormalized electron energy ε_{kn}(T), where kn labels the conduction-band minimum or valence-band maximum in polar semiconductors. We use the adiabatic approximation, including an imaginary broadening parameter iδ to suppress noise in the density-functional integrations. The finite δ also eliminates the polar divergence, which is an artifact of the adiabatic approximation. Nonadiabatic Fröhlich polaron methods then provide analytic expressions for the missing part of the contribution of the problematic optical phonon mode. We use this to correct the renormalization obtained from the adiabatic approximation. Test calculations are done for zinc-blende GaN on an 18×18×18 integration grid. The Fröhlich correction is of order -0.02 eV for the zero-point energy shift of the conduction band minimum, and +0.03 eV for the valence band maximum; the correction to the renormalization of the 3.28 eV gap is -0.05 eV, a significant fraction of the total zero-point renormalization of -0.15 eV.
Bias correction for estimated QTL effects using the penalized maximum likelihood method.
Zhang, J; Yue, C; Zhang, Y-M
2012-04-01
A penalized maximum likelihood method has been proposed as an important approach to the detection of epistatic quantitative trait loci (QTL). However, this approach is not optimal in two special situations: (1) closely linked QTL with effects in opposite directions and (2) small-effect QTL, because the method produces downwardly biased estimates of QTL effects. The present study aims to correct the bias by using correction coefficients and shifting from the use of a uniform prior on the variance parameter of a QTL effect to that of a scaled inverse chi-square prior. The results of Monte Carlo simulation experiments show that the improved method increases the power from 25 to 88% in the detection of two closely linked QTL of equal size in opposite directions and from 60 to 80% in the identification of QTL with small effects (0.5% of the total phenotypic variance). We used the improved method to detect QTL responsible for the barley kernel weight trait using 145 doubled haploid lines developed in the North American Barley Genome Mapping Project. Application of the proposed method to other shrinkage estimation of QTL effects is discussed.
Integrating paleoecology and genetics of bird populations in two sky island archipelagos
McCormack, John E; Bowen, Bonnie S; Smith, Thomas B
2008-01-01
Background Genetic tests of paleoecological hypotheses have been rare, partly because recent genetic divergence is difficult to detect and time. According to fossil plant data, continuous woodland in the southwestern USA and northern Mexico became fragmented during the last 10,000 years, as warming caused cool-adapted species to retreat to high elevations. Most genetic studies of resulting 'sky islands' have either failed to detect recent divergence or have found discordant evidence for ancient divergence. We test this paleoecological hypothesis for the region with intraspecific mitochondrial DNA and microsatellite data from sky-island populations of a sedentary bird, the Mexican jay (Aphelocoma ultramarina). We predicted that populations on different sky islands would share common, ancestral alleles that existed during the last glaciation, but that populations on each sky island, owing to their isolation, would contain unique variants of postglacial origin. We also predicted that divergence times estimated from corrected genetic distance and a coalescence model would post-date the last glacial maximum. Results Our results provide multiple independent lines of support for postglacial divergence, with the predicted pattern of shared and unique mitochondrial DNA haplotypes appearing in two independent sky-island archipelagos, and most estimates of divergence time based on corrected genetic distance post-dating the last glacial maximum. Likewise, an isolation model based on multilocus gene coalescence indicated postglacial divergence of five pairs of sky islands. In contrast to their similar recent histories, the two archipelagos had dissimilar historical patterns in that sky islands in Arizona showed evidence for older divergence, suggesting different responses to the last glaciation. 
Conclusion This study is one of the first to provide explicit support from genetic data for a postglacial divergence scenario predicted by one of the best paleoecological records in the world. Our results demonstrate that sky islands act as generators of genetic diversity at both recent and historical timescales and underscore the importance of thorough sampling and the use of loci with fast mutation rates to studies that test hypotheses concerning recent genetic divergence. PMID:18588695
Occupations at Case Closure for Vocational Rehabilitation Applicants with Criminal Backgrounds
ERIC Educational Resources Information Center
Whitfield, Harold Wayne
2009-01-01
The purpose of this study was to identify industries that hire persons with disabilities and criminal backgrounds. The researcher obtained data on 1,355 applicants for vocational rehabilitation services who were living in adult correctional facilities at the time of application. Service-based industries hired the most ex-inmates with disabilities…
Graviton propagator from background-independent quantum gravity.
Rovelli, Carlo
2006-10-13
We study the graviton propagator in Euclidean loop quantum gravity. We use spin foam, boundary-amplitude, and group-field-theory techniques. We compute a component of the propagator to first order, under some approximations, obtaining the correct large-distance behavior. This indicates a way for deriving conventional spacetime quantities from a background-independent theory.
A two-dimensional ACAR study of untwinned YBa2Cu3O(7-x)
NASA Astrophysics Data System (ADS)
Smedskjaer, L. C.; Bansil, A.
1991-12-01
We have carried out 2D-ACAR measurements on an untwinned single crystal of YBa2Cu3O(7-x) as a function of temperature, for five temperatures ranging from 30K to 300K. We show that these temperature-dependent 2D-ACAR spectra can be described to a good approximation as a superposition of two temperature independent spectra with temperature-dependent weighting factors. We show further how the data can be used to correct for the 'background' in the experimental spectrum. Such a 'background corrected' spectrum is in remarkable accord with the corresponding band theory predictions, and displays, in particular, clear signatures of the electron ridge Fermi surface.
Revised radiometric calibration technique for LANDSAT-4 Thematic Mapper data
NASA Technical Reports Server (NTRS)
Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.
1984-01-01
Depending on detector number, there are random fluctuations in the background level for spectral band 1 of magnitudes ranging from 2 to 3.5 digital numbers (DN). Similar variability is observed in all the other reflective bands, but with smaller magnitude in the range 0.5 to 2.5 DN. Observations of background reference levels show that line dependent variations in raw TM image data and in the associated calibration data can be measured and corrected within an operational environment by applying simple offset corrections on a line-by-line basis. The radiometric calibration procedure defined by the Canadian Center for Remote Sensing was revised accordingly in order to prevent striping in the output product.
McCaw, Travis J; Micka, John A; Dewerd, Larry A
2011-10-01
Gafchromic® EBT2 film has a yellow marker dye incorporated into the active layer of the film that can be used to correct the film response for small variations in thickness. This work characterizes the effect of the marker-dye correction on the uniformity and uncertainty of dose measurements with EBT2 film. The effect of variations in time postexposure on the uniformity of EBT2 is also investigated. EBT2 films were used to measure the flatness of a (60)Co field to provide a high-spatial-resolution evaluation of the film uniformity. As a reference, the flatness of the (60)Co field was also measured with Kodak EDR2 films. The EBT2 films were digitized with a flatbed document scanner 24, 48, and 72 h postexposure, and the images were analyzed using three methods: (1) the manufacturer-recommended marker-dye correction, (2) an in-house marker-dye correction, and (3) a net optical density (OD) measurement in the red color channel. The field flatness was calculated from orthogonal profiles through the center of the field using each analysis method, and the results were compared with the EDR2 measurements. Uncertainty was propagated through a dose calculation for each analysis method. The change in the measured field flatness for increasing times postexposure was also determined. Both marker-dye correction methods improved the field flatness measured with EBT2 film relative to the net OD method, with a maximum improvement of 1% using the manufacturer-recommended correction. However, the manufacturer-recommended correction also resulted in a dose uncertainty an order of magnitude greater than the other two methods. The in-house marker-dye correction lowered the dose uncertainty relative to the net OD method. The measured field flatness did not exhibit any unidirectional change with increasing time postexposure and showed a maximum change of 0.3%. The marker dye in EBT2 can be used to improve the response uniformity of the film.
Depending on the film analysis method used, however, application of a marker-dye correction can improve or degrade the dose uncertainty relative to the net OD method. The uniformity of EBT2 was found to be independent of the time postexposure.
Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.
2009-01-01
Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086
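The contrast between the two diagnostics can be illustrated outside the QTL setting. The sketch below is a simplification assuming an ordinary least-squares model rather than the MAPMAKER/SIBS variance-components likelihood; the function names and the planted outlier are illustrative only. Exact case deletion (ECD) refits the model with observation i removed, while the influence-function (EIF) route approximates that refit from quantities of the full fit. For OLS the one-step leave-one-out formula is an algebraic identity, which mirrors why the EIF can match ECD sensitivities at a fraction of the computational cost.

```python
import numpy as np

def ols_fit(X, y):
    # Least-squares coefficient estimates.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def exact_case_deletion(X, y, i):
    # ECD: refit the model with observation i removed.
    mask = np.arange(len(y)) != i
    return ols_fit(X[mask], y[mask])

def influence_approx(X, y, i):
    # One-step influence-function approximation:
    # beta_(-i) = beta - (X'X)^{-1} x_i r_i / (1 - h_ii)
    beta = ols_fit(X, y)
    XtX_inv = np.linalg.inv(X.T @ X)
    x_i = X[i]
    r_i = y[i] - x_i @ beta          # residual of observation i
    h_ii = x_i @ XtX_inv @ x_i       # leverage of observation i
    return beta - XtX_inv @ x_i * r_i / (1.0 - h_ii)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=50)
y[3] += 5.0  # plant one outlier

ecd = exact_case_deletion(X, y, 3)
eif = influence_approx(X, y, 3)
print(np.allclose(ecd, eif))  # → True: for OLS the one-step formula is exact
```

In the likelihood setting of the paper the one-step approximation is no longer exact, which is why the authors compare EIF sensitivities against the ECD benchmark.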
Characterization of 3-Dimensional PET Systems for Accurate Quantification of Myocardial Blood Flow.
Renaud, Jennifer M; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Eric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C; Turkington, Timothy G; Beanlands, Rob S; deKemp, Robert A
2017-01-01
Three-dimensional (3D) mode imaging is the current standard for PET/CT systems. Dynamic imaging for quantification of myocardial blood flow with short-lived tracers, such as 82Rb-chloride, requires accuracy to be maintained over a wide range of isotope activities and scanner counting rates. We proposed new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative imaging. 82Rb or 13N-ammonia (1,100-3,000 MBq) was injected into the heart wall insert of an anthropomorphic torso phantom. A decaying isotope scan was obtained over 5 half-lives on 9 different 3D PET/CT systems and 1 3D/2-dimensional PET-only system. Dynamic images (28 × 15 s) were reconstructed using iterative algorithms with all corrections enabled. Dynamic range was defined as the maximum activity in the myocardial wall with less than 10% bias, from which corresponding dead-time, counting rates, and/or injected activity limits were established for each scanner. Scatter correction residual bias was estimated as the maximum cavity blood-to-myocardium activity ratio. Image quality was assessed via the coefficient of variation measuring nonuniformity of the left ventricular myocardium activity distribution. Maximum recommended injected activity/body weight, peak dead-time correction factor, counting rates, and residual scatter bias for accurate cardiac myocardial blood flow imaging were 3-14 MBq/kg, 1.5-4.0, 22-64 Mcps singles and 4-14 Mcps prompt coincidence counting rates, and 2%-10% on the investigated scanners. Nonuniformity of the myocardial activity distribution varied from 3% to 16%. Accurate dynamic imaging is possible on the 10 3D PET systems if the maximum injected MBq/kg values are respected to limit peak dead-time losses during the bolus first-pass transit. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Knowledge of appropriate acetaminophen use: A survey of college-age women.
Stumpf, Janice L; Liao, Allison C; Nguyen, Stacy; Skyles, Amy J; Alaniz, Cesar
To evaluate college-age women's knowledge of appropriate doses and potential toxicities of acetaminophen, competency in interpreting Drug Facts label dosing information, and ability to recognize products containing acetaminophen. In this cross-sectional prospective study, a 20-item written survey was provided to female college students at a University of Michigan fundraising event in March 2015. A total of 203 female college students, 18-24 years of age, participated in the study. Pain was experienced on a daily or weekly basis by 22% of the subjects over the previous 6 months, and 83% reported taking acetaminophen. The maximum 3-gram daily dose of extra-strength acetaminophen was correctly identified by 64 participants; an additional 51 subjects indicated the generally accepted 4 grams daily as the maximum dose. When provided with the Tylenol Drug Facts label, 68.5% correctly identified the maximum amount of regular-strength acetaminophen recommended for a healthy adult. Hepatotoxicity was associated with high acetaminophen doses by 63.6% of participants, significantly more than those who selected distracter responses (P < 0.001). Knowledge of liver damage as a potential toxicity was correlated with age 20 years and older (P < 0.001) but was independent from race and ethnicity and level of alcohol consumption. Although more than one-half of the subjects (58.6%) recognized that Tylenol contained acetaminophen, fewer than one-fourth correctly identified other acetaminophen-containing products. Despite ongoing educational campaigns, a large proportion of the college-age women who participated in our study did not know and could not interpret the maximum recommended daily dose from Drug Facts labeling, did not know that liver damage was a potential toxicity of acetaminophen, and could not recognize acetaminophen-containing products. These data suggest a continued role for pharmacists in educational efforts targeted to college-age women. 
Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Large-scale fluctuations in the cosmic ionizing background: the impact of beamed source emission
NASA Astrophysics Data System (ADS)
Suarez, Teresita; Pontzen, Andrew
2017-12-01
When modelling the ionization of gas in the intergalactic medium after reionization, it is standard practice to assume a uniform radiation background. This assumption is not always appropriate; models with radiative transfer show that large-scale ionization rate fluctuations can have an observable impact on statistics of the Lyman α forest. We extend such calculations to include beaming of sources, which has previously been neglected but which is expected to be important if quasars dominate the ionizing photon budget. Beaming has two effects: first, the physical number density of ionizing sources is enhanced relative to that directly observed; and second, the radiative transfer itself is altered. We calculate both effects in a hard-edged beaming model where each source has a random orientation, using an equilibrium Boltzmann hierarchy in terms of spherical harmonics. By studying the statistical properties of the resulting ionization rate and H I density fields at redshift z ∼ 2.3, we find that the two effects partially cancel each other; combined, they constitute a maximum 5 per cent correction to the power spectrum P_HI(k) at k = 0.04 h Mpc^-1. On very large scales (k < 0.01 h Mpc^-1) the source density renormalization dominates; it can reduce, by an order of magnitude, the contribution of ionizing shot noise to the intergalactic H I power spectrum. The effects of beaming should be considered when interpreting future observational data sets.
77 FR 48112 - Pipeline Safety: Administrative Procedures; Updates and Technical Corrections
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-13
This Notice of Proposed Rulemaking updates the administrative civil penalty maximums for violation of the pipeline safety regulations to conform to current law, updates the informal hearing and adjudication process for pipeline enforcement matters to conform to current law, amends other administrative procedures used by PHMSA personnel, and makes other technical corrections and updates to certain administrative procedures. The proposed amendments do not impose any new operating, maintenance, or other substantive requirements on pipeline owners or operators.
2013-01-01
Background The purpose of this study was to evaluate the impact of Cone Beam CT (CBCT) based setup correction on total dose distributions in fractionated frameless stereotactic radiation therapy of intracranial lesions. Methods Ten patients with intracranial lesions treated with 30 Gy in 6 fractions were included in this study. Treatment planning was performed with Oncentra® for a SynergyS® (Elekta Ltd, Crawley, UK) linear accelerator with XVI® Cone Beam CT, and HexaPOD™ couch top. Patients were immobilized by thermoplastic masks (BrainLab, Reuther). After initial patient setup with respect to lasers, a CBCT study was acquired and registered to the planning CT (PL-CT) study. Patient positioning was corrected according to the correction values (translational, rotational) calculated by the XVI® system. Afterwards a second CBCT study was acquired and registered to the PL-CT to confirm the accuracy of the corrections. In-house software was used for rigid transformation of the PL-CT to the CBCT geometry, and dose calculations for each fraction were performed on the transformed CT. The total dose distribution was achieved by back-transformation and summation of the dose distributions of each fraction. Dose distributions based on PL-CT, CBCT (laser setup), and final CBCT were compared to assess the influence of setup inaccuracies. Results The mean displacement vector, calculated over all treatments, was reduced from (4.3 ± 1.3) mm for laser-based setup to (0.5 ± 0.2) mm if CBCT corrections were applied. The mean rotational errors around the medial-lateral, superior-inferior, anterior-posterior axis were reduced from (−0.1 ± 1.4)°, (0.1 ± 1.2)° and (−0.2 ± 1.0)°, to (0.04 ± 0.4)°, (0.01 ± 0.4)° and (0.02 ± 0.3)°. As a consequence the mean deviation between planned and delivered dose in the planning target volume (PTV) could be reduced from 12.3% to 0.4% for D95 and from 5.9% to 0.1% for Dav.
Maximum deviation was reduced from 31.8% to 0.8% for D95, and from 20.4% to 0.1% for Dav. Conclusion Real dose distributions differ substantially from planned dose distributions if setup is performed according to lasers only. Thermoplastic masks combined with daily CBCT enabled sufficient accuracy in the delivered dose distribution. PMID:23800172
Comparison of backgrounds in OSO-7 and SMM spectrometers and short-term activation in SMM
NASA Technical Reports Server (NTRS)
Dunphy, P. P.; Forrest, D. J.; Chupp, E. L.; Share, G. H.
1989-01-01
The backgrounds in the OSO-7 Gamma-Ray Monitor and the Solar Maximum Mission Gamma-Ray Spectrometer are compared. After scaling to the same volume, the background spectra agree to within 30 percent. This shows that analyses which successfully describe the background in one detector can be applied to similar detectors of different sizes and on different platforms. The background produced in the SMM spectrometer by a single trapped-radiation belt passage is also studied. This background is found to be dominated by a positron-annihilation line and a continuum spectrum with a high energy cutoff at 5 MeV.
ERIC Educational Resources Information Center
Fine, Michelle; Torre, Maria Elena; Boudin, Kathy; Bowen, Iris; Clark, Judith; Hylton, Donna; Martinez, Migdalia; Missy; Roberts, Rosemarie A.; Smart, Pamela; Upegui, Debora
The impact of college on women in a maximum-security prison was examined in a 3-year study of current and former inmates of New York's Bedford Hills Correctional Facility (BHCF). The data sources were as follows: (1) a review of program records; (2) one-on-one interviews of 65 inmates conducted by 15 inmates; (3) focus groups with 43 women in BHCF…
Resonant tube for measurement of sound absorption in gases at low frequency/pressure ratios
NASA Technical Reports Server (NTRS)
Zuckerwar, A. J.; Griffin, W. A.
1980-01-01
The paper describes a resonant tube for measuring sound absorption in gases, with specific emphasis on the vibrational relaxation peak of N2, over a range of frequency/pressure ratios from 0.1 to 2500 Hz/atm. The experimental background losses measured in argon agree with the theoretical wall losses except at a few isolated frequencies. Rigid cavity terminations, external excitation, and a differential technique of background evaluation were used to minimize spurious contributions to the background losses. Room-temperature measurements of sound absorption in binary N2-CO2 mixtures, in which both components are excitable, placed the N2 vibrational relaxation peak at a frequency/pressure ratio of 0.063 + 123m Hz/atm, where m is the mole percent of added CO2; the CO2 peak was at 34,500 - 268m Hz/atm, where m is the mole percent of added N2.
HST/WFC3: Evolution of the UVIS Channel's Charge Transfer Efficiency
NASA Astrophysics Data System (ADS)
Gosmeyer, Catherine; Baggett, Sylvia M.; Anderson, Jay; WFC3 Team
2016-06-01
The Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST) contains both an IR and a UVIS channel. After more than six years on orbit, the UVIS channel performance remains stable; however, on-orbit radiation damage has caused the charge transfer efficiency (CTE) of UVIS's two CCDs to degrade. This degradation is seen as vertical charge 'bleeding' from sources during readout and its effect evolves as the CCDs age. The WFC3 team has developed software to perform corrections that push the charge back to the sources, although it cannot recover faint sources that have been bled out entirely. Observers can mitigate this effect in various ways such as by placing sources near the amplifiers, observing bright targets, and by increasing the total background to at least 12 electrons, either by using a broader filter, lengthening exposure time, or post-flashing. We present results from six years of calibration data to re-evaluate the best level of total background for mitigating CTE loss and to re-verify that the pixel-based CTE correction software is performing optimally over various background levels. In addition, we alert observers that CTE-corrected products are now available for retrieval from MAST as part of the CALWF3 v3.3 pipeline upgrade.
McGee, Monnie; Chen, Zhongxue
2006-01-01
There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
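The conditional expectation at the heart of this background correction has a closed form. The sketch below is a minimal, self-contained illustration of the standard exponential-plus-normal convolution result (with φ and Φ the standard normal pdf and cdf), not the RMA source code; the parameter values are invented, and in practice μ, σ, and the exponential rate α must themselves be estimated from the observed PM intensities, which is precisely the estimation step the paper scrutinizes.

```python
import math

import numpy as np

def _phi(x):
    # Standard normal pdf.
    return np.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def _Phi(x):
    # Standard normal cdf via the error function.
    return 0.5 * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0)))

def rma_background_correct(o, mu, sigma, alpha):
    """E[signal | observed = o] under the convolution model
    observed = signal + background, with signal ~ Exponential(rate=alpha)
    and background ~ Normal(mu, sigma^2)."""
    o = np.asarray(o, dtype=float)
    a = o - mu - sigma**2 * alpha
    num = _phi(a / sigma) - _phi((o - a) / sigma)
    den = _Phi(a / sigma) + _Phi((o - a) / sigma) - 1.0
    return a + sigma * num / den

# Invented parameters: background ~ N(50, 10^2), signal rate alpha = 0.01.
observed = np.array([60.0, 100.0, 500.0])
corrected = rma_background_correct(observed, mu=50.0, sigma=10.0, alpha=0.01)
# Corrected intensities stay positive and below the observed values.
print(np.all(corrected > 0) and np.all(corrected < observed))  # → True
```

Because the correction returns a conditional mean, it never produces the negative intensities that naive background subtraction can.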
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@nao.ac.jp, E-mail: tof@astr.tohoku.ac.jp
This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise introduces additional moments and a centroid-shift error; the first-order effects cancel in averaging, but the second-order effects do not. We derive formulae that correct this systematic error due to the random count noise in measuring the moments and ellipticity of the background image. The correction formulae are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.
Item and source memory for emotional associates is mediated by different retrieval processes.
Ventura-Bort, Carlos; Dolcos, Florin; Wendt, Julia; Wirkner, Janine; Hamm, Alfons O; Weymar, Mathias
2017-12-12
Recent event-related potential (ERP) data showed that neutral objects encoded in emotional background pictures were better remembered than objects encoded in neutral contexts, when recognition memory was tested one week later. In the present study, we investigated whether this long-term memory advantage for items is also associated with correct memory for contextual source details. Furthermore, we were interested in the possibly dissociable contribution of familiarity and recollection processes (using a Remember/Know procedure). The results revealed that item memory performance was mainly driven by the subjective experience of familiarity, irrespective of whether the objects were previously encoded in emotional or neutral contexts. Correct source memory for the associated background picture, however, was driven by recollection and enhanced when the content was emotional. In ERPs, correctly recognized old objects evoked frontal ERP Old/New effects (300-500 ms), irrespective of context category. As in our previous study (Ventura-Bort et al., 2016b), retrieval for objects from emotional contexts was associated with larger parietal Old/New differences (600-800 ms), indicating stronger involvement of recollection. Thus, the results suggest a stronger contribution of recollection-based retrieval to item and contextual background source memory for neutral information associated with an emotional event. Copyright © 2017 Elsevier Ltd. All rights reserved.
Interstellar cyanogen and the temperature of the cosmic microwave background radiation
NASA Technical Reports Server (NTRS)
Roth, Katherine C.; Meyer, David M.; Hawkins, Isabel
1993-01-01
We present the results of a recently completed effort to determine the amount of CN rotational excitation in five diffuse interstellar clouds for the purpose of accurately measuring the temperature of the cosmic microwave background radiation (CMBR). In addition, we report a new detection of emission from the strongest hyperfine component of the 2.64 mm CN rotational transition (N = 1-0) in the direction toward HD 21483. We have used this result in combination with existing emission measurements toward our other stars to correct for local excitation effects within diffuse clouds which raise the measured CN rotational temperature above that of the CMBR. After making this correction, we find a weighted mean value of T(CMBR) = 2.729 (+0.023, -0.031) K. This temperature is in excellent agreement with the new COBE measurement of 2.726 +/- 0.010 K (Mather et al., 1993). Our result, which samples the CMBR far from the near-Earth environment, attests to the accuracy of the COBE measurement and reaffirms the cosmic nature of this background radiation. From the observed agreement between our CMBR temperature and the COBE result, we conclude that corrections for local CN excitation based on millimeter emission measurements provide an accurate adjustment to the measured rotational excitation.
A nonlinear lag correction algorithm for a-Si flat-panel x-ray detectors
Starman, Jared; Star-Lack, Josh; Virshup, Gary; Shapiro, Edward; Fahrig, Rebecca
2012-01-01
Purpose: Detector lag, or residual signal, in a-Si flat-panel (FP) detectors can cause significant shading artifacts in cone-beam computed tomography reconstructions. To date, most correction models have assumed a linear, time-invariant (LTI) model and correct lag by deconvolution with an impulse response function (IRF). However, the lag correction is sensitive to both the exposure intensity and the technique used for determining the IRF. Even when the LTI correction that produces the minimum error is found, residual artifact remains. A new non-LTI method was developed to take into account the IRF measurement technique and exposure dependencies. Methods: First, a multiexponential (N = 4) LTI model was implemented for lag correction. Next, a non-LTI lag correction, known as the nonlinear consistent stored charge (NLCSC) method, was developed based on the LTI multiexponential method. It differs from other nonlinear lag correction algorithms in that it maintains a consistent estimate of the amount of charge stored in the FP and it does not require intimate knowledge of the semiconductor parameters specific to the FP. For the NLCSC method, all coefficients of the IRF are functions of exposure intensity. Another nonlinear lag correction method that only used an intensity weighting of the IRF was also compared. The correction algorithms were applied to step-response projection data and CT acquisitions of a large pelvic phantom and an acrylic head phantom. The authors collected rising and falling edge step-response data on a Varian 4030CB a-Si FP detector operating in dynamic gain mode at 15 fps at nine incident exposures (2.0%–92% of the detector saturation exposure). For projection data, 1st and 50th frame lag were measured before and after correction. For the CT reconstructions, five pairs of ROIs were defined and the maximum and mean signal differences within a pair were calculated for the different exposures and step-response edge techniques. 
Results: The LTI corrections left residual 1st and 50th frame lag up to 1.4% and 0.48%, while the NLCSC lag correction reduced 1st and 50th frame residual lags to less than 0.29% and 0.0052%. For CT reconstructions, the NLCSC lag correction gave an average error of 11 HU for the pelvic phantom and 3 HU for the head phantom, compared to 14–19 HU and 2–11 HU for the LTI corrections and 15 HU and 9 HU for the intensity weighted non-LTI algorithm. The maximum ROI error was always smallest for the NLCSC correction. The NLCSC correction was also superior to the intensity weighting algorithm. Conclusions: The NLCSC lag algorithm corrected for the exposure dependence of lag, provided superior image improvement for the pelvic phantom reconstruction, and gave results similar to the best-case LTI results for the head phantom. The blurred ring artifact left over by the LTI corrections was more completely removed by the NLCSC correction in all cases. PMID:23039642
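The LTI branch of this correction can be sketched compactly. The code below is a hypothetical two-exponential-plus-impulse illustration (the paper uses an N = 4 multiexponential IRF, and the NLCSC variant additionally makes the IRF coefficients functions of exposure, which this sketch omits): a lagged step signal is simulated by recursive convolution with the IRF, then recovered by the matching recursive deconvolution. The weights and decay factors are invented for illustration.

```python
import numpy as np

def apply_lag(x, a, d):
    # Forward lag model: y[k] = sum_n a[n] * S_n[k], with per-term state
    # S_n[k] = x[k] + d[n] * S_n[k-1], where d[n] = exp(-alpha_n)
    # (d = 0 reduces a term to a pure impulse, i.e. no lag).
    s = np.zeros_like(a)
    y = np.empty_like(x)
    for k, xk in enumerate(x):
        s = xk + d * s
        y[k] = np.sum(a * s)
    return y

def correct_lag(y, a, d):
    # Recursive LTI deconvolution: solve
    # y[k] = h0 * x[k] + sum_n a[n] * d[n] * S_n[k-1]
    # for x[k], then update the stored-charge states S_n.
    s = np.zeros_like(a)
    x = np.empty_like(y)
    h0 = np.sum(a)
    for k, yk in enumerate(y):
        xk = (yk - np.sum(a * d * s)) / h0
        s = xk + d * s
        x[k] = xk
    return x

# Hypothetical IRF: 92% prompt signal plus two decaying lag terms.
a = np.array([0.92, 0.06, 0.02])
d = np.array([0.00, 0.60, 0.95])

truth = np.concatenate([np.full(20, 100.0), np.zeros(20)])  # step exposure
lagged = apply_lag(truth, a, d)       # residual signal persists after the step
restored = correct_lag(lagged, a, d)
print(np.allclose(restored, truth))   # → True: exact inverse of the forward model
```

The deconvolution is exact for the forward model it assumes; the residual lag reported in the paper arises because a real detector's IRF is exposure dependent, which is what the NLCSC extension addresses.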
The manufacture of moulded supportive seating for the handicapped.
Nelham, R L
1975-10-01
The wheelchair-bound population often has difficulty obtaining a correct or comfortable posture in their chairs, and some develop pressure sores from long-duration sitting. This problem is being addressed by manufacturing personalised, contoured seats which support the patient over the maximum possible area, thereby reducing the pressure on the body and the incidence of pressure sores. A cast is taken of the patient in a comfortable, medically correct posture, and from this cast the seat is vacuum formed in thermoplastic materials or hand laid up in glass fibre reinforced resin. Some correction of deformity may be achieved. It is also possible to use the moulded seat in a vehicle.
HST/WFC3: understanding and mitigating radiation damage effects in the CCD detectors
NASA Astrophysics Data System (ADS)
Baggett, S. M.; Anderson, J.; Sosey, M.; Gosmeyer, C.; Bourque, M.; Bajaj, V.; Khandrika, H.; Martlin, C.
2016-07-01
At the heart of the Hubble Space Telescope Wide Field Camera 3 (HST/WFC3) UVIS channel is a 4096x4096 pixel e2v CCD array. While these detectors continue to perform extremely well after more than 7 years in low-Earth orbit, the cumulative effects of radiation damage are becoming increasingly evident. The result is a continual increase in the hot-pixel population and the progressive loss of charge-transfer efficiency (CTE) over time. The decline in CTE has two effects: (1) it reduces the detected source flux as the defects trap charge during readout and (2) it systematically shifts source centroids as the trapped charge is later released. The flux losses can be significant, particularly for faint sources in low background images. In this report, we summarize the radiation damage effects seen in WFC3/UVIS and the evolution of the CTE losses as a function of time, source brightness, and image-background level. In addition, we discuss the available mitigation options, including target placement within the field of view, empirical stellar photometric corrections, post-flash mode and an empirical pixel-based CTE correction. The application of a post-flash has been remarkably effective in WFC3 at reducing CTE losses in low-background images for a relatively small noise penalty. Currently, all WFC3 observers are encouraged to consider post-flash for images with low backgrounds. Finally, a pixel-based CTE correction is available for use after the images have been acquired. Similar to the software in use in the HST Advanced Camera for Surveys (ACS) pipeline, the algorithm employs an observationally-defined model of how much charge is captured and released in order to reconstruct the image. As of Feb 2016, the pixel-based CTE correction is part of the automated WFC3 calibration pipeline. Observers with pre-existing data may request their images from MAST (Mikulski Archive for Space Telescopes) to obtain the improved products.
Correcting geometric and photometric distortion of document images on a smartphone
NASA Astrophysics Data System (ADS)
Simon, Christian; Williem; Park, In Kyu
2015-01-01
A set of document image processing algorithms for improving the optical character recognition (OCR) capability of smartphone applications is presented. The scope of the problem covers the geometric and photometric distortion correction of document images. The proposed framework was developed to satisfy industrial requirements. It is implemented on an off-the-shelf smartphone with limited resources in terms of speed and memory. Geometric distortions, i.e., skew and perspective distortion, are corrected by sending horizontal and vertical vanishing points toward infinity in a downsampled image. Photometric distortion includes image degradation from moiré pattern noise and specular highlights. Moiré pattern noise is removed using low-pass filters with different sizes independently applied to the background and text region. The contrast of the text in a specular highlighted area is enhanced by locally enlarging the intensity difference between the background and text while the noise is suppressed. Intensive experiments indicate that the proposed methods show a consistent and robust performance on a smartphone with a runtime of less than 1 s.
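The skew/perspective step can be illustrated with a direct linear transform. The sketch below assumes the four corners of the distorted document quadrilateral have already been located (the corner coordinates here are invented) and solves for the 3×3 homography mapping them to an axis-aligned rectangle; warping the image with that homography is equivalent to sending both vanishing points to infinity. This is not the paper's on-device implementation, which estimates the vanishing points from line content in a downsampled image rather than from known corners.

```python
import numpy as np

def homography_from_points(src, dst):
    # Direct linear transform: each correspondence (x, y) -> (u, v)
    # contributes two rows of the homogeneous system A h = 0.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The null vector (last right singular vector) is H up to scale.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_h(H, pt):
    # Apply H in homogeneous coordinates and dehomogenize.
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Invented example: corners of a perspective-distorted page mapped to a
# 400 x 600 upright rectangle.
src = [(110, 95), (505, 120), (480, 710), (90, 680)]
dst = [(0, 0), (400, 0), (400, 600), (0, 600)]
H = homography_from_points(src, dst)

ok = all(np.allclose(apply_h(H, s), d, atol=1e-6) for s, d in zip(src, dst))
print(ok)  # → True: every corner lands on its rectified position
```

With four point pairs the system has an exact solution; production pipelines typically resample the whole image through the inverse of H rather than mapping isolated points.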
2013-01-01
Background Rapid development of highly saturated genetic maps aids molecular breeding, which can accelerate gain per breeding cycle in woody perennial plants such as Rubus idaeus (red raspberry). Recently, robust genotyping methods based on high-throughput sequencing were developed, which provide high marker density, but result in some genotype errors and a large number of missing genotype values. Imputation can reduce the number of missing values and can correct genotyping errors, but current methods of imputation require a reference genome and thus are not an option for most species. Results Genotyping by Sequencing (GBS) was used to produce highly saturated maps for a R. idaeus pseudo-testcross progeny. While low coverage and high variance in sequencing resulted in a large number of missing values for some individuals, a novel method of imputation based on maximum likelihood marker ordering from initial marker segregation overcame the challenge of missing values, and made map construction computationally tractable. The two resulting parental maps contained 4521 and 2391 molecular markers spanning 462.7 and 376.6 cM respectively over seven linkage groups. Detection of precise genomic regions with segregation distortion was possible because of map saturation. Microsatellites (SSRs) linked these results to published maps for cross-validation and map comparison. Conclusions GBS together with genome-independent imputation provides a rapid method for genetic map construction in any pseudo-testcross progeny. Our method of imputation estimates the correct genotype call of missing values and corrects genotyping errors that lead to inflated map size and reduced precision in marker placement. Comparison of SSRs to published R. idaeus maps showed that the linkage maps constructed with GBS and our method of imputation were robust, and marker positioning reliable. The high marker density allowed identification of genomic regions with segregation distortion in
R. idaeus, which may help to identify deleterious alleles that are the basis of inbreeding depression in the species. PMID:23324311
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schurman, D.L.; Datesman, G.H. Jr; Truitt, J.O.
The report presents a system for evaluating and correcting deficiencies in security-force effectiveness in licensed nuclear facilities. There are four checklists which security managers can copy directly, or can use as guidelines for developing their own checklists. The checklists are keyed to corrective-action guides found in the body of the report. In addition to the corrective-action guides, the report gives background information on the nature of security systems and discussions of various special problems of the licensed nuclear industry.
Electroweak radiative corrections to the top quark decay
NASA Astrophysics Data System (ADS)
Kuruma, Toshiyuki
1993-12-01
The top quark, once produced, should be an important window on the electroweak symmetry breaking sector. We compute electroweak radiative corrections to the decay process t → b + W⁺ in order to extract information on the Higgs sector and to fix the background in searches for a possible new-physics contribution. The large Yukawa coupling of the top quark induces a new form factor through vertex corrections and causes a discrepancy with the tree-level longitudinal W-boson production fraction, but the effect is of order 1% or less for m_H < 1 TeV.
NASA Astrophysics Data System (ADS)
Feng, W.; Lemoine, J.-M.; Zhong, M.; Hsu, H. T.
2014-08-01
An annual amplitude of ∼18 cm mass-induced sea level variations (SLV) in the Red Sea is detected from the Gravity Recovery and Climate Experiment (GRACE) satellites and steric-corrected altimetry from 2003 to 2011. The annual mass variations in the region dominate the mean SLV, and generally reach their maximum in late January/early February. The annual steric component of the mean SLV is relatively small (<3 cm) and out of phase with the mass-induced SLV. In situ bottom pressure records at the eastern coast of the Red Sea validate the high mass variability observed by steric-corrected altimetry and GRACE. In addition, the horizontal water mass flux of the Red Sea estimated from GRACE and steric-corrected altimetry is validated by hydrographic observations.
46 CFR 113.50-15 - Loudspeakers.
Code of Federal Regulations, 2011 CFR
2011-10-01
.... With the vessel underway in normal conditions, the minimum sound pressure levels for broadcasting emergency announcements must be— (1) In interior spaces, 75 dB(A) or, if the background noise level exceeds 75 dB(A), then at least 20 dB(A) above maximum background noise level; and (2) In exterior spaces, 80...
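The interior-space rule quoted above reduces to a simple threshold computation. A sketch of that rule only (the exterior clause is truncated in this excerpt, so it is not modelled):

```python
def min_interior_spl(max_background_dba):
    """Minimum broadcast sound pressure level in interior spaces per the
    quoted rule: 75 dB(A), or at least 20 dB(A) above the maximum
    background noise level when that background exceeds 75 dB(A)."""
    if max_background_dba > 75.0:
        return max_background_dba + 20.0
    return 75.0
```

For example, a space with a 70 dB(A) background needs 75 dB(A), while an 82 dB(A) engine-room background would require 102 dB(A).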
Barry, Robert L.; Klassen, L. Martyn; Williams, Joy M.; Menon, Ravi S.
2008-01-01
A troublesome source of physiological noise in functional magnetic resonance imaging (fMRI) is due to the spatio-temporal modulation of the magnetic field in the brain caused by normal subject respiration. fMRI data acquired using echo-planar imaging are very sensitive to these respiratory-induced frequency offsets, which cause significant geometric distortions in images. Because these effects increase with main magnetic field, they can nullify the gains in statistical power expected from the use of higher magnetic fields. As a study of existing navigator correction techniques for echo-planar fMRI has shown that further improvements can be made in the suppression of respiratory-induced physiological noise, a new hybrid two-dimensional (2D) navigator is proposed. Using a priori knowledge of the slow spatial variations of these induced frequency offsets, 2D field maps are constructed for each shot using spatial frequencies between ±0.5 cm⁻¹ in k-space. For multi-shot fMRI experiments, we estimate that the improvement of hybrid 2D navigator correction over the best performance of one-dimensional navigator echo correction translates into a 15% increase in the volume of activation, 6% and 10% increases in the maximum and average t-statistics, respectively, for regions with high t-statistics, and 71% and 56% increases in the maximum and average t-statistics, respectively, in regions with low t-statistics due to contamination by residual physiological noise. PMID:18024159
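The field-map construction can be sketched as a k-space low-pass: keep only spatial frequencies within ±0.5 cm⁻¹ in each direction and inverse-transform to a smooth 2D map. A simplified illustration under assumed acquisition parameters (the grid size and k-space step are hypothetical), not the authors' navigator pipeline:

```python
import numpy as np

def lowpass_field_map(kspace, dk_cm, kmax_cm=0.5):
    """Zero all spatial frequencies with |k| > kmax_cm (cycles/cm) in
    either direction, then inverse-FFT to a smooth 2D map.
    dk_cm is the k-space step in cycles/cm (assumed parameter)."""
    ny, nx = kspace.shape
    ky = np.fft.fftfreq(ny) * ny * dk_cm   # cycles/cm along each axis
    kx = np.fft.fftfreq(nx) * nx * dk_cm
    keep = (np.abs(ky)[:, None] <= kmax_cm) & (np.abs(kx)[None, :] <= kmax_cm)
    return np.fft.ifft2(kspace * keep)

# a pure high-frequency component (1.0 cycles/cm) is rejected entirely
k = np.zeros((32, 32), complex)
k[10, 10] = 1.0                  # index 10 * 0.1 cycles/cm = 1.0 cycles/cm
flat = lowpass_field_map(k, dk_cm=0.1)
```

Restricting the map to low spatial frequencies is what makes the per-shot estimate robust: the respiratory-induced offsets vary slowly in space, so the discarded high frequencies carry mostly noise.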
Pritchard, Colin C; Smith, Christina; Salipante, Stephen J; Lee, Ming K; Thornton, Anne M; Nord, Alex S; Gulden, Cassandra; Kupfer, Sonia S; Swisher, Elizabeth M; Bennett, Robin L; Novetsky, Akiva P; Jarvik, Gail P; Olopade, Olufunmilayo I; Goodfellow, Paul J; King, Mary-Claire; Tait, Jonathan F; Walsh, Tom
2012-07-01
Lynch syndrome (hereditary nonpolyposis colon cancer) and adenomatous polyposis syndromes frequently have overlapping clinical features. Current approaches for molecular genetic testing are often stepwise, taking a best-candidate gene approach with testing of additional genes if initial results are negative. We report a comprehensive assay called ColoSeq that detects all classes of mutations in Lynch and polyposis syndrome genes using targeted capture and massively parallel next-generation sequencing on the Illumina HiSeq2000 instrument. In blinded specimens and colon cancer cell lines with defined mutations, ColoSeq correctly identified 28/28 (100%) pathogenic mutations in MLH1, MSH2, MSH6, PMS2, EPCAM, APC, and MUTYH, including single nucleotide variants (SNVs), small insertions and deletions, and large copy number variants. There was 100% reproducibility of mutation detection between independent runs. The assay correctly identified 222 of 224 heterozygous SNVs (99.4%) in HapMap samples, demonstrating high sensitivity of calling all variants across each captured gene. Average coverage was greater than 320 reads per base pair when the maximum of 96 index samples with barcodes were pooled. In a specificity study of 19 control patients without cancer from different ethnic backgrounds, we did not find any pathogenic mutations but detected two variants of uncertain significance. ColoSeq offers a powerful, cost-effective means of genetic testing for Lynch and polyposis syndromes that eliminates the need for stepwise testing and multiple follow-up clinical visits. Copyright © 2012 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Novel ray tracing method for stray light suppression from ocean remote sensing measurements.
Oh, Eunsong; Hong, Jinsuk; Kim, Sug-Whan; Park, Young-Je; Cho, Seong-Ick
2016-05-16
We developed a new integrated ray tracing (IRT) technique to analyze the stray light effect in remotely sensed images. Images acquired with the Geostationary Ocean Color Imager show a radiance level discrepancy at the slot boundary, which is suspected to be a stray light effect. To determine its cause, we developed and applied a novel in-orbit stray light analysis method, which consists of three simulation phases (source, target, and instrument). Each phase simulation was performed so that ray information generated at the Sun and reaching the instrument detector plane was used efficiently. This simulation scheme enabled reconstruction of the real observing environment of the remote sensing data, with a focus on realistic phenomena. In the results, even in a cloud-free environment, a background stray light pattern was identified at the bottom of each slot. Variations in the stray light effect and its pattern according to bright target movement were simulated, with a maximum stray light ratio of 8.5841% in band 2 images. To verify the proposed method and simulation results, we compared the results with the real acquired remotely sensed image. In addition, after correcting for abnormal phenomena in specific cases, we confirmed that the stray light ratio decreased from 2.38% to 1.02% in a band 6 case, and from 1.09% to 0.35% in a band 8 case. IRT-based stray light analysis enabled clear determination of the stray light path and candidates in in-orbit circumstances, and the correction process aided recovery of the radiometric discrepancy.
Levofloxacin Pharmacokinetics in Adult Cystic Fibrosis
Lee, Carlton K. K.; Boyle, Michael P.; Diener-West, Marie; Brass-Ernst, Lois; Noschese, Michelle; Zeitlin, Pamela L.
2007-01-01
Background Cystic fibrosis (CF) patients have enhanced renal clearance of aminoglycosides and several β-lactams and require higher dosages. Levofloxacin is a fluoroquinolone with extensive renal elimination and enhanced penetration into lungs and Pseudomonas aeruginosa (PA) biofilms. We studied the preliminary pharmacokinetic and pharmacodynamic (PK/PD) relationship of levofloxacin in CF. Methods Twelve patients at least 18 years old with a mild-to-moderate pulmonary exacerbation and fluoroquinolone-sensitive PA colonization received oral levofloxacin, 500 mg qd, for 14 days. Steady-state serum concentrations were collected after 3 to 7 days, and sputum samples for PA densities were collected before and after levofloxacin. PK/PD relationships for reducing PA sputum densities were evaluated. Results When compared to published data on non-CF patients, CF patients had similar area under the curve for 24 h (AUC24), total clearance, volume of distribution, maximum serum concentration (Cpmax), and elimination half-life: mean, 7.33 μg × h/mL/kg (SD, 1.70); 2.43 mL/min/kg (SD, 0.74); 1.33 L/kg (SD, 0.37); 7.06 μg/mL (SD, 2.35); and 6.44 h (SD, 1.1), respectively. Time to reach maximum serum concentration (Tmax) in CF was longer: mean, 2.20 h (SD, 0.99) vs 1.1 h (SD, 0.4) [p < 0.01]. Preliminary PK/PD analysis failed to demonstrate trends for decreasing PA sputum densities with increasing Cpmax/minimum inhibitory concentration (MIC) ratio and AUC24/MIC ratio. Conclusion CF levofloxacin pharmacokinetics corrected for body weight are similar to non-CF, except for Tmax. Standard levofloxacin dosing (especially monotherapy) is unlikely to produce maximum therapeutic effectiveness. Additional levofloxacin studies in CF are necessary to evaluate its sputum concentrations; the benefits of higher daily dosages (≥ 750 mg); and establish PK/PD targets for managing PA pulmonary infections. PMID:17356095
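The reported parameters are internally consistent under a one-compartment model, where the elimination half-life follows from the volume of distribution and clearance as t½ = ln(2)·Vd/CL. A quick illustrative check using the study means (our computation, not from the paper):

```python
import math

def half_life_h(vd_l_per_kg, cl_ml_per_min_per_kg):
    """One-compartment elimination half-life: t1/2 = ln(2) * Vd / CL.
    Clearance is converted from mL/min/kg to L/h/kg."""
    cl_l_per_h_per_kg = cl_ml_per_min_per_kg * 60.0 / 1000.0
    return math.log(2.0) * vd_l_per_kg / cl_l_per_h_per_kg

# study means: Vd = 1.33 L/kg, CL = 2.43 mL/min/kg
t_half = half_life_h(1.33, 2.43)
```

This gives approximately 6.3 h, in good agreement with the reported mean half-life of 6.44 h.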
Constraining continuous rainfall simulations for derived design flood estimation
NASA Astrophysics Data System (ADS)
Woldemeskel, F. M.; Sharma, A.; Mehrotra, R.; Westra, S.
2016-11-01
Stochastic rainfall generation is important for a range of hydrologic and water resources applications. Stochastic rainfall can be generated using a number of models; however, preserving relevant attributes of the observed rainfall (including rainfall occurrence, variability and the magnitude of extremes) continues to be difficult. This paper develops an approach to constrain stochastically generated rainfall with the aim of preserving the intensity-frequency-duration (IFD) relationships of the observed data. Two main steps are involved. First, the generated annual maximum rainfall is corrected recursively by matching the generated intensity-frequency relationships to the target (observed) relationships. Second, the remaining (non-annual maximum) rainfall is rescaled such that the mass balance of the generated rain before and after scaling is maintained. The recursive correction is performed at selected storm durations to minimise the dependence between annual maximum values of higher and lower durations for the same year. This ensures that the resulting sequences remain true to the observed rainfall as well as represent the design extremes that may have been developed separately and are needed for compliance reasons. The method is tested on simulated 6 min rainfall series across five Australian stations with different climatic characteristics. The results suggest that the annual maximum and the IFD relationships are well reproduced after constraining the simulated rainfall. While our presentation focusses on the representation of design rainfall attributes (IFDs), the proposed approach can also be easily extended to constrain other attributes of the generated rainfall, providing an effective platform for post-processing of stochastic rainfall generators.
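The two steps can be illustrated for a single year at a single duration: replace the year's annual maximum with a target (IFD-derived) value, then rescale the remaining values so the year's rainfall mass is unchanged. A simplified sketch of the idea (the paper applies the correction recursively across selected storm durations; this assumes the target maximum does not exceed the year's total):

```python
import numpy as np

def constrain_year(sim, target_annual_max):
    """Replace the year's annual maximum with a target value, then
    rescale the remaining values so the year's total rainfall mass
    is preserved."""
    sim = np.asarray(sim, dtype=float)
    i = int(np.argmax(sim))
    total = sim.sum()
    out = sim.copy()
    out[i] = target_annual_max
    rest = total - sim[i]                # mass carried by the other steps
    if rest > 0:
        scale = (total - target_annual_max) / rest
        mask = np.ones(sim.size, dtype=bool)
        mask[i] = False
        out[mask] *= scale               # mass-balance rescaling
    return out

year = constrain_year([1.0, 2.0, 10.0, 3.0], target_annual_max=8.0)
```

The corrected year keeps its original total (16.0 here) while its maximum now matches the target, which is the property that lets the constrained series reproduce the observed IFD relationships.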
Electroweak Corrections to pp → μ⁺μ⁻e⁺e⁻ + X at the LHC: A Higgs Boson Background Study.
Biedermann, B; Denner, A; Dittmaier, S; Hofer, L; Jäger, B
2016-04-22
The first complete calculation of the next-to-leading-order electroweak corrections to four-lepton production at the LHC is presented, where all off-shell effects of intermediate Z bosons and photons are taken into account. Focusing on the mixed final state μ⁺μ⁻e⁺e⁻, we study differential cross sections that are particularly interesting for Higgs boson analyses. The electroweak corrections are divided into photonic and purely weak corrections. The former exhibit patterns familiar from similar W- or Z-boson production processes, with very large radiative tails near resonances and kinematical shoulders. The weak corrections are of the generic size of 5% and show interesting variations, in particular a sign change between the regions of resonant Z-pair production and the Higgs signal.
NASA Astrophysics Data System (ADS)
Hervo, Maxime; Poltera, Yann; Haefele, Alexander
2016-07-01
Imperfections in a lidar's overlap function lead to artefacts in the background, range and overlap-corrected lidar signals. These artefacts can erroneously be interpreted as an aerosol gradient or, in extreme cases, as a cloud base leading to false cloud detection. A correct specification of the overlap function is hence crucial in the use of automatic elastic lidars (ceilometers) for the detection of the planetary boundary layer or of low cloud. In this study, an algorithm is presented to correct such artefacts. It is based on the assumption of a homogeneous boundary layer and a correct specification of the overlap function down to a minimum range, which must be situated within the boundary layer. The strength of the algorithm lies in a sophisticated quality-check scheme which allows the reliable identification of favourable atmospheric conditions. The algorithm was applied to 2 years of data from a CHM15k ceilometer from the company Lufft. Backscatter signals corrected for background, range and overlap were compared using the overlap function provided by the manufacturer and the one corrected with the presented algorithm. Differences between corrected and uncorrected signals reached up to 45 % in the first 300 m above ground. The amplitude of the correction turned out to be temperature dependent and was larger for higher temperatures. A linear model of the correction as a function of the instrument's internal temperature was derived from the experimental data. Case studies and a statistical analysis of the strongest gradient derived from corrected signals reveal that the temperature model is capable of a high-quality correction of overlap artefacts, in particular those due to diurnal variations. The presented correction method has the potential to significantly improve the detection of the boundary layer with gradient-based methods because it removes false candidates and hence simplifies the attribution of the detected gradients to the planetary boundary layer. 
A particularly significant benefit can be expected for the detection of shallow stable layers typical of night-time situations. The algorithm is completely automatic and does not require any on-site intervention but requires the definition of an adequate instrument-specific configuration. It is therefore suited for use in large ceilometer networks.
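The temperature model described above amounts to a per-gate linear adjustment of the overlap function before the standard background, range and overlap correction. A minimal sketch with hypothetical coefficients (in the study, a and b would be fitted per range gate from the 2 years of CHM15k data):

```python
def corrected_signal(raw, bg, r_m, overlap, t_internal_c, a, b):
    """Background-, range- and overlap-corrected lidar signal, with the
    overlap function adjusted by a linear model of the instrument's
    internal temperature: ov(T) = overlap * (1 + a*T + b).
    a and b are fitted, range-dependent coefficients (hypothetical here)."""
    ov = overlap * (1.0 + a * t_internal_c + b)   # temperature-adjusted overlap
    return (raw - bg) * r_m ** 2 / ov

# with a = b = 0 this reduces to the usual correction
p = corrected_signal(raw=1.5, bg=0.5, r_m=300.0, overlap=0.8,
                     t_internal_c=25.0, a=0.0, b=0.0)
```

Because the overlap error is largest in the first few hundred metres, a temperature-dependent adjustment there directly suppresses the false gradients that otherwise mimic an aerosol layer or cloud base.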
[Microhemocirculation and its correction in duodenal ulcer during period of rehabilitation].
Parpibaeva, D A; Zakirkhodzhaev, Sh Ia; Sagatov, T A; Shakirova, D T; Narziev, N M
2009-01-01
The aim of this research was to study the morphological and functional state of the microcirculatory bed in duodenal ulcer during the rehabilitation period, against the background of standard antiulcer therapy (group 1) and of additional treatment with Vazonit (group 2) under clinical conditions. EDU in animals results in marked microcirculatory disturbances in the duodenum that depend on the stage of the ulcer process. Hypoxia appears to be the key factor associated with capillary stasis and venous congestion. Impaired blood flow in the organ leads to metabolic damage of tissue structures. The results obtained provide evidence of significant correction of the state of the microcirculatory bed, with improved regeneration and reparation processes. Vazonit improves microcirculatory disorders and the rheological properties of blood, reversing macro- and microangiopathic changes of the hemocirculatory bed.
Apparatus and method for classifying fuel pellets for nuclear reactor
Wilks, Robert S.; Sternheim, Eliezer; Breakey, Gerald A.; Sturges, Jr., Robert H.; Taleff, Alexander; Castner, Raymond P.
1984-01-01
Control for the operation of a mechanical handling and gauging system for nuclear fuel pellets. The pellets are inspected for diameter, length, surface flaws and weight in successive stations. The control includes a computer for commanding the operation of the system and its electronics, and for storing and processing the complex data derived at the required high rate. In measuring the diameter, the computer enables the measurement of a calibration pellet, stores that calibration data, and computes and stores diameter-correction factors and their addresses along a pellet. To each diameter measurement a correction factor is applied at the appropriate address. The computer commands verification that all critical parts of the system and control are set for inspection and that each pellet is positioned for inspection. During each cycle of inspection, the measurement operation proceeds normally irrespective of whether or not a pellet is present in each station. If a pellet is not positioned in a station, a measurement is recorded, but the recorded measurement indicates maloperation. In measuring diameter and length, a light pattern including successive shadows of slices (transverse for diameter or longitudinal for length) is projected on a photodiode array. The light pattern is scanned electronically by a train of pulses. The pulses are counted during the scan of the lighted diodes. For evaluation of diameter, the maximum diameter count and the number of slices for which the diameter exceeds a predetermined minimum are determined. For acceptance, the maximum must be less than a maximum level and the number of such slices must exceed a set number. For evaluation of length, the maximum length is determined. For acceptance, the length must be within maximum and minimum limits.
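The diameter acceptance logic described above can be sketched as a simple check on the per-slice counts. Threshold names and values here are illustrative, not the system's actual settings:

```python
def diameter_ok(slice_counts, max_level, min_count, required_slices):
    """Accept a pellet when the largest per-slice diameter count stays
    below max_level and more than required_slices slices exceed
    min_count (illustrative thresholds)."""
    max_count = max(slice_counts)
    slices_over = sum(1 for c in slice_counts if c > min_count)
    return max_count < max_level and slices_over > required_slices

# counts of lighted diodes per slice for a hypothetical pellet
ok = diameter_ok([98, 101, 103, 102, 99],
                 max_level=110, min_count=100, required_slices=2)
```

Requiring both conditions rejects pellets that are oversized anywhere along their length as well as pellets that are too thin over most of their length.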
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, X; Kantor, M; Zhu, X
2014-06-01
Purpose: To evaluate the dosimetric accuracy for proton therapy patients with metal implants in CT using metal deletion technique (MDT) artifact reduction. Methods: Proton dose accuracy under CT metal artifacts was first evaluated using a water phantom with cylindrical inserts of different materials (titanium and steel). Ranges and dose profiles along different beam angles were calculated using a treatment planning system (Eclipse version 8.9) on uncorrected CT, MDT CT, and manually corrected CT, where true Hounsfield units (water) were assigned to the streak artifacts. In patient studies, the treatment plans were developed on manually corrected CTs, then recalculated on MDT and uncorrected CTs. DVH indices were compared between the dose distributions on all the CTs. Results: For the water phantom study with a 1/2 inch titanium insert, the proton range differences estimated by MDT CT were within 1% for all beam angles, while the range error was up to 2.6% for uncorrected CT. For the study with a 1 inch stainless steel insert, the maximum range error calculated by MDT CT was 1.09% among all the beam angles, compared with a maximum range error of 4.7% for uncorrected CT. The dose profiles calculated on MDT CTs for both titanium and steel inserts showed very good agreement with those calculated on manually corrected CTs, while large dose discrepancies calculated using uncorrected CTs were observed in the distal end region of the proton beam. The patient study showed similar dose distributions and DVHs for organs near the metal artifacts recalculated on MDT CT compared with those calculated on manually corrected CT, while the differences between uncorrected and corrected CTs were much more pronounced. Conclusion: In proton therapy, large dose errors can occur due to metal artifacts. MDT CT can be used for proton dose calculation to achieve dose accuracy similar to the current clinical practice of manual correction.
Demonstration of Protein-Based Human Identification Using the Hair Shaft Proteome
Leppert, Tami; Anex, Deon S.; Hilmer, Jonathan K.; Matsunami, Nori; Baird, Lisa; Stevens, Jeffery; Parsawar, Krishna; Durbin-Johnson, Blythe P.; Rocke, David M.; Nelson, Chad; Fairbanks, Daniel J.; Wilson, Andrew S.; Rice, Robert H.; Woodward, Scott R.; Bothner, Brian; Hart, Bradley R.; Leppert, Mark
2016-01-01
Human identification from biological material is largely dependent on the ability to characterize genetic polymorphisms in DNA. Unfortunately, DNA can degrade in the environment, sometimes below the level at which it can be amplified by PCR. Protein however is chemically more robust than DNA and can persist for longer periods. Protein also contains genetic variation in the form of single amino acid polymorphisms. These can be used to infer the status of non-synonymous single nucleotide polymorphism alleles. To demonstrate this, we used mass spectrometry-based shotgun proteomics to characterize hair shaft proteins in 66 European-American subjects. A total of 596 single nucleotide polymorphism alleles were correctly imputed in 32 loci from 22 genes of subjects’ DNA and directly validated using Sanger sequencing. Estimates of the probability of resulting individual non-synonymous single nucleotide polymorphism allelic profiles in the European population, using the product rule, resulted in a maximum power of discrimination of 1 in 12,500. Imputed non-synonymous single nucleotide polymorphism profiles from European–American subjects were considerably less frequent in the African population (maximum likelihood ratio = 11,000). The converse was true for hair shafts collected from an additional 10 subjects with African ancestry, where some profiles were more frequent in the African population. Genetically variant peptides were also identified in hair shaft datasets from six archaeological skeletal remains (up to 260 years old). This study demonstrates that quantifiable measures of identity discrimination and biogeographic background can be obtained from detecting genetically variant peptides in hair shaft protein, including hair from bioarchaeological contexts. PMID:27603779
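The product rule used for the profile probabilities is simply the product of the population frequencies of the independent per-locus variant alleles. A sketch with hypothetical frequencies chosen so the profile probability comes out at 1 in 12,500, the maximum discrimination power reported (the frequencies themselves are not from the paper):

```python
def profile_probability(allele_freqs):
    """Product rule: multiply the population frequencies of the
    individual (assumed independent) variant alleles."""
    p = 1.0
    for f in allele_freqs:
        p *= f
    return p

# hypothetical per-allele frequencies multiplying out to 1/12,500
p = profile_probability([0.5, 0.4, 0.2, 0.1, 0.02])
```

The same profile probability evaluated against a different reference population (e.g. African allele frequencies) yields the likelihood ratios used in the abstract to infer biogeographic background.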
Potential of wind power projects under the Clean Development Mechanism in India
Purohit, Pallav; Michaelowa, Axel
2007-01-01
Background So far, the cumulative installed capacity of wind power projects in India is far below its gross potential (≤ 15%), despite a very high level of policy support (tax benefits, long-term financing schemes, etc.) for more than 10 years. One of the major barriers is the high cost of investments in these systems. The Clean Development Mechanism (CDM) of the Kyoto Protocol provides industrialized countries with an incentive to invest in emission reduction projects in developing countries to achieve a reduction in CO2 emissions at lowest cost that also promotes sustainable development in the host country. Wind power projects could be of interest under the CDM because they directly displace greenhouse gas emissions while contributing to sustainable rural development, if developed correctly. Results Our estimates indicate that there is a vast theoretical potential of CO2 mitigation by the use of wind energy in India. The annual potential Certified Emission Reductions (CERs) of wind power projects in India could theoretically reach 86 million. Under more realistic assumptions about the diffusion of wind power projects based on past experience with the government-run programmes, annual CER volumes could reach 41 to 67 million by 2012 and 78 to 83 million by 2020. Conclusion The projections based on the past diffusion trend indicate that in India, even with highly favorable assumptions, the dissemination of wind power projects is not likely to reach its maximum estimated potential in another 15 years. The CDM could help to achieve the maximum utilization potential more rapidly than the current diffusion trend if supportive policies are introduced. PMID:17663772
The European Alps as an interrupter of the Earth's conductivity structures
NASA Astrophysics Data System (ADS)
Al-Halbouni, D.
2013-07-01
Joint interpretation of magnetotelluric and geomagnetic depth sounding results in the period range of 10–10⁵ s in the Western European Alps offers new insights into the conductivity structure of the Earth's crust and mantle. This first large-scale electromagnetic study in the Alps covers a cross-section from Germany to northern Italy and shows the importance of the alpine mountain chain as an interrupter of continuous conductors. Poor data quality due to the highly crystalline underground is overcome by Remote Reference and Robust Processing techniques and the combination of both electromagnetic methods. 3-D forward modeling reveals, on the one hand, interrupted dipping crustal conductors with maximum conductances of 4960 S and, on the other hand, a lithosphere thickening up to 208 km beneath the central Western Alps. Graphite networks arising from Palaeozoic sedimentary deposits are considered to be accountable for the occurrence of high conductivity and the distribution pattern of crustal conductors. The influence of huge sedimentary Molasse basins on the electromagnetic data is suggested to be minor compared with the influence of crustal conductors. The dipping direction (S-SE) and maximum angle (10.1°) of the northern crustal conductor reveal the main thrusting conditions beneath the Helvetic Alps, whereas the existence of a crustal conductor in the Briançonnais supports hypotheses that it belongs to the Iberian Peninsula. In conclusion, the proposed model, arising from combined 3-D modeling of noise-corrected electromagnetic data, is able to explain the geophysical influence of various structural features in and around the Western European Alps and serves as a background for further upcoming studies.
Characterization and Prediction of the SPI Background
NASA Technical Reports Server (NTRS)
Teegarden, B. J.; Jean, P.; Knodlseder, J.; Skinner, G. K.; Weidenspointer, G.
2003-01-01
The INTEGRAL Spectrometer, like most gamma-ray instruments, is background dominated. Signal-to-background ratios of a few percent are typical. The background is primarily due to interactions of cosmic rays in the instrument and spacecraft. It characteristically varies by ±5% on time scales of days. This variation is caused mainly by fluctuations in the interplanetary magnetic field that modulates the cosmic ray intensity. To achieve the maximum performance from SPI it is essential to have a high quality model of this background that can predict its value to a fraction of a percent. In this poster we characterize the background and its variability, explore various models, and evaluate the accuracy of their predictions.
Anomaly-corrected supersymmetry algebra and supersymmetric holographic renormalization
NASA Astrophysics Data System (ADS)
An, Ok Song
2017-12-01
We present a systematic approach to supersymmetric holographic renormalization for a generic 5D N=2 gauged supergravity theory with matter multiplets, including its fermionic sector, with all gauge fields consistently set to zero. We determine the complete set of supersymmetric local boundary counterterms, including the finite counterterms that parameterize the choice of supersymmetric renormalization scheme. This allows us to derive holographically the superconformal Ward identities of a 4D superconformal field theory on a generic background, including the Weyl and super-Weyl anomalies. Moreover, we show that these anomalies satisfy the Wess-Zumino consistency condition. The super-Weyl anomaly implies that the fermionic operators of the dual field theory, such as the supercurrent, do not transform as tensors under rigid supersymmetry on backgrounds that admit a conformal Killing spinor, and their anticommutator with the conserved supercharge contains anomalous terms. This property is explicitly checked for a toy model. Finally, using the anomalous transformation of the supercurrent, we obtain the anomaly-corrected supersymmetry algebra on curved backgrounds admitting a conformal Killing spinor.
LWIR pupil imaging and prospects for background compensation
NASA Astrophysics Data System (ADS)
LeVan, Paul; Sakoglu, Ünal; Stegall, Mark; Pierce, Greg
2015-08-01
A previous paper described LWIR pupil imaging with a sensitive, low-flux focal plane array, and the behavior of this type of system at higher fluxes as understood at the time. We continue this investigation, and report on a more detailed characterization of the system over a broad range of pixel fluxes. This characterization is then shown to enable non-uniformity correction over the flux range, using a standard approach. Since many commercial tracking platforms include a "guider port" that accepts pulse width modulation (PWM) error signals, we have also investigated a variation on the use of this port to "dither" the tracking platform in synchronization with the continuous collection of infrared images. The resulting capability has a broad range of applications, extending from generating scene motion in the laboratory for quantifying the performance of "real-time, scene-based non-uniformity correction" approaches, to subtracting bright backgrounds by alternating the viewing aspect between a point source and adjacent, source-free backgrounds.
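A "standard approach" to non-uniformity correction over a flux range is typically a two-point (gain/offset) calibration per pixel; a minimal sketch under that assumption (the abstract does not specify which standard method was used):

```python
import numpy as np

def two_point_nuc(frame_lo, frame_hi, flux_lo, flux_hi):
    """Per-pixel gain and offset from frames at two known flux levels,
    chosen so that gain * frame + offset recovers the incident flux."""
    gain = (flux_hi - flux_lo) / (frame_hi - frame_lo)
    offset = flux_lo - gain * frame_lo
    return gain, offset

# synthetic detector: per-pixel response frame = g * flux + o
rng = np.random.default_rng(1)
g = rng.uniform(0.8, 1.2, (4, 4))          # pixel gains
o = rng.uniform(-5.0, 5.0, (4, 4))         # pixel offsets
gain, offset = two_point_nuc(g * 10.0 + o, g * 100.0 + o, 10.0, 100.0)
corrected = gain * (g * 55.0 + o) + offset  # uniform scene at flux 55
```

After correction, a uniform scene at an intermediate flux produces a uniform image, which is the property the dithered scene-based approaches in the abstract try to maintain without calibration sources.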
Yu, Yong-Jie; Xia, Qiao-Ling; Wang, Sheng; Wang, Bing; Xie, Fu-Wei; Zhang, Xiao-Bing; Ma, Yun-Ming; Wu, Hai-Long
2014-09-12
Peak detection and background drift correction (BDC) are the key stages in using chemometric methods to analyze chromatographic fingerprints of complex samples. This study developed a novel chemometric strategy for simultaneous automatic chromatographic peak detection and BDC. A robust statistical method was used for intelligent estimation of instrumental noise level coupled with first-order derivative of chromatographic signal to automatically extract chromatographic peaks in the data. A local curve-fitting strategy was then employed for BDC. Simulated and real liquid chromatographic data were designed with various kinds of background drift and degree of overlapped chromatographic peaks to verify the performance of the proposed strategy. The underlying chromatographic peaks can be automatically detected and reasonably integrated by this strategy. Meanwhile, chromatograms with BDC can be precisely obtained. The proposed method was used to analyze a complex gas chromatography dataset that monitored quality changes in plant extracts during storage procedure. Copyright © 2014 Elsevier B.V. All rights reserved.
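The noise-estimation-plus-derivative idea can be sketched as follows: estimate the noise level robustly from the first difference of the signal (via the median absolute deviation), then keep only local maxima that exceed the baseline by several noise units. A simplified illustration, not the authors' exact algorithm (which also fits the local background drift):

```python
import numpy as np

def detect_peaks(signal, k=5.0):
    """Flag local maxima rising more than k * (robust noise level)
    above the median baseline. Noise is estimated from the first
    derivative via the median absolute deviation (MAD)."""
    d = np.diff(signal)
    noise = 1.4826 * np.median(np.abs(d - np.median(d)))
    base = np.median(signal)
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]
            and signal[i] > base + k * noise]

x = np.arange(200)
signal = 0.01 * (-1.0) ** x      # low-level alternating "noise"
signal[50] += 5.0                # two genuine peaks
signal[150] += 5.0
peaks = detect_peaks(signal)     # -> [50, 150]
```

Using a robust (median-based) noise estimate is what keeps the threshold stable even when large chromatographic peaks dominate the signal statistics.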
The effect of different calculation methods of flywheel parameters on the Wingate Anaerobic Test.
Coleman, S G; Hale, T
1998-08-01
Researchers compared different methods of calculating the kinetic parameters of friction-braked cycle ergometers, and the subsequent effects on calculated power outputs in the Wingate Anaerobic Test (WAnT). Three methods of determining flywheel moment of inertia and frictional torque were investigated, requiring "run-down" tests and segmental geometry. The parameters were used to calculate corrected power outputs for 10 males in a 30-s WAnT against a load related to body mass (0.075 kg·kg⁻¹). Wingate indices of maximum (5-s) power, work, and fatigue index were also compared. Significant differences were found between uncorrected and corrected power outputs and between correction methods (p < .05). The same finding was evident for all Wingate indices (p < .05). The results suggest that WAnT power outputs must be corrected to give true values and that choosing an appropriate correction calculation is important. Determining flywheel moment of inertia and frictional torque using unloaded run-down tests is recommended.
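The correction at issue adds the flywheel's inertial component to the frictional load, P = (T_f + I·dω/dt)·ω: during acceleration the subject does extra work spinning up the flywheel that an uncorrected calculation misses. A generic sketch of this correction with illustrative values (not any of the three specific methods compared in the paper):

```python
def corrected_power(friction_torque_nm, inertia_kg_m2, omega_now, omega_prev, dt_s):
    """Flywheel power corrected for inertia:
    P = (T_friction + I * d(omega)/dt) * omega, with omega in rad/s."""
    alpha = (omega_now - omega_prev) / dt_s            # angular acceleration
    return (friction_torque_nm + inertia_kg_m2 * alpha) * omega_now

# illustrative values: 2 N*m friction, I = 0.5 kg*m^2, flywheel
# accelerating from 8 to 10 rad/s over one second
p = corrected_power(2.0, 0.5, omega_now=10.0, omega_prev=8.0, dt_s=1.0)
```

With these numbers the uncorrected (friction-only) power would be 20 W, while the inertia-corrected value is 30 W, showing how large the bias can be while the flywheel is still accelerating early in the test.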
The influence of the atmosphere on geoid and potential coefficient determinations from gravity data
NASA Technical Reports Server (NTRS)
Rummel, R.; Rapp, R. H.
1976-01-01
For the precise computation of geoid undulations the effect of the attraction of the atmosphere on the solution of the basic boundary value problem of gravimetric geodesy must be considered. This paper extends the theory of Moritz for deriving an atmospheric correction to the case when the undulations are computed by combining anomalies in a cap surrounding the computation point with information derived from potential coefficients. The correction term is a function of the cap size and the topography within the cap. It reaches a value of 3.0 m for a cap size of 30 deg, variations on the decimeter level being caused by variations in the topography. The effect of the atmospheric correction terms on potential coefficients is found to be small, reaching a maximum of 0.0055 millionths at n = 2, m = 2 when terrestrial gravity data are considered. The magnitude of this correction indicates that in future potential coefficient determination from gravity data the atmospheric correction should be made to such data.
ERIC Educational Resources Information Center
Boulard, Garry
2010-01-01
As a collaborative program bringing the instructional resources of Wesleyan University to the maximum security Cheshire Correctional Institute in Connecticut enters its second semester, prison and higher education experts are seeing decreasing support for similar programs across the U.S. According to some experts, state funding on prison education…
ERIC Educational Resources Information Center
Malone, Stephen M.; McGue, Matt; Iacono, William G.
2010-01-01
Background: The maximum number of alcoholic drinks consumed in a single 24-hr period is an alcoholism-related phenotype with both face and empirical validity. It has been associated with severity of withdrawal symptoms and sensitivity to alcohol, genes implicated in alcohol metabolism, and amplitude of a measure of brain activity associated with…
Dynamical generation of a repulsive vector contribution to the quark pressure
NASA Astrophysics Data System (ADS)
Restrepo, Tulio E.; Macias, Juan Camilo; Pinto, Marcus Benghi; Ferrari, Gabriel N.
2015-03-01
Lattice QCD results for the coefficient c2 appearing in the Taylor expansion of the pressure show that this quantity increases with the temperature towards the Stefan-Boltzmann limit. On the other hand, model approximations predict that when a vector repulsion, parametrized by GV, is present this coefficient reaches a maximum just after Tc and then deviates from the lattice predictions. Recently, this discrepancy has been used as a guide to constrain the (presently unknown) value of GV within the framework of effective models at large Nc (LN). In the present investigation we show that, due to finite Nc effects, c2 may also develop a maximum even when GV=0 since a vector repulsive term can be dynamically generated by exchange-type radiative corrections. Here we apply the optimized perturbation theory (OPT) method to the two-flavor Polyakov-Nambu-Jona-Lasinio model (at GV=0 ) and compare the results with those furnished by lattice simulations and by the LN approximation at GV=0 and also at GV≠0 . The OPT numerical results for c2 are impressively accurate for T ≲1.2 Tc but, as expected, they predict that this quantity develops a maximum at high T . After identifying the mathematical origin of this extremum we argue that such a discrepant behavior may naturally arise within this type of effective quark theories (at GV=0 ) whenever the first 1 /Nc corrections are taken into account. We then interpret this hypothesis as an indication that beyond the large-Nc limit the correct high-temperature (perturbative) behavior of c2 will be faithfully described by effective models only if they also mimic the asymptotic freedom phenomenon.
Tang, Mariann; Fenger-Eriksen, Christian; Wierup, Per; Greisen, Jacob; Ingerslev, Jørgen; Hjortdal, Vibeke; Sørensen, Benny
2017-06-01
Cardiac surgery may cause a serious coagulopathy leading to increased risk of bleeding and transfusion demands. Blood bank products are commonly the first-line haemostatic intervention, but have been associated with hazardous side effects. Coagulation factor concentrates may be a more efficient, predictable, and potentially safer treatment, although prospective clinical trials are needed to further explore these hypotheses. This study investigated the haemostatic potential of ex vivo supplementation of coagulation factor concentrates versus blood bank products in blood samples drawn from patients undergoing cardiac surgery. Thirty adults were prospectively enrolled (mean age=63.9, females=27%). Ex vivo haemostatic interventions (monotherapy or combinations) were performed in whole blood taken immediately after surgery and two hours postoperatively. Fresh-frozen plasma, platelets, cryoprecipitate, fibrinogen concentrate, prothrombin complex concentrate (PCC), and recombinant FVIIa (rFVIIa) were investigated. The haemostatic effect was evaluated using whole blood thromboelastometry parameters, as well as by thrombin generation. Immediately after surgery the compromised maximum clot firmness was corrected by monotherapy with fibrinogen or platelets, or by combination therapy with fibrinogen. At two hours postoperatively the coagulation profile was further deranged, as illustrated by a prolonged clotting time, a reduced maximum velocity and further diminished maximum clot firmness. The thrombin lagtime was progressively prolonged and both peak thrombin and endogenous thrombin potential were compromised. No monotherapy effectively corrected all haemostatic abnormalities. The most effective combinations were fibrinogen+rFVIIa and fibrinogen+PCC. Blood bank products were not as effective in the correction of the coagulopathy. Coagulation factor concentrates appear to provide a more optimal haemostasis profile following cardiac surgery compared with blood bank products. 
Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Choi, Eun-Jin; Jeong, Moon-Taeg; Jang, Seong-Joo; Choi, Nam-Gil; Han, Jae-Bok; Yang, Nam-Hee; Dong, Kyung-Rae; Chung, Woon-Kwan; Lee, Yun-Jong; Ryu, Young-Hwan; Choi, Sung-Hyun; Seong, Kyeong-Jeong
2014-01-01
This study examined whether scanning could be performed with minimum dose and minimum exposure to the patient after an attenuation correction. A Hoffman 3D Brain Phantom was used in BIO_40 and D_690 PET/CT scanners, and the CT dose for the equipment was classified as a low dose (minimum dose), medium dose (general dose for scanning) and high dose (dose with use of contrast medium) before obtaining the image at a fixed kilo-voltage-peak (kVp) and milliampere (mA) that were adjusted gradually in 17-20 stages. A PET image was then obtained to perform an attenuation correction based on an attenuation map before analyzing the dose difference. Depending on tube current in the range of 33-190 milliampere-seconds (mAs) when BIO_40 was used, a significant difference in the effective dose was observed between the minimum and the maximum mAs (p < 0.05). According to a Scheffé post-hoc test, the effective dose increased approximately 5.26-fold from the minimum to the maximum. Depending on the change in the tube current in the range of 10-200 mA when D_690 was used, a significant difference in the effective dose was observed between the minimum and the maximum mA (p < 0.05). The Scheffé post-hoc test revealed a 20.5-fold difference. In conclusion, because the effective exposure dose increases with increasing operating current, the exposure dose in a brain scan can be reduced if the CT dose for the transmission scan is minimized.
Further Improvement of the RITS Code for Pulsed Neutron Bragg-edge Transmission Imaging
NASA Astrophysics Data System (ADS)
Sato, H.; Watanabe, K.; Kiyokawa, K.; Kiyanagi, R.; Hara, K. Y.; Kamiyama, T.; Furusaka, M.; Shinohara, T.; Kiyanagi, Y.
The RITS code is a unique and powerful tool for whole-spectrum fitting analysis of pulsed neutron Bragg-edge transmission data. However, it has had two major problems, and we have proposed methods to overcome them. The first issue is the difference in crystallite size values between the diffraction and the Bragg-edge analyses. We found the reason was a different definition of the crystal structure factor. It affects the crystallite size because the crystallite size is deduced from the primary extinction effect, which depends on the crystal structure factor. As a result of the algorithm change, crystallite sizes obtained by RITS closely approached those obtained by Rietveld analyses of diffraction data, improving from 155% to 110% of the Rietveld values. The second issue is correction for the effect of background neutrons scattered from a specimen. Through neutron transport simulation studies, we found that the background consists of forward Bragg scattering, double backward Bragg scattering, and thermal diffuse scattering. RITS with the background correction function developed through these simulation studies could reconstruct various simulated and experimental transmission spectra well, but the refined crystalline microstructural parameters were often distorted. Finally, it is recommended to reduce the background by improving experimental conditions.
46 CFR 113.50-15 - Loudspeakers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... loudspeakers must be watertight and suitably protected from the effects of the wind and seas. (c) There must be... emergency announcements must be— (1) In interior spaces, 75 dB(A) or, if the background noise level exceeds 75 dB(A), then at least 20 dB(A) above maximum background noise level; and (2) In exterior spaces, 80...
Parrish, Donna; Butryn, Ryan S.; Rizzo, Donna M.
2012-01-01
We developed a methodology to predict brook trout (Salvelinus fontinalis) distribution using summer temperature metrics as predictor variables. Our analysis used long-term fish and hourly water temperature data from the Dog River, Vermont (USA). Commonly used metrics (e.g., mean, maximum, maximum 7-day maximum) tend to smooth the data, so information on temperature variation is lost. Therefore, we developed a new set of metrics (called event metrics) to capture temperature variation by describing the frequency, area, duration, and magnitude of events that exceeded a user-defined temperature threshold. We used thresholds of 16, 18, 20, and 22°C. We built linear discriminant models and tested and compared the event metrics against the commonly used metrics. Correct classification of the observations was 66% with event metrics and 87% with commonly used metrics. However, combined event and commonly used metrics correctly classified 92%. Of the four individual temperature thresholds, it was difficult to assess which had the "best" accuracy. The 16°C threshold had slightly fewer misclassifications; however, the 20°C threshold had the fewest extreme misclassifications. Our method leveraged the volumes of existing long-term data and provided a simple, systematic, and adaptable framework for monitoring changes in fish distribution, specifically in the case of irregular, extreme temperature events.
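A hypothetical reading of the event metrics in Python: for a chosen threshold, count the exceedance events and accumulate their total duration, area (degree-hours above the threshold) and maximum magnitude. The function name and output layout are illustrative, not the authors' code.

```python
def event_metrics(temps, threshold):
    """Summarise threshold-exceedance events in an hourly temperature series:
    frequency (number of distinct events), duration (total hours above the
    threshold), area (degree-hours above it) and magnitude (largest excess)."""
    events, duration, area, magnitude = 0, 0, 0.0, 0.0
    in_event = False
    for t in temps:
        if t > threshold:
            if not in_event:       # a new event starts
                events += 1
                in_event = True
            duration += 1
            area += t - threshold
            magnitude = max(magnitude, t - threshold)
        else:
            in_event = False       # event (if any) has ended
    return {"frequency": events, "duration": duration,
            "area": area, "magnitude": magnitude}
```

Unlike a seasonal mean, these metrics preserve how often and how severely the threshold was crossed.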
NASA Technical Reports Server (NTRS)
Knight, Montgomery; Harris, Thomas A
1931-01-01
This experimental investigation was conducted primarily for the purpose of obtaining a method of correcting to free air conditions the results of airfoil force tests in four open wind tunnel jets of different shapes. Tests were also made to determine whether the jet boundaries had any appreciable effect on the pitching moments of a complete airplane model. Satisfactory corrections for the effect of the boundaries of the various jets were obtained for all the airfoils tested, the span of the largest being 0.75 of the jet width. The corrections for angle of attack were, in general, larger than those for drag. The boundaries had no appreciable effect on the pitching moments of either the airfoils or the complete airplane model. Increasing turbulence appeared to increase the minimum drag and maximum lift and to decrease the pitching moment.
Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.
Van, Anh T; Hernando, Diego; Sutton, Bradley P
2011-11-01
A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring for high-resolution diffusion-weighted magnetic resonance imaging experiments. However, different object motions that take place during different shots cause phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramer-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and gives comparable performance to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo data are shown to demonstrate the performance of the method.
NASA Astrophysics Data System (ADS)
Borys, Damian; Serafin, Wojciech; Gorczewski, Kamil; Kijonka, Marek; Frackiewicz, Mariusz; Palus, Henryk
2018-04-01
The aim of this work was to test the most popular and essential algorithms for intensity nonuniformity correction in breast MRI imaging. In this type of MRI imaging, especially in the proximity of the coil, the signal is strong but can also exhibit inhomogeneities. The evaluated methods of signal correction were N3, N3FCM, N4, Nonparametric, and SPM. For testing purposes, a uniform phantom object was used to obtain test images with a breast MRI coil. To quantify the results, two measures were used: integral uniformity and standard deviation. For each algorithm, the minimum, average and maximum values of both evaluation factors were calculated using a binary mask created for the phantom. As a result, two methods, N3FCM and N4, obtained the lowest values of these measures; visually, the phantom was most uniform after correction with the latter.
Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu
2013-06-01
It has been reported that light scattering can worsen the accuracy of dose distribution measurements using a radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and light scattering was also evaluated, along with the efficacy of this correction method integrated with the red/blue correction method. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and a corrected dose distribution dataset was subsequently created. The correction method improved pass ratios in the dose difference evaluation by more than 10% compared with no correction. The red/blue correction method yielded a 5% improvement compared with the standard procedure that employs the red channel only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical IMRT dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The red/blue correction method may improve accuracy, but we recommend using it carefully, with an understanding of the characteristics of EBT2 both for the red channel alone and for the red/blue correction.
Control strategy of grid-connected photovoltaic generation system based on GMPPT method
NASA Astrophysics Data System (ADS)
Wang, Zhongfeng; Zhang, Xuyang; Hu, Bo; Liu, Jun; Li, Ligang; Gu, Yongqiang; Zhou, Bowen
2018-02-01
There are multiple local maximum power points when a photovoltaic (PV) array runs under partial shading conditions (PSC). However, the traditional maximum power point tracking (MPPT) algorithm can easily be trapped at a local maximum power point (MPP) and fail to find the global maximum power point (GMPP). To solve this problem, an improved global maximum power point tracking (GMPPT) method is proposed, combining a traditional MPPT method with the particle swarm optimization (PSO) algorithm. Different tracking algorithms are used under different operating conditions of the PV cells: when the environment changes, the improved PSO algorithm is adopted to perform the global search, and the variable-step incremental conductance (INC) method is adopted to achieve MPPT around the best local region. Based on a simulation model of the grid-connected PV system built in Matlab/Simulink, a comparative analysis of the tracking performance of the proposed algorithm and the traditional MPPT method under uniform irradiance and PSC validates the correctness, feasibility and effectiveness of the proposed control strategy.
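A minimal PSO search over a toy two-peak power-voltage curve illustrates why a swarm can escape the local MPP that traps conventional hill-climbing MPPT. The curve and all constants below are made up for illustration; this is not the paper's controller.

```python
import math
import random

def pv_power(v):
    """Toy two-peak P-V curve mimicking partial shading: a local peak near
    20 V and the global peak near 35 V (illustrative, not a real PV model)."""
    return (40 * math.exp(-((v - 20) / 6.0) ** 2)
            + 60 * math.exp(-((v - 35) / 4.0) ** 2))

def pso_gmppt(power, v_min=0.0, v_max=40.0, n=10, iters=60, seed=1):
    """Minimal particle swarm search for the global MPP voltage. Particles
    start evenly spaced so at least one lands in every broad basin; the
    fixed seed keeps the sketch reproducible."""
    rng = random.Random(seed)
    pos = [v_min + (i + 0.5) * (v_max - v_min) / n for i in range(n)]
    vel = [0.0] * n
    pbest = pos[:]                      # each particle's best position
    gbest = max(pos, key=power)         # swarm's best position so far
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.6 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], v_min), v_max)
            if power(pos[i]) > power(pbest[i]):
                pbest[i] = pos[i]
            if power(pos[i]) > power(gbest):
                gbest = pos[i]
    return gbest
```

A gradient-following tracker started near 20 V would stop at the local peak; the swarm converges to the global one.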
NASA Technical Reports Server (NTRS)
Becker, Joseph F.; Valentin, Jose
1996-01-01
The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated that there is a trade-off between detector noise and peak resolution, in the sense that an increase in the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvolved using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
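The model in the abstract (chromatogram = peak shape matrix times concentration vector) can be illustrated with a simple iterative deconvolution. Note that the sketch below uses Richardson-Lucy multiplicative updates as a stand-in for the maximum entropy reconstruction proper: it shares the non-negativity constraint and the forward model, but it is not the same algorithm.

```python
import math

def gaussian_kernel(width, sigma):
    """Discrete, normalised Gaussian peak shape (the columns of the
    peak shape matrix, all assumed identical here)."""
    ks = [math.exp(-0.5 * ((i - width // 2) / sigma) ** 2)
          for i in range(width)]
    s = sum(ks)
    return [k / s for k in ks]

def convolve(x, kernel):
    """Apply the peak shape matrix: smear concentrations into a chromatogram."""
    half = len(kernel) // 2
    out = []
    for i in range(len(x)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(x):
                acc += k * x[idx]
        out.append(acc)
    return out

def deconvolve(observed, kernel, iters=200):
    """Richardson-Lucy-style multiplicative updates: estimates stay
    non-negative and are nudged until re-convolving them matches the data."""
    est = [max(o, 1e-9) for o in observed]
    for _ in range(iters):
        model = convolve(est, kernel)
        ratio = [o / max(m, 1e-12) for o, m in zip(observed, model)]
        corr = convolve(ratio, kernel)   # kernel is symmetric
        est = [e * c for e, c in zip(est, corr)]
    return est
```

On a synthetic trace of two partially overlapped peaks, the iteration sharpens the smeared apexes back toward the underlying concentration spikes.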
Xu, Yihua; Pitot, Henry C
2006-03-01
In the studies of quantitative stereology of rat hepatocarcinogenesis, we have used image analysis technology (automatic particle analysis) to obtain data such as liver tissue area, size and location of altered hepatic focal lesions (AHF), and nuclei counts. These data are then used for three-dimensional estimation of AHF occurrence and nuclear labeling index analysis. These are important parameters for quantitative studies of carcinogenesis, for screening and classifying carcinogens, and for risk estimation. To take such measurements, structures or cells of interest should be separated from the other components based on the difference of color and density. Common background problems seen on the captured sample image such as uneven light illumination or color shading can cause severe problems in the measurement. Two application programs (BK_Correction and Pixel_Separator) have been developed to solve these problems. With BK_Correction, common background problems such as incorrect color temperature setting, color shading, and uneven light illumination background, can be corrected. With Pixel_Separator different types of objects can be separated from each other in relation to their color, such as seen with different colors in immunohistochemically stained slides. The resultant images of such objects separated from other components are then ready for particle analysis. Objects that have the same darkness but different colors can be accurately differentiated in a grayscale image analysis system after application of these programs.
NASA Astrophysics Data System (ADS)
Rutkowska, Agnieszka; Kohnová, Silvia; Banasik, Kazimierz
2018-04-01
Probabilistic properties of the dates of winter, summer and annual maximum flows were studied using circular statistics in three catchments differing in topographic conditions: a lowland, a highland and a mountainous catchment. Circular measures of location and dispersion were applied to the long-term samples of dates of maxima. A mixture of von Mises distributions was assumed as the theoretical distribution function of the date of the winter, summer and annual maximum flow. The number of components was selected on the basis of the corrected Akaike Information Criterion and the parameters were estimated by the Maximum Likelihood method. The goodness of fit was assessed using both the correlation between quantiles and versions of Kuiper's and Watson's tests. Results show that the number of components varied between catchments and differed for seasonal and annual maxima. Differences between catchments in circular characteristics were explained using climatic factors such as precipitation and temperature. Further studies may include grouping catchments based on the similarity between circular distribution functions, and the linkage between dates of maximum precipitation and maximum flow.
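The circular measures of location and dispersion mentioned above can be sketched in a few lines: dates are mapped to angles on the annual circle, so that maxima clustered around the new year average correctly instead of landing in mid-summer. The period value and function shape are illustrative, not the authors' code.

```python
import math

def circular_stats(days_of_year, period=365.25):
    """Circular mean date and mean resultant length (a dispersion measure:
    1 = all dates identical, 0 = dates uniform around the year)."""
    angles = [2 * math.pi * d / period for d in days_of_year]
    c = sum(math.cos(a) for a in angles) / len(angles)
    s = sum(math.sin(a) for a in angles) / len(angles)
    mean_angle = math.atan2(s, c) % (2 * math.pi)
    mean_day = mean_angle * period / (2 * math.pi)
    resultant = math.hypot(c, s)
    return mean_day, resultant
```

For flood dates of day 360, 5 and 10 the arithmetic mean would be day 125 (early May); the circular mean correctly lands in early January, with a resultant length near 1 showing the dates are tightly clustered.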
Loop corrections to primordial non-Gaussianity
NASA Astrophysics Data System (ADS)
Boran, Sibel; Kahya, E. O.
2018-02-01
We discuss quantum gravitational loop effects to observable quantities such as curvature power spectrum and primordial non-Gaussianity of cosmic microwave background (CMB) radiation. We first review the previously shown case where one gets a time dependence for zeta-zeta correlator due to loop corrections. Then we investigate the effect of loop corrections to primordial non-Gaussianity of CMB. We conclude that, even with a single scalar inflaton, one might get a huge value for non-Gaussianity which would exceed the observed value by at least 30 orders of magnitude. Finally we discuss the consequences of this result for scalar driven inflationary models.
Is Europa's Subsurface Water Ocean Warm?
NASA Technical Reports Server (NTRS)
Melosh, H. J.; Ekholm, A. G.; Showman, A. P.; Lorenz, R. D.
2002-01-01
Europa's subsurface water ocean may be warm: that is, at the temperature of water's maximum density. This provides a natural explanation of chaos melt-through events and leads to a correct estimate of the age of its surface. Additional information is contained in the original extended abstract.
46 CFR 196.85-1 - Magazine operation and control.
Code of Federal Regulations, 2011 CFR
2011-10-01
...-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) OCEANOGRAPHIC RESEARCH VESSELS... shall be inspected daily. Magazine inspection results and corrective action, when taken, shall be noted in the ship's log daily. Maximum and minimum temperatures for the previous 24-hour period shall be...
TMDLS: AFTER POINT SOURCES, WHAT CAN WE DO NEXT?
Section 303(d) of the Clean Water Act required TMDLs (total maximum daily loads) for all waters for which effluent or point source limitations are insufficient to meet water quality standards. Concerns may arise regarding the manner by which TMDLs are established, the corrective ...
NASA Astrophysics Data System (ADS)
Awwaluddin, Muhammad; Kristedjo, K.; Handono, Khairul; Ahmad, H.
2018-02-01
This analysis was conducted to determine the effects of static and dynamic loads on the structure of the mechanical system of an ultrasonic scanner, i.e., the arm, column, and connection systems, for in-service inspection of research reactors. The analysis was performed using the finite element method with a 520 N static load. The correction factor used for dynamic loads was the Gerber mean stress correction (stress life). The results of the analysis show that the maximum equivalent von Mises stress is 1.3698E8 Pa for static loading and the maximum equivalent alternating stress is 1.4758E7 Pa for dynamic loading. These values are below the upper limit allowed by the ASTM A240 standard, i.e., 2.05E8 Pa. The fatigue life analysis yields at least 1E6 cycles, so it can be concluded that the structure is in the high-cycle life category.
Milan, M A; McKee, J M
1976-01-01
Two experiments were conducted (1) to explore the application of token reinforcement procedures in a maximum security correctional institution for adult male felons and (2) to determine to what extent the reinforcement procedures disrupted the day-to-day lives of inmate participants. In Experiment 1, an expanded reversal design revealed that the combination of praise and token reinforcement was more effective than the combinations of praise and noncontingent token award or direct commands on four common institutional activities. The latter two combinations were not found to be any more effective than praise alone. Experiment 2, which also employed a reversal design, indicated that the high levels of performance observed during the token reinforcement phases of Experiment 1 could be attained without subjecting participants to undue hardship in the form of increased deprivation of either social intercourse or the opportunity to engage in recreational and entertainment activities. Client safeguards are discussed in detail. PMID:977516
Pliocene shorelines and the deformation of passive margins.
NASA Astrophysics Data System (ADS)
Rovere, Alessio; Raymo, Maureen; Austermann, Jacqueline; Mitrovica, Jerry; Janßen, Alexander
2016-04-01
Characteristic geomorphology described from three Pliocene scarps in Rovere et al. [2014] was used to guide a global search for additional Pliocene age scarps that could be used to document former Pliocene shoreline locations. Each of the Rovere et al. [2014] paleo-shorelines was measured at the scarp toe abutting a flat coastal plain. In this study, nine additional such scarp-toe paleo-shorelines were identified. Each of these scarps has been independently dated to the Plio-Pleistocene; however, they were never unified by a single formation mechanism. Even when corrected for Glacial Isostatic Adjustment post-depositional effects, Post-Pliocene deformation of the inferred shorelines precludes a direct assessment of maximum Pliocene sea level height at the scarp toes. However, careful interpretation of the processes at the inferred paleo-shoreline suggests specific amplitudes of dynamic topography at each location, which could lead to a corrected maximum sea level height and provide a target dataset with which to compare dynamic topography model output.
Effect of Background Pressure on the Plasma Oscillation Characteristics of the HiVHAc Hall Thruster
NASA Technical Reports Server (NTRS)
Huang, Wensheng; Kamhawi, Hani; Lobbia, Robert B.; Brown, Daniel L.
2014-01-01
During a component compatibility test of the NASA HiVHAc Hall thruster, a high-speed camera and a set of high-speed Langmuir probes were implemented to study the effect of varying facility background pressure on thruster operation. The results show a rise in the oscillation frequency of the breathing mode with rising background pressure, which is hypothesized to be due to a shortening acceleration-ionization zone. An attempt is made to apply a simplified ingestion model to the data. The combined results are used to estimate the maximum acceptable background pressure for performance and wear testing.
Long-term variations in the gamma-ray background on SMM
NASA Technical Reports Server (NTRS)
Kurfess, J. D.; Share, G. H.; Kinzer, R. L.; Johnson, W. N.; Adams, J. H., Jr.
1989-01-01
Long-term temporal variations in the various components of the background radiation detected by the gamma-ray spectrometer on the Solar Maximum Mission are presented. The SMM gamma-ray spectrometer was launched in February, 1980 and continues to operate normally. The extended period of mission operations has provided a large data base in which it is possible to investigate a variety of environmental and instrumental background effects. In particular, several effects associated with orbital precession are introduced and discussed.
Survey of Quantitative Research Metrics to Assess Pilot Performance in Upset Recovery
NASA Technical Reports Server (NTRS)
Le Vie, Lisa R.
2016-01-01
Accidents attributable to in-flight loss of control are the primary cause for fatal commercial jet accidents worldwide. The National Aeronautics and Space Administration (NASA) conducted a literature review to determine and identify the quantitative standards for assessing upset recovery performance. This review contains current recovery procedures for both military and commercial aviation and includes the metrics researchers use to assess aircraft recovery performance. Metrics include time to first input, recognition time and recovery time and whether that input was correct or incorrect. Other metrics included are: the state of the autopilot and autothrottle, control wheel/sidestick movement resulting in pitch and roll, and inputs to the throttle and rudder. In addition, airplane state measures, such as roll reversals, altitude loss/gain, maximum vertical speed, maximum/minimum air speed, maximum bank angle and maximum g loading are reviewed as well.
Yamaguchi, Shotaro; Wagatsuma, Kei; Miwa, Kenta; Ishii, Kenji; Inoue, Kazumasa; Fukushi, Masahiro
2018-03-01
The Bayesian penalized-likelihood reconstruction algorithm (BPL), Q.Clear, uses relative difference penalty as a regularization function to control image noise and the degree of edge-preservation in PET images. The present study aimed to determine the effects of suppression on edge artifacts due to point-spread-function (PSF) correction using a Q.Clear. Spheres of a cylindrical phantom contained a background of 5.3 kBq/mL of [ 18 F]FDG and sphere-to-background ratios (SBR) of 16, 8, 4 and 2. The background also contained water and spheres containing 21.2 kBq/mL of [ 18 F]FDG as non-background. All data were acquired using a Discovery PET/CT 710 and were reconstructed using three-dimensional ordered-subset expectation maximization with time-of-flight (TOF) and PSF correction (3D-OSEM), and Q.Clear with TOF (BPL). We investigated β-values of 200-800 using BPL. The PET images were analyzed using visual assessment and profile curves, edge variability and contrast recovery coefficients were measured. The 38- and 27-mm spheres were surrounded by higher radioactivity concentration when reconstructed with 3D-OSEM as opposed to BPL, which suppressed edge artifacts. Images of 10-mm spheres had sharper overshoot at high SBR and non-background when reconstructed with BPL. Although contrast recovery coefficients of 10-mm spheres in BPL decreased as a function of increasing β, higher penalty parameter decreased the overshoot. BPL is a feasible method for the suppression of edge artifacts of PSF correction, although this depends on SBR and sphere size. Overshoot associated with BPL caused overestimation in small spheres at high SBR. Higher penalty parameter in BPL can suppress overshoot more effectively. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yao, Rutao; Ma, Tianyu; Shao, Yiping
2008-08-01
This work is part of a feasibility study to develop SPECT imaging capability on a lutetium oxyorthosilicate (LSO) based animal PET system. The SPECT acquisition was enabled by inserting a collimator assembly inside the detector ring and acquiring data in singles mode. The same LSO detectors were used for both PET and SPECT imaging. The intrinsic radioactivity of 176Lu in the LSO crystals, however, contaminates the SPECT data, and can generate image artifacts and introduce quantification error. The objectives of this study were to evaluate the effectiveness of an LSO background subtraction method, and to estimate the minimal detectable target activity (MDTA) of the image object for SPECT imaging. For LSO background correction, the LSO contribution in an image study was estimated based on a pre-measured long LSO background scan and subtracted prior to the image reconstruction. The MDTA was estimated in two ways. The empirical MDTA (eMDTA) was estimated by screening the tomographic images at different activity levels. The calculated MDTA (cMDTA) was estimated using a formula based on applying a modified Currie equation to an average projection dataset. Two simulated and two experimental phantoms with different object activity distributions and levels were used in this study. The results showed that the LSO background adds concentric ring artifacts to the reconstructed image, and that the simple subtraction method can effectively remove these artifacts; the effect of the correction was more visible when the object activity level was near or above the eMDTA. For the four phantoms studied, the cMDTA was consistently about five times the corresponding eMDTA. In summary, we implemented a simple LSO background subtraction method and demonstrated its effectiveness. The projection-based calculation formula yielded MDTA results that closely correlate with those obtained empirically and may have predictive value for imaging applications.
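The two steps can be sketched as follows. The subtraction scale and the detection-limit formula shown are assumptions: the code uses the textbook Currie detection limit, whereas the paper applies a modified Currie equation to an average projection dataset.

```python
def subtract_lso_background(projection, lso_reference, scale):
    """Subtract a pre-measured (long-scan) LSO background, scaled to the
    image study's acquisition time, before reconstruction. Clamps at zero
    so the corrected projection stays non-negative."""
    return [max(p - scale * b, 0.0) for p, b in zip(projection, lso_reference)]

def currie_detection_limit(background_counts):
    """Textbook Currie detection limit (counts) for a signal measured
    above a known background, at the conventional 95% confidence levels."""
    return 2.71 + 4.65 * background_counts ** 0.5
```

The detection limit grows with the square root of the background, which is why the intrinsic 176Lu activity directly raises the minimal detectable target activity.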
RSA and its Correctness through Modular Arithmetic
NASA Astrophysics Data System (ADS)
Meelu, Punita; Malik, Sitender
2010-11-01
To secure business applications, the business sector uses Public Key Cryptographic Systems (PKCS). RSA belongs to this category of systems and supports both encryption and authentication. This paper gives an introduction to RSA through its encryption and decryption schemes and the mathematical background, which includes theorems for combining modular equations and the correctness of RSA. In short, this paper explains some of the mathematical concepts that RSA is based on and then provides a complete proof that RSA works correctly. The correctness of RSA can be proved for the combined process of encryption and decryption using the Chinese Remainder Theorem (CRT) and Euler's theorem. However, there is no mathematical proof that RSA is secure; everyone takes that on trust.
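The encryption/decryption round trip whose correctness the proof establishes can be demonstrated with textbook RSA; a toy sketch with small primes (real RSA uses very large primes and padding schemes):

```python
# Toy textbook RSA: key generation, encryption, decryption.
p, q = 61, 53
n = p * q                    # public modulus (3233)
phi = (p - 1) * (q - 1)      # Euler's totient of n (3120)
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent: e*d ≡ 1 (mod phi)

def encrypt(m):              # c = m^e mod n
    return pow(m, e, n)

def decrypt(c):              # m = c^d mod n
    return pow(c, d, n)

m = 65
assert decrypt(encrypt(m)) == m  # correctness: m^(e*d) ≡ m (mod n)
```

The final assertion is exactly the statement proved via Euler's theorem (or the CRT when gcd(m, n) > 1): raising to the e-th and then the d-th power modulo n recovers the original message.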
Delegation in Correctional Nursing Practice.
Tompkins, Frances
2016-07-01
Correctional nurses face daily challenges as a result of their work environment. Common challenges include availability of resources for appropriate care delivery, negotiating with custody staff for access to patients, adherence to scope of practice standards, and working with a varied staffing mix. Professional correctional nurses must consider the educational backgrounds and competency of other nurses and assistive personnel in planning for care delivery. Budgetary constraints and varied staff preparation can be a challenge for the professional nurse. Adequate care planning requires understanding the educational level and competency of licensed and unlicensed staff. Delegation is the process of assessing patient needs and transferring responsibility for care to appropriately educated and competent staff. Correctional nurses can benefit from increased knowledge about delegation. © The Author(s) 2016.
Silva-Rodríguez, Jesús; Aguiar, Pablo; Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor; Cortés, Julia; Garrido, Miguel; Pombar, Miguel; Ruibal, Alvaro
2014-05-01
Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) values for this effect in clinical routine. One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI-based method. In addition, the 50 patients with the highest extravasated doses were also evaluated using a threshold-based method. Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate for the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%), with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify the fraction of patients that might be corrected for the paravenous injection effect. The authors propose the use of a manual ROI-based method for estimating the effectively administered FDG dose and then correcting SUV quantification in those patients fulfilling the proposed criterion.
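Since SUV is inversely proportional to the assumed injected dose, an estimated extravasated dose implies a simple rescaling. The sketch below is an illustrative assumption of that rescaling, not the authors' exact formula, and all names are hypothetical:

```python
def corrected_suv(suv_measured, injected_dose_mbq, extravasated_dose_mbq):
    """Rescale a measured SUV by the effectively administered dose.

    SUV scales with 1 / injected dose, so subtracting the activity
    left at the injection site raises the corrected SUV.
    """
    effective_dose = injected_dose_mbq - extravasated_dose_mbq
    return suv_measured * injected_dose_mbq / effective_dose

# A 22% extravasation (the maximum reported above) on a nominal SUV of 5.0:
suv = corrected_suv(5.0, 100.0, 22.0)  # about 6.4
```

Even a few percent of the dose left at the injection site biases SUV low, which is why quantification protocols otherwise reject such studies.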
Study of boundary-layer transition using transonic cone Preston tube data
NASA Technical Reports Server (NTRS)
Reed, T. D.; Abu-Mostafa, A.
1982-01-01
Laminar boundary-layer Preston tube data on a sharp-nosed, ten-degree cone obtained in the Ames 11 ft TWT and in flight tests are analyzed. During analysis of the laminar boundary-layer data, errors were discovered in both the wind tunnel and the flight data. A correction procedure for errors in the flight data is recommended which forces the flight data to exhibit some of the orderly characteristics of the wind tunnel data. From corrected wind tunnel data, a correlation is developed between Preston tube pressures and the corresponding values of theoretical laminar skin friction. Because of the uncertainty in correcting the flight data, a correlation for the unmodified data is developed and, in addition, three other correlations are developed based on different correction procedures. Each of these correlations is used in conjunction with the wind tunnel correlation to define effective freestream unit Reynolds numbers for the 11 ft TWT over a Mach number range of 0.30 to 0.95. The maximum effective Reynolds numbers are approximately 6.5% higher than the normal values. These maximum values occur between freestream Mach numbers of 0.60 and 0.80. Smaller values are found outside this Mach number range. These results indicate that wind tunnel noise affects the average laminar skin friction much less than it affects boundary layer transition. Data on the onset, extent, and end of boundary layer transition are summarized. Application of a procedure for studying the relative effects of varying nose radius on a ten-degree cone at supercritical speeds indicates that increasing nose radius promotes boundary layer transition and separation of laminar boundary layers.
Ge, Nan; Chevalier, Stéphane; Hinebaugh, James; Yip, Ronnie; Lee, Jongmin; Antonacci, Patrick; Kotaka, Toshikazu; Tabuchi, Yuichiro; Bazylak, Aimy
2016-03-01
Synchrotron X-ray radiography, due to its high temporal and spatial resolutions, provides a valuable means for understanding the in operando water transport behaviour in polymer electrolyte membrane fuel cells. The purpose of this study is to address the specific artefact of imaging sample movement, which poses a significant challenge to synchrotron-based imaging for fuel cell diagnostics. Specifically, the impact of the micrometer-scale movement of the sample was determined, and a correction methodology was developed. At a photon energy level of 20 keV, a maximum movement of 7.5 µm resulted in a false water thickness of 0.93 cm (9% higher than the maximum amount of water that the experimental apparatus could physically contain). This artefact was corrected by image translations based on the relationship between the false water thickness value and the distance moved by the sample. The implementation of this correction method led to a significant reduction in false water thickness (to ∼0.04 cm). Furthermore, to account for inaccuracies in pixel intensities due to the scattering effect and higher harmonics, a calibration technique was introduced for the liquid water X-ray attenuation coefficient, which was found to be 0.657 ± 0.023 cm(-1) at 20 keV. The work presented in this paper provides valuable tools for artefact compensation and accuracy improvements for dynamic synchrotron X-ray imaging of fuel cells.
Measurement of the Forward-Backward Asymmetry in the Production of B+/- Mesons In pp Collisions
NASA Astrophysics Data System (ADS)
Hogan, Julie M.
We present a measurement of the forward-backward asymmetry in the production of B+ mesons, A_FB(B+), using B+ -> J/psi K+ decays in 10.4 inverse femtobarns of p p-bar collisions at sqrt(s) = 1.96 TeV collected by the D0 experiment during Run II of the Tevatron collider. A nonzero asymmetry would indicate a preference for a particular flavor, i.e., b quark or b-bar antiquark, to be produced in the direction of the proton beam. We extract A_FB(B+) from a maximum likelihood fit to the difference between the numbers of forward- and backward-produced B+ mesons, using a boosted decision tree to reduce background. Corrections are made for reconstruction asymmetries of the decay products. We measure an asymmetry consistent with zero: A_FB(B+) = [-0.24 +/- 0.41(stat) +/- 0.19(syst)]%. The standard model estimate from next-to-leading-order Monte Carlo is A_FB(B+) = [2.31 +/- 0.34(stat.) +/- 0.51(syst.)]%. There is a difference of approximately 3 standard deviations between this prediction and our result, which suggests that more rigorous determination of the standard model prediction is needed to interpret these results.
Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations
NASA Astrophysics Data System (ADS)
Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.
2017-09-01
This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
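The AIC/BIC model selection step described above can be sketched generically. The log-likelihoods and parameter counts below are hypothetical, not values from the paper:

```python
import math

def aic(log_likelihood, k):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian information criterion: k ln n - 2 ln L."""
    return k * math.log(n) - 2 * log_likelihood

# Two hypothetical floorplan models fitted to n = 12 sparse observations
# (room areas, footprint constraints), each with a maximized log-likelihood
# ln L and parameter count k:
n = 12
candidates = {"simple": (-30.0, 4), "complex": (-28.5, 7)}
best = min(candidates, key=lambda name: bic(*candidates[name], n))
# BIC penalizes the three extra parameters more than the likelihood gain,
# so the simpler floorplan model is selected here.
```

With few observations, as in the sparse indoor setting, the BIC penalty k ln n weighs heavily against over-parameterized floorplan hypotheses.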
Lee, Sunmin; Zhai, Shumenghui; Zhang, Guo (Yolanda); Ma, Xiang S; Lu, Xiaoxiao; Tan, Yin; Siu, Philip; Seals, Brenda; Ma, Grace X
2015-01-01
BACKGROUND Hepatitis C virus (HCV) is a major cause of chronic liver disease and cancer. Vietnamese Americans are at high risk of HCV infection, with men having the highest US incidence of liver cancer. This study examines an intervention to improve HCV knowledge among Vietnamese Americans. STUDY Seven Vietnamese community-based organizations in Pennsylvania and New Jersey recruited a total of 306 Vietnamese participants from 2010 to 2011. RESULTS Average knowledge scores for pretest and posttest were 3.32 and 5.88, respectively (maximum 10). After adjusting for confounding variables, age and higher education were positively associated with higher pretest scores and having a physician who spoke English or Vietnamese was negatively associated with higher pretest scores. Additionally, after adjusting for confounding variables, household income, education, and having an HCV-infected family member significantly increased knowledge scores. CONCLUSIONS Promotion and development of HCV educational programs can increase HCV knowledge among race and ethnic groups, such as Vietnamese Americans. Giving timely information to at-risk groups provides the opportunity to correct misconceptions, decrease HCV risk behaviors, and encourage testing that might improve timely HCV diagnosis and treatment. PMID:26561280
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang Bizhu; Zhang Shuangnan; Lieu, Richard
2010-01-01
The spectral variation of the cosmic microwave background (CMB) as observed by WMAP was tested using foreground-reduced WMAP5 data, by producing subtraction maps at 1 deg. angular resolution between the two cosmological bands V and W, for masked sky areas that avoid the Galactic disk. The resulting V - W map revealed a non-acoustic signal over and above the WMAP5 pixel noise, with two main properties. First, it possesses quadrupole power at the ~1 µK level, which may be attributed to foreground residuals. Second, it also fluctuates at all values of l > 2, especially on the 1 deg. scale (200 ≲ l ≲ 300). The behavior is random and symmetrical about zero temperature with an rms of ~7 µK, or 10% of the maximum CMB anisotropy, which would require a 'cosmic conspiracy' among the foreground components if it is a consequence of their existence. Both anomalies must be properly diagnosed and corrected if 'precision' cosmology is the claim. The second anomaly is, however, more interesting because it opens the question of whether the CMB anisotropy genuinely represents primordial density seeds.
Flight Calibration of the LROC Narrow Angle Camera
NASA Astrophysics Data System (ADS)
Humm, D. C.; Tschimmel, M.; Brylow, S. M.; Mahanti, P.; Tran, T. N.; Braden, S. E.; Wiseman, S.; Danton, J.; Eliason, E. M.; Robinson, M. S.
2016-04-01
Characterization and calibration are vital for instrument commanding and image interpretation in remote sensing. The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) takes 500 Mpixel greyscale images of lunar scenes at 0.5 meters/pixel. It uses two nominally identical line scan cameras for a larger crosstrack field of view. Stray light, spatial crosstalk, and nonlinearity were characterized using flight images of the Earth and the lunar limb. These are important for imaging shadowed craters, studying ˜1 meter size objects, and photometry respectively. Background, nonlinearity, and flatfield corrections have been implemented in the calibration pipeline. An eight-column pattern in the background is corrected. The detector is linear for DN = 600--2000 but a signal-dependent additive correction is required and applied for DN<600. A predictive model of detector temperature and dark level was developed to command dark level offset. This avoids images with a cutoff at DN=0 and minimizes quantization error in companding. Absolute radiometric calibration is derived from comparison of NAC images with ground-based images taken with the Robotic Lunar Observatory (ROLO) at much lower spatial resolution but with the same photometric angles.
NASA Astrophysics Data System (ADS)
Maelger, J.; Reinosa, U.; Serreau, J.
2018-04-01
We extend a previous investigation [U. Reinosa et al., Phys. Rev. D 92, 025021 (2015), 10.1103/PhysRevD.92.025021] of the QCD phase diagram with heavy quarks in the context of background field methods by including the two-loop corrections to the background field effective potential. The nonperturbative dynamics in the pure-gauge sector is modeled by a phenomenological gluon mass term in the Landau-DeWitt gauge-fixed action, which results in an improved perturbative expansion. We investigate the phase diagram at nonzero temperature and (real or imaginary) chemical potential. Two-loop corrections yield an improved agreement with lattice data as compared to the leading-order results. We also compare with the results of nonperturbative continuum approaches. We further study the equation of state as well as the thermodynamic stability of the system at two-loop order. Finally, using simple thermodynamic arguments, we show that the behavior of the Polyakov loops as functions of the chemical potential complies with their interpretation in terms of quark and antiquark free energies.
ForCent model development and testing using the Enriched Background Isotope Study experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parton, W.J.; Hanson, P. J.; Swanston, C.
The ForCent forest ecosystem model was developed by making major revisions to the DayCent model, including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool 14C signature (Δ14C) data from the Enriched Background Isotope Study 14C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the 14C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ14C data, and with soil respiration Δ14C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study 14C experimental treatments on soil respiration Δ14C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.
Improving Precision, Maintaining Accuracy, and Reducing Acquisition Time for Trace Elements in EPMA
NASA Astrophysics Data System (ADS)
Donovan, J.; Singer, J.; Armstrong, J. T.
2016-12-01
Trace element precision in electron probe microanalysis (EPMA) is limited by intrinsic random variation in the x-ray continuum. Traditionally we characterize background intensity by measuring on either side of the emission line and interpolating the intensity underneath the peak to obtain the net intensity. Alternatively, we can measure the background intensity at the on-peak spectrometer position using a number of standard materials that do not contain the element of interest. This so-called mean atomic number (MAN) background calibration (Donovan et al., 2016) uses a set of standard measurements, covering an appropriate range of average atomic number, to iteratively estimate the continuum intensity for the unknown composition (and hence average atomic number). We will demonstrate that, at least for materials with a relatively simple matrix such as SiO2, TiO2, ZrSiO4, etc., where one may obtain a matrix-matched standard for use in the so-called "blank correction", we can obtain trace element accuracy comparable to traditional off-peak methods, and with improved precision, in about half the time. Donovan, Singer and Armstrong, "A New EPMA Method for Fast Trace Element Analysis in Simple Matrices", American Mineralogist, v101, p1839-1853, 2016. Figure 1. Uranium concentration line profiles from quantitative x-ray maps (20 keV, 100 nA, 5 µm beam size and 4000 msec per pixel), for both off-peak and MAN background methods without (a), and with (b), the blank correction applied. Precision is significantly improved compared with traditional off-peak measurements while, in this case, the blank correction provides a small but discernible improvement in accuracy.
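The core of the MAN calibration, fitting continuum intensity against the mean atomic number of analyte-free standards and then predicting the on-peak background for an unknown, can be sketched as below. The standards, Zbar values, and intensities are made-up numbers for illustration:

```python
import numpy as np

# Hypothetical analyte-free standards: mean atomic number (Zbar) and
# measured on-peak continuum intensity (cps/nA) for each.
zbar = np.array([10.8, 12.1, 14.9, 16.6, 19.4, 23.0])
continuum = np.array([1.9, 2.2, 2.8, 3.2, 3.9, 4.8])

# MAN calibration: low-order polynomial fit of continuum vs. Zbar.
coeffs = np.polyfit(zbar, continuum, 2)

def man_background(zbar_unknown):
    """Predicted continuum (background) intensity at the on-peak
    spectrometer position for a composition with the given mean
    atomic number."""
    return np.polyval(coeffs, zbar_unknown)
```

In practice the unknown's mean atomic number is itself iterated with the matrix correction, since Zbar depends on the composition being solved for.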
Stochastic Background from Coalescences of Neutron Star-Neutron Star Binaries
NASA Astrophysics Data System (ADS)
Regimbau, T.; de Freitas Pacheco, J. A.
2006-05-01
In this work, numerical simulations were used to investigate the gravitational stochastic background produced by coalescences of double neutron star systems occurring up to z~5. The cosmic coalescence rate was derived from Monte Carlo methods using the probability distributions for massive binaries to form and for a coalescence to occur in a given redshift. A truly continuous background is produced by events located only beyond the critical redshift z*=0.23. Events occurring in the redshift interval 0.027
DOE Office of Scientific and Technical Information (OSTI.GOV)
T. M. Fitzmaurice
2001-04-01
The purpose of this Closure Report (CR) is to provide documentation of the completed corrective action at the Test Cell A Leachfield System and to provide data confirming the corrective action. The Test Cell A Leachfield System is identified in the Federal Facility Agreement and Consent Order (FFACO) of 1996 as Corrective Action Unit (CAU) 261. Remediation of CAU 261 is required under the FFACO (1996). CAU 261 is located in Area 25 of the Nevada Test Site (NTS), which is approximately 140 kilometers (87 miles) northwest of Las Vegas, Nevada (Figure 1). CAU 261 consists of two Corrective Action Sites (CASs): CAS 25-05-01, Leachfield; and CAS 25-05-07, Acid Waste Leach Pit (AWLP) (Figures 2 and 3). Test Cell A was operated during the 1960s and 1970s to support the Nuclear Rocket Development Station. Various operations within Building 3124 at Test Cell A resulted in liquid waste releases to the Leachfield and the AWLP. The following existing site conditions were reported in the Corrective Action Decision Document (CADD) (U.S. Department of Energy, Nevada Operations Office [DOE/NV], 1999): Soil in the leachfield was found to exceed the Nevada Division of Environmental Protection (NDEP) Action Level for petroleum hydrocarbons, the U.S. Environmental Protection Agency (EPA) preliminary remediation goals for semivolatile organic compounds, and background concentrations for strontium-90; Soil below the sewer pipe and approximately 4.5 meters (m) (15 feet [ft]) downstream of the initial outfall was found to exceed background concentrations for cesium-137 and strontium-90; Sludge in the leachfield septic tank was found to exceed the NDEP Action Level for petroleum hydrocarbons and to contain americium-241, cesium-137, uranium-234, uranium-238, potassium-40, and strontium-90; No constituents of concern (COCs) were identified at the AWLP.
The NDEP-approved CADD (DOE/NV, 1999) recommended Corrective Action Alternative 2, ''Closure of the Septic Tank and Distribution Box, Partial Excavation, and Administrative Controls.'' The corrective action was performed following the NDEP-approved Corrective Action Plan (CAP) (DOE/NV, 2000).
Evaluation of an improved fiberoptics luminescence skin monitor with background correction.
Vo-Dinh, T
1987-06-01
In this work, an improved version of a fiberoptics luminescence monitor, the prototype luminoscope II, is evaluated for in situ quantitative measurements. The instrument was developed to detect traces of luminescing organic contaminants on skin. An electronic background-nulling system was designed and incorporated into the instrument to compensate for various skin background emissions. A dose-response curve for a coal liquid spotted on mouse skin was established. The results illustrated the usefulness of the instrument for in vivo detection of organic materials on laboratory mouse skin.
Regional and local background ozone in Houston during Texas Air Quality Study 2006
NASA Astrophysics Data System (ADS)
Langford, A. O.; Senff, C. J.; Banta, R. M.; Hardesty, R. M.; Alvarez, R. J.; Sandberg, Scott P.; Darby, Lisa S.
2009-04-01
Principal Component Analysis (PCA) is used to isolate the common modes of behavior in the daily maximum 8-h average ozone mixing ratios measured at 30 Continuous Ambient Monitoring Stations in the Houston-Galveston-Brazoria area during the Second Texas Air Quality Study field intensive (1 August to 15 October 2006). Three principal components suffice to explain 93% of the total variance. Nearly 84% is explained by the first component, which is attributed to changes in the "regional background" determined primarily by the large-scale winds. The second component (6%) is attributed to changes in the "local background," that is, ozone photochemically produced in the Houston area and spatially and temporally averaged by local circulations. Finally, the third component (3.5%) is attributed to short-lived plumes containing high ozone originating from industrial areas along Galveston Bay and the Houston Ship Channel. Regional background ozone concentrations derived using the first component compare well with mean ozone concentrations measured above the Gulf of Mexico by the tunable profiler for aerosols and ozone lidar aboard the NOAA Twin Otter. The PCA regional background values also agree well with background values derived using the lowest daily 8-h maximum method of Nielsen-Gammon et al. (2005), provided the Galveston Airport data (C34) are omitted from that analysis. The differences found when Galveston is included are caused by the sea breeze, which depresses ozone at Galveston relative to sites further inland. PCA removes the effects of this and other local circulations to obtain a regional background value representative of the greater Houston area.
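A PCA decomposition of a days-by-stations ozone matrix of this kind can be sketched with an SVD. The data below are synthetic, with one shared "regional" mode plus station-level noise, so the variance fractions are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for daily maximum 8-h ozone (ppb) at 30 stations
# over 76 days: a common regional signal plus local variability.
days, stations = 76, 30
regional = 40 + 15 * rng.standard_normal(days)       # shared mode
local = 5 * rng.standard_normal((days, stations))    # station noise
ozone = regional[:, None] + local

# PCA via SVD on the mean-centred data matrix.
x = ozone - ozone.mean(axis=0)
u, s, vt = np.linalg.svd(x, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance fraction per component
# The leading component captures most of the shared variance, analogous
# to the dominant "regional background" mode described above.
```

Projecting each day onto the first component then yields a single regional background time series, with the remaining components describing local production and short-lived plumes.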
Yao, Bo; Zhou, Ling-Xi; Liu, Zhao; Zhang, Gen; Xia, Ling-Jun
2014-07-01
An in-situ GC-ECD monitoring system was established at the Shangdianzi GAW regional background station (SDZ) for a 2-year atmospheric methyl chloroform (CH3CCl3) measurement experiment. A robust baseline-extraction filter was applied to the CH3CCl3 time series to separate background and pollution data. The yearly averaged background mixing ratios of atmospheric CH3CCl3 were (9.03 +/- 0.53) x 10(-12) mol x mol(-1) in 2009 and (7.73 +/- 0.47) x 10(-12) in 2010, and the percentages of background data in the whole dataset were 61.1% in 2009 and 60.4% in 2010, respectively. The yearly background CH3CCl3 mixing ratios at SDZ were consistent with the northern hemisphere background levels observed at the Mace Head and Trinidad Head stations, but lower than the results observed at sites in southern China and some Chinese cities from 2001 to 2005. During the study period, background mixing ratios exhibited a decreasing rate of 1.39 x 10(-12) a(-1). The wind direction with the maximum CH3CCl3 mixing ratio was the southwest sector and that with the minimum was the northeast sector. The differences between the maximum and minimum average mixing ratios in the 16 wind directions were 0.77 x 10(-12) (2009) and 0.52 x 10(-12) (2010). In the 16 wind directions, the averaged mixing ratio of CH3CCl3 in 2010 was lower than that in 2009 by 1.03 x 10(-12) to 1.68 x 10(-12).
Concentrating Solar Power Projects - Holaniku at Keahole Point
Technology: Parabolic trough. Status: Currently Non-Operational. Start Year: 2009. Country: United States.
Seasonal and diurnal variations of ozone at a high-altitude mountain baseline station in East Asia
NASA Astrophysics Data System (ADS)
Ou Yang, Chang-Feng; Lin, Neng-Huei; Sheu, Guey-Rong; Lee, Chung-Te; Wang, Jia-Lin
2012-01-01
Continuous measurements of tropospheric ozone were conducted at the Lulin Atmospheric Background Station (LABS) at an altitude of 2862 m from April 2006 to the end of 2009. Distinct seasonal variations in the ozone concentration were observed at the LABS, with a springtime maximum and a summertime minimum. Based on a backward trajectory analysis, CO data, and ozonesondes, the springtime maximum was most likely caused by the long-range transport of air masses from Southeast Asia, where biomass burning was intense in spring. In contrast, a greater Pacific influence contributed to the summertime minimum. In addition to seasonal variations, a distinct diurnal pattern was also observed at the LABS, with a daytime minimum and a nighttime maximum. The daytime ozone minimum was presumably caused by sinks of dry deposition and NO titration during the up-slope transport of surface air. The higher nighttime values, however, could be the result of air subsidence at night bringing ozone aloft to the LABS. After filtering out the daytime data to remove possible local surface contributions, the average background ozone value for the period of 2006-2009 was approximately 36.6 ppb, increased from 32.3 ppb prior to data filtering, without any changes in the seasonal pattern. By applying HYSPLIT4 model analysis, the origins of the air masses contributing to the background ozone observed at the LABS were investigated.
NASA Astrophysics Data System (ADS)
Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi
2016-06-01
A growing body of research on statistical applications for characterization of atmospheric aerosol Fourier transform infrared (FT-IR) samples collected on polytetrafluoroethylene (PTFE) filters (e.g., Russell et al., 2011; Ruthenburg et al., 2014) and a rising interest in analyzing FT-IR samples collected by air quality monitoring networks call for an automated PTFE baseline correction solution. The existing polynomial technique (Takahama et al., 2013) is not scalable to a project with a large number of aerosol samples because it contains many parameters and requires expert intervention. Therefore, the question of how to develop an automated method for baseline correcting hundreds to thousands of ambient aerosol spectra given the variability in both environmental mixture composition and PTFE baselines remains. This study approaches the question by detailing the statistical protocol, which allows for the precise definition of analyte and background subregions, applies nonparametric smoothing splines to reproduce sample-specific PTFE variations, and integrates performance metrics from atmospheric aerosol and blank samples alike in the smoothing parameter selection. Referencing 794 atmospheric aerosol samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites collected during 2011, we start by identifying key FT-IR signal characteristics, such as non-negative absorbance or analyte segment transformation, to capture sample-specific transitions between background and analyte. Guided by the qualitative properties of the PTFE background, the smoothing-spline interpolation learns the baseline structure in the background region in order to predict the baseline structure in the analyte region.
We then validate the model by comparing smoothing splines baseline-corrected spectra with uncorrected and polynomial baseline (PB)-corrected equivalents via three statistical applications: (1) clustering analysis, (2) functional group quantification, and (3) thermal optical reflectance (TOR) organic carbon (OC) and elemental carbon (EC) predictions. The discrepancy rate for a four-cluster solution is 10 %. For all functional groups but carboxylic COH the discrepancy is ≤ 10 %. Performance metrics obtained from TOR OC and EC predictions (R2 ≥ 0.94, bias ≤ 0.01 µg m-3, and error ≤ 0.04 µg m-3) are on a par with those obtained from uncorrected and PB-corrected spectra. The proposed protocol leads to visually and analytically similar estimates as those generated by the polynomial method. More importantly, the automated solution allows us and future users to evaluate its analytical reproducibility while minimizing reducible user bias. We anticipate the protocol will enable FT-IR researchers and data analysts to quickly and reliably analyze a large amount of data and connect them to a variety of available statistical learning methods to be applied to analyte absorbances isolated in atmospheric aerosol samples.
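The core idea of the protocol, fitting a smoothing spline to background-only subregions and predicting the PTFE baseline under the analyte region, can be sketched as follows. The synthetic spectrum, mask, and smoothing parameter here are illustrative assumptions, not the paper's tuned settings:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_baseline_correct(wavenumber, absorbance, background_mask, s=1.0):
    """Fit a smoothing spline to background-only points and subtract the
    predicted baseline everywhere, analyte region included.
    `background_mask` marks spectral points assumed free of analyte
    absorption; `s` stands in for the smoothing parameter the protocol
    selects from aerosol and blank performance metrics."""
    spline = UnivariateSpline(wavenumber[background_mask],
                              absorbance[background_mask], s=s)
    return absorbance - spline(wavenumber)

# Synthetic spectrum: curved PTFE-like baseline plus one analyte peak
w = np.linspace(1000, 4000, 600)
baseline = 1e-8 * (w - 2500) ** 2 + 0.05
peak = 0.3 * np.exp(-((w - 2900) / 60) ** 2)
spec = baseline + peak
bg_mask = np.abs(w - 2900) > 250  # exclude the analyte region from the fit
corrected = spline_baseline_correct(w, spec, bg_mask, s=0.001)
print(round(corrected.max(), 2))  # recovered peak height, close to 0.3
```

The spline is learned only where the baseline is observable and then evaluated across the analyte gap, which is exactly the interpolation step the abstract describes.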
Initiation of a Relativistic Magnetron
NASA Astrophysics Data System (ADS)
Kaup, D. J.
2003-10-01
We report on recent results in our studies of relativistic magnetrons. Experimentally, these devices have proven to be very difficult to operate, typically cutting off too quickly after they are initialized and therefore not delivering the power levels expected [1]. Our analysis is based on our model of a crossed-field device, consisting only of its two dominant modes, a DC background and an RF oscillating mode [2]. This approach has produced generally quantitatively correct values for the operating regime and major features of nonrelativistic devices. We have performed a fully electromagnetic, relativistic analysis of a magnetron of the A6 cylindrical configuration. We will show that when the device should generate maximum power, it enters a regime where the DC background could become potentially unstable. In particular, when a nonrelativistic planar device enters the saturation regime, the DC electron density distribution could become unstable if the vertical DC velocity would ever become equal to the magnitude of the vertical RF velocity [3]. We find that during the initiation phase, for the highest power levels of our model of the A6, near the cathode, the DC vertical velocity does become just less than, and definitely on the order of, the magnitude of the vertical RF velocity. Consequently, any localized surge in the currents near the cathode could easily destroy the smooth upward flow of the electrons, drive the DC background unstable, and thereby shut down the operation of the device. [1] Long-pulse relativistic magnetron experiments, M.R. Lopez, R.M. Gilgenbach, Y.Y. Lau, D.W. Jordan, M.D. Johnston, M.C. Jones, V.B. Neculaes, T.A. Spencer, J.W. Luginsland, M.D. Haworth, R.W. Lemke, D. Price, and L. Ludeking, Proc. of SPIE Aerosense 4720, 10-17, (2002). [2] Theoretical modeling of crossed-field electron vacuum devices, D.J. Kaup, Phys. of Plasmas 8, 2473-80 (2001). [3] Initiation and Stationary Operating States in a Crossed-Field Vacuum Electron Device, D.J. Kaup, Proc. of SPIE Aerosense 4720, 67-74, (2002).
Influence of slice overlap on positron emission tomography image quality
NASA Astrophysics Data System (ADS)
McKeown, Clare; Gillen, Gerry; Dempsey, Mary Frances; Findlay, Caroline
2016-02-01
PET scans use overlapping acquisition beds to correct for reduced sensitivity at bed edges. The optimum overlap size for the General Electric (GE) Discovery 690 has not been established. This study assesses how image quality is affected by slice overlap; the efficacy of 23% overlaps (recommended by GE) and 49% overlaps (the maximum possible overlap) was specifically assessed. European Association of Nuclear Medicine (EANM) guidelines for calculating minimum injected activities based on overlap size were also reviewed. A uniform flood phantom was used to assess noise (coefficient of variation, COV) and voxel accuracy (activity concentrations, Bq ml-1). A NEMA (National Electrical Manufacturers Association) body phantom with hot/cold spheres in a background activity was used to assess contrast recovery coefficients (CRCs) and signal-to-noise ratios (SNR). Different overlap sizes and sphere-to-background ratios were assessed. COVs for 49% and 23% overlaps were 9% and 13% respectively; this increased noise was difficult to visualise on the 23% overlap images. Mean voxel activity concentrations were not affected by overlap size. No clinically significant differences in CRCs were observed. However, visibility and SNR of small, low-contrast spheres (⩽13 mm diameter, 2:1 sphere-to-background ratio) may be affected by overlap size in low-count studies if they are located in the overlap area. There was minimal detectable influence on image quality in terms of noise, mean activity concentrations or mean CRCs when comparing 23% overlap with 49% overlap. Detectability of small, low-contrast lesions may be affected in low-count studies; however, this is a worst-case scenario. The marginal benefits of increasing overlap from 23% to 49% are likely to be offset by increased patient scan times. A 23% overlap is therefore appropriate for clinical use.
An amendment to EANM guidelines for calculating injected activities is also proposed which better reflects the effect overlap size has on image noise.
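The noise metric used in the uniform-phantom test above is a straightforward computation; here is a minimal sketch, with the ROI values and their standard deviations chosen only to mimic the reported 9% and 13% COVs:

```python
import numpy as np

def coefficient_of_variation(roi_voxels):
    """Image-noise metric for a uniform-phantom region of interest:
    COV (%) = 100 * standard deviation / mean of the ROI voxel values."""
    roi = np.asarray(roi_voxels, dtype=float)
    return 100.0 * roi.std(ddof=1) / roi.mean()

# Hypothetical uniform-phantom ROIs for the two overlap settings
rng = np.random.default_rng(1)
roi_49 = rng.normal(10_000, 900, 5000)   # ~9 % COV (49 % overlap)
roi_23 = rng.normal(10_000, 1300, 5000)  # ~13 % COV (23 % overlap)
print(round(coefficient_of_variation(roi_49), 1),
      round(coefficient_of_variation(roi_23), 1))
```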
Barrenechea, Gabriel R; Burman, Erik; Karakatsani, Fotini
2017-01-01
For the case of approximation of convection-diffusion equations using piecewise affine continuous finite elements a new edge-based nonlinear diffusion operator is proposed that makes the scheme satisfy a discrete maximum principle. The diffusion operator is shown to be Lipschitz continuous and linearity preserving. Using these properties we provide a full stability and error analysis, which, in the diffusion dominated regime, shows existence, uniqueness and optimal convergence. Then the algebraic flux correction method is recalled and we show that the present method can be interpreted as an algebraic flux correction method for a particular definition of the flux limiters. The performance of the method is illustrated on some numerical test cases in two space dimensions.
Self-Injurious Behavior in Correctional and Noncorrectional Psychiatric Patients.
ERIC Educational Resources Information Center
Hillbrand, Marc
1993-01-01
Examined prevalence and selected correlates of self-injurious behavior (SIB) among inmates (n=23) referred for treatment to maximum-security forensic hospital, non-SIB inmates (n=23), and noncorrections SIB patients (n=30). Found two distinct patterns of SIB: pattern consistent with conceptualization of SIB as expression of generalized behavioral…
ERIC Educational Resources Information Center
DeHart, Dana
2010-01-01
This report describes process and outcome evaluation of an innovative program based in a women's maximum-security correctional facility. Methodology included review of program materials, unobtrusive observation of group process, participant evaluation forms, focus groups, and individual interviews with current and former program participants.…
Environmental Protection Agency rules stipulate that corrective action be taken for drinking water distribution systems that exceed the maximum contaminant level (MCL) for total trihalomethanes (TTHMs) of 80 μg/L. Real-time, or even periodic, monitoring of drinking water i...
14 CFR 125.407 - Maintenance log: Airplanes.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Maintenance log: Airplanes. 125.407 Section... OPERATIONS: AIRPLANES HAVING A SEATING CAPACITY OF 20 OR MORE PASSENGERS OR A MAXIMUM PAYLOAD CAPACITY OF 6... Maintenance log: Airplanes. (a) Each person who takes corrective action or defers action concerning a reported...
14 CFR 125.407 - Maintenance log: Airplanes.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Maintenance log: Airplanes. 125.407 Section... OPERATIONS: AIRPLANES HAVING A SEATING CAPACITY OF 20 OR MORE PASSENGERS OR A MAXIMUM PAYLOAD CAPACITY OF 6... Maintenance log: Airplanes. (a) Each person who takes corrective action or defers action concerning a reported...
14 CFR 125.407 - Maintenance log: Airplanes.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Maintenance log: Airplanes. 125.407 Section... OPERATIONS: AIRPLANES HAVING A SEATING CAPACITY OF 20 OR MORE PASSENGERS OR A MAXIMUM PAYLOAD CAPACITY OF 6... Maintenance log: Airplanes. (a) Each person who takes corrective action or defers action concerning a reported...
14 CFR 125.407 - Maintenance log: Airplanes.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Maintenance log: Airplanes. 125.407 Section... OPERATIONS: AIRPLANES HAVING A SEATING CAPACITY OF 20 OR MORE PASSENGERS OR A MAXIMUM PAYLOAD CAPACITY OF 6... Maintenance log: Airplanes. (a) Each person who takes corrective action or defers action concerning a reported...
78 FR 60745 - Hazardous Materials: Minor Editorial Corrections and Clarifications (RRR)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-02
... 173.62 This section provides packaging instructions for Class 1 explosive materials. Paragraph (b) of... requirements for approval of special form Class 7 (radioactive) materials. Paragraph (d) of this section notes... activity of special form Class 7 (radioactive) material permitted in a Type A package equals the maximum...
14 CFR 125.407 - Maintenance log: Airplanes.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Maintenance log: Airplanes. 125.407 Section... OPERATIONS: AIRPLANES HAVING A SEATING CAPACITY OF 20 OR MORE PASSENGERS OR A MAXIMUM PAYLOAD CAPACITY OF 6... Maintenance log: Airplanes. (a) Each person who takes corrective action or defers action concerning a reported...
Poster — Thur Eve — 72: Clinical Subtleties of Flattening-Filter-Free Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corns, Robert; Thomas, Steven; Huang, Vicky
2014-08-15
Flattening-filter-free (fff) beams offer superior dose rates, reducing treatment times for important techniques that utilize small field sizes, such as stereotactic ablative radiotherapy (SABR). The impact of ion collection efficiency (P_ion) on the percent depth dose (PDD) has been discussed at length in the literature. Relative corrections of the order of 1%–2% are possible. In the process of commissioning 6fff and 10fff beams, we identified a number of other important details that influence commissioning. We looked at the absolute dose difference between corrected and uncorrected PDD and discovered a curve with a broad maximum between 10 and 20 cm. We wondered about the consequences of this PDD correction on the absolute dose calibration of the linac because the TG-51 protocol does not correct the PDD curve. The quality factor k_Q depends on the PDD, so in principle, a correction to the PDD will alter the absolute calibration of the linac. Finally, there are other clinical tables, such as TMR, which are derived from PDD. Attention to details on how this computation is performed is important because different corrections are possible depending on the method of calculation.
NASA Astrophysics Data System (ADS)
Johnson, Jennifer E.; Rella, Chris W.
2017-08-01
Cavity ring-down spectrometers have generally been designed to operate under conditions in which the background gas has a constant composition. However, there are a number of observational and experimental situations of interest in which the background gas has a variable composition. In this study, we examine the effect of background gas composition on a cavity ring-down spectrometer that measures δ18O-H2O and δ2H-H2O values based on the amplitude of water isotopologue absorption features around 7184 cm-1 (L2120-i, Picarro, Inc.). For background mixtures balanced with N2, the apparent δ18O values deviate from true values by -0.50 ± 0.001 ‰ O2 %-1 and -0.57 ± 0.001 ‰ Ar %-1, and apparent δ2H values deviate from true values by 0.26 ± 0.004 ‰ O2 %-1 and 0.42 ± 0.004 ‰ Ar %-1. The artifacts are the result of broadening, narrowing, and shifting of both the target absorption lines and strong neighboring lines. While the background-induced isotopic artifacts can largely be corrected with simple empirical or semi-mechanistic models, neither type of model is capable of completely correcting the isotopic artifacts to within the inherent instrument precision. The development of strategies for dynamically detecting and accommodating background variation in N2, O2, and/or Ar would facilitate the application of cavity ring-down spectrometers to a new class of observations and experiments.
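The "simple empirical model" mentioned above amounts to a linear correction in the background-gas fractions. A minimal sketch using the δ18O slopes reported in the abstract (the measured delta value and the air-like O2/Ar fractions are illustrative assumptions):

```python
def correct_delta(delta_measured, o2_percent, ar_percent, slope_o2, slope_ar):
    """Empirical background-gas correction of the kind described: the
    apparent delta value drifts linearly with the O2 and Ar fractions in
    the background gas, so subtract the per-percent artifact (slopes from
    calibration against known mixtures)."""
    return delta_measured - slope_o2 * o2_percent - slope_ar * ar_percent

# δ18O slopes from the abstract: -0.50 ‰ per % O2, -0.57 ‰ per % Ar
apparent_d18O = -10.0  # ‰, hypothetical value measured in an air-like background
true_d18O = correct_delta(apparent_d18O, o2_percent=20.9, ar_percent=0.93,
                          slope_o2=-0.50, slope_ar=-0.57)
print(round(true_d18O, 2))  # → 0.98
```

As the abstract notes, such a linear model removes most but not all of the artifact, since the underlying cause is line broadening, narrowing, and shifting rather than a purely additive offset.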
On the Mechanism for a Gravity Effect Using Type 2 Superconductors
NASA Technical Reports Server (NTRS)
Robertson, Glen A.
1999-01-01
In this paper, we formulate a percent mass change equation based on Woodward's transient mass shift and the Cavendish balance equations applied to superconductor Josephson junctions. A correction to the transient mass shift equation is presented due to the emission of the mass energy from the superconductor. The percentage of mass change predicted by the equation was estimated against the maximum percent mass change reported by Podkletnov in his gravity shielding experiments. An experiment is then discussed which could shed light on the transient mass shift near a superconductor and verify the corrected gravitational potential.
Martins, E W; Potiens, M P A
2012-07-01
This paper presents the establishment of a quality control program and correction factors for the geometry of the vials used for the distribution of radiopharmaceuticals and for activimeter calibration. The radiopharmaceuticals produced by IPEN (67Ga, 131I, 201Tl and 99mTc) were tested using two different vials. Results show a maximum variation of 22% for 201Tl and a minimum variation of 2.98% for 131I. The correction factors must be incorporated into the routine calibration of the activimeters.
Hack, Laura M.; Kalsi, Gursharan; Aliev, Fazil; Kuo, Po-Hsiu; Prescott, Carol A.; Patterson, Diana G.; Walsh, Dermot; Dick, Danielle M.; Riley, Brien P.; Kendler, Kenneth S.
2012-01-01
Background Over 50 years of evidence from research has established that the central dopaminergic reward pathway is likely involved in alcohol dependence (AD). Additional evidence supports a role for dopamine (DA) in other disinhibitory psychopathology, which is often comorbid with AD. Family and twin studies demonstrate that a common genetic component accounts for most of the genetic variance in these traits. Thus, DA-related genes represent putative candidates for the genetic risk that underlies not only AD but also behavioral disinhibition. Many linkage and association studies have examined these relationships with inconsistent results, possibly because of low power, poor marker coverage, and/or an inappropriate correction for multiple testing. Methods We conducted an association study on the products encoded by 10 DA-related genes (DRD1-D5, SLC18A2, SLC6A3, DDC, TH, COMT) using a large, ethnically homogeneous sample with severe AD (n = 545) and screened controls (n = 509). We collected genotypes from linkage disequilibrium (LD)-tagging single nucleotide polymorphisms (SNPs) and employed a gene-based method of correction. We tested for association with AD diagnosis in cases and controls and with a variety of alcohol-related traits (including age-at-onset, initial sensitivity, tolerance, maximum daily drinks, and a withdrawal factor score), disinhibitory symptoms, and a disinhibitory factor score in cases only. A total of 135 SNPs were genotyped using the Illumina GoldenGate and Taqman Assays-on-Demand protocols. Results Of the 101 SNPs entered into standard analysis, 6 independent SNPs from 5 DA genes were associated with AD or a quantitative alcohol-related trait. Two SNPs across 2 genes were associated with a disinhibitory symptom count, while 1 SNP in DRD5 was positive for association with the general disinhibitory factor score. 
Conclusions Our study provides evidence of modest associations between a small number of DA-related genes and AD as well as a range of alcohol-related traits and measures of behavioral disinhibition. While we did conduct gene-based correction for multiple testing, we did not correct for multiple traits because the traits are correlated. However, false-positive findings remain possible, so our results must be interpreted with caution. PMID:21083670
Thermal corrections to the Casimir energy in a general weak gravitational field
NASA Astrophysics Data System (ADS)
Nazari, Borzoo
2016-12-01
We calculate finite-temperature corrections to the Casimir energy of two conducting parallel plates in a general weak gravitational field. After solving the Klein-Gordon equation inside the apparatus, mode frequencies are obtained in terms of the parameters of the weak background. Using Matsubara's approach to quantum statistical mechanics, gravity-induced thermal corrections to the energy density are obtained. Well-known weak static and stationary gravitational fields are analyzed, and it is found that in the low-temperature limit the energy of the system increases compared to that in the zero-temperature case.
On the Concept of Varying Influence Radii for a Successive Corrections Objective Analysis
NASA Technical Reports Server (NTRS)
Achtemeier, Gary L.
1991-01-01
There has been a long-standing concept among those who use successive corrections objective analysis that the way to obtain the most accurate objective analysis is first to analyze for the long wavelengths and then to build in the details of the shorter wavelengths by successively decreasing the influence of the more distant observations upon the interpolated values. Using the Barnes method, the filter characteristics were compared for families of response curves that pass through a common point at a reference wavelength. It was found that the filter cutoff is a maximum if the filter parameters that determine the influence of observations are unchanged for both the initial and correction passes. This information was used to define and test the following hypothesis: if accuracy is defined by how well the method retains desired wavelengths and removes undesired wavelengths, then the Barnes method gives the most accurate analyses if the filter parameters on the initial and correction passes are the same. This hypothesis does not follow the usual conceptual approach to successive corrections analysis.
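The two-pass Barnes response can be written in closed form, which makes the comparison above easy to reproduce. The sketch below assumes one common convention (Gaussian weight w(r) = exp(-r²/(4κ)), first-pass response R0 = exp(-4π²κ/λ²), correction pass with parameter γκ giving R1 = R0 + R0^γ(1 - R0)); the numerical values of κ and λ are arbitrary illustrations:

```python
import numpy as np

def barnes_response(wavelength, kappa, gamma):
    """Theoretical two-pass Barnes filter response at a given wavelength,
    under the Gaussian-weight convention stated in the lead-in.
    gamma = 1 means the filter parameter is unchanged on the correction
    pass; gamma < 1 sharpens convergence but passes more short waves."""
    r0 = np.exp(-4 * np.pi**2 * kappa / wavelength**2)
    return r0 + r0**gamma * (1.0 - r0)

lam = np.array([50.0, 100.0, 200.0, 400.0])  # wavelengths, arbitrary units
kappa = 500.0
for gamma in (0.2, 0.5, 1.0):
    print(gamma, np.round(barnes_response(lam, kappa, gamma), 3))
```

With γ = 1 the short-wavelength response stays near zero while long wavelengths still approach unity, i.e. the sharpest cutoff, which is the behavior behind the hypothesis tested in the paper.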
Radiated BPF sound measurement of centrifugal compressor
NASA Astrophysics Data System (ADS)
Ohuchida, S.; Tanaka, K.
2013-12-01
A technique to measure radiated BPF sound from an automotive turbocharger compressor impeller is proposed in this paper. Where there is high-level background noise in the measurement environment, it is difficult to discriminate the target component from the background. Since the BPF sound measurement in this study was undertaken in a room with such conditions, no discrete BPF peak was initially found in the sound spectrum. Taking its directionality into consideration, a microphone covered with a parabolic cone was selected, and using this technique the discrete BPF peak was clearly observed. Since the level of the measured sound was amplified by the area-integration effect, correction was needed to obtain the real level. To do so, sound measurements with and without the parabolic cone were conducted for a fixed source, and their level differences were used as correction factors. Consideration is given to the sound propagation mechanism utilizing the measured BPF as well as the result of a simple model experiment. The present method is generally applicable to sound measurements conducted with a high level of background noise.
2014-03-27
Thesis committee approvals (signed): David J. Bunker, Ph.D. (Chairman), 14 Mar 2014; Tay W. Johannes, Ph.D., Lt Col, USAF (Member), 14 Mar 2014; Benjamin R. Kowash, Ph.D., Maj, USAF (Member), 12 Mar 2014 (AFIT-ENP). [Figure 3: Comparison of background spectra from 6 October (blue) and 16 September (green).]
Impact of Aerosols on Scene Collection and Scene Correction
2009-03-01
the atmosphere on the way to the satellite. In order for a satellite-borne sensor to distinguish a target from its background, the difference between...the target and background top-of-the-atmosphere radiance (ΔL_T) must be greater than the sensor radiance sensitivity (ΔL_s). The difference ...northwesterly, with prevailing surface visibilities between four and seven miles in dust, sand, or haze. Stronger flow over northern Saudi Arabia can loft
Sea Surface Signature of Tropical Cyclones Using Microwave Remote Sensing
2013-01-01
due to the ionosphere and troposphere, which have to be compensated for, and components due to the galactic and cosmic background radiation...and corrections for sun glint, galactic and cosmic background radiation, and Stokes effects of the ionosphere. The accuracy of a given retrieval...
Code of Federal Regulations, 2014 CFR
2014-07-01
... renewable resources and approach the maximum attainable recycling of depletable resources. (b) As an.... Consideration must be given to reasonable alternative means of achieving the purpose and need for the proposed...
Code of Federal Regulations, 2011 CFR
2011-07-01
... renewable resources and approach the maximum attainable recycling of depletable resources. (b) As an.... Consideration must be given to reasonable alternative means of achieving the purpose and need for the proposed...
Code of Federal Regulations, 2013 CFR
2013-07-01
... renewable resources and approach the maximum attainable recycling of depletable resources. (b) As an.... Consideration must be given to reasonable alternative means of achieving the purpose and need for the proposed...
Code of Federal Regulations, 2012 CFR
2012-07-01
... renewable resources and approach the maximum attainable recycling of depletable resources. (b) As an.... Consideration must be given to reasonable alternative means of achieving the purpose and need for the proposed...
Code of Federal Regulations, 2010 CFR
2010-07-01
... renewable resources and approach the maximum attainable recycling of depletable resources. (b) As an.... Consideration must be given to reasonable alternative means of achieving the purpose and need for the proposed...
Precise predictions for V+jets dark matter backgrounds
NASA Astrophysics Data System (ADS)
Lindert, J. M.; Pozzorini, S.; Boughezal, R.; Campbell, J. M.; Denner, A.; Dittmaier, S.; Gehrmann-De Ridder, A.; Gehrmann, T.; Glover, N.; Huss, A.; Kallweit, S.; Maierhöfer, P.; Mangano, M. L.; Morgan, T. A.; Mück, A.; Petriello, F.; Salam, G. P.; Schönherr, M.; Williams, C.
2017-12-01
High-energy jets recoiling against missing transverse energy (MET) are powerful probes of dark matter at the LHC. Searches based on large MET signatures require a precise control of the Z(νν̄)+jet background in the signal region. This can be achieved by taking accurate data in control regions dominated by Z(ℓ⁺ℓ⁻)+jet, W(ℓν)+jet and γ+jet production, and extrapolating to the Z(νν̄)+jet background by means of precise theoretical predictions. In this context, recent advances in perturbative calculations open the door to significant sensitivity improvements in dark matter searches. In this spirit, we present a combination of state-of-the-art calculations for all relevant V+jets processes, including throughout NNLO QCD corrections and NLO electroweak corrections supplemented by Sudakov logarithms at two loops. Predictions at parton level are provided together with detailed recommendations for their usage in experimental analyses based on the reweighting of Monte Carlo samples. Particular attention is devoted to the estimate of theoretical uncertainties in the framework of dark matter searches, where subtle aspects such as correlations across different V+jet processes play a key role. The anticipated theoretical uncertainty in the Z(νν̄)+jet background is at the few-percent level up to the TeV range.
Compensating for magnetic field inhomogeneity in multigradient-echo-based MR thermometry.
Simonis, Frank F J; Petersen, Esben T; Bartels, Lambertus W; Lagendijk, Jan J W; van den Berg, Cornelis A T
2015-03-01
MR thermometry (MRT) is a noninvasive method for measuring temperature that can potentially be used for radio frequency (RF) safety monitoring. This application requires measuring absolute temperature. In this study, a multigradient-echo (mGE) MRT sequence was used for that purpose. A drawback of this sequence, however, is that its accuracy is affected by background gradients. In this article, we present a method to minimize this effect and to improve absolute temperature measurements using MRI. By determining background gradients using a B0 map or by combining data acquired with two opposing readout directions, the error can be removed in a homogeneous phantom, thus improving temperature maps. All scans were performed on a 3T system using ethylene glycol-filled phantoms. Background gradients were varied, and one phantom was uniformly heated to validate both compensation approaches. Independent temperature recordings were made with optical probes. Errors correlated closely with the background gradients in all experiments. Temperature distributions showed a much smaller standard deviation when the corrections were applied (0.21°C vs. 0.45°C) and correlated well with the thermo-optical probes. The corrections offer the possibility to measure RF heating in phantoms more precisely. This allows mGE MRT to become a valuable tool in RF safety assessment.
Erdogan, Ercan; Akkaya, Mehmet; Bacaksız, Ahmet; Tasal, Abdurrahman; Sönmez, Osman; Asoglu, Emin; Kul, Seref; Sahın, Musa; Turfan, Murat; Vatankulu, Mehmet Akif; Göktekin, Omer
2013-01-01
Background QT dispersion (QTd), which is a measure of inhomogeneity of myocardial repolarization, increases following impaired myocardial perfusion. Its prolongation may provide a suitable substrate for life-threatening ventricular arrhythmias. We investigated the changes in QTd and heart rate variability (HRV) parameters after successful coronary artery revascularization in patients with chronic total occlusions (CTO). Material/Methods This study included 139 successfully revascularized CTO patients (118 men, 21 women, mean age 58.3±9.6 years). QTd was measured from a 12-lead electrocardiogram and was defined as the difference between the maximum and minimum QT interval. HRV analyses of all subjects were obtained. Frequency-domain (LF: HF) and time-domain (SDNN, pNN50, and rMSSD) parameters were analyzed. QT intervals were also corrected for heart rate using Bazett’s formula, and the corrected QT interval dispersion (QTcd) was then calculated. All measurements were made before and after percutaneous coronary intervention (PCI). Results Both QTd and QTcd showed significant improvement following successful revascularization of CTO (55.83±14.79 to 38.87±11.69; p<0.001 and 61.02±16.28 to 42.92±13.41; p<0.001). The revascularization of LAD (n=38), Cx (n=28) and RCA (n=73) lesions resulted in a decrease in HRV indices, including SDNN, rMSSD, and pNN50, but none of the variables reached statistical significance. Conclusions Successful revascularization of CTO may result in improvement in regional heterogeneity of myocardial repolarization, evidenced as decreased QTcd after the PCI. The revascularization of CTO lesions does not seem to have a significant impact on HRV. PMID:23969577
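The two measurements defined above (QTd as the max-minus-min QT interval across the 12 leads, and Bazett's heart-rate correction QTc = QT/√RR) are easy to state in code. The 12-lead QT values below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def bazett_qtc(qt_ms, rr_s):
    """Bazett's heart-rate correction: QTc = QT / sqrt(RR),
    with QT in milliseconds and the RR interval in seconds."""
    return qt_ms / math.sqrt(rr_s)

def qt_dispersion(qt_intervals_ms):
    """QT dispersion: difference between the maximum and minimum
    QT interval across the 12 ECG leads."""
    return max(qt_intervals_ms) - min(qt_intervals_ms)

# Hypothetical 12-lead QT measurements (ms) at 75 bpm (RR = 0.8 s)
qt_leads = [380, 372, 365, 390, 402, 377, 385, 368, 395, 381, 374, 388]
qtd = qt_dispersion(qt_leads)                                  # 402 - 365 = 37 ms
qtcd = qt_dispersion([bazett_qtc(q, 0.8) for q in qt_leads])   # 37 / sqrt(0.8)
print(qtd, round(qtcd, 1))  # → 37 41.4
```

Note that with a common RR interval across leads, QTcd is simply QTd scaled by 1/√RR, which is why QTcd exceeds QTd at heart rates above 60 bpm.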
Sandberg, S; Järvenpää, S; Penttinen, A; Paton, J Y; McCann, D C
2004-12-01
A recent prospective study of children with asthma employing a within-subject, over-time analysis using dynamic logistic regression showed that severely negative life events significantly increased the risk of an acute exacerbation during the subsequent 6-week period. The timing of the maximum risk depended on the degree of chronic psychosocial stress also present. A hierarchical Cox regression analysis was undertaken to examine whether there were any immediate effects of negative life events in children without a background of high chronic stress. Sixty children with verified chronic asthma were followed prospectively for 18 months with continuous monitoring of asthma by daily symptom diaries and peak flow measurements, accompanied by repeated interview assessments of life events. The key outcome measures were asthma exacerbations and severely negative life events. An immediate effect evident within the first 2 days following a severely negative life event increased the risk of a new asthma attack by a factor of 4.69, 95% confidence interval 2.33 to 9.44 (p<0.001) [corrected]. In the period 3-10 days after a severe event there was no increased risk of an asthma attack (p = 0.5). In addition to the immediate effect, an increased risk of 1.81 (95% confidence interval 1.24 to 2.65) [corrected] was found 5-7 weeks after a severe event (p = 0.002). This is consistent with earlier findings. There was a statistically significant variation due to unobserved factors in the incidence of asthma attacks between the children. The use of statistical methods capable of investigating short time lags showed that stressful life events significantly increase the risk of a new asthma attack immediately after the event; a more delayed increase in risk was also evident 5-7 weeks later.
2013-01-01
Background A microclimate monitoring study was conducted in 2008 aimed at assessing the conservation risks affecting the valuable wall paintings decorating Ariadne’s House (Pompeii, Italy). It was found that thermohygrometric conditions were very unfavorable for the conservation of frescoes. As a result, it was decided to implement corrective measures, and the transparent polycarbonate sheets covering three rooms (one of them delimited by four walls and the others composed of three walls) were replaced by opaque roofs. In order to examine the effectiveness of this measure, the same monitoring system, comprising 26 thermohygrometric probes, was installed again in summer 2010. Data recorded in 2008 and 2010 were compared. Results Microclimate conditions were also monitored in a control room with the same roof in both years. The average temperature in this room was lower in 2010, and it was decided to consider a time frame of 18 summer days with the same mean temperature in both years. In the rooms with three walls, the statistical analysis revealed that the diurnal maximum temperature decreased about 3.5°C due to the roof change, and the minimum temperature increased 0.5°C. As a result, the daily thermohygrometric variations were less pronounced in 2010, with a reduction of approximately 4°C, which is favorable for the preservation of mural paintings. In the room with four walls, the daily fluctuations also decreased about 4°C. Based on the results, other alternative actions are discussed, aimed at improving the conservation conditions of wall paintings. Conclusions The roof change has reduced the most unfavorable thermohygrometric conditions affecting the mural paintings, but additional actions should be adopted for the long-term preservation of Pompeian frescoes. PMID:23683173
Far Ultraviolet Imaging from the Image Spacecraft
NASA Technical Reports Server (NTRS)
Mende, S. B.; Heetderks, H.; Frey, H. U.; Lampton, M.; Geller, S. P.; Stock, J. M.; Abiad, R.; Siegmund, O. H. W.; Tremsin, A. S.; Habraken, S.
2000-01-01
Direct imaging of the magnetosphere by the IMAGE spacecraft will be supplemented by observation of the global aurora. The IMAGE satellite instrument complement includes three Far Ultraviolet (FUV) instruments. The Wideband Imaging Camera (WIC) will provide broadband ultraviolet images of the aurora with maximum spatial and temporal resolution by imaging the LBH N2 bands of the aurora. The Spectrographic Imager (SI), a novel form of monochromatic imager, will image the aurora, filtered by wavelength. The proton-induced component of the aurora will be imaged separately by measuring the Doppler-shifted Lyman-α emission. Finally, the GEO instrument will observe the distribution of the geocoronal emission to obtain the neutral background density source for charge exchange in the magnetosphere. The FUV instrument complement looks radially outward from the rotating IMAGE satellite and therefore spends only a short time observing the aurora and the Earth during each spin. To maximize photon collection efficiency and make efficient use of the short time available for exposures, the FUV auroral imagers WIC and SI both have wide fields of view and take data continuously as the auroral region proceeds through the field of view. To minimize data volume, the multiple images are electronically co-added by suitably shifting each image to compensate for the spacecraft rotation. To minimize resolution loss, the images have to be distortion-corrected in real time. The distortion correction is accomplished using high-speed look-up tables that are pre-generated by least-squares fitting to polynomial functions by the on-orbit processor. The instruments were calibrated individually while on stationary platforms, mostly in vacuum chambers. Extensive ground-based testing was performed with visible and near-UV simulators mounted on a rotating platform to emulate their performance on a rotating spacecraft.
Direct Reconstruction of CT-Based Attenuation Correction Images for PET With Cluster-Based Penalties
NASA Astrophysics Data System (ADS)
Kim, Soo Mee; Alessio, Adam M.; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Extremely low-dose (LD) CT acquisitions used for PET attenuation correction have high levels of noise and potential bias artifacts due to photon starvation. This paper explores the use of a priori knowledge for iterative image reconstruction of the CT-based attenuation map. We investigate a maximum a posteriori framework with a cluster-based multinomial penalty for direct iterative coordinate descent (dICD) reconstruction of the PET attenuation map. The objective function for direct iterative attenuation map reconstruction used a Poisson log-likelihood data fit term and evaluated two image penalty terms: spatial and mixture distributions. The spatial regularization is based on a quadratic penalty. For the mixture penalty, we assumed that the attenuation map may consist of four material clusters: air + background, lung, soft tissue, and bone. Using simulated noisy sinogram data, dICD reconstruction was performed with different strengths of the spatial and mixture penalties. The combined spatial and mixture penalties reduced the root mean squared error (RMSE) by roughly a factor of two compared with weighted least squares and filtered backprojection reconstructions of CT images. The combined spatial and mixture penalties resulted in only slightly lower RMSE compared with a spatial quadratic penalty alone. For direct PET attenuation map reconstruction from ultra-LD CT acquisitions, the combination of spatial and mixture penalties offers regularization of both variance and bias and is a potential method to reconstruct attenuation maps with negligible patient dose. The presented results, using a best-case histogram, suggest that the mixture penalty does not offer a substantive benefit over conventional quadratic regularization, which diminishes enthusiasm for exploring future applications of the mixture penalty.
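The penalized objective described above combines a Poisson log-likelihood data term with a quadratic spatial penalty on neighboring voxels. A minimal sketch of that combination (the forward projector `proj` is a stand-in supplied by the caller; the paper's dICD update and mixture penalty are not reproduced here):

```python
import numpy as np

def penalized_objective(mu, proj, sino, beta):
    """Negative Poisson log-likelihood plus a quadratic neighbor penalty (sketch).

    mu   : 2-D attenuation map estimate
    proj : forward model mapping mu to expected sinogram counts (assumption)
    sino : measured counts
    beta : spatial regularization strength
    """
    ybar = proj(mu)                                  # expected counts
    loglik = np.sum(sino * np.log(ybar) - ybar)      # Poisson log-likelihood (up to a constant)
    # quadratic spatial penalty: squared differences of horizontal/vertical neighbors
    pen = np.sum(np.diff(mu, axis=0) ** 2) + np.sum(np.diff(mu, axis=1) ** 2)
    return -loglik + beta * pen
```

A constant (flat) map incurs zero spatial penalty, so the penalty only discourages roughness, mirroring the variance/bias trade-off discussed in the abstract.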
Entanglement asymmetry for boosted black branes and the bound
NASA Astrophysics Data System (ADS)
Mishra, Rohit; Singh, Harvendra
2017-06-01
We study the effects of asymmetry in the entanglement thermodynamics of CFT subsystems. It is found that “boosted” Dp-brane backgrounds give rise to a first law of entanglement thermodynamics in which the CFT pressure asymmetry plays a decisive role in the entanglement. Two different strip-like subsystems, one parallel to the boost and the other perpendicular, are studied in the perturbative regime T_thermal ≪ T_E. We mainly seek to quantify this entanglement asymmetry as a ratio of the first-order entanglement entropies of the excitations. We discuss the AdS-wave backgrounds at zero temperature, which have maximum asymmetry and from which a bound on entanglement asymmetry is obtained. The entanglement asymmetry decreases as we switch on finite temperature in the CFT, while it is maximum at zero temperature.
1995-07-01
designated pixel. OTF analysis will be similar to the analysis discussed previously. Any nonuniformity in the response of the chosen pixel to the...not seen by the trace. Nonuniformity of the pixel response must also be taken into account. Background measurements of the maximum and minimum...to the background field of regard. To incorporate and support interactive CLDWSG operation and to accommodate simulation of nonuniform anisoplanatic
ERIC Educational Resources Information Center
Gasparinatou, Alexandra; Grigoriadou, Maria
2013-01-01
In this study, we examine the effect of background knowledge and local cohesion on learning from texts. The study is based on the construction-integration model. Participants were 176 undergraduate students who read a Computer Science text. Half of the participants read a text of maximum local cohesion and the other half a text of minimum local cohesion.…
Foltran, Fabiana A; Silva, Luciana C C B; Sato, Tatiana O; Coury, Helenice J C G
2013-01-01
The recording of human movement is an essential requirement for biomechanical, clinical, and occupational analysis, allowing assessment of postural variation, occupational risks, and preventive programs in physical therapy and rehabilitation. The flexible electrogoniometer (EGM), considered a reliable and accurate device, is used for dynamic recordings of different joints. Despite these advantages, the EGM is susceptible to measurement errors known as crosstalk. There are two known types of crosstalk: crosstalk due to sensor rotation and inherent crosstalk. Correction procedures have been proposed to correct these errors; however, no study has applied both procedures to clinical measures of wrist movements with the aim of optimizing the correction. Objectives: to evaluate the effects of mathematical correction procedures on 1) crosstalk due to forearm rotation, 2) inherent sensor crosstalk, and 3) the combination of these two procedures. 43 healthy subjects had their maximum range of motion of wrist flexion/extension and ulnar/radial deviation recorded by EGM. The results were analyzed descriptively, and procedures were compared by differences. There was no significant difference between measurements before and after the application of the correction procedures (P>0.05). Furthermore, the differences between the correction procedures were less than 5° in most cases, having little impact on the measurements. Considering the time-consuming data analysis, the specific technical knowledge involved, and the inefficient results, the correction procedures are not recommended for wrist recordings by EGM.
NASA Astrophysics Data System (ADS)
Chen, B.; Su, J. H.; Guo, L.; Chen, J.
2017-06-01
This paper puts forward a maximum power estimation method based on a photovoltaic array (PVA) model to solve optimization problems in the group control of PV water pumping systems (PVWPS) at the maximum power point (MPP). The method uses an improved genetic algorithm (GA) for model parameter estimation and identification from multiple P-V characteristic curves of a PVA model, and then corrects the identification results through the least squares method. On this basis, the irradiation level and operating temperature under any condition can be estimated, so an accurate PVA model is established and disturbance-free MPP estimation is achieved. The simulation adopts the proposed GA to determine the parameters, and the results verify the accuracy and practicability of the method.
Maximum a posteriori joint source/channel coding
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Gibson, Jerry D.
1991-01-01
A maximum a posteriori probability (MAP) approach to joint source/channel coder design is presented in this paper. This method explores a technique for designing joint source/channel codes, rather than ways of distributing bits between source coders and channel coders. For a nonideal source coder, MAP arguments are used to design a decoder which takes advantage of redundancy in the source coder output to perform error correction. Once the decoder is obtained, it is analyzed with the purpose of obtaining 'desirable properties' of the channel input sequence for improving overall system performance. Finally, an encoder design which incorporates these properties is proposed.
NASA Technical Reports Server (NTRS)
Clark, R. T.; Mccallister, R. D.
1982-01-01
The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.
Image improvement from a sodium-layer laser guide star adaptive optics system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Max, C. E., LLNL
1997-06-01
A sodium-layer laser guide star beacon with high-order adaptive optics at Lick Observatory produced a factor of 2.4 intensity increase and a factor of 2 decrease in full width at half maximum for an astronomical point source, compared with image motion compensation alone. Image full widths at half maximum were identical for laser and natural guide stars (0.3 arc seconds). The Strehl ratio with the laser guide star was 65% of that with a natural guide star. This technique should allow ground-based telescopes to attain the diffraction limit, by correcting for atmospheric distortions.
Determination of the performance of the Kaplan hydraulic turbines through simplified procedure
NASA Astrophysics Data System (ADS)
Pădureanu, I.; Jurcu, M.; Campian, C. V.; Haţiegan, C.
2018-01-01
A simplified procedure has been developed, compared to the complex one recommended by IEC 60041 (i.e. index tests), for measuring the performance of hydraulic turbines. The simplified procedure determines the minimum and maximum powers, the efficiency at maximum power, the evolution of power with head and flow, and the correct relationship between runner/impeller blade angle and guide vane opening for the most efficient operation of double-regulated machines. The simplified procedure can be used for a rapid, partial estimation of the performance of hydraulic turbines for repair and maintenance work.
Ito, Misae; Shimizu, Kimiya
2009-09-01
To compare reading ability after bilateral cataract surgery in patients who had pseudophakic monovision achieved by monofocal intraocular lens (IOL) implantation and patients who had refractive multifocal IOL implantation. Department of Ophthalmology, Kitasato University Hospital, Kanagawa, Japan. This study evaluated patients who had bilateral cataract surgery using the monovision method with monofocal IOL implantation to correct presbyopia (monovision group) or bilateral cataract surgery with refractive multifocal IOL implantation (multifocal group). In the monovision group, the dominant eye was corrected for distance vision and the nondominant eye for near vision. The maximum reading speed, critical character size, and reading acuity were measured binocularly without refractive correction using MNREAD-J acuity charts. The monovision group comprised 38 patients and the multifocal group, 22 patients. The mean maximum reading speed was 350.5 characters per minute (cpm) +/- 62.3 (SD) in the monovision group and 355.0 +/- 53.3 cpm in the multifocal group; the difference was not statistically significant. The mean critical character size was 0.24 +/- 0.12 logMAR and 0.40 +/- 0.16 logMAR, respectively (P<.05). The mean reading acuity was 0.05 +/- 0.12 logMAR and 0.19 +/- 0.11 logMAR, respectively (P<.01). The monovision group thus had better critical character size, reading acuity, and overall reading ability; however, careful patient selection is essential.
CORRECTION OF THE INERTIAL EFFECT RESULTING FROM A PLATE MOVING UNDER LOW FRICTION CONDITIONS
Yang, Feng; Pai, Yi-Chung
2007-01-01
The purpose of the present study was to develop a set of equations that can be employed to remove the inertial effect introduced by the movable platform upon which a person stands during a slip induced in gait; this allows the real ground reaction force (GRF) and its center of pressure (COP) to be determined. Analyses were also performed to determine how sensitive the COP offsets were to the changes of the parameters in the equation that affected the correction of the inertial effect. In addition, the results were verified empirically using a low friction movable platform together with a stationary object, a pendulum, and human subjects during a slip induced during gait. Our analyses revealed that the amount of correction required for the inertial effect due to the movable component is affected by its mass and its center of mass (COM) position, acceleration, the friction coefficient, and the landing position of the foot relative to the COM. The maximum error in the horizontal component of the GRF was close to 0.09 body weight during the recovery from a slip in walking. When uncorrected, the maximum error in the COP measurement could reach as much as 4 cm. Finally, these errors were magnified in the joint moment computation and propagated proximally, ranging from 0.2 to 1.0 Nm/body mass from the ankle to the hip. PMID:17306274
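The core of the inertial correction above is Newtonian: the movable platform's mass times its acceleration is subtracted from the measured horizontal force to approximate the real GRF. A minimal sketch of that idea only (the study's full equations also involve the friction coefficient, the platform COM position, and foot placement, all omitted here; names are illustrative):

```python
def corrected_grf(f_measured_n, m_platform_kg, a_platform_ms2):
    """Approximate the real horizontal ground reaction force by removing
    the inertial force (m * a) contributed by the movable platform."""
    return f_measured_n - m_platform_kg * a_platform_ms2
```

With a 10 kg platform accelerating at 2 m/s², a 100 N reading overstates the real horizontal GRF by 20 N, the scale of error that then propagates into the COP and joint-moment computations described above.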
Tyl, Benoît; Kabbaj, Meriam; Azzam, Sara; Sologuren, Ander; Valiente, Román; Reinbolt, Elizabeth; Roupe, Kathryn; Blanco, Nathalie; Wheeler, William
2012-06-01
The effect of bilastine on cardiac repolarization was studied in 30 healthy participants during a multiple-dose, triple-dummy, crossover, thorough QT study that included 5 arms: placebo, active control (400 mg moxifloxacin), bilastine at therapeutic and supratherapeutic doses (20 mg and 100 mg once daily, respectively), and bilastine 20 mg administered with ketoconazole 400 mg. Time-matched, triplicate electrocardiograms (ECGs) were recorded with 13 time points extracted predose and 16 extracted over 72 hours post day 4 dosing. Four QT/RR corrections were implemented: QTcB; QTcF; a linear individual correction (QTcNi), the primary correction; and a nonlinear one (QTcNnl). Moxifloxacin was associated with a significant increase in QTcNi at all time points between 1 and 12 hours, inclusively. Bilastine administration at 20 mg and 100 mg had no clinically significant impact on QTc (maximum increase in QTcNi, 5.02 ms; upper confidence limit [UCL] of the 1-sided, 95% confidence interval, 7.87 ms). Concomitant administration of ketoconazole and bilastine 20 mg induced a clinically relevant increase in QTc (maximum increase in QTcNi, 9.3 ms; UCL, 12.16 ms). This result was most likely related to the cardiac effect of ketoconazole because for all time points, bilastine plasma concentrations were lower than those observed following the supratherapeutic dose.
Efficiency of nuclear and mitochondrial markers recovering and supporting known amniote groups.
Lambret-Frotté, Julia; Perini, Fernando Araújo; de Moraes Russo, Claudia Augusta
2012-01-01
We have analysed the efficiency of all mitochondrial protein coding genes and six nuclear markers (Adora3, Adrb2, Bdnf, Irbp, Rag2 and Vwf) in reconstructing and statistically supporting known amniote groups (murines, rodents, primates, eutherians, metatherians, therians). The efficiencies of maximum likelihood, Bayesian inference, maximum parsimony, neighbor-joining and UPGMA were also evaluated, by assessing the number of correct and incorrect recovered groupings. In addition, we have compared support values using the conservative bootstrap test and the Bayesian posterior probabilities. First, no correlation was observed between gene size and marker efficiency in recovering or supporting correct nodes. As expected, tree-building methods performed similarly, even UPGMA that, in some cases, outperformed other most extensively used methods. Bayesian posterior probabilities tend to show much higher support values than the conservative bootstrap test, for correct and incorrect nodes. Our results also suggest that nuclear markers do not necessarily show a better performance than mitochondrial genes. The so-called dependency among mitochondrial markers was not observed comparing genome performances. Finally, the amniote groups with lowest recovery rates were therians and rodents, despite the morphological support for their monophyletic status. We suggest that, regardless of the tree-building method, a few carefully selected genes are able to unfold a detailed and robust scenario of phylogenetic hypotheses, particularly if taxon sampling is increased.
Concentrating Solar Power Projects - Solaben 6 | Concentrating Solar Power
Status: Operational. Start Year: 2013. Start Production: August 2013. Contact: Allison Lenthall (Abengoa Solar).
Concentrating Solar Power Projects - Waad Al Shamal ISCC Plant |
Status: Under construction. Start Year: 2018. Start Production: 2018. Developer: General Electric.
Concentrating Solar Power Projects - Solaben 1 | Concentrating Solar Power
Status: Operational. Start Year: 2013. Start Production: August 2013. Contact: Allison Lenthall (Abengoa Solar).
Concentrating Solar Power Projects - Greenway CSP Mersin Tower Plant |
Status: Operational. Start Year: 2012. Start Production: 2012. Project Type: Demonstration. Developer: Greenway CSP.
NASA Astrophysics Data System (ADS)
Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong
2018-06-01
An accurate algorithm combining Gram-Schmidt orthonormalization with least-squares ellipse fitting is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms, or by a high-pass filter on two interferograms. Performing Gram-Schmidt orthonormalization on the pre-processed interferograms corrects the phase shift error and yields a general ellipse form. The background intensity error and the residual correction error can then be compensated by the least-squares ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm can cope with two or three interferograms with environmental disturbance, a low fringe number, or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
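The Gram-Schmidt step treats two DC-suppressed interferograms as vectors, orthonormalizing the second against the first so the pair behaves like cosine/sine quadrature components of the fringe pattern. A minimal two-frame sketch (mean subtraction stands in for the high-pass filter; the ellipse-fitting refinement from the abstract is omitted):

```python
import numpy as np

def phase_from_two_frames(i1, i2):
    """Recover a wrapped phase map from two phase-shifted interferograms
    via Gram-Schmidt orthonormalization (simplified sketch)."""
    # Suppress the DC background term (mean subtraction as a crude high-pass)
    u1 = i1 - i1.mean()
    u2 = i2 - i2.mean()
    # Gram-Schmidt: normalize u1, remove its projection from u2, normalize u2
    u1 = u1 / np.linalg.norm(u1)
    u2 = u2 - (u2 * u1).sum() * u1
    u2 = u2 / np.linalg.norm(u2)
    # The orthonormal pair acts as cos/sin quadrature; phase is their arctangent
    return np.arctan2(u2, u1)
```

The recovered phase is wrapped to (-π, π]; the least-squares ellipse fit described in the abstract would further compensate residual background and phase-shift errors before unwrapping.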
A semi-analytic dynamical friction model for cored galaxies
NASA Astrophysics Data System (ADS)
Petts, J. A.; Read, J. I.; Gualandris, A.
2016-11-01
We present a dynamical friction model based on Chandrasekhar's formula that reproduces the fast inspiral and stalling experienced by satellites orbiting galaxies with a large constant density core. We show that the fast inspiral phase does not owe to resonance. Rather, it owes to the background velocity distribution function for the constant density core being dissimilar from the usually assumed Maxwellian distribution. Using the correct background velocity distribution function and our semi-analytic model from previous work, we are able to correctly reproduce the infall rate in both cored and cusped potentials. However, in the case of large cores, our model is no longer able to correctly capture core-stalling. We show that this stalling owes to the tidal radius of the satellite approaching the size of the core. By switching off dynamical friction when rt(r) = r (where rt is the tidal radius at the satellite's position), we arrive at a model which reproduces the N-body results remarkably well. Since the tidal radius can be very large for constant density background distributions, our model recovers the result that stalling can occur for Ms/Menc ≪ 1, where Ms and Menc are the mass of the satellite and the enclosed galaxy mass, respectively. Finally, we include the contribution to dynamical friction that comes from stars moving faster than the satellite. This next-to-leading order effect becomes the dominant driver of inspiral near the core region, prior to stalling.
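The stalling switch described above, disabling dynamical friction once rt(r) = r, can be sketched with the standard point-mass Jacobi-radius approximation (the specific tidal-radius formula below is a common textbook estimate, not necessarily the one used in the paper):

```python
def jacobi_radius(r, m_sat, m_enc):
    """Approximate tidal (Jacobi) radius of a satellite of mass m_sat at
    galactocentric radius r, with enclosed galaxy mass m_enc (point-mass estimate)."""
    return r * (m_sat / (3.0 * m_enc)) ** (1.0 / 3.0)

def friction_active(r, m_sat, m_enc):
    """Model switch from the text: dynamical friction is turned off once rt(r) >= r."""
    return jacobi_radius(r, m_sat, m_enc) < r
```

For a light satellite (Ms/Menc ≪ 1) the tidal radius is far inside r and friction stays on; in a large constant-density core the enclosed mass can be small enough that rt approaches r and inspiral stalls, as the abstract describes.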
Hackley, Paul C.; Araujo, Carla Viviane; Borrego, Angeles G.; Bouzinos, Antonis; Cardott, Brian; Cook, Alan C.; Eble, Cortland; Flores, Deolinda; Gentzis, Thomas; Gonçalves, Paula Alexandra; Filho, João Graciano Mendonça; Hámor-Vidó, Mária; Jelonek, Iwona; Kommeren, Kees; Knowles, Wayne; Kus, Jolanta; Mastalerz, Maria; Menezes, Taíssa Rêgo; Newman, Jane; Pawlewicz, Mark; Pickel, Walter; Potter, Judith; Ranasinghe, Paddy; Read, Harold; Reyes, Julito; Rodriguez, Genaro De La Rosa; de Souza, Igor Viegas Alves Fernandes; Suarez-Ruiz, Isabel; Sýkorová, Ivana; Valentine, Brett J.
2015-01-01
Vitrinite reflectance generally is considered the most robust thermal maturity parameter available for application to hydrocarbon exploration and petroleum system evaluation. However, until 2011 there was no standardized methodology available to provide guidelines for vitrinite reflectance measurements in shale. Efforts to correct this deficiency resulted in publication of ASTM D7708: Standard test method for microscopical determination of the reflectance of vitrinite dispersed in sedimentary rocks. In 2012-2013, an interlaboratory exercise was conducted to establish precision limits for the D7708 measurement technique. Six samples, representing a wide variety of shale, were tested in duplicate by 28 analysts in 22 laboratories from 14 countries. Samples ranged from immature to overmature (0.31-1.53% Ro), from organic-lean to organic-rich (1-22 wt.% total organic carbon), and contained Type I (lacustrine), Type II (marine), and Type III (terrestrial) kerogens. Repeatability limits (maximum difference between valid repetitive results from same operator, same conditions) ranged from 0.03-0.11% absolute reflectance, whereas reproducibility limits (maximum difference between valid results obtained on same test material by different operators, different laboratories) ranged from 0.12-0.54% absolute reflectance. Repeatability and reproducibility limits degraded consistently with increasing maturity and decreasing organic content. However, samples with terrestrial kerogens (Type III) fell off this trend, showing improved levels of reproducibility due to higher vitrinite content and improved ease of identification. Operators did not consistently meet the reporting requirements of the test method, indicating that a common reporting template is required to improve data quality. 
The most difficult problem encountered was the petrographic distinction of solid bitumens and low-reflecting inert macerals from vitrinite when vitrinite occurred with reflectance ranges overlapping the other components. Discussion among participants suggested this problem could not be easily corrected via kerogen concentration or solvent extraction and is related to operator training and background. No statistical difference in mean reflectance was identified between participants reporting bitumen reflectance vs. vitrinite reflectance vs. a mixture of bitumen and vitrinite reflectance values, suggesting empirical conversion schemes should be treated with caution. Analysis of reproducibility limits obtained during this exercise in comparison to reproducibility limits from historical interlaboratory exercises suggests use of a common methodology (D7708) improves interlaboratory precision. Future work will investigate opportunities to improve reproducibility in high maturity, organic-lean shale varieties.
Parker, Glendon J.; Leppert, Tami; Anex, Deon S.; ...
2016-09-07
Human identification from biological material is largely dependent on the ability to characterize genetic polymorphisms in DNA. Unfortunately, DNA can degrade in the environment, sometimes below the level at which it can be amplified by PCR. Protein however is chemically more robust than DNA and can persist for longer periods. Protein also contains genetic variation in the form of single amino acid polymorphisms. These can be used to infer the status of non-synonymous single nucleotide polymorphism alleles. To demonstrate this, we used mass spectrometry-based shotgun proteomics to characterize hair shaft proteins in 66 European-American subjects. A total of 596 single nucleotide polymorphism alleles were correctly imputed in 32 loci from 22 genes of subjects’ DNA and directly validated using Sanger sequencing. Estimates of the probability of resulting individual non-synonymous single nucleotide polymorphism allelic profiles in the European population, using the product rule, resulted in a maximum power of discrimination of 1 in 12,500. Imputed non-synonymous single nucleotide polymorphism profiles from European–American subjects were considerably less frequent in the African population (maximum likelihood ratio = 11,000). The converse was true for hair shafts collected from an additional 10 subjects with African ancestry, where some profiles were more frequent in the African population. Genetically variant peptides were also identified in hair shaft datasets from six archaeological skeletal remains (up to 260 years old). Furthermore, this study demonstrates that quantifiable measures of identity discrimination and biogeographic background can be obtained from detecting genetically variant peptides in hair shaft protein, including hair from bioarchaeological contexts.
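The product rule referenced above multiplies per-locus profile frequencies under an independence assumption; the power of discrimination is the reciprocal of the resulting random-match probability. A sketch with illustrative (not real) frequencies:

```python
from functools import reduce

def profile_probability(locus_freqs):
    """Random-match probability under the product rule (assumes loci are independent)."""
    return reduce(lambda a, b: a * b, locus_freqs, 1.0)

def power_of_discrimination(locus_freqs):
    """1-in-N discrimination power: reciprocal of the match probability."""
    return 1.0 / profile_probability(locus_freqs)

# Illustrative per-locus population frequencies of an imputed SNP profile
freqs = [0.5, 0.4, 0.2, 0.1]
```

With these four hypothetical frequencies the match probability is 0.004, i.e. a discrimination power of 1 in 250; the study's 32-locus profiles reached 1 in 12,500 by the same arithmetic.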
Naber, Marnix; Stoll, Josef; Einhäuser, Wolfgang; Carter, Olivia
2013-01-01
Pupil dilation is implicated as a marker of decision-making as well as of cognitive and emotional processes. Here we tested whether individuals can exploit another’s pupil to their advantage. We first recorded the eyes of 3 "opponents", while they were playing a modified version of the "rock-paper-scissors" childhood game. The recorded videos served as stimuli to a second set of participants. These "players" played rock-paper-scissors against the pre-recorded opponents in a variety of conditions. When players just observed the opponents’ eyes without specific instruction their probability of winning was at chance. When informed that the time of maximum pupil dilation was indicative of the opponents’ choice, however, players raised their winning probability significantly above chance. When just watching the reconstructed area of the pupil against a gray background, players achieved similar performance, showing that players indeed exploited the pupil, rather than other facial cues. Since maximum pupil dilation was correct about the opponents’ decision only in 60% of trials (chance 33%), we finally tested whether increasing this validity to 100% would allow spontaneous learning. Indeed, when players were given no information, but the pupil was informative about the opponent’s response in all trials, players performed significantly above chance on average and half (5/10) reached significance at an individual level. Together these results suggest that people can in principle use the pupil to detect cognitive decisions in another individual, but that most people have neither explicit knowledge of the pupil’s utility nor have they learnt to use it despite a lifetime of exposure. PMID:23991185
Miko, Benjamin A.; Befus, Montina; Herzig, Carolyn T. A.; Mukherjee, Dhritiman V.; Apa, Zoltan L.; Bai, Ruo Yu; Tanner, Joshua P.; Gage, Dana; Genovese, Maryann; Koenigsmann, Carl J.; Larson, Elaine L.; Lowy, Franklin D.
2015-01-01
Background. Large outbreaks of Staphylococcus aureus (SA) infections have occurred in correctional facilities across the country. We aimed to define the epidemiological and microbiological determinants of SA infection in prisons to facilitate development of prevention strategies for this underserved population. Methods. We conducted a case-control study of SA infection at 2 New York State maximum security prisons. SA-infected inmates were matched with 3 uninfected controls. Subjects had cultures taken from sites of infection and colonization (nose and throat) and were interviewed via structured questionnaire. SA isolates were characterized by spa typing. Bivariate and multivariable analyses were conducted using conditional logistic regression. Results. Between March 2011 and January 2013, 82 cases were enrolled and matched with 246 controls. On bivariate analysis, the use of oral and topical antibiotics over the preceding 6 months was strongly associated with clinical infection (OR, 2.52; P < .001 and 4.38, P < .001, respectively). Inmates with clinical infection had 3.16 times the odds of being diabetic compared with inmates who did not have clinical infection (P < .001). Concurrent nasal and/or oropharyngeal colonization was also associated with an increased odds of infection (OR, 1.46; P = .002). Among colonized inmates, cases were significantly more likely to carry the SA clone spa t008 (usually representing the epidemic strain USA300) compared to controls (OR, 2.52; P = .01). Conclusions. Several inmate characteristics were strongly associated with SA infection in the prison setting. Although many of these factors were likely present prior to incarceration, they may help medical staff identify prisoners for targeted prevention strategies. PMID:25810281
Peskind, Elaine R.; Brody, David; Cernak, Ibolja; McKee, Ann; Ruff, Robert L.
2018-01-01
CME Background Articles are selected for credit designation based on an assessment of the educational needs of CME participants, with the purpose of providing readers with a curriculum of CME articles on a variety of topics throughout each volume. Activities are planned using a process that links identified needs with desired results. Participants may receive credit by reading the article, correctly answering at least 70% of the questions in the Posttest, and completing the Evaluation. The Posttest and Evaluation are now available online only at PSYCHIATRIST.COM (Keyword: February). CME Objective After studying the Commentary by Peskind et al, you should be able to: Screen patients who have experienced an event resulting in head injury for mild traumatic brain injury (mTBI) Treat mTBI according to the current guidelines for assessing and managing concussions and mTBI Accreditation Statement The CME Institute of Physicians Postgraduate Press, Inc., is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians. Credit Designation The CME Institute of Physicians Postgraduate Press, Inc., designates this journal-based CME activity for a maximum of 1 AMA PRA Category 1 Credit™. Physicians should claim only the credit commensurate with the extent of their participation in the activity. Note The American Academy of Physician Assistants (AAPA) accepts certificates of participation for educational activities certified for AMA PRA Category 1 Credit™ from organizations accredited by ACCME or a recognized state medical society. Physician assistants may receive a maximum of 1 hour of Category I credit for completing this program. Date of Original Release/Review This educational activity is eligible for AMA PRA Category 1 Credit™ through February 29, 2016. The latest review of this material was January 2013. PMID:23473351
Interpreting linear support vector machine models with heat map molecule coloring
2011-01-01
Background Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines have been shown to perform convincingly on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. Results We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach helps to determine the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. Conclusions In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. In particular, substructures identified as important by our method might be a starting point for the optimization of a lead compound. The heat map coloring should be considered as complementary to structure-based modeling approaches. As such, it helps to get a better understanding of the binding mode of an inhibitor. PMID:21439031
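The core idea of weight-based coloring can be sketched briefly: a linear SVM scores a compound as w·x + b, so each fingerprint bit contributes w[i]·x[i], and an atom can be colored by summing the contributions of the substructure bits that cover it. The fingerprint, weights, and bit-to-atom mapping below are toy data, not the paper's actual pipeline:

```python
import numpy as np

# Toy sketch of heat-map coloring from a linear model's weights.
w = np.array([0.8, -0.5, 0.3, 0.0])   # learned linear SVM weights (toy)
x = np.array([1, 1, 0, 1])            # binary substructure fingerprint (toy)
contrib = w * x                       # per-bit contribution to the score

# bit -> atoms covered by that substructure (toy molecule with 3 atoms)
bit_atoms = {0: [0, 1], 1: [1, 2], 2: [2], 3: [0]}

atom_color = np.zeros(3)
for bit, atoms in bit_atoms.items():
    for a in atoms:
        atom_color[a] += contrib[bit]   # positive = activity-increasing
```

Positive atom scores would then be rendered in one color and negative scores in another, producing the heat map described above.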
Szmacinski, Henryk; Toshchakov, Vladimir; Lakowicz, Joseph R.
2014-01-01
Abstract. Protein-protein interactions in cells are often studied using fluorescence resonance energy transfer (FRET) phenomenon by fluorescence lifetime imaging microscopy (FLIM). Here, we demonstrate approaches to the quantitative analysis of FRET in cell population in a case complicated by a highly heterogeneous donor expression, multiexponential donor lifetime, large contribution of cell autofluorescence, and significant presence of unquenched donor molecules that do not interact with the acceptor due to low affinity of donor-acceptor binding. We applied a multifrequency phasor plot to visualize FRET FLIM data, developed a method for lifetime background correction, and performed a detailed time-resolved analysis using a biexponential model. These approaches were applied to study the interaction between the Toll Interleukin-1 receptor (TIR) domain of Toll-like receptor 4 (TLR4) and the decoy peptide 4BB. TLR4 was fused to Cerulean fluorescent protein (Cer) and 4BB peptide was labeled with Bodipy TMRX (BTX). Phasor displays for multifrequency FLIM data are presented. The analytical procedure for lifetime background correction is described and the effect of correction on FLIM data is demonstrated. The absolute FRET efficiency was determined based on the phasor plot display and multifrequency FLIM data analysis. The binding affinity between TLR4-Cer (donor) and decoy peptide 4BB-BTX (acceptor) was estimated in a heterogeneous HeLa cell population. PMID:24770662
Watkins, J. M.; Weidel, Brian M.; Rudstam, L. G.; Holek, K. T.
2014-01-01
Increasing water clarity in Lake Ontario has led to a vertical redistribution of phytoplankton and an increased importance of the deep chlorophyll layer in overall primary productivity. We used in situ fluorometer profiles collected in lakewide surveys of Lake Ontario in 2008 to assess the spatial extent and intensity of the deep chlorophyll layer. In situ fluorometer data were corrected with extracted chlorophyll data using paired samples from Lake Ontario collected in August 2008. The deep chlorophyll layer was present offshore during the stratified conditions of late July 2008 with maximum values from 4-13 μg l-1 corrected chlorophyll a at 10 to 17 m depth within the metalimnion. Deep chlorophyll layer was closely associated with the base of the thermocline and a subsurface maximum of dissolved oxygen, indicating the feature's importance as a growth and productivity maximum. Crucial to the deep chlorophyll layer formation, the photic zone extended deeper than the surface mixed layer in mid-summer. The layer extended through most of the offshore in July 2008, but was not present in the easternmost transect that had a deeper surface mixed layer. By early September 2008, the lakewide deep chlorophyll layer had dissipated. A similar formation and dissipation was observed in the lakewide survey of Lake Ontario in 2003.
Centeno, Maria; Tierney, Tim M; Perani, Suejen; Shamshiri, Elhum A; St Pier, Kelly; Wilkinson, Charlotte; Konn, Daniel; Vulliemoz, Serge; Grouiller, Frédéric; Lemieux, Louis; Pressler, Ronit M; Clark, Christopher A; Cross, J Helen; Carmichael, David W
2017-08-01
Surgical treatment in epilepsy is effective if the epileptogenic zone (EZ) can be correctly localized and characterized. Here we use simultaneous electroencephalography-functional magnetic resonance imaging (EEG-fMRI) data to derive EEG-fMRI and electrical source imaging (ESI) maps. Their yield and their individual and combined ability to (1) localize the EZ and (2) predict seizure outcome were then evaluated. Fifty-three children with drug-resistant epilepsy underwent EEG-fMRI. Interictal discharges were mapped using both EEG-fMRI hemodynamic responses and ESI. A single localization was derived from each individual test (EEG-fMRI global maxima [GM]/ESI maximum) and from the combination of both maps (EEG-fMRI/ESI spatial intersection). To determine the localization accuracy and its predictive performance, the individual and combined test localizations were compared to the presumed EZ and to the postsurgical outcome. Fifty-two of 53 patients had significant maps: 47 of 53 for EEG-fMRI, 44 of 53 for ESI, and 34 of 53 for both. The EZ was well characterized in 29 patients; 26 had an EEG-fMRI GM localization that was correct in 11, 22 patients had ESI localization that was correct in 17, and 12 patients had combined EEG-fMRI and ESI that was correct in 11. Seizure outcome following resection was correctly predicted by EEG-fMRI GM in 8 of 20 patients, and by the ESI maximum in 13 of 16. The combined EEG-fMRI/ESI region entirely predicted outcome in 9 of 9 patients, including 3 with no lesion visible on MRI. EEG-fMRI combined with ESI provides a simple unbiased localization that may predict surgery better than each individual test, including in MRI-negative patients. Ann Neurol 2017;82:278-287. © 2017 American Neurological Association.
Effect of Receiver Choosing on Point Positions Determination in Network RTK
NASA Astrophysics Data System (ADS)
Bulbul, Sercan; Inal, Cevat
2016-04-01
Nowadays, developments in GNSS techniques allow point positions to be determined in real time. Initially, point positioning was determined by RTK (Real Time Kinematic) based on a single reference station. However, to avoid systematic errors in this method, the distance between the reference point and the rover receiver must be shorter than 10 km. To overcome this restriction of the RTK method, the idea of using more than one reference point was suggested, and CORS (Continuously Operating Reference Stations) networks were put into practice. Today, countries such as the USA, Germany and Japan have established CORS networks. The CORS-TR network, which has 146 reference points, was established in Turkey in 2009. In the CORS-TR network, an active CORS approach was adopted: the reference stations covering the whole country are interconnected, and the positions of these stations and atmospheric corrections are continuously calculated. In this study, at a selected point, RTK measurements based on CORS-TR were made with different receivers (JAVAD TRIUMPH-1, TOPCON Hiper V, MAGELLAN PRoMark 500, PENTAX SMT888-3G, SATLAB SL-600) and with different correction techniques (VRS, FKP, MAC). In the measurements, the epoch interval was taken as 5 seconds and the measurement time as 1 hour. For each receiver and each correction technique, the means and the differences between maximum and minimum values of the measured coordinates, the root mean squares along the coordinate axes, and the 2D and 3D positioning precisions were calculated; the results were evaluated by statistical methods and the resulting graphics were interpreted. After evaluation of the measurements and calculations, for each receiver and each correction technique, the coordinate differences between maximum and minimum values were less than 8 cm, the root mean squares along the coordinate axes less than ±1.5 cm, the 2D point positioning precisions less than ±1.5 cm and the 3D point positioning precisions less than ±1.5 cm.
At the measurement point, it was concluded that the VRS correction technique is generally better than the other correction techniques.
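The summary statistics named in the abstract (max-min spread per axis, per-axis RMS about the mean, and 2D/3D positioning precision) can be sketched as follows; the coordinate series here is synthetic, standing in for one hour of 5-second RTK epochs:

```python
import numpy as np

# Synthetic 1-hour series of 5-second epochs: columns are N, E, Up in metres.
rng = np.random.default_rng(0)
coords = rng.normal(0.0, 0.01, size=(720, 3))

spread = coords.max(axis=0) - coords.min(axis=0)              # max-min per axis
rms = np.sqrt(((coords - coords.mean(axis=0)) ** 2).mean(axis=0))
precision_2d = np.sqrt(rms[0] ** 2 + rms[1] ** 2)             # horizontal
precision_3d = np.sqrt((rms ** 2).sum())                      # all three axes
```

By construction the 3D precision can never be smaller than the 2D precision, since it adds the vertical component in quadrature.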
Patil, Prateek C; Rathod, Ashok K; Borde, Mandar; Singh, Vishwajeet; Singh, Hemant U
2016-12-01
Traditionally, surgical intervention for patients with a spinal deformity has been considered mainly for its cosmetic benefits, but surgery can also alter lung physiology and volumes, in turn increasing physical capacity and exercise tolerance. We therefore conducted this study to determine whether surgical correction would restore lung physiology, physical capacity and exercise tolerance in patients with kyphoscoliosis; to evaluate six-minute walk test scores and modified Borg scores as measures of exercise tolerance in patients with spinal deformity; and to study the effects of surgical correction of spinal deformity on exercise tolerance using these parameters as the measures. Thirty patients with spinal deformity, who had undergone surgery for deformity correction, were evaluated. All patients were investigated pre-operatively with x-rays of the spine (anteroposterior and lateral views). Clinical tests, namely breath holding time (after full inspiration) in seconds, modified Borg scores and six-minute walk test scores (heart rate, respiratory rate, maximum distance walked), were recorded as measures of exercise tolerance. The patients were followed up at the first, third, sixth and twelfth months post-operatively and tested clinically for breath holding time, modified Borg scores, six-minute walk test scores (heart rate, respiratory rate, maximum distance walked) and x-rays of the spine (anteroposterior and lateral views). In our study, breath holding time (p-value = 0.001) and modified Borg scores (p-value = 0.012) showed a significant improvement at 12 months post-operatively. We noted similar findings with heart rate, respiratory rate and maximum distance walked after a six-minute walk test. Improvements were noted in all the parameters, especially in the group of patients with a Cobb angle greater than 60 degrees.
However, the differences between the two groups (pre-operative Cobb angle less than 60 degrees and pre-operative Cobb angle more than 60 degrees) were not significant. The results were analysed and tested for significance using Student's t-test (paired and unpaired as appropriate) and the Wilcoxon signed rank test. Surgical correction in cases of spinal deformity improves the cosmetic appearance and balance of the patients. Favourable results of surgical intervention were found for exercise tolerance, with improvements in modified Borg scores, six-minute walk test results and breath holding time. These parameters appear to be good tools for the assessment of physical capacity and exercise tolerance in patients with spinal deformity.
Meltzer, M I
1996-12-31
Adult male Amblyomma hebraeum tick infestations and the weights of 20 Brahman steers and 38 Mashona heifers were measured at different periods at the Veterinary Quarantine Area at Mbizi, Zimbabwe. The experiment for the Brahmans lasted 108 weeks and that for the Mashona for 113 weeks. The Brahman steers weighed a maximum average of 478.4 kg (SE 7.9 kg), which was significantly different to the Mashona heifers maximum average of 391.4 kg (SE 5.6 kg) (P < 0.001). The Brahmans had a maximum average of 112.1 (SE 18.5) adult males, while the Mashona heifers had a maximum average of 59.8 (SE 4.3). The difference was statistically significant (P < 0.05). There was no statistical difference between the two maximum average ticks per kilogram liveweight (P > 0.05). When differences in size are corrected for, then breed-related differences disappear. It is emphasized that the influence of confounding factors, especially time, cannot be corrected for in a satisfactory manner. Therefore, these statistical results should be regarded as illustrative rather than proof. To confirm these results, it is suggested that the authors of earlier studies should reanalyze their databases in a similar manner. It is important that such analyses be conducted, or new experiments carried out. Erroneous conclusions regarding the reason for different tick numbers between the breeds could result in farmers being incorrectly encouraged to utilize smaller breeds to obtain 'built-in' resistance to A. hebraeum ticks. One logical explanation for the size-related effect is that the males typically attach themselves around the belly and groin areas. Larger breeds of cattle, such as the Brahman, will naturally have larger surface areas of skin in the belly and groin regions than smaller breeds. Thus, it is suggested that there may be a simple physical explanation for the difference between breeds in the numbers of attached adult A. hebraeum males.
Noise level in intensive care units of a public university hospital in Santa Marta (Colombia).
Garrido Galindo, A P; Camargo Caicedo, Y; Vélez-Pereira, A M
2016-10-01
To evaluate the noise level in the adult, pediatric and neonatal intensive care units of a university hospital in the city of Santa Marta (Colombia). A descriptive, observational, non-interventional study with follow-up over time was carried out. Continuous sampling was conducted for 20 days for each unit using a type I sound level meter, with A-frequency weighting and Fast mode. We recorded the maximum values, the 90th percentile as background noise, and the continuous noise level. The mean hourly levels in the adult unit varied between 57.40±1.14 and 63.47±2.13 dBA, with maxima between 71.55±2.32 and 77.22±1.94 dBA, and background noise between 53.51±1.16 and 60.26±2.10 dBA; in the pediatric unit the mean hourly levels varied between 57.07±3.07 and 65.72±2.46 dBA, with maxima between 68.69±3.57 and 79.06±2.34 dBA, and background noise between 53.33±3.54 and 61.96±2.85 dBA; the neonatal unit in turn presented mean hourly values between 59.54±2.41 and 65.33±1.77 dBA, with maxima between 67.20±2.13 and 77.65±3.74 dBA, and background noise between 55.02±2.03 and 58.70±1.95 dBA. Analysis of variance revealed a significant difference between the hourly values and between the different units, with the time of day exhibiting the greater influence. The type of unit affects the noise levels in intensive care units, the pediatric unit showing the highest values and the adult unit the lowest values. However, the parameter exerting the greatest influence upon noise level is the time of day, with higher levels in the morning and evening, and lower levels at night and in the early morning. Copyright © 2015 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.
Durairaj, Chandrasekar; Ruiz-Garcia, Ana; Gauthier, Eric R; Huang, Xin; Lu, Dongrui R; Hoffman, Justin T; Finn, Richard S; Joy, Anil A; Ettl, Johannes; Rugo, Hope S; Zheng, Jenny; Wilner, Keith D; Wang, Diane D
2018-03-01
The aim of this study was to assess the potential effects of palbociclib in combination with letrozole on QTc. PALOMA-2, a phase 3, randomized, double-blind, placebo-controlled trial, compared palbociclib plus letrozole with placebo plus letrozole in postmenopausal women with estrogen receptor-positive, human epidermal growth factor receptor 2-negative advanced breast cancer. The study included a QTc evaluation substudy carried out as a definitive QT interval prolongation assessment for palbociclib. Time-matched triplicate ECGs were performed at 0, 2, 4, 6, and 8 h at baseline (Day 0) and on Cycle 1 Day 14. Additional ECGs were collected from all patients for safety monitoring. The QT interval was corrected for heart rate using Fridericia's correction (QTcF), Bazett's correction (QTcB), and a study-specific correction factor (QTcS). In total, 666 patients were randomized 2 : 1 to palbociclib plus letrozole or placebo plus letrozole. Of these, 125 patients were enrolled in the QTc evaluation substudy. No patients in the palbociclib plus letrozole arm of the substudy (N=77) had a maximum postbaseline QTcS or QTcF value of ≥ 480 ms, or a maximum increase from clock time-matched baseline for QTcS or QTcF values of ≥ 60 ms. The upper bounds of the one-sided 95% confidence interval for the mean change from time-matched baseline for QTcS, QTcF, and QTcB at all time points and at steady-state Cmax following repeated administration of 125 mg palbociclib were less than 10 ms. Palbociclib, when administered with letrozole at the recommended therapeutic dosing regimen, did not prolong the QT interval to a clinically relevant extent.
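The two standard heart-rate corrections named above have simple closed forms, with QT and RR intervals in seconds: Bazett divides QT by the square root of RR, Fridericia by the cube root. (The study-specific QTcS factor is fitted per study and is not reproduced here.)

```python
# Standard QT corrections; qt_s and rr_s are intervals in seconds.

def qtc_bazett(qt_s, rr_s):
    """Bazett: QTcB = QT / RR**(1/2)."""
    return qt_s / rr_s ** 0.5

def qtc_fridericia(qt_s, rr_s):
    """Fridericia: QTcF = QT / RR**(1/3)."""
    return qt_s / rr_s ** (1.0 / 3.0)

# At 60 bpm (RR = 1 s) both corrections leave QT unchanged; at faster
# heart rates (RR < 1 s) Bazett corrects more aggressively than Fridericia.
```

This is why trials typically report both: Bazett tends to over-correct at high heart rates, which matters when screening for values near thresholds such as 480 ms.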
Clinical implementation of MOSFET detectors for dosimetry in electron beams.
Bloemen-van Gurp, Esther J; Minken, Andre W H; Mijnheer, Ben J; Dehing-Oberye, Cary J G; Lambin, Philippe
2006-09-01
To determine the factors converting the reading of a MOSFET detector placed on the patient's skin without additional build-up to the dose at the depth of dose maximum (D(max)) and investigate their feasibility for in vivo dose measurements in electron beams. Factors were determined to relate the reading of a MOSFET detector to D(max) for 4 - 15 MeV electron beams in reference conditions. The influence of variation in field size, SSD, angle and field shape on the MOSFET reading, obtained without additional build-up, was evaluated using 4, 8 and 15 MeV beams and compared to ionisation chamber data at the depth of dose maximum (z(max)). Patient entrance in vivo measurements included 40 patients, mostly treated for breast tumours. The MOSFET reading, converted to D(max), was compared to the dose prescribed at this depth. The factors to convert MOSFET reading to D(max) vary between 1.33 and 1.20 for the 4 and 15 MeV beams, respectively. The SSD correction factor is approximately 8% for a change in SSD from 95 to 100 cm, and 2% for each 5-cm increment above 100 cm SSD. A correction for fields having sides smaller than 6 cm and for irregular field shape is also recommended. For fields up to 20 x 20 cm(2) and for oblique incidence up to 45 degrees, a correction is not necessary. Patient measurements demonstrated deviations from the prescribed dose with a mean difference of -0.7% and a standard deviation of 2.9%. Performing dose measurements with MOSFET detectors placed on the patient's skin without additional build-up is a well suited technique for routine dose verification in electron beams, when applying the appropriate conversion and correction factors.
Pre-processing, registration and selection of adaptive optics corrected retinal images.
Ramaswamy, Gomathy; Devaney, Nicholas
2013-07-01
In this paper, the aim is to demonstrate enhanced processing of sequences of fundus images obtained using a commercial AO flood illumination system. The purpose of the work is to (1) correct for uneven illumination at the retina (2) automatically select the best quality images and (3) precisely register the best images. Adaptive optics corrected retinal images are pre-processed to correct uneven illumination using different methods; subtracting or dividing by the average filtered image, homomorphic filtering and a wavelet based approach. These images are evaluated to measure the image quality using various parameters, including sharpness, variance, power spectrum kurtosis and contrast. We have carried out the registration in two stages; a coarse stage using cross-correlation followed by fine registration using two approaches; parabolic interpolation on the peak of the cross-correlation and maximum-likelihood estimation. The angle of rotation of the images is measured using a combination of peak tracking and Procrustes transformation. We have found that a wavelet approach (Daubechies 4 wavelet at 6th level decomposition) provides good illumination correction with clear improvement in image sharpness and contrast. The assessment of image quality using a 'Designer metric' works well when compared to visual evaluation, although it is highly correlated with other metrics. In image registration, sub-pixel translation measured using parabolic interpolation on the peak of the cross-correlation function and maximum-likelihood estimation are found to give very similar results (RMS difference 0.047 pixels). We have confirmed that correcting rotation of the images provides a significant improvement, especially at the edges of the image. We observed that selecting the better quality frames (e.g. best 75% images) for image registration gives improved resolution, at the expense of poorer signal-to-noise. 
The sharpness map of the registered and de-rotated images shows increased sharpness over most of the field of view. Adaptive optics assisted images of the cone photoreceptors can be better pre-processed using a wavelet approach. These images can be assessed for image quality using a 'Designer Metric'. Two-stage image registration including correcting for rotation significantly improves the final image contrast and sharpness. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
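The two-stage shift estimate described above (a coarse integer shift from the cross-correlation peak, refined by parabolic interpolation through the peak and its two neighbours) can be sketched in one dimension; the data here are synthetic, not retinal images:

```python
import numpy as np

def subpixel_peak(cc):
    """Sub-pixel location of the maximum of cross-correlation values cc."""
    k = int(np.argmax(cc))                            # coarse integer peak
    ym, y0, yp = cc[k - 1], cc[k], cc[k + 1]
    delta = 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)    # parabola vertex offset
    return k + delta

# Samples of a parabola peaking at x = 2.3 are recovered exactly.
cc = np.array([-(x - 2.3) ** 2 for x in range(5)])
```

For a truly parabolic peak the refinement is exact; for real correlation surfaces it is an approximation, which is why the paper cross-checks it against a maximum-likelihood estimate.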
Concentrating Solar Power Projects - Liddell Power Station | Concentrating Solar Power
Technology: Linear Fresnel reflector. Turbine Capacity: Net: 3.0 MW; Gross: 3.0 MW. Status: Currently Non-Operational. Start Year: 2012.
76 FR 42732 - Importer of Controlled Substances; Notice of Registration
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-19
... the Correction to Notice of Application pertaining to Rhodes Technologies, 72 FR 2417 (2007), comments... state and local laws, and a review of the company's background and history. Therefore, pursuant to 21 U...
Concentrating Solar Power Projects - Orellana | Concentrating Solar Power
Status: Operational. Start Year: 2012. Contact(s): SolarPACES. Start Production: August 2012. Cost (approx): 240,000,000 Euro.
Concentrating Solar Power Projects - Solacor 2 | Concentrating Solar Power
Status: Operational. Start Year: 2012. 100,000 MWh/yr (Estimated). Contact(s): Allison Lenthall. Company: Abengoa Solar. Start Production: March 9.
Concentrating Solar Power Projects - Aurora Solar Energy Project | Concentrating Solar Power
Status: Under development. Start Year: 2020. 495,000 MWh/yr (Expected). Contact(s): Webmaster Solar. Key References: Fact sheet. Break Ground: 2018.
Concentrating Solar Power Projects - MINOS | Concentrating Solar Power
Status: Under development. Start Year: 2020. Contact(s): Alex Phocas-Cosmetatos. Company: Nur Energie. Start Production: 2020. PPA/Tariff Type: Feed-In Tariff.
Concentrating Solar Power Projects - Shagaya CSP Project | Concentrating Solar Power
Status: Under construction. Start Year: 2018. 180,000 MWh/yr. Contact(s): Webmaster Solar. Start Production: 2018. Cost (approx): 385 US$ million.
Concentrating Solar Power Projects - Solaben 2 | Concentrating Solar Power
Status: Operational. Start Year: 2012. 100,000 MWh/yr (Estimated). Contact(s): Allison Lenthall. Company: Abengoa Solar. Start Production: October.
Concentrating Solar Power Projects - Solacor 1 | Concentrating Solar Power
Status: Operational. Start Year: 2012. 100,000 MWh/yr (Estimated). Contact(s): Allison Lenthall. Company: Abengoa Solar. Start Production: February.
Concentrating Solar Power Projects - Sundrop CSP Project | Concentrating Solar Power
Start Year: 2016. Break Ground: October 12, 2015. Start Production: October 6, 2016. Developer(s): Aalborg CSP.
Concentrating Solar Power Projects - Solaben 3 | Concentrating Solar Power
Status: Operational. Start Year: 2012. 100,000 MWh/yr (Estimated). Contact(s): Allison Lenthall. Company: Abengoa Solar. Start Production: June.
Maximum a posteriori resampling of noisy, spatially correlated data
NASA Astrophysics Data System (ADS)
Goff, John A.; Jenkins, Chris; Calder, Brian
2006-08-01
In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application. We present here an alternative to filtering: a newly developed method for correcting noise in data by finding the "best" value given available information. The motivating rationale is that data points that are close to each other in space cannot differ by "too much," where "too much" is governed by the field covariance. Data with large uncertainties will frequently violate this condition and therefore ought to be corrected, or "resampled." Our solution for resampling is determined by the maximum of the a posteriori density function defined by the intersection of (1) the data error probability density function (pdf) and (2) the conditional pdf, determined by the geostatistical kriging algorithm applied to proximal data values. A maximum a posteriori solution can be computed sequentially going through all the data, but the solution depends on the order in which the data are examined. We approximate the global a posteriori solution by randomizing this order and taking the average. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum a posteriori resampling algorithm. 
The method is also applied to three marine geology/geophysics data examples, demonstrating the viability of the method for diverse applications: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) side-scan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckle noise. Compared to filtering, maximum a posteriori resampling provides an objective and optimal method for reducing noise, and better preservation of the statistical properties of the sampled field. The primary disadvantage is that maximum a posteriori resampling is a computationally expensive procedure.
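The per-point correction described above intersects two probability densities: the data-error pdf centred on the observation and the kriging conditional pdf from neighbouring values. When both are taken as Gaussian, the maximum of their product has a closed form, the precision-weighted mean; a minimal sketch under that assumption:

```python
# MAP of the product of two Gaussian pdfs: the precision-weighted mean.
# obs_sigma is the data uncertainty; krig_mean/krig_sigma come from kriging
# of the neighbouring values (both Gaussians assumed for this sketch).

def map_resample(obs, obs_sigma, krig_mean, krig_sigma):
    w_obs = 1.0 / obs_sigma ** 2     # precision of the data-error pdf
    w_krig = 1.0 / krig_sigma ** 2   # precision of the kriging pdf
    return (w_obs * obs + w_krig * krig_mean) / (w_obs + w_krig)

# A noisy sounding (large sigma) is pulled strongly towards the value
# predicted from its neighbours; a precise one barely moves.
```

This captures the paper's motivating rationale: the more uncertain a datum, the more it is corrected towards what the field covariance says its neighbours imply.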
Huang, Jie; Shi, Tielin; Tang, Zirong; Zhu, Wei; Liao, Guanglan; Li, Xiaoping; Gong, Bo; Zhou, Tengyuan
2017-08-01
We propose a bi-objective optimization model for extracting optical fiber background from the measured surface-enhanced Raman spectroscopy (SERS) spectrum of the target sample in the application of fiber optic SERS. The model is built using curve fitting to resolve the SERS spectrum into several individual bands, and simultaneously matching some resolved bands with the measured background spectrum. The Pearson correlation coefficient is selected as the similarity index and its maximum value is pursued during the spectral matching process. An algorithm is proposed, programmed, and demonstrated successfully in extracting optical fiber background or fluorescence background from the measured SERS spectra of rhodamine 6G (R6G) and crystal violet (CV). The proposed model not only can be applied to remove optical fiber background or fluorescence background for SERS spectra, but also can be transferred to conventional Raman spectra recorded using fiber optic instrumentation.
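The similarity index driving the matching step is the Pearson correlation coefficient between the candidate background bands and the separately measured background spectrum, to be maximised during fitting. A sketch with synthetic Gaussian bands (not real SERS data):

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between two spectra."""
    return float(np.corrcoef(a, b)[0, 1])

x = np.linspace(0, 100, 500)

def band(center, width):
    """Synthetic Gaussian band standing in for a resolved spectral band."""
    return np.exp(-((x - center) / width) ** 2)

measured_background = 1.0 * band(30, 8) + 0.6 * band(70, 10)
candidate_bands = 0.9 * band(30, 8) + 0.5 * band(70, 10)   # from curve fitting

similarity = pearson(candidate_bands, measured_background)
```

Bands whose summed shape correlates strongly with the measured fiber background would be assigned to the background and subtracted, leaving the analyte signal.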
Code of Federal Regulations, 2011 CFR
2011-07-01
... § 60.704 is completed, but not later than 60 days after achieving the maximum production rate at which... first. Each owner or operator shall either: (a) Reduce emissions of TOC (less methane and ethane) by 98 weight-percent, or to a TOC (less methane and ethane) concentration of 20 ppmv, on a dry basis corrected...
Code of Federal Regulations, 2012 CFR
2012-07-01
... § 60.704 is completed, but not later than 60 days after achieving the maximum production rate at which... first. Each owner or operator shall either: (a) Reduce emissions of TOC (less methane and ethane) by 98 weight-percent, or to a TOC (less methane and ethane) concentration of 20 ppmv, on a dry basis corrected...
A Test-Length Correction to the Estimation of Extreme Proficiency Levels
ERIC Educational Resources Information Center
Magis, David; Beland, Sebastien; Raiche, Gilles
2011-01-01
In this study, the estimation of extremely large or extremely small proficiency levels, given the item parameters of a logistic item response model, is investigated. On one hand, the estimation of proficiency levels by maximum likelihood (ML), despite being asymptotically unbiased, may yield infinite estimates. On the other hand, with an…
40 CFR 1033.525 - Smoke testing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... rectangular duct), you may align the beam to have a different path length and correct it to be equivalent to a... that maximum response below 430 nanometers and above 680 nanometers). (4) Attach a collimating tube to... light. (6) You may use an air curtain across the light source and detector window assemblies to minimize...
The Missing Link: Service-Learning as an Essential Tool for Correctional Education
ERIC Educational Resources Information Center
Frank, Jacquelyn B.; Omstead, Jon-Adam; Pigg, Steven Anthony
2012-01-01
This article reports the results of a Participatory Action Research (PAR) study conducted by a university faculty member and two incarcerated college graduates in Indiana. The research team designed and piloted a service-learning program specifically aimed at college-level inmates in a maximum security prison. This qualitative study used…
He, Xin; Frey, Eric C
2006-08-01
Previously, we developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (the equal error utility assumption). This assumption reduces the dimensionality of "general" three-class ROC analysis and provides a practical figure of merit to evaluate three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply to all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that, by making assumptions for both the MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and its ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense, due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
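The MEU criterion can be made concrete: given posterior probabilities for the three hypotheses and a decision-utility matrix, choose the decision with the largest expected utility. The toy code below is illustrative only (the posteriors and utilities are made up); it also shows that with an equal-error-utility (identity) matrix the MEU rule reduces to the maximum-correctness rule of picking the largest posterior:

```python
def meu_decision(posteriors, utility):
    """Pick the decision d maximizing sum_h utility[d][h] * P(h|x),
    where utility[d][h] is the utility of deciding d when h is true."""
    n_dec = len(utility)
    expected = [sum(utility[d][h] * posteriors[h]
                    for h in range(len(posteriors)))
                for d in range(n_dec)]
    return max(range(n_dec), key=lambda d: expected[d])

# Equal-error-utility matrix: correct decisions score 1, all errors under
# the same hypothesis score 0 -- MEU then picks the largest posterior.
U = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]
print(meu_decision([0.2, 0.5, 0.3], U))  # 1
```

Changing one diagonal utility (e.g. heavily rewarding correct detection of the third class) shifts the decision boundary, which is exactly why the equal error utility assumption restricts generality.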
Maximum Likelihood Detection of Electro-Optic Moving Targets
1992-01-16
indicates intensity. The Infrared Measurements Sensor (IRMS) is a scanning sensor that collects both long-wavelength infrared (LWIR, 8 to 12 μm)...moving clutter. Nonstationary spatial statistics correspond to the nonuniform intensity of the background scene. An equivalent viewpoint is to...Figure 6 compares theory and experiment for 10 frames of the Longjump LWIR data obtained from the IRMS scanning sensor, which is looking at a background
Custer, T.W.; Mitchell, C.A.
1991-01-01
Willets (Catoptrophorus semipalmatus) were collected in June and August 1986 at the outlets of two agricultural drainages into the Lower Laguna Madre of south Texas and at two other Texas coastal sites. Mean liver concentrations of arsenic were higher in August than in June. Over 20% of the livers had arsenic concentrations elevated above a suggested background level of 5.0 ppm dry weight (DW), but concentrations (maximum 15 ppm) were below those associated with acute toxicity. Selenium concentrations in livers varied from 2.3 to 8.3 ppm DW for all locations and represented background levels. Mercury concentrations in liver for all locations (mean = 2.0 to 3.4, maximum 17 ppm DW) were below those associated with avian mortality and similar to levels found in other estuarine/marine birds. DDE in carcasses was higher in adults (mean = 1.0 ppm wet weight) than in juveniles (0.2 ppm), and higher in August (1.0 ppm) than in June (0.5 ppm); however, DDE concentrations were generally at background levels. Based on brain cholinesterase activity, willets had not recently been exposed to organophosphate pesticides.
Image-guided regularization level set evolution for MR image segmentation and bias field correction.
Wang, Lingfeng; Pan, Chunhong
2014-01-01
Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with the intensity inhomogeneity problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. Maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, image intensity inhomogeneity can be handled well. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracy as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
Midline shift and lateral guidance angle in adults with unilateral posterior crossbite.
Rilo, Benito; da Silva, José Luis; Mora, María Jesús; Cadarso-Suárez, Carmen; Santana, Urbano
2008-06-01
Unilateral posterior crossbite is a malocclusion that, if not corrected during infancy, typically causes permanent asymmetry. Our aims in this study were to evaluate various occlusal parameters in a group of adults with uncorrected unilateral posterior crossbite and to compare findings with those obtained in a group of normal subjects. Midline shift at maximum intercuspation, midline shift at maximum aperture, and lateral guidance angle in the frontal plane were assessed in 25 adults (ages, 17-26 years; mean, 19.6 years) with crossbites. Midline shift at maximum intercuspation was zero (ie, centric midline) in 36% of the crossbite subjects; the remaining subjects had a shift toward the crossbite side. Midline shift at maximum aperture had no association with crossbite side. Lateral guidance angle was lower on the crossbite side than on the noncrossbite side. No parameter studied showed significant differences with respect to the normal subjects. Adults with unilateral posterior crossbite have adaptations that compensate for the crossbite and maintain normal function.
Critical Analysis of Dual-Probe Heat-Pulse Technique Applied to Measuring Thermal Diffusivity
NASA Astrophysics Data System (ADS)
Bovesecchi, G.; Coppa, P.; Corasaniti, S.; Potenza, M.
2018-07-01
The paper presents an analysis of the experimental parameters involved in application of the dual-probe heat-pulse technique, followed by a critical review of methods for processing thermal response data (e.g., maximum detection and nonlinear least-squares regression) and the consequently obtainable uncertainty. Glycerol was selected as the testing liquid, and its thermal diffusivity was evaluated over the temperature range from -20 °C to 60 °C. In addition, Monte Carlo simulation was used to assess the uncertainty propagation for maximum detection. It was concluded that the maximum-detection approach to processing thermal response data gives the results closest to the reference data, inasmuch as nonlinear-regression results are affected by larger uncertainties due to partial correlation between the evaluated parameters. Moreover, interpolating the temperature data with a polynomial to find the maximum leads to a systematic difference between measured and reference data, as evidenced by the Monte Carlo simulations; through its correction, this systematic error can be reduced to a negligible value, about 0.8%.
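As background for the maximum-detection approach, here is a minimal sketch. It assumes the classical instantaneous line-source result α = r²/(8·t_m), where r is the probe spacing and t_m the time of the temperature maximum (the paper's finite-pulse analysis is more involved), and locates t_m with a local parabola fit through the sampled peak — the interpolation step whose systematic error the Monte Carlo analysis quantifies. All numbers are synthetic:

```python
def quad_peak_time(ts, Ts):
    """Locate the time of maximum temperature rise by fitting a parabola
    T = a*t^2 + b*t + c through the sampled point of largest T and its
    two neighbours; the vertex lies at t_m = -b / (2a)."""
    i = max(range(len(Ts)), key=lambda k: Ts[k])      # assumes interior peak
    (t0, t1, t2), (y0, y1, y2) = ts[i-1:i+2], Ts[i-1:i+2]
    a = ((y2 - y1) / (t2 - t1) - (y1 - y0) / (t1 - t0)) / (t2 - t0)
    b = (y1 - y0) / (t1 - t0) - a * (t0 + t1)
    return -b / (2 * a)

def diffusivity_from_peak(r, t_m):
    # Instantaneous line-source result: alpha = r^2 / (8 * t_m).
    return r**2 / (8.0 * t_m)

# Synthetic response peaking at t = 12 s, probe spacing r = 6 mm:
ts = [10.0, 11.0, 12.0, 13.0, 14.0]
Ts = [0.90, 0.97, 1.00, 0.97, 0.90]
t_m = quad_peak_time(ts, Ts)
print(diffusivity_from_peak(0.006, t_m))  # 3.75e-07 m^2/s
```

For real data the parabola is fitted over a window around the peak, and the choice of window width contributes to the systematic difference discussed above.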
Infrared image segmentation method based on spatial coherence histogram and maximum entropy
NASA Astrophysics Data System (ADS)
Liu, Songtao; Shen, Tongsheng; Dai, Yao
2014-11-01
To segment targets well and suppress background noise effectively, an infrared image segmentation method based on a spatial coherence histogram and maximum entropy is proposed. First, the spatial coherence histogram is constructed by weighting pixels of the same gray level according to their spatial position, with the weights obtained from their local density. Then, after enhancing the image with the spatial coherence histogram, the 1D maximum entropy method is used to segment the image. The new method not only yields better segmentation results but also computes faster than traditional 2D histogram-based segmentation methods.
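The 1D maximum entropy step is the classical Kapur-style criterion: choose the threshold that maximizes the summed entropies of the below- and above-threshold gray-level distributions. A minimal sketch on a toy histogram follows (an ordinary gray-level histogram here, not the paper's spatial coherence histogram):

```python
import math

def max_entropy_threshold(hist):
    """Kapur-style 1D maximum-entropy threshold: pick t maximizing the
    sum of the entropies of the two class-conditional distributions."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_H = 0, float("-inf")
    for t in range(1, len(p)):
        w0 = sum(p[:t])            # probability mass below threshold
        w1 = 1.0 - w0              # probability mass at/above threshold
        if w0 <= 0 or w1 <= 0:
            continue
        H0 = -sum(q / w0 * math.log(q / w0) for q in p[:t] if q > 0)
        H1 = -sum(q / w1 * math.log(q / w1) for q in p[t:] if q > 0)
        if H0 + H1 > best_H:
            best_t, best_H = t, H0 + H1
    return best_t

# Bimodal toy histogram: dark background pixels around level 1,
# bright target pixels around level 6.
hist = [10, 40, 10, 0, 0, 8, 30, 8]
print(max_entropy_threshold(hist))  # a threshold between the two modes
```

In the paper's method the same criterion is simply applied to the enhanced image, so the histogram fed in already reflects the spatial-coherence weighting.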
Electroweak Sudakov Corrections to New Physics Searches at the LHC
NASA Astrophysics Data System (ADS)
Chiesa, Mauro; Montagna, Guido; Barzè, Luca; Moretti, Mauro; Nicrosini, Oreste; Piccinini, Fulvio; Tramontano, Francesco
2013-09-01
We compute the one-loop electroweak Sudakov corrections to the production process Z(νν¯) + n jets, with n = 1, 2, 3, in pp collisions at the LHC. This process represents the main irreducible background to new physics searches at the energy frontier. The results are obtained at leading and next-to-leading logarithmic accuracy by implementing the general algorithm of Denner and Pozzorini in the event generator for multiparton processes ALPGEN. For the standard selection cuts used by the ATLAS and CMS Collaborations, we show that the Sudakov corrections to the relevant observables can grow up to -40% at √s = 14 TeV. We also include the contribution due to undetected real radiation of massive gauge bosons, to show to what extent the partial cancellation with the large negative virtual corrections takes place in realistic event selections.
Noctilucent cloud polarimetry: Twilight measurements in a wide range of scattering angles
NASA Astrophysics Data System (ADS)
Ugolnikov, Oleg S.; Maslov, Igor A.; Kozelov, Boris V.; Dlugach, Janna M.
2016-06-01
Wide-field polarization measurements of the twilight sky background during several nights with bright and extended noctilucent clouds in central and northern Russia in 2014 and 2015 are used to build the phase dependence of the degree of polarization of sunlight scattered by cloud particles in a wide range of scattering angles (from 40° to 130°). This range covers the linear polarization maximum near 90° and large-angle slope of the curve. The polarization in this angle range is most sensitive to the particle size. The method of separation of scattering on cloud particles from the twilight background is presented. Results are compared with T-matrix simulations for different sizes and shapes of ice particles; the best-fit model radius of particles (0.06 μm) and maximum radius (about 0.1 μm) are estimated.
Characteristics of atmospheric carbon monoxide at a high-mountain background station in East Asia
NASA Astrophysics Data System (ADS)
Ou-Yang, Chang-Feng; Lin, Neng-Huei; Lin, Chia-Ching; Wang, Sheng-Hsiang; Sheu, Guey-Rong; Lee, Chung-Te; Schnell, Russell C.; Lang, Patricia M.; Kawasato, Taro; Wang, Jia-Lin
2014-06-01
Atmospheric CO was monitored at the Lulin Atmospheric Background Station (LABS), at an elevation of 2862 m AMSL, from April 2006 to April 2011 by an in-situ non-dispersive infrared (NDIR) spectrometer and by weekly flask sample collections in collaboration with NOAA/ESRL/GMD. In general, very coherent results were observed between the two datasets, despite a slight difference between them. A distinct seasonal pattern of CO was noticed at the LABS, with a springtime maximum and a summertime minimum, predominantly shaped by long-range transport of biomass-burning air masses from Southeast Asia and by oceanic influences from the Pacific, respectively. Diurnal cycles were also observed at the LABS, with a maximum in late afternoon and a minimum in early morning. The daytime CO maximum was most likely caused by up-slope transport of lower-elevation air. After filtering out possibly polluted data points from the entire dataset with a mathematical procedure, the mean background CO level at the LABS was assessed as 129.3 ± 46.6 ppb, compared with 149.0 ± 72.2 ppb before filtering. Cluster analysis of the backward trajectories revealed six possible source regions and showed that air masses originating from the Westerly Wind Zone dominated in spring and winter, resulting in higher CO concentrations. In contrast, oceanic influences from the Pacific were found mostly in summer, contributing the lowest seasonal CO concentrations of the year.
The Top-of-Instrument corrections for nuclei with AMS on the Space Station
NASA Astrophysics Data System (ADS)
Ferris, N. G.; Heil, M.
2018-05-01
The Alpha Magnetic Spectrometer (AMS) is a large acceptance, high precision magnetic spectrometer on the International Space Station (ISS). The top-of-instrument correction for nuclei flux measurements with AMS accounts for backgrounds due to the fragmentation of nuclei with higher charge. Upon entry in the detector, nuclei may interact with AMS materials and split into fragments of lower charge based on their cross-section. The redundancy of charge measurements along the particle trajectory with AMS allows for the determination of inelastic interactions and for the selection of high purity nuclei samples with small uncertainties. The top-of-instrument corrections for nuclei with 2 < Z ≤ 6 are presented.
Wigg, Jonathan P.; Zhang, Hong; Yang, Dong
2015-01-01
Introduction In-vivo imaging of choroidal neovascularization (CNV) has been increasingly recognized as a valuable tool in the investigation of age-related macular degeneration (AMD) in both clinical and basic research applications. Arguably the most widely utilised model replicating AMD is laser generated CNV by rupture of Bruch’s membrane in rodents. Heretofore CNV evaluation via in-vivo imaging techniques has been hamstrung by a lack of appropriate rodent fundus camera and a non-standardised analysis method. The aim of this study was to establish a simple, quantifiable method of fluorescein fundus angiogram (FFA) image analysis for CNV lesions. Methods Laser was applied to 32 Brown Norway Rats; FFA images were taken using a rodent specific fundus camera (Micron III, Phoenix Laboratories) over 3 weeks and compared to conventional ex-vivo CNV assessment. FFA images acquired with fluorescein administered by intraperitoneal injection and intravenous injection were compared and shown to greatly influence lesion properties. Utilising commonly used software packages, FFA images were assessed for CNV and chorioretinal burns lesion area by manually outlining the maximum border of each lesion and normalising against the optic nerve head. Net fluorescence above background and derived value of area corrected lesion intensity were calculated. Results CNV lesions of rats treated with anti-VEGF antibody were significantly smaller in normalised lesion area (p<0.001) and fluorescent intensity (p<0.001) than the PBS treated control two weeks post laser. The calculated area corrected lesion intensity was significantly smaller (p<0.001) in anti-VEGF treated animals at 2 and 3 weeks post laser. The results obtained using FFA correlated with, and were confirmed by conventional lesion area measurements from isolectin stained choroidal flatmounts, where lesions of anti-VEGF treated rats were significantly smaller at 2 weeks (p = 0.049) and 3 weeks (p<0.001) post laser. 
Conclusion The presented method of in-vivo FFA quantification of CNV, including acquisition variable corrections, using the Micron III system and common use software establishes a reliable method for detecting and quantifying CNV enabling longitudinal studies and represents an important alternative to conventional CNV quantification methods. PMID:26024231
CONSTRAINTS ON HYBRID METRIC-PALATINI GRAVITY FROM BACKGROUND EVOLUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lima, N. A.; Barreto, V. S., E-mail: ndal@roe.ac.uk, E-mail: vsm@roe.ac.uk
2016-02-20
In this work, we introduce two models of the hybrid metric-Palatini theory of gravitation. We explore their background evolution, showing explicitly that one recovers standard General Relativity with an effective cosmological constant at late times. This happens because the Palatini Ricci scalar evolves toward and asymptotically settles at the minimum of its effective potential during cosmological evolution. We then use a combination of cosmic microwave background, supernovae, and baryon acoustic oscillation background data to constrain the models' free parameters. For both models, we are able to constrain the maximum deviation from the gravitational constant G one can have at early times to be around 1%.
Lacosamide cardiac safety: a thorough QT/QTc trial in healthy volunteers.
Kropeit, D; Johnson, M; Cawello, W; Rudd, G D; Horstmann, R
2015-11-01
To determine whether lacosamide prolongs the corrected QT interval (QTc). In this randomized, double-blind, positive- and placebo-controlled, parallel-design trial, healthy volunteers were randomized to lacosamide 400 mg/day (maximum-recommended daily dose, 6 days), lacosamide 800 mg/day (supratherapeutic dose, 6 days), placebo (6 days), or moxifloxacin 400 mg/day (3 days). Variables included maximum time-matched change from baseline in QT interval individually corrected for heart rate ([HR] QTcI), other ECG parameters, pharmacokinetics (PK), and safety/tolerability. The QTcI mean maximum difference from placebo was -4.3 ms and -6.3 ms for lacosamide 400 and 800 mg/day; upper limits of the 2-sided 90% confidence interval were below the 10 ms non-inferiority margin (-0.5 and -2.5 ms, respectively). Placebo-corrected QTcI for moxifloxacin was +10.4 ms (lower 90% confidence bound >0 [6.6 ms]), which established assay sensitivity for this trial. As lacosamide did not increase QTcI, the trial is considered a negative QTc trial. There was no dose-related or clinically relevant effect on QRS duration. HR increased from baseline by ~5 bpm with lacosamide 800 mg/day versus placebo. Placebo-subtracted mean increases in PR interval at tmax were 7.3 ms (400 mg/day) and 11.9 ms (800 mg/day). There were no findings of second-degree or higher atrioventricular block. Adverse events (AEs) were dose related and most commonly involved the nervous and gastrointestinal systems. Lacosamide (≤ 800 mg/day) did not prolong the QTc interval. Lacosamide caused a small, dose-related increase in mean PR interval that was not associated with AEs. Cardiac, overall safety, and PK profiles for lacosamide in healthy volunteers were consistent with those observed in patients with partial-onset seizures. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
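The individual heart-rate correction (QTcI) used in such trials is commonly obtained by fitting a subject-specific exponent β in QTcI = QT/RR^β from drug-free recordings, via linear regression of log(QT) on log(RR). A minimal sketch with synthetic data follows (illustrative only; the trial's exact fitting procedure is not specified in the abstract):

```python
import math

def fit_beta(rr, qt):
    """Least-squares slope of log(QT) on log(RR): the subject-specific
    exponent used for individual heart-rate correction (QTcI)."""
    xs = [math.log(r) for r in rr]
    ys = [math.log(q) for q in qt]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

def qtci(qt, rr, beta):
    # Normalise QT to an RR interval of 1 s (heart rate 60 bpm).
    return qt / rr ** beta

# Synthetic drug-free data generated with QT = 400 * RR^0.4 (QT in ms, RR in s):
rr = [0.8, 0.9, 1.0, 1.1, 1.2]
qt = [400 * r ** 0.4 for r in rr]
beta = fit_beta(rr, qt)
print(round(beta, 3), round(qtci(qt[0], rr[0], beta), 1))  # 0.4 400.0
```

Once β is fixed per subject, QTcI is computed for every on-treatment ECG and the time-matched change from baseline is compared against placebo, as in the trial above.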
2012-01-01
Background Production of correctly disulfide bonded proteins to high yields remains a challenge. Recombinant protein expression in Escherichia coli is the popular choice, especially within the research community. While there is an ever growing demand for new expression strains, few strains are dedicated to post-translational modifications, such as disulfide bond formation. Thus, new protein expression strains must be engineered and the parameters involved in producing disulfide bonded proteins must be understood. Results We have engineered a new E. coli protein expression strain named SHuffle, dedicated to producing correctly disulfide bonded active proteins to high yields within its cytoplasm. This strain is based on the trxB gor suppressor strain SMG96 where its cytoplasmic reductive pathways have been diminished, allowing for the formation of disulfide bonds in the cytoplasm. We have further engineered a major improvement by integrating into its chromosome a signal sequenceless disulfide bond isomerase, DsbC. We probed the redox state of DsbC in the oxidizing cytoplasm and evaluated its role in assisting the formation of correctly folded multi-disulfide bonded proteins. We optimized protein expression conditions, varying temperature, induction conditions, strain background and the co-expression of various helper proteins. We found that temperature has the biggest impact on improving yields and that the E. coli B strain background of this strain was superior to the K12 version. We also discovered that auto-expression of substrate target proteins using this strain resulted in higher yields of active pure protein. Finally, we found that co-expression of mutant thioredoxins and PDI homologs improved yields of various substrate proteins. Conclusions This work is the first extensive characterization of the trxB gor suppressor strain. The results presented should help researchers design the appropriate protein expression conditions using SHuffle strains. PMID:22569138
Concentrating Solar Power Projects - Maricopa Solar Project
Technology: Dish/Engine; Turbine Capacity: Net 1.5 MW, Gross 1.5 MW; Status: Currently Non-Operational; Start Year: 2010
Concentrating Solar Power Projects - Sierra SunTower
Technology: Power tower; Turbine Capacity: Net 5.0 MW, Gross 5.0 MW; Status: Currently Non-Operational; Start Year: 2009
Concentrating Solar Power Projects - SunCan Dunhuang 10 MW Phase I
Status: Operational; Start Year: 2016; Break Ground: August 30, 2014; Start Production: December 26, 2016; Cost (approx.): 420 RMB million
Concentrating Solar Power Projects - Huanghe Qinghai Delingha 135 MW DSG
Status: Under development; Start Year: 2017; Break Ground: 2015; Start Production: 2017; PPA/Tariff Date: September 1, 2016; PPA/Tariff Type: Feed-in tariff
Concentrating Solar Power Projects - IRESEN 1 MWe CSP-ORC pilot project
Technology: Linear; 1,700 MWh/yr; Start Year: 2016; Break Ground: 2015; Start Production: September 2016
Doppler tracking in time-dependent cosmological spacetimes
NASA Astrophysics Data System (ADS)
Giulini, Domenico; Carrera, Matteo
I will discuss the theoretical problems associated with Doppler tracking in time dependent background geometries, where ordinary Newtonian kinematics fails. A derivation of an exact general-relativistic formula for the two-way Doppler tracking of a spacecraft in homogeneous and isotropic Friedmann-Lemaitre-Robertson-Walker (FLRW) spacetimes is presented, as well as a controlled approximation in McVittie spacetimes representing an FLRW background with a single spherically-symmetric inhomogeneity (e.g. a single star or black hole). The leading-order corrections of the acceleration as compared to the Newtonian expression are calculated, which are due to retardation and cosmological expansion and which in the Solar System turn out to be significantly below the scale (nanometer per square-second) set by the Pioneer Anomaly. Last, but not least, I discuss kinematical ambiguities connected with notions of "simultaneity" and "spatial distance", which, in principle, also lead to tracking corrections.
Gilmore, Adam Matthew
2014-01-01
Contemporary spectrofluorimeters comprise exciting light sources, excitation and emission monochromators, and detectors that without correction yield data not conforming to an ideal spectral response. The correction of the spectral properties of the exciting and emission light paths first requires calibration of the wavelength and spectral accuracy. The exciting beam path can be corrected up to the sample position using a spectrally corrected reference detection system. The corrected reference response accounts for both the spectral intensity and drift of the exciting light source relative to emission and/or transmission detector responses. The emission detection path must also be corrected for the combined spectral bias of the sample compartment optics, emission monochromator, and detector. There are several crucial issues associated with both excitation and emission correction including the requirement to account for spectral band-pass and resolution, optical band-pass or neutral density filters, and the position and direction of polarizing elements in the light paths. In addition, secondary correction factors are described including (1) subtraction of the solvent's fluorescence background, (2) removal of Rayleigh and Raman scattering lines, as well as (3) correcting for sample concentration-dependent inner-filter effects. The importance of the National Institute of Standards and Technology (NIST) traceable calibration and correction protocols is explained in light of valid intra- and interlaboratory studies and effective spectral qualitative and quantitative analyses including multivariate spectral modeling.
Optimal quantum error correcting codes from absolutely maximally entangled states
NASA Astrophysics Data System (ADS)
Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio
2018-02-01
Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed-form expressions for AME states of n parties with local dimension…
Service contract of Renault Kerax 440 truck with deductible and policy limit coverage modification
NASA Astrophysics Data System (ADS)
Bustami, Pasaribu, Udjianna. S.; Husniah, Hennie
2016-02-01
In this paper we discuss service contracts with coverage modification that offer only preventive and corrective maintenance for the Renault Kerax 440 truck by a service contract provider. The corrective maintenance cost is modified with a deductible and a policy limit during the period of the service contract. Demand for a service contract is influenced only by the price of the service contract and the deductible and policy limit offered by the producer to the consumer. The main problem in this paper is determining the price of the service contract, the deductible, and the policy limit that maximize the producer's profit.
From local to global measurements of nonclassical nonlinear elastic effects in geomaterials
Lott, Martin; Remillieux, Marcel C.; Le Bas, Pierre-Yves; ...
2016-09-07
Here, the equivalence between local and global measures of nonclassical nonlinear elasticity is established in a slender resonant bar. Nonlinear effects are first measured globally using nonlinear resonance ultrasound spectroscopy (NRUS), which monitors the relative shift of the resonance frequency as a function of the maximum dynamic strain in the sample. Subsequently, nonlinear effects are measured locally at various positions along the sample using dynamic acousto-elasticity testing (DAET). Finally, after analytically correcting the DAET data for three-dimensional strain effects and numerically integrating these corrected data along the length of the sample, the NRUS global measures are retrieved almost exactly.
Impact of a primordial magnetic field on cosmic microwave background B modes with weak lensing
NASA Astrophysics Data System (ADS)
Yamazaki, Dai G.
2018-05-01
We discuss the manner in which the primordial magnetic field (PMF) suppresses the cosmic microwave background (CMB) B mode due to the weak-lensing (WL) effect. The WL effect depends on the lensing potential (LP) caused by matter perturbations, whose distribution at cosmological scales is given by the matter power spectrum (MPS). Therefore, the WL effect on the CMB B mode is affected by the MPS. Considering the effect of the ensemble-average energy density of the PMF, which we call "the background PMF," on the MPS, the amplitude of the MPS is suppressed in the wave number range k > 0.01 h Mpc-1. The MPS affects the LP and hence the WL effect on the CMB B mode, and the PMF can thus damp this effect. Previous studies of the CMB B mode with the PMF have considered only the vector and tensor modes. These modes boost the CMB B mode in the multipole range ℓ > 1000, whereas the background PMF damps the CMB B mode through the WL effect over the entire multipole range. The matter density in the Universe controls the WL effect. Therefore, when we constrain the PMF and the matter density parameters from cosmological observational data sets including the CMB B mode, we expect degeneracy between these parameters. The CMB B mode also provides important information on the background gravitational waves, inflation theory, matter density fluctuations, and structure formation at cosmological scales through the cosmological parameter search. To study these topics and correctly constrain the cosmological parameters from observations that include the CMB B mode, we need to correctly account for the background PMF.
Correcting reaction rates measured by saturation-transfer magnetic resonance spectroscopy
NASA Astrophysics Data System (ADS)
Gabr, Refaat E.; Weiss, Robert G.; Bottomley, Paul A.
2008-04-01
Off-resonance or spillover irradiation and incomplete saturation can introduce significant errors in the estimates of chemical rate constants measured by saturation-transfer magnetic resonance spectroscopy (MRS). Existing methods of correction are effective only over a limited parameter range. Here, a general approach of numerically solving the Bloch-McConnell equations to calculate exchange rates, relaxation times and concentrations for the saturation-transfer experiment is investigated, but found to require more measurements and higher signal-to-noise ratios than in vivo studies can practically afford. As an alternative, correction formulae for the reaction rate are provided which account for the expected parameter ranges and limited measurements available in vivo. The correction term is a quadratic function of experimental measurements. In computer simulations, the new formulae showed negligible bias and reduced the maximum error in the rate constants by about 3-fold compared to traditional formulae, and the error scatter by about 4-fold, over a wide range of parameters for conventional saturation transfer employing progressive saturation, and for the four-angle saturation-transfer method applied to the creatine kinase (CK) reaction in the human heart at 1.5 T. In normal in vivo spectra affected by spillover, the correction increases the mean calculated forward CK reaction rate by 6-16% over traditional and prior correction formulae.
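For context, the idealized saturation-transfer estimate that such corrections refine assumes complete, perfectly selective saturation of the exchange partner, in which case the steady-state magnetization obeys M_ss = M0/(1 + k_f·T1). A minimal sketch follows (illustrative only; the paper's quadratic correction formula for spillover and incomplete saturation is not reproduced here):

```python
def forward_rate(m0, m_ss, t1):
    """Idealised saturation-transfer estimate: with complete, perfectly
    selective saturation, M_ss = M0 / (1 + k_f * T1), hence
    k_f = (M0 / M_ss - 1) / T1."""
    return (m0 / m_ss - 1.0) / t1

# With M0 = 1, T1 = 1 s and a CK-like forward rate of 0.3 s^-1, the
# saturated steady state is M_ss = 1/1.3; inverting recovers k_f = 0.3.
m_ss = 1.0 / 1.3
print(forward_rate(1.0, m_ss, 1.0))  # ~0.3
```

Spillover irradiation partially saturates the observed pool as well, biasing M_ss and hence k_f; the correction terms discussed above compensate for exactly this bias over the parameter ranges expected in vivo.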
NASA Technical Reports Server (NTRS)
Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard
2013-01-01
Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8° spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias-corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs but also for RCM outputs. For the future climate, bias correction led to a higher level of agreement among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.
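The SDBC algorithm itself is not detailed in this summary; a common building block of statistical bias correction is empirical quantile mapping, sketched below in Python (the function name and discretization choices are illustrative, not taken from the study):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: replace each future model value with the
    observed value found at the same quantile of the historical model CDF."""
    model_sorted = np.sort(np.asarray(model_hist, dtype=float))
    obs_sorted = np.sort(np.asarray(obs_hist, dtype=float))
    # quantile of each future value within the historical model distribution
    q = np.searchsorted(model_sorted, model_future, side="right") / model_sorted.size
    q = np.clip(q, 0.0, 1.0)
    # evaluate the observed empirical quantile function at those quantiles
    probs = (np.arange(obs_sorted.size) + 0.5) / obs_sorted.size
    return np.interp(q, probs, obs_sorted)
```

For a model with a constant warm bias of +2 units relative to observations, the mapping returns future values shifted back down by roughly 2 units, up to the discretization of the empirical CDF.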
NASA Astrophysics Data System (ADS)
Tulasi Ram, S.; Ajith, K. K.; Yokoyama, T.; Yamamoto, M.; Niranjan, K.
2017-06-01
The vertical rise velocity (Vr) and maximum altitude (Hm) of equatorial plasma bubbles (EPBs) were estimated using the two-dimensional fan sector maps of the 47 MHz Equatorial Atmosphere Radar (EAR), Kototabang, from May 2010 to April 2013. A total of 86 EPBs were observed, of which 68 were postsunset EPBs and the remaining 18 were observed around midnight hours. The vertical rise velocities of the EPBs observed around midnight are significantly smaller (~26-128 m/s) than those observed in postsunset hours (~45-265 m/s). Further, the vertical growth of the EPBs around midnight ceases at relatively lower altitudes, whereas the majority of postsunset EPBs were found to have grown beyond the maximum detectable altitude of the EAR. The three-dimensional numerical high-resolution bubble (HIRB) model with varying background conditions is employed to investigate the possible factors that control the vertical rise velocity and maximum attainable altitude of EPBs. The rise velocities estimated from EAR observations at both postsunset and midnight hours are, in general, consistent with the nonlinear evolution of EPBs from the HIRB model. The smaller vertical rise velocities (Vr) and lower maximum altitudes (Hm) of EPBs during midnight hours are discussed in terms of weak polarization electric fields within the bubble due to weaker background electric fields and reduced background ion density levels.
Lefave, Melissa; Harrell, Brad; Wright, Molly
2016-06-01
The purpose of this project was to assess the ability of anesthesiologists, nurse anesthetists, and registered nurses to correctly identify the anatomic landmarks of cricoid pressure and apply the correct amount of force. The project included an educational intervention with a one-group pretest-posttest design. Participants demonstrated cricoid pressure on a laryngotracheal model. After an educational intervention video, participants were asked to repeat cricoid pressure on the model. Participants with a nurse anesthesia background applied more appropriate force on the pretest than other participants; however, post-test results, while improved, showed no significant difference among providers. Participant identification of the correct anatomy of the cricoid cartilage and application of correct force were significantly improved after education. This study revealed that, before the educational intervention, participants lacked knowledge of correct cricoid anatomy and pressure as well as the ability to apply correct force to the laryngotracheal model. The intervention used in this study proved successful in educating health care providers. Copyright © 2016 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.
VizieR Online Data Catalog: PACS photometry of FIR faint stars (Klaas+, 2018)
NASA Astrophysics Data System (ADS)
Klaas, U.; Balog, Z.; Nielbock, M.; Mueller, T. G.; Linz, H.; Kiss, Cs.
2018-01-01
70, 100 and 160um photometry of FIR faint stars from PACS scan map and chop/nod measurements. For scan maps also the photometry of the combined scan and cross-scan maps (at 160um there are usually two scan and cross-scan maps each as complements to the 70 and 100um maps) is given. Note: Not all stars have measured fluxes in all three filters. Scan maps: The main observing mode was the point-source mini-scan-map mode; selected scan map parameters are given in column mparam. An outline of the data processing using the high-pass filter (HPF) method is presented in Balog et al. (2014ExA....37..129B). Processing proceeded from Herschel Science Archive SPG v13.1.0 level 1 products with HIPE version 15 build 165 for 70 and 100um maps and from Herschel Science Archive SPG v14.2.0 level 1 products with HIPE version 15 build 1480 for 160um maps. Fluxes faper were obtained by aperture photometry with aperture radii of 5.6, 6.8 and 10.7 arcsec for the 70, 100 and 160um filter, respectively. Noise per pixel sigpix was determined with the histogram method, described in this paper, for coverage values greater than or equal to 0.5*maximum coverage. The number of map pixels (1.1, 1.4, and 2.1 arcsec pixel size, respectively) inside the photometric aperture is Naper = 81.42, 74.12, and 81.56, respectively. The corresponding correction factors for correlated noise are fcorr = 3.13, 2.76, and 4.12, respectively. The noise for the photometric aperture is calculated as sigaper=sqrt(Naper)*fcorr*sigpix. Signal-to-noise ratios are determined as S/N=faper/sigaper. Aperture-correction factors to derive the total flux are caper = 1.61, 1.56 and 1.56 for the 70, 100 and 160um filter, respectively. Applied colour-correction factors for a 5000K black-body SED are cc = 1.016, 1.033, and 1.074 for the 70, 100, and 160um filter, respectively. The final stellar flux is derived as fstar=faper*caper/cc.
Maximum and minimum FWHM of the star PSF are determined by an elliptical fit of the intensity profile. Chop/nod observations: The chop/nod point-source mode is described in this paper. An outline of the data processing is presented in Nielbock et al. (2013ExA....36..631N). Processing proceeded from Herschel Science Archive SPG v11.1.0 level 1 products with HIPE version 13 build 2768. Gyro correction was applied for most of the cases to improve the pointing reconstruction performance. Fluxes faper were obtained by aperture photometry with aperture radii of 5.6, 6.8 and 10.7 arcsec for the 70, 100 and 160um filter, respectively. Noise per pixel sigpix was determined with the histogram method, described in this paper, for coverage values greater than or equal to 0.5*maximum coverage. The number of map pixels (1.1, 1.4, and 2.1 arcsec pixel size, respectively) inside the photometric aperture is Naper = 81.42, 74.12, and 81.56, respectively. The corresponding correction factors for correlated noise are fcorr = 6.33, 4.22, and 7.81, respectively. The noise for the photometric aperture is calculated as sigaper=sqrt(Naper)*fcorr*sigpix. Signal-to-noise ratios are determined as S/N=faper/sigaper. Aperture-correction factors to derive the total flux are caper = 1.61, 1.56 and 1.56 for the 70, 100 and 160um filter, respectively. Applied colour-correction factors for a 5000K black-body SED are cc = 1.016, 1.033, and 1.074 for the 70, 100, and 160um filter, respectively. Maximum and minimum FWHM of the star PSF are determined by an elliptical fit of the intensity profile. (7 data files).
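The photometric relations quoted in the catalogue notes (sigaper = sqrt(Naper)*fcorr*sigpix, S/N = faper/sigaper, fstar = faper*caper/cc) can be applied directly. A small Python sketch using the scan-map constants listed above (function names are illustrative):

```python
import math

# Scan-map constants from the catalogue notes: filter (um) ->
# (Naper pixels in aperture, correlated-noise factor fcorr,
#  aperture correction caper, colour correction cc for a 5000 K blackbody)
SCAN_MAP = {
    70:  (81.42, 3.13, 1.61, 1.016),
    100: (74.12, 2.76, 1.56, 1.033),
    160: (81.56, 4.12, 1.56, 1.074),
}

def aperture_noise(sig_pix, n_aper, f_corr):
    """sigaper = sqrt(Naper) * fcorr * sigpix."""
    return math.sqrt(n_aper) * f_corr * sig_pix

def snr(f_aper, sig_aper):
    """S/N = faper / sigaper."""
    return f_aper / sig_aper

def stellar_flux(f_aper, c_aper, cc):
    """fstar = faper * caper / cc (aperture- and colour-corrected)."""
    return f_aper * c_aper / cc
```

For the chop/nod reductions the same formulas hold with the larger correlated-noise factors quoted above (fcorr = 6.33, 4.22, 7.81).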
Finnish nurses' and nursing students' pharmacological skills.
Grandell-Niemi, Heidi; Hupli, Maija; Leino-Kilpi, Helena; Puukka, Pauli
2005-07-01
PURPOSES AND OBJECTIVES: The purposes of this study were to investigate the pharmacological skills of Finnish nurses and graduating nursing students, to determine how pharmacological skills are related to background factors, to identify differences between nurses and students and, finally, to examine how the instrument used, the Medication Calculation Skills Test, works. Pharmacology is a relevant and topical subject. In several studies, however, the pharmacological skills of nurses and nursing students have been found to be insufficient. In addition, pharmacology as a subject is found to be difficult for both nursing students and nurses. The study was evaluative in nature; the data were collected using the Medication Calculation Skills Test, developed for the purposes of this study. The instrument was used to gather information on background factors and self-rated pharmacological and mathematical skills and to test actual skills in these areas. Results concerning pharmacological skills are reported in this paper. The maximum Medication Calculation Skills Test score was 24 points. The mean score for nurses was 18.6 and that for students 16.3. Half (50%) of the students attained a score of 67%, and 57% of the nurses attained a score of 79%. Nurses and students had some deficiencies in their pharmacological skills. Nurses had better pharmacological skills than students according to both self-ratings and actual performance on the test. It is vitally important that nurses have adequate pharmacological skills to administer medicines correctly. This study showed that the Medication Calculation Skills Test seems to work well in measuring pharmacological skills, even though it needs further evaluation. Findings from this study can be used when planning the nursing curriculum and further education for Registered Nurses.
Montes, Carlos; Tamayo, Pilar; Hernandez, Jorge; Gomez-Caminero, Felipe; García, Sofia; Martín, Carlos; Rosero, Angela
2013-08-01
Hybrid imaging, such as SPECT/CT, is used in routine clinical practice, allowing coregistered images of the functional and structural information provided by the two imaging modalities. However, this multimodality imaging may mean that patients are exposed to a higher radiation dose than those receiving SPECT alone. The study aimed to determine the radiation exposure of patients who had undergone SPECT/CT examinations and to relate this to the Background Equivalent Radiation Time (BERT). 145 SPECT/CT studies were used to estimate the total effective dose to patients due to both radiopharmaceutical administrations and low-dose CT scans. The CT contribution was estimated by the Dose-Length Product method. Specific conversion coefficients were calculated for SPECT explorations. The radiation dose from low-dose CTs ranged between 0.6 mSv for head and neck CT and 2.6 mSv for whole body CT scan, representing a maximum of 1 year of background radiation exposure. These values represent a decrease of 80-85% with respect to the radiation dose from diagnostic CT. The radiation exposure from radiopharmaceutical administration varied from 2.1 mSv for stress myocardial perfusion SPECT to 26 mSv for gallium SPECT in patients with lymphoma. The BERT ranged from 1 to 11 years. The contribution of low-dose CT scans to the total radiation dose to patients undergoing SPECT/CT examinations is relatively low compared with the effective dose from radiopharmaceutical administration. When a CT scan is only acquired for anatomical localization and attenuation correction, low-dose CT scan is justified on the basis of its lower dose.
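The Dose-Length Product method mentioned above converts the scanner-reported DLP into an effective dose via a region-specific conversion coefficient k, E = DLP × k; BERT then expresses that dose as years of natural background exposure. A minimal Python sketch (the k value and the 2.4 mSv/yr background level below are typical literature figures, not values from this study):

```python
def effective_dose_from_dlp(dlp_mgy_cm, k):
    """Dose-Length Product method: effective dose (mSv) = DLP (mGy*cm) * k,
    where k (mSv per mGy*cm) is a region-specific conversion coefficient."""
    return dlp_mgy_cm * k

def bert_years(effective_dose_msv, annual_background_msv=2.4):
    """Background Equivalent Radiation Time: the dose expressed as years of
    natural background exposure (2.4 mSv/yr is a typical worldwide average)."""
    return effective_dose_msv / annual_background_msv
```

For example, the 2.6 mSv whole-body low-dose CT quoted above corresponds to bert_years(2.6) ≈ 1.1 years, consistent with the "maximum of 1 year of background radiation exposure" statement.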
Quantum Gravity Effects on Hawking Radiation of Schwarzschild-de Sitter Black Holes
NASA Astrophysics Data System (ADS)
Singh, T. Ibungochouba; Meitei, I. Ablu; Singh, K. Yugindro
2017-08-01
The correction to the Hawking temperature of the Schwarzschild-de Sitter (SdS) black hole is investigated using the generalized Klein-Gordon equation and the generalized Dirac equation, taking quantum gravity effects into account. We derive the corrected Hawking temperatures for scalar particles and fermions crossing the event horizon. The quantum gravity effects prevent the rise of temperature in the SdS black hole. Besides the correction to the Hawking temperature, the Hawking radiation of the SdS black hole is also investigated using the massive-particle tunneling method. By considering the self-gravitation effect of the emitted particles and taking the spacetime background to be dynamical, it is also shown that the tunneling rate is related to the change of Bekenstein-Hawking entropy and a small correction term (1 + 2βm²). If the energy and the angular momentum are taken to be conserved, the derived emission spectrum deviates from the pure thermal spectrum. This result gives a correction to the Hawking radiation and is also in agreement with the result of Parikh and Wilczek.
Novel Principles and Techniques to Create a Natural Design in Female Hairline Correction Surgery
2015-01-01
Background: Female hairline correction surgery is becoming increasingly popular. However, no guidelines or methods of female hairline design have been introduced to date. Methods: The purpose of this study was to create an initial framework based on the novel principles of female hairline design and then use artistic ability and experience to fine tune this framework. An understanding of the concept of 5 areas (frontal area, frontotemporal recess area, temporal peak, infratemple area, and sideburns) and 5 points (C, A, B, T, and S) is required for female hairline correction surgery (the 5A5P principle). The general concepts of female hairline correction surgery and natural design methods are, herein, explained with a focus on the correlations between these 5 areas and 5 points. Results: A natural and aesthetic female hairline can be created with application of the above-mentioned concepts. Conclusion: The 5A5P principle of forming the female hairline is very useful in female hairline correction surgery. PMID:26894014
The effect of a scanning flat fold mirror on a cosmic microwave background B-mode experiment.
Grainger, William F; North, Chris E; Ade, Peter A R
2011-06-01
We investigate the possibility of using a flat-fold beam-steering mirror for a cosmic microwave background B-mode experiment. An aluminium flat-fold mirror is found to add ∼0.075% polarization, which varies in a scan-synchronous way. Time-domain simulations of a realistic scanning pattern are performed, the effect on the power spectrum is illustrated, and a possible method of correction is applied. © 2011 American Institute of Physics
NASA Technical Reports Server (NTRS)
Kramer, Max
1932-01-01
Wind-tunnel tests are described, in which the angle of attack of a wing model was suddenly increased (producing the effect of a vertical gust) and the resulting forces were measured. It was found that the maximum lift coefficient increases in proportion to the rate of increase in the angle of attack. This fact is important for the determination of the gust stresses of airplanes with low wing loading. The results of the calculation of the corrective factor are given for a high-performance glider and a light sport plane of conventional type.
NASA Astrophysics Data System (ADS)
Lim, Jeong Sik; Park, Miyeon; Lee, Jinbok; Lee, Jeongsoon
2017-12-01
The effect of background gas composition on the measurement of CO2 levels was investigated by wavelength-scanned cavity ring-down spectrometry (WS-CRDS) employing a spectral line centered at the R(1) of the (3 00 1)III ← (0 0 0) band. For this purpose, eight cylinders with various gas compositions were gravimetrically and volumetrically prepared within 2σ = 0.1%, and these gas mixtures were introduced into the WS-CRDS analyzer calibrated against standards of ambient air composition. Depending on the gas composition, deviations between CRDS-determined and gravimetrically (or volumetrically) assigned CO2 concentrations ranged from -9.77 to 5.36 µmol mol-1; e.g., excess N2 exhibited a negative deviation, whereas excess Ar showed a positive one. Total pressure-broadening coefficients (TPBCs) obtained from the composition of N2, O2, and Ar reduced the deviations to between -0.5 and 0.6 µmol mol-1, whereas the deviations remained -0.43 to 1.43 µmol mol-1 when only N2-induced PBCs were considered. The use of TPBCs thus allowed the deviations to be corrected to within ~0.15%. Furthermore, the above correction linearly shifted the CRDS responses over a wide extent of TPBCs, ranging from 0.065 to 0.081 cm-1 atm-1. Thus, accurate measurements using optical intensity-based techniques such as WS-CRDS require TPBC-based instrument calibration or the use of standards prepared in the same background composition as ambient air.
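Under the simplest assumption, a total pressure-broadening coefficient can be formed as the mole-fraction-weighted sum of per-species broadening coefficients. The Python sketch below uses illustrative per-species values (not the paper's fitted coefficients); with dry-air mole fractions they happen to land inside the 0.065-0.081 cm-1 atm-1 range quoted above:

```python
# Illustrative per-species pressure-broadening coefficients (cm^-1 atm^-1)
# for a CO2 line; NOT the paper's fitted values.
GAMMA = {"N2": 0.070, "O2": 0.064, "Ar": 0.060}

# Dry-air mole fractions of the three major constituents
# (trace gases such as CO2 itself are neglected here).
AIR = {"N2": 0.7808, "O2": 0.2095, "Ar": 0.0093}

def total_pressure_broadening(mole_fractions, gammas):
    """TPBC as the mole-fraction-weighted sum of per-species
    broadening coefficients."""
    return sum(x * gammas[species] for species, x in mole_fractions.items())
```

Recomputing the TPBC for each cylinder's actual N2/O2/Ar composition, rather than assuming an N2-only background, is the essence of the correction described above.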
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Elsayed
Purpose: To characterize and correct for radiation-induced background (RIB) observed in the signals from a class of scanning water tanks. Methods: A method was developed to isolate the RIB through detector measurements in the background-free linac console area. Variation of the RIB against a large number of parameters was characterized, and its impact on basic clinical data for photon and electron beams was quantified. Different methods to minimize and/or correct for the RIB were proposed and evaluated. Results: The RIB is due to the presence of the electrometer and connection box in a low background radiation field (by design). The absolute RIB current with a biased detector is up to 2 pA, independent of the detector size, which is 0.6% and 1.5% of the central axis reference signal for a standard and a mini scanning chamber, respectively. The RIB monotonically increases with field size, is three times smaller for detectors that do not require a bias (e.g., diodes), is up to 80% larger for positive (versus negative) polarity, decreases with increasing photon energy, exhibits a single curve versus dose rate at the electrometer location, and is negligible for electron beams. Data after the proposed field-size correction method agree with point measurements from an independent system to within a few tenths of a percent for output factor, head scatter, depth dose at depth, and out-of-field profile dose. Manufacturer recommendations for electrometer placement are insufficient and sometimes incorrect. Conclusions: RIB in scanning water tanks can have a non-negligible effect on dosimetric data.
Motion Artifact Reduction in Pediatric Diffusion Tensor Imaging Using Fast Prospective Correction
Alhamud, A.; Taylor, Paul A.; Laughton, Barbara; van der Kouwe, André J.W.; Meintjes, Ernesta M.
2014-01-01
Purpose: To evaluate the patterns of head motion in scans of young children and to examine the influence of corrective techniques, both qualitatively and quantitatively. We investigate changes that both retrospective (with and without diffusion table reorientation) and prospective (implemented with a short navigator sequence) motion correction induce in the resulting diffusion tensor measures. Materials and Methods: Eighteen pediatric subjects (aged 5–6 years) were scanned using 1) a twice-refocused, 2D diffusion pulse sequence, 2) a prospectively motion-corrected, navigated diffusion sequence with reacquisition of a maximum of five corrupted diffusion volumes, and 3) a T1-weighted structural image. Mean fractional anisotropy (FA) values in white and gray matter regions, as well as tractography in the brainstem and projection fibers, were evaluated to assess differences arising from retrospective (via FLIRT in FSL) and prospective motion correction. In addition to human scans, a stationary phantom was also used for further evaluation. Results: In several white and gray matter regions retrospective correction led to significantly (P < 0.05) reduced FA means and altered distributions compared to the navigated sequence. Spurious tractographic changes in the retrospectively corrected data were also observed in subject data, as well as in phantom and simulated data. Conclusion: Due to the heterogeneity of brain structures and the comparatively low resolution (~2 mm) of diffusion data using 2D single shot sequencing, retrospective motion correction is susceptible to distortion from partial voluming. These changes often negatively bias diffusion tensor imaging parameters. Prospective motion correction was shown to produce smaller changes. PMID:24935904
Motion artifact reduction in pediatric diffusion tensor imaging using fast prospective correction.
Alhamud, A; Taylor, Paul A; Laughton, Barbara; van der Kouwe, André J W; Meintjes, Ernesta M
2015-05-01
To evaluate the patterns of head motion in scans of young children and to examine the influence of corrective techniques, both qualitatively and quantitatively. We investigate changes that both retrospective (with and without diffusion table reorientation) and prospective (implemented with a short navigator sequence) motion correction induce in the resulting diffusion tensor measures. Eighteen pediatric subjects (aged 5-6 years) were scanned using 1) a twice-refocused, 2D diffusion pulse sequence, 2) a prospectively motion-corrected, navigated diffusion sequence with reacquisition of a maximum of five corrupted diffusion volumes, and 3) a T1-weighted structural image. Mean fractional anisotropy (FA) values in white and gray matter regions, as well as tractography in the brainstem and projection fibers, were evaluated to assess differences arising from retrospective (via FLIRT in FSL) and prospective motion correction. In addition to human scans, a stationary phantom was also used for further evaluation. In several white and gray matter regions retrospective correction led to significantly (P < 0.05) reduced FA means and altered distributions compared to the navigated sequence. Spurious tractographic changes in the retrospectively corrected data were also observed in subject data, as well as in phantom and simulated data. Due to the heterogeneity of brain structures and the comparatively low resolution (∼2 mm) of diffusion data using 2D single shot sequencing, retrospective motion correction is susceptible to distortion from partial voluming. These changes often negatively bias diffusion tensor imaging parameters. Prospective motion correction was shown to produce smaller changes. © 2014 Wiley Periodicals, Inc.