Chen, Shan; Li, Xiao-ning; Liang, Yi-zeng; Zhang, Zhi-min; Liu, Zhao-xia; Zhang, Qi-ming; Ding, Li-xia; Ye, Fei
2010-08-01
During Raman spectroscopy analysis, organic molecules and contaminants can obscure or swamp the Raman signals. The present study starts from Raman spectra of prednisone acetate tablets and glibenclamide tablets acquired with a BWTek i-Raman spectrometer. The background is corrected with the R package baselineWavelet. Principal component analysis and random forests are then used to perform clustering analysis. By analyzing the Raman spectra of the two medicines, the accuracy and validity of this background-correction algorithm are checked and the influence of the fluorescence background on Raman spectra clustering analysis is discussed. It is concluded that correcting the fluorescence background is important for further analysis, and an effective background-correction solution is provided for clustering and other analyses.
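To illustrate the downstream analysis described above, the sketch below runs principal component analysis followed by a random-forest classifier on background-corrected spectra. It uses Python/scikit-learn rather than the R tools named in the abstract, and the arrays (spectra, labels) and parameter choices are illustrative placeholders, not the authors' data or settings.

    # Hedged sketch: PCA + random-forest analysis of baseline-corrected Raman spectra.
    # 'spectra' (n_samples x n_wavenumbers) and 'labels' stand in for background-corrected
    # data and medicine identities; all names and values are illustrative.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    spectra = rng.normal(size=(40, 1024))          # placeholder for corrected spectra
    labels = np.repeat([0, 1], 20)                 # two medicines

    scores = PCA(n_components=5).fit_transform(spectra)   # low-dimensional view for clustering
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    acc = cross_val_score(rf, scores, labels, cv=5).mean()
    print(f"cross-validated accuracy on PCA scores: {acc:.2f}")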
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. This continuous background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods; however, few of them have been applied in LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. A background-correction simulation indicated that the spline interpolation method achieved the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting and the model-free method. All of these background correction methods yield larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquires large SBR values, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods exhibit improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient before background correction is 0.9776, whereas the linear correlation coefficients after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and the model-free method. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
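A minimal sketch of the background-estimation idea: interpolate a smooth spline through points assumed to lie on the continuous background and subtract it, then report a simple signal-to-background ratio. Taking the knots at local minima is an assumption for illustration; the paper's exact anchor-point selection and SBR definition may differ.

    # Hedged sketch: continuous-background estimation by cubic-spline interpolation.
    # Knots are taken at local minima of the spectrum (an assumption; the paper's
    # exact anchor-point selection may differ).
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import argrelmin

    rng = np.random.default_rng(0)
    wavelength = np.linspace(300.0, 600.0, 2000)
    spectrum = (200.0 * np.exp(-0.5 * ((wavelength - 450.0) / 0.3) ** 2)  # emission line
                + 50.0 + 0.1 * wavelength                                 # continuous background
                + rng.normal(scale=2.0, size=wavelength.size))            # noise

    knots = argrelmin(spectrum, order=25)[0]                  # candidate background points
    background = CubicSpline(wavelength[knots], spectrum[knots])(wavelength)
    corrected = spectrum - background

    sbr = corrected.max() / np.median(background)             # one simple SBR definition
    print(f"SBR after background correction: {sbr:.1f}")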
NASA Astrophysics Data System (ADS)
Itoh, Naoki; Nozawa, Satoshi; Kohyama, Yasuharu
2000-04-01
We extend the formalism of relativistic thermal and kinematic Sunyaev-Zeldovich effects and include the polarization of the cosmic microwave background photons. We consider the situation of a cluster of galaxies moving with a velocity β≡v/c with respect to the cosmic microwave background radiation. In the present formalism, polarization of the scattered cosmic microwave background radiation caused by the proper motion of a cluster of galaxies is naturally derived as a special case of the kinematic Sunyaev-Zeldovich effect. The relativistic corrections are also included in a natural way. Our results are in complete agreement with the recent results of relativistic corrections obtained by Challinor, Ford, & Lasenby with an entirely different method, as well as the nonrelativistic limit obtained by Sunyaev & Zeldovich. The relativistic correction becomes significant in the Wien region.
Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W
2012-09-07
A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one-dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique greatly reduced or eliminated the background artifacts as well and preserved the peak intensity better than AWLS. The loss in peak intensity by AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise which led to detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
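The general idea behind an SVD-based background correction can be sketched as follows: estimate the dominant background spectral shapes from a blank run and project them out of the sample data. This is a generic reconstruction under that assumption, not the published SVD-BC code; array names and the number of retained components are illustrative.

    # Hedged sketch of an SVD-based background correction: remove from the sample
    # data the subspace spanned by the leading singular vectors of a blank run.
    import numpy as np

    def svd_background_correct(sample, blank, n_components=2):
        """sample, blank: 2-D arrays (e.g. time x spectral channel)."""
        # Right singular vectors of the blank describe dominant background spectra.
        u, s, vt = np.linalg.svd(blank, full_matrices=False)
        v_bg = vt[:n_components]
        # Project the sample onto the orthogonal complement of the background space.
        return sample - sample @ v_bg.T @ v_bg

    rng = np.random.default_rng(1)
    blank = rng.normal(size=(100, 50))
    sample = blank + rng.normal(size=(100, 50))
    corrected = svd_background_correct(sample, blank)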
Yu, Yong-Jie; Wu, Hai-Long; Fu, Hai-Yan; Zhao, Juan; Li, Yuan-Na; Li, Shu-Fang; Kang, Chao; Yu, Ru-Qin
2013-08-09
Chromatographic background drift correction has been an important field of research in chromatographic analysis. In the present work, orthogonal spectral space projection for background drift correction of three-dimensional chromatographic data was described in detail and combined with parallel factor analysis (PARAFAC) to resolve overlapped chromatographic peaks and obtain the second-order advantage. This strategy was verified by simulated chromatographic data and afforded significant improvement in quantitative results. Finally, this strategy was successfully utilized to quantify eleven antibiotics in tap water samples. Compared with the traditional methodology of introducing excessive factors for the PARAFAC model to eliminate the effect of background drift, clear improvement in the quantitative performance of PARAFAC was observed after background drift correction by orthogonal spectral space projection. Copyright © 2013 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.
Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the estimated scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with a breast-tissue-equivalent phantom and a calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: the image without scatter correction, the image with scatter correction using the pinhole-array interpolation method, and the image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of the background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of background DE calcification signals with scatter-uncorrected data was reduced by 58% with scatter-corrected data using the algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When applying the proposed algorithmic scatter correction to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method for scatter correction in DEDM; moreover, it is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated with a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge its effectiveness.
MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard
2016-01-01
Purpose: Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods: Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results: In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions: While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
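A hedged sketch of the polynomial-style corrections (LPC/WBPC): fit a low-order 2-D polynomial to the phase (here, velocity) values in static background tissue and subtract it from the map. The polynomial order, mask, and data are illustrative assumptions rather than the study's exact implementation.

    # Hedged sketch: polynomial background-phase correction for PC-MR velocity maps.
    # Fit a first-order 2-D polynomial to static-tissue pixels and subtract it.
    import numpy as np

    def polynomial_phase_correction(velocity, static_mask, order=1):
        ny, nx = velocity.shape
        y, x = np.mgrid[0:ny, 0:nx]
        # Design matrix of polynomial terms up to the requested order.
        terms = [x ** i * y ** j for i in range(order + 1)
                 for j in range(order + 1 - i)]
        a = np.stack([t[static_mask] for t in terms], axis=1)
        coef, *_ = np.linalg.lstsq(a, velocity[static_mask], rcond=None)
        background = sum(c * t for c, t in zip(coef, terms))
        return velocity - background

    rng = np.random.default_rng(2)
    vel = rng.normal(scale=0.1, size=(64, 64)) + 0.01 * np.arange(64)  # linear bias
    mask = np.ones_like(vel, dtype=bool)        # pretend every pixel is static tissue
    corrected = polynomial_phase_correction(vel, mask)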
Environmental corrections of a dual-induction logging while drilling tool in vertical wells
NASA Astrophysics Data System (ADS)
Kang, Zhengming; Ke, Shizhen; Jiang, Ming; Yin, Chengfang; Li, Anzong; Li, Junjian
2018-04-01
With the development of Logging While Drilling (LWD) technology, dual-induction LWD logging is not only widely applied in deviated and horizontal wells but is also commonly used in vertical wells. Accordingly, it is necessary to simulate the response of LWD tools in vertical wells for logging interpretation. In this paper, the investigation characteristics of a dual-induction LWD tool and the effects of the tool structure, skin effect and drilling environment are simulated by the three-dimensional (3D) finite element method (FEM). In order to closely simulate the actual situation, the real structure of the tool is taken into account. The results demonstrate that the influence of the background value of the tool structure can be eliminated: after the tool-structure background is deducted, the simulated values agree quantitatively with the analytical solution in homogeneous formations. The effect of measurement frequency can be effectively eliminated with a skin-effect correction chart. In addition, aspects of the measurement environment, including borehole size, mud resistivity, shoulder beds, layer thickness and invasion, affect the determination of the true resistivity. To eliminate these effects, borehole correction charts, shoulder-bed correction charts and tornado charts are computed based on the real tool structure. Based on these correction charts, well logging data can be corrected automatically with a suitable interpolation method, which is convenient and fast. Verified with actual logging data from vertical wells, this method can recover the true resistivity of the formation.
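The chart-based correction step can be sketched as a simple 2-D interpolation of a precomputed correction chart. Everything below, the chart axes, the tabulated factors, and the query values, is a made-up placeholder; only the interpolation mechanism is being illustrated.

    # Hedged sketch: applying a precomputed correction chart by 2-D interpolation.
    # Chart axes and values are illustrative placeholders, not real chart data.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    borehole_diam = np.array([6.0, 8.0, 10.0, 12.0])        # inches (assumed axis)
    apparent_res = np.array([1.0, 10.0, 100.0, 1000.0])     # ohm-m (assumed axis)
    # correction factor Rt/Ra tabulated on the (diameter, resistivity) grid
    chart = np.ones((4, 4)) + 0.02 * np.arange(4)[:, None]

    correct = RegularGridInterpolator((borehole_diam, apparent_res), chart)

    def true_resistivity(ra, diameter):
        return ra * correct([[diameter, ra]])[0]

    print(true_resistivity(25.0, 8.5))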
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high energy photons however render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs as well as (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose-calibrator-derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
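For reference, the triple-energy-window (TEW) estimate mentioned above is commonly written as a trapezoidal approximation built from two narrow windows flanking the photopeak; a sketch of that standard formulation follows, with window widths and counts as assumed illustrative values.

    # Hedged sketch of the standard triple-energy-window (TEW) scatter estimate:
    # scatter in the photopeak is approximated by a trapezoid built from the
    # counts in two narrow windows on either side of the peak.
    import numpy as np

    def tew_scatter(lower_counts, upper_counts, w_lower, w_upper, w_peak):
        """Per-pixel scatter estimate for the photopeak window."""
        return (lower_counts / w_lower + upper_counts / w_upper) * w_peak / 2.0

    peak = np.array([[120.0, 95.0], [80.0, 60.0]])    # photopeak counts (illustrative)
    low = np.array([[10.0, 8.0], [6.0, 5.0]])         # lower-window counts
    up = np.array([[4.0, 3.0], [2.0, 2.0]])           # upper-window counts

    scatter = tew_scatter(low, up, w_lower=6.0, w_upper=6.0, w_peak=20.0)  # keV widths assumed
    primary = np.clip(peak - scatter, 0, None)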
Minor Distortions with Major Consequences: Correcting Distortions in Imaging Spectrographs
Esmonde-White, Francis W. L.; Esmonde-White, Karen A.; Morris, Michael D.
2010-01-01
Projective transformation is a mathematical correction (implemented in software) used in the remote imaging field to produce distortion-free images. We present the application of projective transformation to correct minor alignment and astigmatism distortions that are inherent in dispersive spectrographs. Patterned white-light images and neon emission spectra were used to produce registration points for the transformation. Raman transects collected on microscopy and fiber-optic systems were corrected using established methods and compared with the same transects corrected using the projective transformation. Even minor distortions have a significant effect on reproducibility and apparent fluorescence background complexity. Simulated Raman spectra were used to optimize the projective transformation algorithm. We demonstrate that the projective transformation reduced the apparent fluorescent background complexity and improved reproducibility of measured parameters of Raman spectra. Distortion correction using a projective transformation provides a major advantage in reducing the background fluorescence complexity even in instrumentation where slit-image distortions and camera rotation were minimized using manual or mechanical means. We expect these advantages should be readily applicable to other spectroscopic modalities using dispersive imaging spectrographs. PMID:21211158
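A brief sketch of a projective (homography) correction of the kind described: registration points from the patterned white-light image and neon lines define the transform, which is then applied to the detector image. The point coordinates and image are placeholders, and OpenCV is used here only as one convenient implementation.

    # Hedged sketch: distortion correction with a projective transformation.
    # Registration points (from a white-light pattern / neon lines) are placeholders.
    import numpy as np
    import cv2

    # Distorted positions observed on the detector and their ideal positions.
    src = np.float32([[10, 12], [500, 8], [505, 250], [8, 255]])
    dst = np.float32([[10, 10], [500, 10], [500, 250], [10, 250]])

    h, _ = cv2.findHomography(src, dst)               # 3x3 projective transform
    image = np.random.rand(260, 520).astype(np.float32)
    corrected = cv2.warpPerspective(image, h, (image.shape[1], image.shape[0]))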
NASA Technical Reports Server (NTRS)
Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.
1984-01-01
Observations of raw image data, raw radiometric calibration data, and background measurements extracted from the raw data streams on high density tape reveal major shortcomings in a technique proposed by the Canadian Center for Remote Sensing in 1982 for the radiometric correction of TM data. Results are presented which correlate measurements of the DC background with variations in both image data background and calibration samples. The effect on both raw data and data corrected using the earlier proposed technique is explained and the correction required for these factors as a function of individual scan line number for each detector is described. How the revised technique can be incorporated into an operational environment is demonstrated.
The location and recognition of anti-counterfeiting code image with complex background
NASA Astrophysics Data System (ADS)
Ni, Jing; Liu, Quan; Lou, Ping; Han, Ping
2017-07-01
The order of the cigarette market is a key issue in the tobacco business system. The anti-counterfeiting code, as an effective anti-counterfeiting technology, can identify counterfeit goods and effectively maintain the normal order of the market and consumers' rights and interests. Complex backgrounds, light interference and other problems exist in the anti-counterfeiting code images obtained by the tobacco recognizer. To solve these problems, this paper proposes a locating method based on the Susan operator combined with a sliding window and line scanning. In order to reduce the interference of background and noise, we extract the red component of the image and convert the color image into a gray image. For confusable characters, a recognition-result correction based on template matching is adopted to improve the recognition rate. With this method, the anti-counterfeiting code can be located and recognized correctly in images with complex backgrounds. The experimental results show the effectiveness and feasibility of the approach.
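The recognition-correction step can be illustrated with normalized cross-correlation template matching: score a segmented glyph against the templates of easily confused characters and keep the best match. The templates and glyph below are random placeholders; only the matching mechanism follows the description above.

    # Hedged sketch: resolving confusable characters by template matching
    # (normalized cross-correlation); templates and the glyph are placeholders.
    import numpy as np
    import cv2

    templates = {c: np.random.rand(20, 14).astype(np.float32) for c in "0O8B"}
    glyph = (templates["0"] + 0.1 * np.random.rand(20, 14)).astype(np.float32)

    def correct_character(glyph, templates):
        scores = {c: cv2.matchTemplate(glyph, t, cv2.TM_CCOEFF_NORMED).max()
                  for c, t in templates.items()}
        return max(scores, key=scores.get)

    print(correct_character(glyph, templates))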
Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen
2016-06-01
High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT), because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, 3-D discrete cosine transform is adopted to filter the intermediate results. And then, a sparsity constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous background are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results, the detail preservation, and noise suppression of L1 regularization.
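A conceptual sketch of one iteration of the described scheme: a gradient step on a Tikhonov objective, 3-D DCT low-pass filtering of the intermediate volume, then an L1 soft-threshold step. The forward matrix, step size, DCT cutoff, and threshold are arbitrary illustrative values, not the paper's settings.

    # Hedged sketch of one iteration: Tikhonov gradient step, 3-D DCT filtering of
    # the intermediate volume, then L1 soft-thresholding.
    import numpy as np
    from scipy.fft import dctn, idctn

    def iterate(x, A, y, alpha=1e-3, step=1e-2, cutoff=8, thresh=1e-3):
        shape = x.shape
        xf = x.ravel()
        grad = A.T @ (A @ xf - y) + alpha * xf            # Tikhonov gradient
        vol = (xf - step * grad).reshape(shape)
        coeff = dctn(vol, norm="ortho")                   # 3-D DCT filtering
        mask = np.zeros_like(coeff)
        mask[:cutoff, :cutoff, :cutoff] = 1.0
        vol = idctn(coeff * mask, norm="ortho")
        return np.sign(vol) * np.maximum(np.abs(vol) - thresh, 0.0)  # L1 soft threshold

    rng = np.random.default_rng(3)
    A = rng.normal(size=(200, 16 ** 3))                   # toy forward model
    x = np.zeros((16, 16, 16))
    y = rng.normal(size=200)
    for _ in range(10):
        x = iterate(x, A, y)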
Holographic corrections to meson scattering amplitudes
NASA Astrophysics Data System (ADS)
Armoni, Adi; Ireson, Edwin
2017-06-01
We compute meson scattering amplitudes using the holographic duality between confining gauge theories and string theory, in order to consider holographic corrections to the Veneziano amplitude and associated higher-point functions. The generic nature of such computations is explained, thanks to the well-understood nature of confining string backgrounds, and two different examples of the calculation in given backgrounds are used to illustrate the details. The effect we discover, whilst only qualitative, is re-obtainable in many such examples, in four-point but also higher point amplitudes.
Effects of ocular aberrations on contrast detection in noise.
Liang, Bo; Liu, Rong; Dai, Yun; Zhou, Jiawei; Zhou, Yifeng; Zhang, Yudong
2012-08-06
We use adaptive optics (AO) techniques to manipulate the ocular aberrations and elucidate the effects of these aberrations on contrast detection in a noisy background. The detectability of sine wave gratings at frequencies of 4, 8, and 16 cycles per degree (cpd) was measured in a standard two-interval forced-choice staircase procedure against backgrounds of various levels of white noise. The observer's ocular aberrations were either corrected with AO or left uncorrected. In low levels of external noise, contrast detection thresholds are always lowered by AO correction, whereas in high levels of external noise, they are generally elevated by AO correction. Higher levels of external noise are required to make this threshold elevation observable when signal spatial frequencies increase from 4 to 16 cpd. The linear-amplifier-model fit shows that both sampling efficiency and equivalent noise decrease with AO correction. Our findings indicate that ocular aberrations could be beneficial for contrast detection in high levels of noise. The implications of these findings are discussed.
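A hedged sketch of fitting a linear-amplifier-model form to threshold-versus-noise data, assuming the common parameterization in which threshold energy is proportional to external-plus-equivalent noise divided by sampling efficiency; the exact parameterization used in the study may differ, and the data points are invented.

    # Hedged sketch: fitting a linear-amplifier-model (LAM) form to threshold data,
    # E_t = (N_ext + N_eq) / eta. The parameterization is an assumption; the data
    # values are made up for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    def lam(n_ext, n_eq, eta):
        return (n_ext + n_eq) / eta

    n_ext = np.array([0.0, 1.0, 2.0, 4.0, 8.0])       # external noise spectral density
    e_thresh = np.array([0.6, 1.1, 1.6, 2.7, 4.8])    # threshold energy (illustrative)

    (n_eq, eta), _ = curve_fit(lam, n_ext, e_thresh, p0=[1.0, 1.0])
    print(f"equivalent noise = {n_eq:.2f}, sampling efficiency = {eta:.2f}")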
HST/WFC3: Understanding and Mitigating Radiation Damage Effects in the CCD Detectors
NASA Astrophysics Data System (ADS)
Baggett, S.; Anderson, J.; Sosey, M.; MacKenty, J.; Gosmeyer, C.; Noeske, K.; Gunning, H.; Bourque, M.
2015-09-01
At the heart of the Hubble Space Telescope Wide Field Camera 3 (HST/WFC3) UVIS channel resides a 4096x4096 pixel e2v CCD array. While these detectors are performing extremely well after more than 5 years in low-earth orbit, the cumulative effects of radiation damage cause a continual growth in the hot pixel population and a progressive loss in charge transfer efficiency (CTE) over time. The decline in CTE has two effects: (1) it reduces the detected source flux as the defects trap charge during readout and (2) it systematically shifts source centroids as the trapped charge is later released. The flux losses can be significant, particularly for faint sources in low background images. Several mitigation options exist, including target placement within the field of view, empirical stellar photometric corrections, post-flash mode and an empirical pixel-based CTE correction. The application of a post-flash has been remarkably effective in WFC3 at reducing CTE losses in low background images for a relatively small noise penalty. Currently all WFC3 observers are encouraged to post-flash images with low backgrounds. Another powerful option in mitigating CTE losses is the pixel-based CTE correction. Analogous to the CTE correction software currently in use in the HST Advanced Camera for Surveys (ACS) pipeline, the algorithm employs an empirical observationally-constrained model of how much charge is captured and released in order to reconstruct the image. Applied to images (with or without post-flash) after they are acquired, the software is currently available as a standalone routine. The correction will be incorporated into the standard WFC3 calibration pipeline.
Fantoni, Frédéric; Hervé, Lionel; Poher, Vincent; Gioux, Sylvain; Mars, Jérôme I; Dinten, Jean-Marc
2015-10-01
Intraoperative fluorescence imaging in reflectance geometry is an attractive imaging modality as it allows noninvasive monitoring of fluorescently targeted tumors located below the tissue surface. Drawbacks of this technique are the background fluorescence, which decreases contrast, and absorption heterogeneities, which lead to misinterpretations of fluorescence concentrations. We propose a correction technique based on a laser-line scanning illumination scheme. We scan the medium with the laser line and acquire, at each position of the line, both fluorescence and excitation images. We then use the finding that there is a relationship between the excitation intensity profile and the background fluorescence profile to predict the amount of signal to subtract from the fluorescence images to obtain better contrast. As the light absorption information is contained in both the fluorescence and excitation images, this method also allows us to correct for the effects of absorption heterogeneities. The technique has been validated in simulations and experimentally. Fluorescent inclusions are observed in several configurations at depths ranging from 1 mm to 1 cm. Results obtained with this technique are compared with those obtained with a classical wide-field detection scheme for contrast enhancement and with a fluorescence-to-excitation ratio approach for absorption correction.
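The core subtraction idea can be sketched per line position: estimate a scale factor relating the excitation profile to the background-fluorescence profile in a region assumed to contain only background, then subtract the scaled excitation profile from the fluorescence profile. Profiles, the background region, and the scale estimator are illustrative assumptions.

    # Hedged sketch: subtract a scaled excitation profile from the fluorescence
    # profile to suppress background fluorescence. The scale factor is estimated
    # from a region assumed to contain background only; names are illustrative.
    import numpy as np

    def correct_line(fluo_line, exc_line, background_cols):
        scale = np.median(fluo_line[background_cols] / exc_line[background_cols])
        return fluo_line - scale * exc_line

    rng = np.random.default_rng(4)
    exc = 1.0 + 0.2 * rng.random(512)
    fluo = 0.3 * exc.copy()                     # background fluorescence tracks excitation
    fluo[250:260] += 1.0                        # fluorescent inclusion
    corrected = correct_line(fluo, exc, background_cols=np.r_[0:100, 400:512])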
The beam stop array method to measure object scatter in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook
2014-03-01
Scattered radiation is inevitably generated in the object. The distribution of the scattered radiation is influenced by object thickness, field size, object-to-detector distance, and primary energy. One way to measure scatter intensities involves measuring the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured with a BSA includes not only the scattered radiation within the object (object scatter) but also contributions from external scatter sources, which include the X-ray tube, detector, collimator, X-ray filter, and the BSA itself. Excluding this background scattered radiation allows the method to be applied to different scanner geometries by simple parameter adjustments without prior knowledge of the scanned object. In this study, a method using a BSA to differentiate scatter in the phantom (object scatter) from the external background was used. Furthermore, this method was applied within a BSA algorithm to correct the object scatter. In order to confirm the background scattered radiation, we obtained scatter profiles and scatter fraction (SF) profiles in the direction perpendicular to the chest wall edge (CWE) with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall. This result indicated that the scatter measured with the BSA included background scatter. Moreover, the BSA algorithm with the proposed method could correct the object scatter, because the total radiation profiles after object scatter correction corresponded to the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method to measure object scatter can be used to remove background scatter, and it can be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective for correcting object scatter.
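A sketch of the beam-stop-array idea under simple assumptions: sample the signal under the disc shadows, subtract an externally measured background-scatter level, interpolate a smooth object-scatter map over the detector, and subtract it. Disc positions, the external-scatter value, and the image are placeholders.

    # Hedged sketch of a beam-stop-array (BSA) object-scatter correction.
    import numpy as np
    from scipy.interpolate import griddata

    image = np.random.rand(256, 256) + 5.0
    disc_rows, disc_cols = np.mgrid[20:256:40, 20:256:40]    # beam-stop positions (assumed)
    under_discs = image[disc_rows, disc_cols]                # total scatter under discs
    external_scatter = 0.5                                   # measured without the object (assumed)
    object_scatter = under_discs - external_scatter

    rows, cols = np.mgrid[0:256, 0:256]
    scatter_map = griddata(
        np.column_stack([disc_rows.ravel(), disc_cols.ravel()]),
        object_scatter.ravel(),
        (rows, cols), method="cubic", fill_value=object_scatter.mean())
    corrected = image - scatter_map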
NASA Technical Reports Server (NTRS)
Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.
1984-01-01
A technique for the radiometric correction of LANDSAT-4 Thematic Mapper data was proposed by the Canada Center for Remote Sensing. Subsequent detailed observations of raw image data, raw radiometric calibration data and background measurements extracted from the raw data stream on High Density Tape highlighted major shortcomings in the proposed method which if left uncorrected, can cause severe radiometric striping in the output product. Results are presented which correlate measurements of the DC background with variations in both image data background and calibration samples. The effect on both raw data and on data corrected using the earlier proposed technique is explained, and the correction required for these factors as a function of individual scan line number for each detector is described. It is shown how the revised technique can be incorporated into an operational environment.
40 CFR 1065.650 - Emission calculations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... following sequence of preliminary calculations on recorded concentrations: (i) Correct all THC and CH4.... (iii) Calculate all THC and NMHC concentrations, including dilution air background concentrations, as... NMHC to background corrected mass of THC. If the background corrected mass of NMHC is greater than 0.98...
HST/WFC3: understanding and mitigating radiation damage effects in the CCD detectors
NASA Astrophysics Data System (ADS)
Baggett, S. M.; Anderson, J.; Sosey, M.; Gosmeyer, C.; Bourque, M.; Bajaj, V.; Khandrika, H.; Martlin, C.
2016-07-01
At the heart of the Hubble Space Telescope Wide Field Camera 3 (HST/WFC3) UVIS channel is a 4096x4096 pixel e2v CCD array. While these detectors continue to perform extremely well after more than 7 years in low-earth orbit, the cumulative effects of radiation damage are becoming increasingly evident. The result is a continual increase of the hot-pixel population and the progressive loss in charge-transfer efficiency (CTE) over time. The decline in CTE has two effects: (1) it reduces the detected source flux as the defects trap charge during readout and (2) it systematically shifts source centroids as the trapped charge is later released. The flux losses can be significant, particularly for faint sources in low background images. In this report, we summarize the radiation damage effects seen in WFC3/UVIS and the evolution of the CTE losses as a function of time, source brightness, and image-background level. In addition, we discuss the available mitigation options, including target placement within the field of view, empirical stellar photometric corrections, post-flash mode and an empirical pixel-based CTE correction. The application of a post-flash has been remarkably effective in WFC3 at reducing CTE losses in low-background images for a relatively small noise penalty. Currently, all WFC3 observers are encouraged to consider post-flash for images with low backgrounds. Finally, a pixel-based CTE correction is available for use after the images have been acquired. Similar to the software in use in the HST Advanced Camera for Surveys (ACS) pipeline, the algorithm employs an observationally-defined model of how much charge is captured and released in order to reconstruct the image. As of Feb 2016, the pixel-based CTE correction is part of the automated WFC3 calibration pipeline. Observers with pre-existing data may request their images from MAST (Mikulski Archive for Space Telescopes) to obtain the improved products.
NASA Astrophysics Data System (ADS)
Liu, Xingchen; Hu, Zhiyong; He, Qingbo; Zhang, Shangbin; Zhu, Jun
2017-10-01
Doppler distortion and background noise can reduce the effectiveness of wayside acoustic monitoring and fault diagnosis of train bearings. This paper proposes a method that combines a microphone array with a matching pursuit algorithm to overcome these difficulties. First, a dictionary is constructed based on the characteristics and mechanism of a far-field assumption. Then, the angle of arrival of the train bearing is acquired by applying matching pursuit to the acoustic array signals. Finally, after obtaining the resampling time series, the Doppler distortion can be corrected, which is convenient for further diagnostic work. Compared with traditional single-microphone Doppler correction methods, the advantages of the presented array method are its robustness to background noise and the fact that it requires almost no pre-measured parameters. Simulation and experimental studies show that the proposed method is effective for wayside acoustic bearing fault diagnosis.
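The matching-pursuit step can be sketched generically: greedily select the dictionary atom most correlated with the residual and peel off its contribution. The paper builds its dictionary from far-field steering vectors; a random unit-norm dictionary is used below purely for illustration.

    # Hedged sketch of matching pursuit: greedily select the dictionary atom most
    # correlated with the residual. A random dictionary stands in for the paper's
    # far-field steering-vector dictionary.
    import numpy as np

    def matching_pursuit(signal, dictionary, n_atoms=5):
        residual = signal.copy()
        coeffs = np.zeros(dictionary.shape[1])
        for _ in range(n_atoms):
            corr = dictionary.T @ residual
            k = np.argmax(np.abs(corr))
            coeffs[k] += corr[k]
            residual -= corr[k] * dictionary[:, k]
        return coeffs, residual

    rng = np.random.default_rng(5)
    dictionary = rng.normal(size=(256, 64))
    dictionary /= np.linalg.norm(dictionary, axis=0)      # unit-norm atoms
    signal = 2.0 * dictionary[:, 3] - 1.5 * dictionary[:, 40]
    coeffs, residual = matching_pursuit(signal, dictionary)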
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schurman, D.L.; Datesman, G.H. Jr; Truitt, J.O.
The report presents a system for evaluating and correcting deficiencies in security-force effectiveness in licensed nuclear facilities. There are four checklists which security managers can copy directly, or can use as guidelines for developing their own checklists. The checklists are keyed to corrective-action guides found in the body of the report. In addition to the corrective-action guides, the report gives background information on the nature of security systems and discussions of various special problems of the licensed nuclear industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@nao.ac.jp, E-mail: tof@astr.tohoku.ac.jp
This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise introduces additional moments and a centroid shift error; these first-order effects cancel in averaging, but the second-order effects do not. We derive formulae that correct this systematic error due to random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.
Nanowire growth kinetics in aberration corrected environmental transmission electron microscopy
Chou, Yi-Chia; Panciera, Federico; Reuter, Mark C.; ...
2016-03-15
Here, we visualize atomic level dynamics during Si nanowire growth using aberration corrected environmental transmission electron microscopy, and compare with lower pressure results from ultra-high vacuum microscopy. We discuss the importance of higher pressure observations for understanding growth mechanisms and describe protocols to minimize effects of the higher pressure background gas.
Impact of Next-to-Leading Order Contributions to Cosmic Microwave Background Lensing.
Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth
2017-05-26
In this Letter we study the impact on cosmological parameter estimation, from present and future surveys, due to lensing corrections on cosmic microwave background temperature and polarization anisotropies beyond leading order. In particular, we show how post-Born corrections, large-scale structure effects, and the correction due to the change in the polarization direction between the emission at the source and the detection at the observer are non-negligible in the determination of the polarization spectra. They have to be taken into account for an accurate estimation of cosmological parameters sensitive to or even based on these spectra. We study in detail the impact of higher order lensing on the determination of the tensor-to-scalar ratio r and on the estimation of the effective number of relativistic species N_{eff}. We find that neglecting higher order lensing terms can lead to misinterpreting these corrections as a primordial tensor-to-scalar ratio of about O(10^{-3}). Furthermore, it leads to a shift of the parameter N_{eff} by nearly 2σ considering the level of accuracy aimed at by future S4 surveys.
Impact of a primordial magnetic field on cosmic microwave background B modes with weak lensing
NASA Astrophysics Data System (ADS)
Yamazaki, Dai G.
2018-05-01
We discuss the manner in which the primordial magnetic field (PMF) suppresses the cosmic microwave background (CMB) B mode due to the weak-lensing (WL) effect. The WL effect depends on the lensing potential (LP) caused by matter perturbations, the distribution of which at cosmological scales is given by the matter power spectrum (MPS). Therefore, the WL effect on the CMB B mode is affected by the MPS. Considering the effect of the ensemble average energy density of the PMF, which we call "the background PMF," on the MPS, the amplitude of MPS is suppressed in the wave number range of k >0.01 h Mpc-1 . The MPS affects the LP and the WL effect in the CMB B mode; however, the PMF can damp this effect. Previous studies of the CMB B mode with the PMF have only considered the vector and tensor modes. These modes boost the CMB B mode in the multipole range of ℓ>1000 , whereas the background PMF damps the CMB B mode owing to the WL effect in the entire multipole range. The matter density in the Universe controls the WL effect. Therefore, when we constrain the PMF and the matter density parameters from cosmological observational data sets, including the CMB B mode, we expect degeneracy between these parameters. The CMB B mode also provides important information on the background gravitational waves, inflation theory, matter density fluctuations, and the structure formations at the cosmological scale through the cosmological parameter search. If we study these topics and correctly constrain the cosmological parameters from cosmological observations, including the CMB B mode, we need to correctly consider the background PMF.
40 CFR 1065.667 - Dilution air background emission correction.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Dilution air background emission...
40 CFR 1065.667 - Dilution air background emission correction.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Dilution air background emission...
40 CFR 1065.667 - Dilution air background emission correction.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Dilution air background emission...
40 CFR 1065.667 - Dilution air background emission correction.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Dilution air background emission...
40 CFR 1065.667 - Dilution air background emission correction.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Dilution air background emission...
Loop corrections to primordial non-Gaussianity
NASA Astrophysics Data System (ADS)
Boran, Sibel; Kahya, E. O.
2018-02-01
We discuss quantum gravitational loop effects to observable quantities such as curvature power spectrum and primordial non-Gaussianity of cosmic microwave background (CMB) radiation. We first review the previously shown case where one gets a time dependence for zeta-zeta correlator due to loop corrections. Then we investigate the effect of loop corrections to primordial non-Gaussianity of CMB. We conclude that, even with a single scalar inflaton, one might get a huge value for non-Gaussianity which would exceed the observed value by at least 30 orders of magnitude. Finally we discuss the consequences of this result for scalar driven inflationary models.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-07
... correction is effective July 7, 2010. FOR FURTHER INFORMATION CONTACT: Dr. Lisa Rotterman (907-271-1692), lisa[email protected] . SUPPLEMENTARY INFORMATION: Background On June 29, 2010, NMFS published a... lion (75 FR 37385). NMFS inadvertently gave incorrect e-mail and fax information. The correct email is...
Quantum corrections for spinning particles in de Sitter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fröb, Markus B.; Verdaguer, Enric, E-mail: mbf503@york.ac.uk, E-mail: enric.verdaguer@ub.edu
We compute the one-loop quantum corrections to the gravitational potentials of a spinning point particle in a de Sitter background, due to the vacuum polarisation induced by conformal fields in an effective field theory approach. We consider arbitrary conformal field theories, assuming only that the theory contains a large number N of fields in order to separate their contribution from the one induced by virtual gravitons. The corrections are described in a gauge-invariant way, classifying the induced metric perturbations around the de Sitter background according to their behaviour under transformations on equal-time hypersurfaces. There are six gauge-invariant modes: two scalar Bardeen potentials, one transverse vector and one transverse traceless tensor, of which one scalar and the vector couple to the spinning particle. The quantum corrections consist of three different parts: a generalisation of the flat-space correction, which is only significant at distances of the order of the Planck length; a constant correction depending on the undetermined parameters of the renormalised effective action; and a term which grows logarithmically with the distance from the particle. This last term is the most interesting, and when resummed gives a modified power law, enhancing the gravitational force at large distances. As a check on the accuracy of our calculation, we recover the linearised Kerr-de Sitter metric in the classical limit and the flat-space quantum correction in the limit of vanishing Hubble constant.
Calculation of background effects on the VESUVIO eV neutron spectrometer
NASA Astrophysics Data System (ADS)
Mayers, J.
2011-01-01
The VESUVIO spectrometer at the ISIS pulsed neutron source measures the momentum distribution n(p) of atoms by 'neutron Compton scattering' (NCS). Measurements of n(p) provide a unique window into the quantum behaviour of atomic nuclei in condensed matter systems. The VESUVIO 6Li-doped neutron detectors at forward scattering angles were replaced in February 2008 by yttrium aluminium perovskite (YAP)-doped γ-ray detectors. This paper compares the performance of the two detection systems. It is shown that the YAP detectors provide a much superior resolution and general performance, but suffer from a sample-dependent gamma background. This report details how this background can be calculated and data corrected. Calculation is compared with data for two different instrument geometries. Corrected and uncorrected data are also compared for the current instrument geometry. Some indications of how the gamma background can be reduced are also given.
Simple automatic strategy for background drift correction in chromatographic data analysis.
Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin
2016-06-03
Chromatographic background drift correction, which influences peak detection and time shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers, which belong to the chromatographic peaks, in this vector, and update the outliers in the baseline until convergence. The optimized baseline vector was finally expanded into the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight samples used in the metabolic study of Escherichia coli samples. The proposed method was comparable with three classical techniques: morphological weighted penalized least squares, moving window minimum value strategy and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use. Copyright © 2016 Elsevier B.V. All rights reserved.
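A minimal sketch of the described strategy: local minima form a candidate baseline vector, minima that belong to peaks are iteratively pulled back toward a smooth fit, and the surviving points are linearly interpolated over the chromatogram. The outlier rule (residuals above k standard deviations of a low-order polynomial fit) is an assumption, not the paper's exact criterion.

    # Hedged sketch: automatic baseline from local minima with iterative outlier removal.
    import numpy as np
    from scipy.signal import argrelmin

    def auto_baseline(y, order=5, poly_deg=3, k=2.0, max_iter=50):
        idx = np.concatenate(([0], argrelmin(y, order=order)[0], [len(y) - 1]))
        b = y[idx].astype(float)
        for _ in range(max_iter):
            fit = np.polyval(np.polyfit(idx, b, poly_deg), idx)
            outliers = (b - fit) > k * (b - fit).std()
            if not outliers.any():
                break
            b[outliers] = fit[outliers]          # replace peak-contaminated minima
        return np.interp(np.arange(len(y)), idx, b)   # expand to the full chromatogram

    rng = np.random.default_rng(6)
    t = np.linspace(0.0, 1.0, 2000)
    chrom = (np.exp(-0.5 * ((t - 0.5) / 0.01) ** 2)    # chromatographic peak
             + 0.5 * t                                 # background drift
             + 0.01 * rng.normal(size=t.size))         # noise
    corrected = chrom - auto_baseline(chrom)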
NASA Astrophysics Data System (ADS)
Yao, Rutao; Ma, Tianyu; Shao, Yiping
2008-08-01
This work is part of a feasibility study to develop SPECT imaging capability on a lutetium oxyorthosilicate (LSO) based animal PET system. The SPECT acquisition was enabled by inserting a collimator assembly inside the detector ring and acquiring data in singles mode. The same LSO detectors were used for both PET and SPECT imaging. The intrinsic radioactivity of 176Lu in the LSO crystals, however, contaminates the SPECT data, and can generate image artifacts and introduce quantification error. The objectives of this study were to evaluate the effectiveness of a LSO background subtraction method, and to estimate the minimal detectable target activity (MDTA) of image object for SPECT imaging. For LSO background correction, the LSO contribution in an image study was estimated based on a pre-measured long LSO background scan and subtracted prior to the image reconstruction. The MDTA was estimated in two ways. The empirical MDTA (eMDTA) was estimated from screening the tomographic images at different activity levels. The calculated MDTA (cMDTA) was estimated from using a formula based on applying a modified Currie equation on an average projection dataset. Two simulated and two experimental phantoms with different object activity distributions and levels were used in this study. The results showed that LSO background adds concentric ring artifacts to the reconstructed image, and the simple subtraction method can effectively remove these artifacts—the effect of the correction was more visible when the object activity level was near or above the eMDTA. For the four phantoms studied, the cMDTA was consistently about five times of the corresponding eMDTA. In summary, we implemented a simple LSO background subtraction method and demonstrated its effectiveness. The projection-based calculation formula yielded MDTA results that closely correlate with that obtained empirically and may have predicative value for imaging applications.
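The subtraction step can be sketched as scaling a long LSO-background acquisition to the image acquisition time and subtracting it from the projections; a classical Currie-style detection limit is included for orientation, noting that the paper applies a modified Currie equation to an average projection dataset rather than this standard form.

    # Hedged sketch: subtract a time-scaled LSO-background projection set from the
    # image projections, and compute a classical Currie-style detection limit.
    import numpy as np

    def subtract_lso_background(proj, proj_time, bg_proj, bg_time):
        return proj - bg_proj * (proj_time / bg_time)

    def currie_detection_limit(background_counts):
        # Standard Currie formula; the paper uses a modified version.
        return 2.71 + 4.65 * np.sqrt(background_counts)

    proj = np.random.poisson(30.0, size=(64, 128)).astype(float)   # singles projections
    bg = np.random.poisson(2000.0, size=(64, 128)).astype(float)   # long LSO-only scan
    corrected = subtract_lso_background(proj, proj_time=600.0, bg_proj=bg, bg_time=60000.0)
    print(currie_detection_limit(corrected.mean()))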
Quantum Gravity Effects on Hawking Radiation of Schwarzschild-de Sitter Black Holes
NASA Astrophysics Data System (ADS)
Singh, T. Ibungochouba; Meitei, I. Ablu; Singh, K. Yugindro
2017-08-01
The correction to the Hawking temperature of the Schwarzschild-de Sitter (SdS) black hole is investigated using the generalized Klein-Gordon equation and the generalized Dirac equation, taking quantum gravity effects into account. We derive the corrected Hawking temperatures for scalar particles and fermions crossing the event horizon. The quantum gravity effects prevent the rise of temperature in the SdS black hole. Besides the correction to the Hawking temperature, the Hawking radiation of the SdS black hole is also investigated using the massive-particle tunneling method. By considering the self-gravitation effect of the emitted particles and treating the spacetime background as dynamical, it is also shown that the tunneling rate is related to the change of Bekenstein-Hawking entropy and a small correction term (1 + 2βm²). If the energy and the angular momentum are taken to be conserved, the derived emission spectrum deviates from a pure thermal spectrum. This result gives a correction to the Hawking radiation and is in agreement with the result of Parikh and Wilczek.
HST/WFC3: Evolution of the UVIS Channel's Charge Transfer Efficiency
NASA Astrophysics Data System (ADS)
Gosmeyer, Catherine; Baggett, Sylvia M.; Anderson, Jay; WFC3 Team
2016-06-01
The Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST) contains both an IR and a UVIS channel. After more than six years on orbit, the UVIS channel performance remains stable; however, on-orbit radiation damage has caused the charge transfer efficiency (CTE) of UVIS's two CCDs to degrade. This degradation is seen as vertical charge 'bleeding' from sources during readout and its effect evolves as the CCDs age. The WFC3 team has developed software to perform corrections that push the charge back to the sources, although it cannot recover faint sources that have been bled out entirely. Observers can mitigate this effect in various ways such as by placing sources near the amplifiers, observing bright targets, and by increasing the total background to at least 12 electrons, either by using a broader filter, lengthening exposure time, or post-flashing. We present results from six years of calibration data to re-evaluate the best level of total background for mitigating CTE loss and to re-verify that the pixel-based CTE correction software is performing optimally over various background levels. In addition, we alert observers that CTE-corrected products are now available for retrieval from MAST as part of the CALWF3 v3.3 pipeline upgrade.
Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv
2012-12-11
Illumina BeadArray technology includes nonspecific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of the exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) present a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data and background corrections derived from this model may lead to erroneous estimation. We propose a more flexible modeling based on a gamma distributed signal and a normally distributed background noise and develop the associated background correction, implemented in the R-package NormalGamma. Our model proves to be markedly more accurate in modeling Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a more correct fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling makes way for future investigations, in particular to examine the characteristics of pre-processing strategies.
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method, using the median filter to estimate background illumination, showed the lowest coefficients of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy, and has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate the background, the quotient-based method and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique, such as CLAHE, to fundus images showed good potential for enhancing vasculature segmentation. PMID:25709940
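A short sketch of the two preferred steps, dividing by a median-filter estimate of the background illumination and then applying CLAHE, using SciPy and scikit-image; the kernel size, clip limit, and image are illustrative placeholders.

    # Hedged sketch: illumination correction by the "dividing method" (divide by a
    # median-filtered background estimate), followed by CLAHE. Parameter values
    # are placeholders.
    import numpy as np
    from scipy.ndimage import median_filter
    from skimage import exposure

    green = np.random.rand(256, 256)                     # green channel of a fundus image
    background = median_filter(green, size=31)           # large-kernel illumination estimate
    corrected = green / np.clip(background, 1e-6, None)  # dividing method
    corrected /= corrected.max()

    enhanced = exposure.equalize_adapthist(corrected, clip_limit=0.01)   # CLAHE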
Youssef, Joseph El; Engle, Julia M.; Massoud, Ryan G.; Ward, W. Kenneth
2010-01-01
Abstract Background A cause of suboptimal accuracy in amperometric glucose sensors is the presence of a background current (current produced in the absence of glucose) that is not accounted for. We hypothesized that a mathematical correction for the estimated background current of a commercially available sensor would lead to greater accuracy compared to a situation in which we assumed the background current to be zero. We also tested whether increasing the frequency of sensor calibration would improve sensor accuracy. Methods This report includes analysis of 20 sensor datasets from seven human subjects with type 1 diabetes. Data were divided into a training set for algorithm development and a validation set on which the algorithm was tested. A range of potential background currents was tested. Results Use of the background current correction of 4 nA led to a substantial improvement in accuracy (improvement of absolute relative difference or absolute difference of 3.5–5.5 units). An increase in calibration frequency led to a modest accuracy improvement, with an optimum at every 4 h. Conclusions Compared to no correction, a correction for the estimated background current of a commercially available glucose sensor led to greater accuracy and better detection of hypoglycemia and hyperglycemia. The accuracy-optimizing scheme presented here can be implemented in real time. PMID:20879968
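To make the idea concrete, here is a minimal sketch of how a fixed background-current estimate enters a one-point calibration. The 4 nA figure is the value reported above; the example currents and the reference glucose concentration are illustrative assumptions.

```python
# Sketch of a one-point calibration that accounts for a fixed background
# current (current present at zero glucose) instead of assuming it is zero.
# The 4 nA correction comes from the abstract; other numbers are illustrative.
I_BACKGROUND_NA = 4.0          # estimated background current, nA

def calibrate_sensitivity(cal_current_na, cal_glucose_mgdl, i_bg=I_BACKGROUND_NA):
    """Sensitivity in nA per (mg/dL) from one reference calibration point."""
    return (cal_current_na - i_bg) / cal_glucose_mgdl

def estimate_glucose(current_na, sensitivity, i_bg=I_BACKGROUND_NA):
    return (current_na - i_bg) / sensitivity

sens = calibrate_sensitivity(cal_current_na=24.0, cal_glucose_mgdl=100.0)
print(estimate_glucose(current_na=14.0, sensitivity=sens))  # lower current -> lower glucose
```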
Electrovacuum solutions in nonlocal gravity
NASA Astrophysics Data System (ADS)
Fernandes, Karan; Mitra, Arpita
2018-05-01
We consider the coupling of the electromagnetic field to a nonlocal gravity theory comprising the Einstein-Hilbert action together with a nonlocal R□^{-2}R term associated with a mass scale m. We demonstrate that, in the case of the minimally coupled electromagnetic field, real corrections about the Reissner-Nordström background only exist between the inner Cauchy horizon and the event horizon of the black hole. This motivates us to consider a modified coupling of electromagnetism to this theory via the Kaluza ansatz. The Kaluza reduction introduces nonlocal terms involving the electromagnetic field into the purely gravitational nonlocal theory. An iterative approach is provided to perturbatively solve the equations of motion to arbitrary order in m^{2} about any known solution of general relativity. We derive the first-order corrections and demonstrate that the higher-order corrections are real and perturbative about the external background of a Reissner-Nordström black hole. We also discuss how the Kaluza-reduced action, through the inclusion of nonlocal electromagnetic fields, could be relevant for quantum effects on curved backgrounds with horizons.
The effect of a scanning flat fold mirror on a cosmic microwave background B-mode experiment.
Grainger, William F; North, Chris E; Ade, Peter A R
2011-06-01
We investigate the possibility of using a flat-fold beam-steering mirror for a cosmic microwave background B-mode experiment. An aluminium flat-fold mirror is found to add ∼0.075% polarization, which varies in a scan-synchronous way. Time-domain simulations of a realistic scanning pattern are performed, the effect on the power spectrum is illustrated, and a possible method of correction is applied. © 2011 American Institute of Physics
Further Improvement of the RITS Code for Pulsed Neutron Bragg-edge Transmission Imaging
NASA Astrophysics Data System (ADS)
Sato, H.; Watanabe, K.; Kiyokawa, K.; Kiyanagi, R.; Hara, K. Y.; Kamiyama, T.; Furusaka, M.; Shinohara, T.; Kiyanagi, Y.
The RITS code is a unique and powerful tool for whole-spectrum fitting analysis of pulsed neutron Bragg-edge transmission data. However, it has had two major problems, and we have proposed methods to overcome them. The first issue is the difference in the crystallite size values between the diffraction and the Bragg-edge analyses. We found that the reason was a different definition of the crystal structure factor. It affects the crystallite size because the crystallite size is deduced from the primary extinction effect, which depends on the crystal structure factor. As a result of the algorithm change, crystallite sizes obtained by RITS moved much closer to the crystallite sizes obtained by Rietveld analyses of diffraction data, from 155% to 110%. The second issue is correction of the effect of background neutrons scattered from the specimen. Through neutron transport simulation studies, we found that the background components consist of forward Bragg scattering, double backward Bragg scattering, and thermal diffuse scattering. RITS with the background correction function developed through these simulation studies could reproduce various simulated and experimental transmission spectra well, but the refined crystalline microstructural parameters were often distorted. Finally, it is recommended to reduce the background by improving the experimental conditions.
Electroweak radiative corrections to the top quark decay
NASA Astrophysics Data System (ADS)
Kuruma, Toshiyuki
1993-12-01
The top quark, once produced, should be an important window on the electroweak symmetry-breaking sector. We compute electroweak radiative corrections to the decay process t→b+W^{+} in order to extract information on the Higgs sector and to fix the background in searches for possible new-physics contributions. The large Yukawa coupling of the top quark induces a new form factor through vertex corrections and causes a deviation from the tree-level longitudinal W-boson production fraction, but the effect is of order 1% or less for m_{H} < 1 TeV.
Ding, Liang-Hao; Xie, Yang; Park, Seongmi; Xiao, Guanghua; Story, Michael D.
2008-01-01
Despite the tremendous growth of microarray usage in scientific studies, there is a lack of standards for background correction methodologies, especially in single-color microarray platforms. Traditional background subtraction methods often generate negative signals and thus cause large amounts of data loss. Hence, some researchers prefer to avoid background corrections, which typically result in the underestimation of differential expression. Here, by utilizing nonspecific negative control features integrated into Illumina whole genome expression arrays, we have developed a method of model-based background correction for BeadArrays (MBCB). We compared the MBCB with a method adapted from the Affymetrix robust multi-array analysis algorithm and with no background subtraction, using a mouse acute myeloid leukemia (AML) dataset. We demonstrated that differential expression ratios obtained by using the MBCB had the best correlation with quantitative RT–PCR. MBCB also achieved better sensitivity in detecting differentially expressed genes with biological significance. For example, we demonstrated that the differential regulation of Tnfr2, Ikk and NF-kappaB, the death receptor pathway, in the AML samples, could only be detected by using data after MBCB implementation. We conclude that MBCB is a robust background correction method that will lead to more precise determination of gene expression and better biological interpretation of Illumina BeadArray data. PMID:18450815
Franzosi, Diogo Buarque; Vryonidou, Eleni; Zhang, Cen
2017-10-13
Scalar and pseudo-scalar resonances decaying to top quarks are common predictions in several scenarios beyond the standard model (SM) and are extensively searched for by LHC experiments. Challenges on the experimental side require optimising the strategy based on accurate predictions. Firstly, QCD corrections are known to be large both for the SM QCD background and for the pure signal scalar production. Secondly, leading order and approximate next-to-leading order (NLO) calculations indicate that the interference between signal and background is large and drastically changes the lineshape of the signal, from a simple peak to a peak-dip structure. Therefore, a robust prediction of this interference at NLO accuracy in QCD is necessary to ensure that higher-order corrections do not alter the lineshapes. We compute the exact NLO corrections, assuming a point-like coupling between the scalar and the gluons and consistently embedding the calculation in an effective field theory within an automated framework, and present results for a representative set of beyond the SM benchmarks. The results can be further matched to parton shower simulation, providing more realistic predictions. We find that NLO corrections are important and lead to a significant reduction of the uncertainties. We also discuss how our computation can be used to improve the predictions for physics scenarios where the gluon-scalar loop is resolved and the effective approach is less applicable.
Sea Surface Signature of Tropical Cyclones Using Microwave Remote Sensing
2013-01-01
due to the ionosphere and troposphere, which have to be compensated for, and components due to the galactic and cosmic background radiation... and corrections for sun glint, galactic and cosmic background radiation, and Stokes effects of the ionosphere. The accuracy of a given retrieval... Author: Bumjun Kil
Electroweak Corrections to pp→μ^{+}μ^{-}e^{+}e^{-}+X at the LHC: A Higgs Boson Background Study.
Biedermann, B; Denner, A; Dittmaier, S; Hofer, L; Jäger, B
2016-04-22
The first complete calculation of the next-to-leading-order electroweak corrections to four-lepton production at the LHC is presented, where all off-shell effects of intermediate Z bosons and photons are taken into account. Focusing on the mixed final state μ^{+}μ^{-}e^{+}e^{-}, we study differential cross sections that are particularly interesting for Higgs boson analyses. The electroweak corrections are divided into photonic and purely weak corrections. The former exhibit patterns familiar from similar W- or Z-boson production processes with very large radiative tails near resonances and kinematical shoulders. The weak corrections are of the generic size of 5% and show interesting variations, in particular, a sign change between the regions of resonant Z-pair production and the Higgs signal.
Observation-Corrected Precipitation Estimates in GEOS-5
NASA Technical Reports Server (NTRS)
Reichle, Rolf H.; Liu, Qing
2014-01-01
Several GEOS-5 applications, including the GEOS-5 seasonal forecasting system and the MERRA-Land data product, rely on global precipitation data that have been corrected with satellite and or gauge-based precipitation observations. This document describes the methodology used to generate the corrected precipitation estimates and their use in GEOS-5 applications. The corrected precipitation estimates are derived by disaggregating publicly available, observationally based, global precipitation products from daily or pentad totals to hourly accumulations using background precipitation estimates from the GEOS-5 atmospheric data assimilation system. Depending on the specific combination of the observational precipitation product and the GEOS-5 background estimates, the observational product may also be downscaled in space. The resulting corrected precipitation data product is at the finer temporal and spatial resolution of the GEOS-5 background and matches the observed precipitation at the coarser scale of the observational product, separately for each day (or pentad) and each grid cell.
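A minimal sketch of the disaggregation step described above is shown below: the hourly background precipitation within each day is rescaled so that its daily sum equals the observed daily total. The uniform fallback used for days with zero background precipitation is a simplifying assumption, not part of the documented methodology.

```python
# Sketch of the disaggregation described above: scale the hourly background
# precipitation within each day so its daily sum equals the observed daily
# total. The uniform fallback for dry-background days is an assumption.
import numpy as np

def disaggregate_daily(obs_daily, bg_hourly):
    """obs_daily: (n_days,), bg_hourly: (n_days, 24) -> corrected (n_days, 24)."""
    bg_daily = bg_hourly.sum(axis=1, keepdims=True)
    weights = np.where(bg_daily > 0,
                       bg_hourly / np.where(bg_daily > 0, bg_daily, 1.0),
                       1.0 / 24.0)              # uniform split if background is dry
    return weights * obs_daily[:, None]

obs = np.array([12.0, 0.0])                     # observed daily totals (mm)
bg = np.abs(np.random.randn(2, 24))             # background hourly estimates
corrected = disaggregate_daily(obs, bg)
print(corrected.sum(axis=1))                    # matches obs at the daily scale
```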
Yamaguchi, Shotaro; Wagatsuma, Kei; Miwa, Kenta; Ishii, Kenji; Inoue, Kazumasa; Fukushi, Masahiro
2018-03-01
The Bayesian penalized-likelihood reconstruction algorithm (BPL), Q.Clear, uses a relative difference penalty as a regularization function to control image noise and the degree of edge preservation in PET images. The present study aimed to determine the effects of suppression on edge artifacts due to point-spread-function (PSF) correction using Q.Clear. Spheres of a cylindrical phantom contained a background of 5.3 kBq/mL of [18F]FDG and sphere-to-background ratios (SBR) of 16, 8, 4 and 2. The background also contained water and spheres containing 21.2 kBq/mL of [18F]FDG as non-background. All data were acquired using a Discovery PET/CT 710 and were reconstructed using three-dimensional ordered-subset expectation maximization with time-of-flight (TOF) and PSF correction (3D-OSEM), and Q.Clear with TOF (BPL). We investigated β-values of 200-800 using BPL. The PET images were analyzed using visual assessment and profile curves, and edge variability and contrast recovery coefficients were measured. The 38- and 27-mm spheres were surrounded by higher radioactivity concentration when reconstructed with 3D-OSEM as opposed to BPL, which suppressed edge artifacts. Images of 10-mm spheres had sharper overshoot at high SBR and non-background when reconstructed with BPL. Although contrast recovery coefficients of 10-mm spheres in BPL decreased as a function of increasing β, a higher penalty parameter decreased the overshoot. BPL is a feasible method for the suppression of edge artifacts of PSF correction, although this depends on SBR and sphere size. Overshoot associated with BPL caused overestimation in small spheres at high SBR. A higher penalty parameter in BPL can suppress overshoot more effectively. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
High-energy electrons from the muon decay in orbit: Radiative corrections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szafron, Robert; Czarnecki, Andrzej
2015-12-07
We determine the O(α) correction to the energy spectrum of electrons produced in the decay of muons bound in atoms. We focus on the high-energy end of the spectrum that constitutes a background for the muon-electron conversion and will be precisely measured by the upcoming experiments Mu2e and COMET. As a result, the correction suppresses the background by about 20%.
Huang, Kuo-Chen; Chiu, Tsai-Lan
2007-04-01
This study investigated the effects of color combinations for the figure/icon background, icon shape, and line width of the icon border on visual search performance on a liquid crystal display screen. Subjects had to search for a target item in a circular stimulus array that had a diameter of 20 cm and included one target and 19 distractors. Analysis showed that the icon shape significantly affected search performance. The correct response time was significantly shorter for circular icons than for triangular icons, for icon borders with a line width of 3 pixels than for 1 or 2 pixels, and for 2 pixels than for 1 pixel. The color combination also significantly affected visual search performance: the white/yellow, white/blue, black/red, and black/yellow figure/background combinations yielded shorter correct response times than yellow/blue, red/green, yellow/green, and blue/red. However, no effects of the line width of the icon border or the icon shape were found on the error rate. The results have implications for graphics-based interface design, such as for mobile phones, Web sites, and PDAs, as well as for complex industrial processes.
NASA Astrophysics Data System (ADS)
Johnson, Jennifer E.; Rella, Chris W.
2017-08-01
Cavity ring-down spectrometers have generally been designed to operate under conditions in which the background gas has a constant composition. However, there are a number of observational and experimental situations of interest in which the background gas has a variable composition. In this study, we examine the effect of background gas composition on a cavity ring-down spectrometer that measures δ18O-H2O and δ2H-H2O values based on the amplitude of water isotopologue absorption features around 7184 cm-1 (L2120-i, Picarro, Inc.). For background mixtures balanced with N2, the apparent δ18O values deviate from true values by -0.50 ± 0.001 ‰ O2 %-1 and -0.57 ± 0.001 ‰ Ar %-1, and apparent δ2H values deviate from true values by 0.26 ± 0.004 ‰ O2 %-1 and 0.42 ± 0.004 ‰ Ar %-1. The artifacts are the result of broadening, narrowing, and shifting of both the target absorption lines and strong neighboring lines. While the background-induced isotopic artifacts can largely be corrected with simple empirical or semi-mechanistic models, neither type of model is capable of completely correcting the isotopic artifacts to within the inherent instrument precision. The development of strategies for dynamically detecting and accommodating background variation in N2, O2, and/or Ar would facilitate the application of cavity ring-down spectrometers to a new class of observations and experiments.
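A minimal sketch of the simple empirical (linear) correction is given below, using the per-percent O2 and Ar slopes reported above for N2-balanced backgrounds. This is only the linear empirical model, not the semi-mechanistic line-shape model, and the example apparent values are illustrative.

```python
# Sketch of a simple empirical correction using the slopes reported above for
# N2-balanced backgrounds: apparent delta values deviate from true values by
# a fixed amount per percent of O2 and Ar. Linear model only; example inputs
# are illustrative.
def correct_isotopes(d18o_app, d2h_app, o2_pct, ar_pct):
    d18o = d18o_app - (-0.50 * o2_pct) - (-0.57 * ar_pct)
    d2h  = d2h_app  - ( 0.26 * o2_pct) - ( 0.42 * ar_pct)
    return d18o, d2h

# Apparent values measured in a 21% O2 / 1% Ar (air-like) background.
print(correct_isotopes(d18o_app=-18.0, d2h_app=-140.0, o2_pct=21.0, ar_pct=1.0))
```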
Thermal corrections to the Casimir energy in a general weak gravitational field
NASA Astrophysics Data System (ADS)
Nazari, Borzoo
2016-12-01
We calculate finite-temperature corrections to the Casimir energy of two conducting parallel plates in a general weak gravitational field. After solving the Klein-Gordon equation inside the apparatus, the mode frequencies are obtained in terms of the parameters of the weak background. Using Matsubara's approach to quantum statistical mechanics, the gravity-induced thermal corrections to the energy density are obtained. Well-known weak static and stationary gravitational fields are analyzed, and it is found that in the low-temperature limit the energy of the system increases compared to the zero-temperature case.
A post-reconstruction method to correct cupping artifacts in cone beam breast computed tomography
Altunbas, M. C.; Shaw, C. C.; Chen, L.; Lai, C.; Liu, X.; Han, T.; Wang, T.
2007-01-01
In cone beam breast computed tomography (CT), scattered radiation leads to nonuniform biasing of CT numbers known as a cupping artifact. Besides being a visual distraction, cupping artifacts appear as background nonuniformities, which impair efficient gray-scale windowing and pose a problem in threshold-based volume visualization/segmentation. To overcome this problem, we have developed a background nonuniformity correction method specifically designed for cone beam breast CT. With this technique, the cupping artifact is modeled as an additive background signal profile in the reconstructed breast images. Due to the largely circularly symmetric shape of a typical breast, the additive background signal profile was also assumed to be circularly symmetric. The radial variation of the background signal was estimated by measuring the spatial variation of adipose tissue signals in front-view breast images. To extract adipose tissue signals in an automated manner, a signal sampling scheme in polar coordinates and a background trend fitting algorithm were implemented. The background fits were compared with a target adipose tissue signal value (constant throughout the breast volume) to obtain an additive correction value for each tissue voxel. To test the accuracy, we applied the technique to cone beam CT images of mastectomy specimens. After correction, the images demonstrated significantly improved signal uniformity in both front- and side-view slices. The reduction of both intra-slice and inter-slice variations in adipose tissue CT numbers supported our observations. PMID:17822018
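The general idea can be sketched as follows: sample voxel values versus radius, fit a smooth radial background trend to presumed adipose-tissue voxels, and apply an additive shift toward a constant target adipose value. The percentile-based adipose selection and the polynomial order below are assumptions for illustration, not the authors' exact sampling and fitting scheme.

```python
# Sketch of an additive, circularly symmetric background (cupping) correction:
# estimate a radial background trend from presumed adipose-tissue voxels and
# shift each voxel so that adipose matches a constant target value. The
# percentile band and polynomial order are illustrative assumptions.
import numpy as np

def correct_cupping(slice2d, target_adipose, poly_order=4, adipose_band=(40, 60)):
    ny, nx = slice2d.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(x - nx / 2.0, y - ny / 2.0).ravel()
    v = slice2d.ravel().astype(float)

    # keep voxels whose values fall in a band assumed to be adipose tissue
    lo, hi = np.percentile(v, adipose_band)
    mask = (v >= lo) & (v <= hi)

    coeffs = np.polyfit(r[mask], v[mask], poly_order)   # radial background trend
    background = np.polyval(coeffs, r).reshape(ny, nx)
    return slice2d + (target_adipose - background)      # additive correction

slice2d = np.random.normal(100.0, 5.0, (256, 256))      # placeholder CT slice
corrected = correct_cupping(slice2d, target_adipose=100.0)
```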
Gilmore, Adam Matthew
2014-01-01
Contemporary spectrofluorimeters comprise exciting light sources, excitation and emission monochromators, and detectors that without correction yield data not conforming to an ideal spectral response. The correction of the spectral properties of the exciting and emission light paths first requires calibration of the wavelength and spectral accuracy. The exciting beam path can be corrected up to the sample position using a spectrally corrected reference detection system. The corrected reference response accounts for both the spectral intensity and drift of the exciting light source relative to emission and/or transmission detector responses. The emission detection path must also be corrected for the combined spectral bias of the sample compartment optics, emission monochromator, and detector. There are several crucial issues associated with both excitation and emission correction including the requirement to account for spectral band-pass and resolution, optical band-pass or neutral density filters, and the position and direction of polarizing elements in the light paths. In addition, secondary correction factors are described including (1) subtraction of the solvent's fluorescence background, (2) removal of Rayleigh and Raman scattering lines, as well as (3) correcting for sample concentration-dependent inner-filter effects. The importance of the National Institute of Standards and Technology (NIST) traceable calibration and correction protocols is explained in light of valid intra- and interlaboratory studies and effective spectral qualitative and quantitative analyses including multivariate spectral modeling.
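Of the secondary corrections listed, the concentration-dependent inner-filter correction has a standard closed form for a right-angle 1 cm cuvette geometry; a minimal sketch of that textbook formula is given below. It is the generic form, not any vendor-specific routine described in the chapter.

```python
# Sketch of the standard primary/secondary inner-filter correction for a
# right-angle 1 cm cuvette: F_corr = F_obs * 10**((A_ex + A_em) / 2), where
# A_ex and A_em are the sample absorbances at the excitation and emission
# wavelengths. Textbook form only; example values are illustrative.
def inner_filter_correct(f_obs, a_ex, a_em):
    return f_obs * 10.0 ** ((a_ex + a_em) / 2.0)

print(inner_filter_correct(f_obs=1.0e5, a_ex=0.10, a_em=0.05))  # counts
```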
Item and source memory for emotional associates is mediated by different retrieval processes.
Ventura-Bort, Carlos; Dolcos, Florin; Wendt, Julia; Wirkner, Janine; Hamm, Alfons O; Weymar, Mathias
2017-12-12
Recent event-related potential (ERP) data showed that neutral objects encoded in emotional background pictures were better remembered than objects encoded in neutral contexts, when recognition memory was tested one week later. In the present study, we investigated whether this long-term memory advantage for items is also associated with correct memory for contextual source details. Furthermore, we were interested in the possibly dissociable contribution of familiarity and recollection processes (using a Remember/Know procedure). The results revealed that item memory performance was mainly driven by the subjective experience of familiarity, irrespective of whether the objects were previously encoded in emotional or neutral contexts. Correct source memory for the associated background picture, however, was driven by recollection and enhanced when the content was emotional. In ERPs, correctly recognized old objects evoked frontal ERP Old/New effects (300-500ms), irrespective of context category. As in our previous study (Ventura-Bort et al., 2016b), retrieval for objects from emotional contexts was associated with larger parietal Old/New differences (600-800ms), indicating stronger involvement of recollection. Thus, the results suggest a stronger contribution of recollection-based retrieval to item and contextual background source memory for neutral information associated with an emotional event. Copyright © 2017 Elsevier Ltd. All rights reserved.
Interstellar cyanogen and the temperature of the cosmic microwave background radiation
NASA Technical Reports Server (NTRS)
Roth, Katherine C.; Meyer, David M.; Hawkins, Isabel
1993-01-01
We present the results of a recently completed effort to determine the amount of CN rotational excitation in five diffuse interstellar clouds for the purpose of accurately measuring the temperature of the cosmic microwave background radiation (CMBR). In addition, we report a new detection of emission from the strongest hyperfine component of the 2.64 mm CN rotational transition (N = 1-0) in the direction toward HD 21483. We have used this result in combination with existing emission measurements toward our other stars to correct for local excitation effects within diffuse clouds which raise the measured CN rotational temperature above that of the CMBR. After making this correction, we find a weighted mean value of T(CMBR) = 2.729 (+0.023, -0.031) K. This temperature is in excellent agreement with the new COBE measurement of 2.726 +/- 0.010 K (Mather et al., 1993). Our result, which samples the CMBR far from the near-Earth environment, attests to the accuracy of the COBE measurement and reaffirms the cosmic nature of this background radiation. From the observed agreement between our CMBR temperature and the COBE result, we conclude that corrections for local CN excitation based on millimeter emission measurements provide an accurate adjustment to the measured rotational excitation.
McInnes, E F; Scudamore, C L
2014-08-17
Pathological evaluation of lesions caused directly by xenobiotic treatment must always take into account the recognition of background (incidental) findings. Background lesions can be congenital or hereditary, histological variations, changes related to trauma or normal aging and physiologic or hormonal changes. This review focuses on the importance and correct approach to recording of background changes and includes discussion on sources of variability in background changes, the correct use of terminology, the concept of thresholds, historical control data, diagnostic drift, blind reading of slides, scoring and artifacts. The review is illustrated with background lesions in Sprague Dawley and Wistar rats. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yong, Cheng
2018-03-01
A method for the direct determination of 18 trace impurities in vanadium-battery-grade vanadyl sulfate by inductively coupled plasma atomic emission spectrometry (ICP-OES) was established; the detection range covers 0.001% ∼ 0.100% for Fe, Cr, Ni, Cu, Mn, Mo, Pb, As, Co, P, Ti and Zn, and 0.005% ∼ 0.100% for K, Na, Ca, Mg, Si and Al. The influence of matrix effects, spectral interferences and continuous background superposition in a system where high concentrations of vanadium ions and sulfate coexist was studied, and the following conclusions were obtained: sulfate at this concentration has no effect on the determination, but the matrix effects and continuous background superposition generated by the high concentration of vanadium ions interfere negatively with the determination of potassium and sodium and positively with the determination of iron and the other impurity elements, so the impact of the high-vanadium matrix was eliminated by matrix matching combined with synchronous background correction. Through spectral interference tests, the spectral interferences of the vanadium matrix and between the impurity elements were classified and summarized, and the analytical lines, the background correction regions and the working parameters of the spectrometer were all optimized. The technical performance of the method is as follows: the background equivalent concentration ranges from -0.0003% (Na) to 0.0004% (Cu); the detection limits of the elements are 0.0001% ∼ 0.0003%; RSD < 10% when the element content is in the range 0.001% ∼ 0.007%, and RSD < 20% even when the element content is in the range 0.0001% ∼ 0.001%, which is beyond the detection range of the method; and recoveries are 91.0% ∼ 110.0%.
Quantum corrections for the cubic Galileon in the covariant language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saltas, Ippocratis D.; Vitagliano, Vincenzo, E-mail: isaltas@fc.ul.pt, E-mail: vincenzo.vitagliano@ist.utl.pt
We present for the first time an explicit exposition of quantum corrections within the cubic Galileon theory, including the effect of quantum gravity, in a background- and gauge-invariant manner, employing the field-reparametrisation approach of the covariant effective action at 1-loop. We show that the consideration of gravitational effects in combination with the non-linear derivative structure of the theory reveals new interactions at the perturbative level, which manifest themselves as higher-order operators in the associated effective action, whose relevance is controlled by appropriate ratios of the cosmological vacuum and the Galileon mass scale. The significance and concept of the covariant approach in this context is discussed, and all calculations are explicitly presented.
Conservation of ζ with radiative corrections from heavy field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanaka, Takahiro; Yukawa Institute for Theoretical Physics, Kyoto University,Kyoto, 606-8502; Urakawa, Yuko
2016-06-08
In this paper, we address a possible impact of radiative corrections from a heavy scalar field χ on the curvature perturbation ζ. Integrating out χ, we derive the effective action for ζ, which includes the loop corrections of the heavy field χ. When the mass of χ is much larger than the Hubble scale H, the loop corrections of χ only yield a local contribution to the effective action, and hence the effective action simply gives an action for ζ in a single-field model, where, as is widely known, ζ is conserved in time after the Hubble crossing time. Meanwhile, when the mass of χ is comparable to H, the loop corrections of χ can give a non-local contribution to the effective action. Because of the non-local contribution from χ, in general, ζ may not be conserved, even if the classical background trajectory is determined only by the evolution of the inflaton. In this paper, we derive the condition under which ζ is conserved in time in the presence of the radiative corrections from χ. Namely, we show that when the dilatation invariance, which is a part of the diffeomorphism invariance, is preserved at the quantum level, the loop corrections of the massive field χ do not disturb the constant evolution of ζ at super-Hubble scales. In this discussion, we show the Ward-Takahashi identity for the dilatation invariance, which yields a consistency relation for the correlation functions of the massive field χ.
Three dimensional topography correction applied to magnetotelluric data from Sikkim Himalayas
NASA Astrophysics Data System (ADS)
Kumar, Sushil; Patro, Prasanta K.; Chaudhary, B. S.
2018-06-01
The magnetotelluric (MT) method is a powerful tool for investigating the deep crustal structure of mountainous regions such as the Himalayas. Topographic variations due to irregular surface terrain distort the resistivity curves and hence may prevent an accurate interpretation of magnetotelluric data. The two-dimensional (2-D) topographic effect in the transverse magnetic (TM) mode is only galvanic, whereas it is inductive in the transverse electric (TE) mode, so topographic distortion of the TM-mode responses is much more important than that of the TE-mode responses in 2-D. In three dimensions (3-D), the topography effect is both galvanic and inductive in each element of the impedance tensor, and hence the interpretation is complicated. In the present work, we investigate the effects of three-dimensional (3-D) topography for a hill model. This paper presents an impedance tensor correction algorithm to reduce the topographic effects in MT data. The distortion caused by surface topography is effectively reduced by using a homogeneous background resistivity in the impedance correction method. In this study, we analyze the responses of a ramp, of the distance from topographic edges, and of conductive and resistive dykes. The new correction method is applied to real data from the Sikkim Himalayas, which brings out the true nature of the basement in this region.
Evaluation of noise limits to improve image processing in soft X-ray projection microscopy.
Jamsranjav, Erdenetogtokh; Kuge, Kenichi; Ito, Atsushi; Kinjo, Yasuhito; Shiina, Tatsuo
2017-03-03
Soft X-ray microscopy has been developed for high-resolution imaging of hydrated biological specimens owing to the availability of the water window region. In particular, projection-type microscopy has advantages in its wide viewing area, easy zooming function, and easy extensibility to computed tomography (CT). The blur of the projection image due to Fresnel diffraction of X-rays, which eventually reduces spatial resolution, can be corrected by an iteration procedure, i.e., repetition of Fresnel and inverse Fresnel transformations. However, it was found that the correction is not effective for all images, especially for images with low contrast. To improve the effectiveness of image correction by computer processing, in this study we evaluated the influence of background noise on the iteration procedure through a simulation study. Images of a model specimen with known morphology were used as a substitute for the chromosome images that are one of the targets of our microscope. With artificial noise distributed randomly over the images, we introduced two different parameters to evaluate noise effects according to each situation in which the iteration procedure was not successful, and proposed an upper limit on the noise within which an effective iteration procedure for the chromosome images is possible. The study indicated that the new simulation and noise evaluation method is useful for image processing in which background noise cannot be ignored relative to the specimen signal.
NASA Astrophysics Data System (ADS)
Bezur, L.; Marshall, J.; Ottaway, J. M.
A square-wave wavelength modulation system, based on a rotating quartz chopper with four quadrants of different thicknesses, has been developed and evaluated as a method for automatic background correction in carbon furnace atomic emission spectrometry. Accurate background correction is achieved for the residual black body radiation (Rayleigh scatter) from the tube wall and Mie scatter from particles generated by a sample matrix and formed by condensation of atoms in the optical path. Intensity modulation caused by overlap at the edges of the quartz plates and by the divergence of the optical beam at the position of the modulation chopper has been investigated and is likely to be small.
NASA Astrophysics Data System (ADS)
Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong
2018-06-01
An accurate algorithm combining Gram-Schmidt orthonormalization and least-squares ellipse fitting is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. By performing Gram-Schmidt orthonormalization on the pre-processed interferograms, the phase shift error is corrected and a general ellipse form is derived. The background intensity error and the residual correction error can then be compensated by the least-squares ellipse fitting method, and the phase can be extracted rapidly. The algorithm can cope with two or three interferograms with environmental disturbance, low fringe number, or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
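A minimal sketch of the Gram-Schmidt step for two interferograms is shown below: after crude DC suppression, the second pattern is orthonormalized against the first and the wrapped phase is recovered with arctan2. The least-squares ellipse-fitting refinement described above is deliberately omitted, and the synthetic fringes are illustrative.

```python
# Sketch of the Gram-Schmidt step for two fringe patterns: suppress the DC
# background, orthonormalize the second pattern against the first, and
# recover the wrapped phase with arctan2. The ellipse-fitting refinement
# described above is omitted here.
import numpy as np

def phase_from_two_interferograms(i1, i2):
    u1 = i1 - i1.mean()                        # crude DC (background) removal
    u2 = i2 - i2.mean()
    u1n = u1 / np.linalg.norm(u1)
    proj = np.sum(u2 * u1n)                    # component of u2 along u1
    u2_orth = u2 - proj * u1n                  # Gram-Schmidt orthogonalization
    u2n = u2_orth / np.linalg.norm(u2_orth)
    return np.arctan2(u2n, u1n)                # wrapped phase map (up to sign/offset)

x = np.linspace(0, 4 * np.pi, 256)
phi = np.outer(np.ones(256), x)                # synthetic tilt phase
i1 = 1.0 + 0.8 * np.cos(phi)
i2 = 1.0 + 0.8 * np.cos(phi + 1.2)             # unknown phase shift between frames
wrapped = phase_from_two_interferograms(i1, i2)
```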
Effect of clothing weight on body weight
USDA-ARS?s Scientific Manuscript database
Background: In clinical settings, it is common to measure weight of clothed patients and estimate a correction for the weight of clothing, but we can find no papers in the medical literature regarding the variability in clothing weight with weather, season, and gender. Methods: Fifty adults (35 wom...
NASA Astrophysics Data System (ADS)
Maelger, J.; Reinosa, U.; Serreau, J.
2018-04-01
We extend a previous investigation [U. Reinosa et al., Phys. Rev. D 92, 025021 (2015), 10.1103/PhysRevD.92.025021] of the QCD phase diagram with heavy quarks in the context of background field methods by including the two-loop corrections to the background field effective potential. The nonperturbative dynamics in the pure-gauge sector is modeled by a phenomenological gluon mass term in the Landau-DeWitt gauge-fixed action, which results in an improved perturbative expansion. We investigate the phase diagram at nonzero temperature and (real or imaginary) chemical potential. Two-loop corrections yield an improved agreement with lattice data as compared to the leading-order results. We also compare with the results of nonperturbative continuum approaches. We further study the equation of state as well as the thermodynamic stability of the system at two-loop order. Finally, using simple thermodynamic arguments, we show that the behavior of the Polyakov loops as functions of the chemical potential complies with their interpretation in terms of quark and antiquark free energies.
Erny, Guillaume L; Acunha, Tanize; Simó, Carolina; Cifuentes, Alejandro; Alves, Arminda
2017-04-07
Separation techniques hyphenated with high-resolution mass spectrometry have been a true revolution in analytical separation. Such instruments not only provide unmatched resolution, but they also allow measurement of the peaks' accurate masses, which permits identifying monoisotopic formulae. However, data files can be large, with a major contribution from background noise and background ions. Such unnecessary contributions to the overall signal can hide important features as well as decrease the accuracy of the centroid determination, especially for minor features. Thus, noise and baseline correction can be a valuable pre-processing step. The methodology described here, unlike any other approach, corrects the original dataset with the MS scans recorded as profile spectra. Using urine metabolic studies as examples, we demonstrate that this thorough correction reduces the data complexity by more than 90%. Such correction not only permits improved visualisation of secondary peaks in the chromatographic domain, but it also facilitates the complete assignment of each MS scan, which is invaluable for detecting possible comigrating/coeluting species. Copyright © 2017 Elsevier B.V. All rights reserved.
Peculiar velocity measurement in a clumpy universe
NASA Astrophysics Data System (ADS)
Habibi, Farhang; Baghram, Shant; Tavasoli, Saeed
Aims: In this work, we address the issue of peculiar velocity measurement in a perturbed Friedmann universe using the deviations of the measured luminosity distances of standard candles from those of the background FRW universe. We aim to show and quantify the statement that at intermediate redshifts (0.5 < z < 2), deviations from the background FRW model are not uniquely governed by peculiar velocities: luminosity distances are also modified by gravitational lensing. We also wish to indicate the importance of relativistic calculations for peculiar velocity measurement at all redshifts. Methods: For this task, we discuss the relativistic corrections to luminosity distance and redshift measurements and show the contribution of each of the corrections: the lensing term, the peculiar velocity of the source, and the Sachs-Wolfe effect. We then use the SNe Ia sample of Union 2 to investigate the relativistic effects we consider. Results: We show that using the conventional peculiar velocity method, which ignores the lensing effect, results in an overestimate of the measured peculiar velocities at intermediate redshifts, and we quantify this effect. We show that at low redshifts the lensing effect is negligible compared to the effect of peculiar velocity. From the observational point of view, we show that the uncertainties in the luminosities of the present SNe Ia data prevent us from precisely measuring the peculiar velocities even at low redshifts (z < 0.2).
Correction of Microplate Data from High-Throughput Screening.
Wang, Yuhong; Huang, Ruili
2016-01-01
High-throughput screening (HTS) makes it possible to collect cellular response data from a large number of cell lines and small molecules in a timely and cost-effective manner. The errors and noise in the microplate-formatted data from HTS have unique characteristics, and they can generally be grouped into three categories: run-wise (temporal, across multiple plates), plate-wise (background pattern, single plate), and well-wise (single well). In this chapter, we describe a systematic solution for identifying and correcting such errors and noise, based mainly on pattern recognition and digital signal processing technologies.
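For the plate-wise (background-pattern) category only, one simple illustrative stand-in is a median-polish style row/column correction, sketched below. It is not the chapter's pattern-recognition/DSP method; plate dimensions and the simulated artifact are assumptions.

```python
# Illustrative sketch for the "plate-wise" error category only: remove smooth
# row/column background patterns from a 384-well plate by median polish.
# This is a simple stand-in, not the chapter's pattern-recognition/DSP method.
import numpy as np

def median_polish(plate, n_iter=5):
    resid = plate.astype(float).copy()
    overall = np.median(resid)
    resid -= overall
    for _ in range(n_iter):
        resid -= np.median(resid, axis=1, keepdims=True)   # remove row effects
        resid -= np.median(resid, axis=0, keepdims=True)   # remove column effects
    return resid + overall                                 # corrected plate readings

plate = np.random.normal(1000.0, 50.0, (16, 24))
plate[:, :4] += 200.0                                      # simulated edge/column artifact
corrected = median_polish(plate)
```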
[Evaluation of Sugar Content of Huanghua Pear on Trees by Visible/Near Infrared Spectroscopy].
Liu, Hui-jun; Ying, Yi-bin
2015-11-01
A method of ambient light correction was proposed to evaluate the sugar content of Huanghua pears on the tree by visible/near-infrared diffuse reflectance spectroscopy (Vis/NIRS). Due to strong interference from ambient light, it is difficult to collect useful spectra of pears on the tree. In the field, covering the fruit with a bag that blocks ambient light gives better results, but the efficiency is fairly low; instrument corrections with dark and reference spectra may help to reduce model error, but they cannot effectively eliminate the interference of ambient light. To reduce the effect of ambient light, a shutter was attached to the front of the probe. With the shutter open, spot spectra were obtained, on which the instrument light and the ambient light acted at the same time; with the shutter closed, background spectra were obtained, on which only the ambient light acted. The ambient-light spectrum was then subtracted from the spot spectrum. Prediction models were built by partial least squares (PLS) using data acquired on the tree (before and after ambient light correction) and after harvesting. The correlation coefficients (R) are 0.1, 0.69, and 0.924; the root mean square errors of prediction (SEP) are 0.89 °Brix, 0.42 °Brix, and 0.27 °Brix; and the ratios of the standard deviation (SD) to the SEP (RPD) are 0.79, 1.69, and 2.58, respectively. The results indicate that the background correction method used in the experiment can efficiently reduce the effect of ambient lighting on spectral acquisition of Huanghua pears in the field. This method can be used to collect visible/near-infrared spectra of fruit in the field and may allow visible/near-infrared spectroscopy to be fully exploited in preharvest management and maturity testing of fruit in the field.
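A minimal sketch of the correction and modeling pipeline is given below: the closed-shutter (ambient-only) spectrum is subtracted from the open-shutter spectrum for each fruit, and a PLS model relates the corrected spectra to sugar content. The number of latent variables and the synthetic data are assumptions for illustration.

```python
# Sketch of the ambient-light correction described above: subtract the
# closed-shutter (ambient-only) spectrum from the open-shutter spectrum for
# each fruit, then model sugar content with PLS. Latent-variable count and
# the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def ambient_corrected(open_shutter, closed_shutter):
    return open_shutter - closed_shutter        # removes the ambient-light component

n_samples, n_wavelengths = 80, 512
open_spec = np.random.rand(n_samples, n_wavelengths)
closed_spec = 0.3 * np.random.rand(n_samples, n_wavelengths)
brix = np.random.uniform(9, 14, n_samples)      # reference sugar content (°Brix)

X = ambient_corrected(open_spec, closed_spec)
model = PLSRegression(n_components=8).fit(X, brix)
predicted = model.predict(X)
```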
NASA Technical Reports Server (NTRS)
Nikityuk, B. A.; Kogan, B. I.; Yermolyev, V. A.; Tindare, L. V.
1980-01-01
Tests were conducted on 100 sexually immature inbred August and Wistar male rats in order to determine the effects of hypokinesia, physical load, and phenamine on the liver. Liver weight and linear dimensions fell under hypokinesia; total serum protein decreased, while aldolase, cholesterol, and beta-lipoprotein levels rose. Blood sugar content rose and liver glycogen fell. Differences in these indices between the rat lines were found. Physical loading applied for rehabilitation against the background of hypokinesia diminished, and at times completely prevented, its negative effects. The extent of correction depended on the rat line. Evidence of a genotypic basis for the organism's adaptation to physical load under hypokinesia was found.
Bryan, Sean A; Montroy, Thomas E; Ruhl, John E
2010-11-10
We derive an analytic formula using the Mueller matrix formalism that parameterizes the nonidealities of a half-wave plate (HWP) made from dielectric antireflection-coated birefringent slabs. This model accounts for frequency-dependent effects at normal incidence, including effects driven by the reflections at dielectric boundaries. The model also may be used to guide the characterization of an instrument that uses a HWP. We discuss the coupling of a HWP to different source spectra, and the potential impact of that effect on foreground removal for the SPIDER cosmic microwave background experiment. We also describe a way to use this model in a mapmaking algorithm that fully corrects for HWP nonidealities.
NASA Astrophysics Data System (ADS)
Kruger, Pamela C.; Parsons, Patrick J.
2007-03-01
Excessive exposure to aluminum (Al) can produce serious health consequences in people with impaired renal function, especially those undergoing hemodialysis. Al can accumulate in the brain and in bone, causing dialysis-related encephalopathy and renal osteodystrophy. Thus, dialysis patients are routinely monitored for Al overload, through measurement of their serum Al. Electrothermal atomic absorption spectrometry (ETAAS) is widely used for serum Al determination. Here, we assess the analytical performances of three ETAAS instruments, equipped with different background correction systems and heating arrangements, for the determination of serum Al. Specifically, we compare (1) a Perkin Elmer (PE) Model 3110 AAS, equipped with a longitudinally (end) heated graphite atomizer (HGA) and continuum-source (deuterium) background correction, with (2) a PE Model 4100ZL AAS equipped with a transversely heated graphite atomizer (THGA) and longitudinal Zeeman background correction, and (3) a PE Model Z5100 AAS equipped with a HGA and transverse Zeeman background correction. We were able to transfer the method for serum Al previously established for the Z5100 and 4100ZL instruments to the 3110, with only minor modifications. As with the Zeeman instruments, matrix-matched calibration was not required for the 3110 and, thus, aqueous calibration standards were used. However, the 309.3-nm line was chosen for analysis on the 3110 due to failure of the continuum background correction system at the 396.2-nm line. A small, seemingly insignificant overcorrection error was observed in the background channel on the 3110 instrument at the 309.3-nm line. On the 4100ZL, signal oscillation was observed in the atomization profile. The sensitivity, or characteristic mass (m0), for Al at the 309.3-nm line on the 3110 AAS was found to be 12.1 ± 0.6 pg, compared to 16.1 ± 0.7 pg for the Z5100, and 23.3 ± 1.3 pg for the 4100ZL at the 396.2-nm line. However, the instrumental detection limits (3 SD) for Al were very similar: 3.0, 3.2, and 4.1 μg L-1 for the Z5100, 4100ZL, and 3110, respectively. Serum Al method detection limits (3 SD) were 9.8, 6.9, and 7.3 μg L-1, respectively. Accuracy was assessed using archived serum (and plasma) reference materials from various external quality assessment schemes (EQAS). Values found with all three instruments were within the acceptable EQAS ranges. The data indicate that relatively modest ETAAS instrumentation equipped with continuum background correction is adequate for routine serum Al monitoring.
Correlates of Condom Use among Male High School Students in Nairobi, Kenya
ERIC Educational Resources Information Center
Kabiru, Caroline W.; Orpinas, Pamela
2009-01-01
Background: Correct and consistent condom use is an effective strategy to reduce the risk of sexually transmitted infections (STIs). This study examines sociodemographic, behavioral, and psychosocial characteristics of 3 groups of adolescent males: consistent, sporadic, and non-condom users. Methods: The sample consisted of 931 sexually…
Background: Simulation studies have previously demonstrated that time-series analyses using smoothing splines correctly model null health-air pollution associations. Methods: We repeatedly simulated season, meteorology and air quality for the metropolitan area of Atlanta from cyc...
The Shock Pulse Index and Its Application in the Fault Diagnosis of Rolling Element Bearings
Sun, Peng; Liao, Yuhe; Lin, Jin
2017-01-01
The properties of the time domain parameters of vibration signals have been extensively studied for the fault diagnosis of rolling element bearings (REBs). Parameters like kurtosis and the Envelope Harmonic-to-Noise Ratio are the most widely applied in this field, and some important progress has been made. However, since only one-sided information is contained in these parameters, problems still exist in practice when the collected signals have a complicated structure and/or are contaminated by strong background noise. A new parameter, named the Shock Pulse Index (SPI), is proposed in this paper. It integrates the mutual advantages of both parameters mentioned above and can help effectively identify fault-related impulse components under interference from strong background noise, unrelated harmonic components, and random impulses. The SPI optimizes the parameters of Maximum Correlated Kurtosis Deconvolution (MCKD), which is used to filter the signals under consideration. Finally, the transient information of interest contained in the filtered signal can be highlighted through demodulation with the Teager Energy Operator (TEO). Fault-related impulse components can therefore be extracted accurately. Simulations show that the SPI can correctly indicate the fault impulses under the influence of strong background noise, other harmonic components, and aperiodic impulses, and experimental analyses verify the effectiveness and correctness of the proposed method. PMID:28282883
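The final demodulation step named above, the discrete Teager Energy Operator, can be sketched as follows; the SPI metric itself and the MCKD filter are not implemented here, and the simulated impulse train is illustrative.

```python
# Sketch of the final demodulation step above: the discrete Teager Energy
# Operator applied to an (already MCKD-filtered) signal to highlight
# transient impacts. Only the TEO is shown; SPI and MCKD are not implemented.
import numpy as np

def teager_energy(x):
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]   # psi[n] = x[n]^2 - x[n-1]*x[n+1]
    return psi

fs = 20000
t = np.arange(0, 0.2, 1.0 / fs)
impacts = (np.sin(2 * np.pi * 3000 * t) *
           (np.mod(t, 1.0 / 97.0) < 0.0005))    # ~97 Hz fault impulse train
signal = impacts + 0.2 * np.random.randn(t.size)
envelope_energy = teager_energy(signal)         # its spectrum reveals the 97 Hz fault rate
```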
Holographic corrections to the Veneziano amplitude
NASA Astrophysics Data System (ADS)
Armoni, Adi; Ireson, Edwin
2017-08-01
We propose a holographic computation of the 2 → 2 meson scattering in a curved string background, dual to a QCD-like theory. We recover the Veneziano amplitude and compute a perturbative correction due to the background curvature. The result implies a small deviation from a linear trajectory, which is a requirement of the UV regime of QCD.
Elementary review of electron microprobe techniques and correction requirements
NASA Technical Reports Server (NTRS)
Hart, R. K.
1968-01-01
Report contains requirements for correction of instrumented data on the chemical composition of a specimen, obtained by electron microprobe analysis. A condensed review of electron microprobe techniques is presented, including background material for obtaining X ray intensity data corrections and absorption, atomic number, and fluorescence corrections.
Radiated BPF sound measurement of centrifugal compressor
NASA Astrophysics Data System (ADS)
Ohuchida, S.; Tanaka, K.
2013-12-01
A technique to measure the radiated BPF (blade-passing frequency) sound from an automotive turbocharger compressor impeller is proposed in this paper. Where there are high-level background noises in the measurement environment, it is difficult to discriminate the target component from the background. Since the BPF sound measurement in this study was carried out in a room with such conditions, no discrete BPF peak was initially found in the sound spectrum. Taking its directionality into consideration, a microphone covered with a parabolic cone was selected, and with this technique the discrete BPF peak was clearly observed. Since the level of the measured sound is amplified by the area-integration effect of the cone, a correction was needed to obtain the real level. To do so, sound measurements with and without the parabolic cone were conducted for a fixed source, and their level differences were used as correction factors. Consideration is given to the sound propagation mechanism using the measured BPF as well as the result of a simple model experiment. The present method is generally applicable to sound measurements conducted with a high level of background noise.
Time-of-day Corrections to Aircraft Noise Metrics
NASA Technical Reports Server (NTRS)
Clevenson, S. (Editor); Shepherd, W. T. (Editor)
1980-01-01
The historical and background aspects of time-of-day corrections as well as the evidence supporting these corrections are discussed. Health, welfare, and economic impacts, needs a criteria, and government policy and regulation, are also reported.
NASA Astrophysics Data System (ADS)
Kowalewska, Zofia; Laskowska, Hanna; Gzylewski, Michał
2017-06-01
High-resolution continuum source and line source flame atomic absorption spectrometry (HR-CS FAAS and LS FAAS, respectively) were applied for Pb determination in unleaded aviation or automotive gasoline that was dissolved in methyl-isobutyl ketone. When using HR-CS FAAS, a structured background (BG) was registered in the vicinity of both the 217.001 nm and 283.306 nm Pb lines. In the first case, the BG, which could be attributed to absorption by the OH molecule, directly overlaps with the 217 nm line, but it is of relatively low intensity. For the 283 nm line, the structured BG occurs due to uncompensated absorption by OH molecules present in the flame. BG lines of relatively high intensity are situated at a large distance from the 283 nm line, which enables accurate analysis, not only when using simple variants of HR-CS FAAS but also for LS FAAS with a bandpass of 0.1 nm. The lines of the structured spectrum at 283 nm can have "absorption" (maxima) or "emission" (minima) character. The intensity of the OH spectra can significantly depend on the flame character and composition of the investigated organic solution. The best detection limit for the analytical procedure, which was 0.01 mg L-1 for Pb in the investigated solution, could be achieved using HR-CS FAAS with the 283 nm Pb line, 5 pixels for the analyte line measurement and iterative background correction (IBC). In this case, least squares background correction (LSBC) is not recommended. However, LSBC (available as the "permanent structures" option) would be recommended when using the 217 nm Pb line. In LS FAAS, an additional phenomenon related to the nature of the organic matrix (for example, isooctane or toluene) can play an important role. The effect is of continuous character and, probably due to the simultaneous efficient correction of the continuous background (IBC), it is not observed in HR-CS FAAS. The fact that the effect does not depend on the flame character indicates that it is not radiation scattering. For LS FAAS, the determination of Pb using the 283 nm line, a 0.1 nm bandpass and a fuel-lean flame is strongly recommended. The analysis of certified reference materials, recovery studies and the analysis of real samples with low Pb content supported the satisfactory accuracy of Pb determination in automotive or aviation gasoline when the recommended analytical variants are applied. The studies in this work shed new light on spectral phenomena in air-acetylene flames. The structured background due to absorption by the OH molecules must be taken into account during Pb determination in other materials as well as in some other elemental determinations, especially at low absorbance levels. The usefulness of HR-CS FAAS for revealing and investigating a structured background was demonstrated. HR-CS FAAS does not reveal fully corrected spectral effects with a continuous character, which can be found in LS FAAS.
Inflight characterization and correction of Planck/HFI analog to digital converter nonlinearity
NASA Astrophysics Data System (ADS)
Sauvé, A.; Couchot, F.; Patanchon, G.; Montier, L.
2016-07-01
The Planck satellite, launched in 2009, was designed to observe the anisotropies of the Cosmic Microwave Background (CMB) with unprecedented sensitivity. The analog-to-digital converters of the HFI (High Frequency Instrument) readout electronics had not been properly characterized on the ground, and they have been shown to add a systematic nonlinearity effect of up to 2% of the cosmological signal. This was a limiting factor for CMB science at large angular scales. We present the in-flight analysis and the method used to characterize and correct this effect down to the 0.05% level. We also discuss how to avoid this kind of complex issue in future missions.
NASA Astrophysics Data System (ADS)
He, L.-C.; Diao, L.-J.; Sun, B.-H.; Zhu, L.-H.; Zhao, J.-W.; Wang, M.; Wang, K.
2018-02-01
A Monte Carlo method based on the GEANT4 toolkit has been developed to correct the full-energy peak (FEP) efficiencies of a high-purity germanium (HPGe) detector equipped with a low-background shielding system, and it has been evaluated numerically using summing peaks. It is found that the FEP efficiencies for 60Co, 133Ba and 152Eu can be improved by up to 18% by taking the calculated true summing coincidence factors (TSCFs) into account. Counts of the summing coincidence γ peaks in the spectrum of 152Eu can be well reproduced using the corrected efficiency curve to within an accuracy of 3%.
Research on correction algorithm of laser positioning system based on four quadrant detector
NASA Astrophysics Data System (ADS)
Gao, Qingsong; Meng, Xiangyong; Qian, Weixian; Cai, Guixia
2018-02-01
This paper first introduces the basic principle of the four-quadrant detector, and a laser positioning experimental system is built based on it. In practical applications of a four-quadrant laser positioning system, not only is there interference from background light and detector dark-current noise, but the influence of random noise, system stability, and spot-equivalence error cannot be ignored, so system calibration and correction are very important. This paper analyzes the various factors contributing to the system positioning error and then proposes an algorithm for correcting it. Simulation and experimental results show that the correction algorithm reduces the effect of system error on positioning and improves the positioning accuracy.
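The abstract does not give the algorithm itself, but the standard four-quadrant sum-and-difference position estimate it builds on can be sketched as follows; the quadrant labelling, dark-level subtraction, and polynomial calibration step are assumptions for illustration only.

```python
import numpy as np

def spot_position(qa, qb, qc, qd, dark=0.0, k=1.0):
    """Classic four-quadrant position estimate.

    qa..qd : photocurrents of quadrants A (top-right), B (top-left),
             C (bottom-left), D (bottom-right); `dark` is a per-quadrant
             background/dark-current level and `k` a calibration constant
             (both assumed, to be determined experimentally).
    """
    a, b, c, d = (np.asarray(q, dtype=float) - dark for q in (qa, qb, qc, qd))
    s = a + b + c + d
    x = k * ((a + d) - (b + c)) / s
    y = k * ((a + b) - (c + d)) / s
    return x, y

def calibrate(raw, ref, deg=3):
    """Simple (assumed) polynomial calibration mapping raw estimates onto
    reference positions measured with a translation stage."""
    return np.polyfit(raw, ref, deg)

x_raw, y_raw = spot_position(1.2, 1.0, 0.9, 1.1, dark=0.05)
print(x_raw, y_raw)
```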
Compensating for magnetic field inhomogeneity in multigradient-echo-based MR thermometry.
Simonis, Frank F J; Petersen, Esben T; Bartels, Lambertus W; Lagendijk, Jan J W; van den Berg, Cornelis A T
2015-03-01
MR thermometry (MRT) is a noninvasive method for measuring temperature that can potentially be used for radio frequency (RF) safety monitoring. This application requires measuring absolute temperature. In this study, a multigradient-echo (mGE) MRT sequence was used for that purpose. A drawback of this sequence, however, is that its accuracy is affected by background gradients. In this article, we present a method to minimize this effect and to improve absolute temperature measurements using MRI. By determining background gradients using a B0 map or by combining data acquired with two opposing readout directions, the error can be removed in a homogenous phantom, thus improving temperature maps. All scans were performed on a 3T system using ethylene glycol-filled phantoms. Background gradients were varied, and one phantom was uniformly heated to validate both compensation approaches. Independent temperature recordings were made with optical probes. Errors correlated closely to the background gradients in all experiments. Temperature distributions showed a much smaller standard deviation when the corrections were applied (0.21°C vs. 0.45°C) and correlated well with thermo-optical probes. The corrections offer the possibility to measure RF heating in phantoms more precisely. This allows mGE MRT to become a valuable tool in RF safety assessment. © 2014 Wiley Periodicals, Inc.
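A minimal sketch of the opposed-readout combination, under the assumption that the background-gradient-induced temperature error enters the two acquisitions with opposite sign so that a simple average cancels it (the B0-map route described in the abstract is not shown):

```python
import numpy as np

def combine_opposed_readouts(temp_pos, temp_neg):
    """Average temperature maps acquired with opposite readout polarities.

    Assumes the background-gradient-induced error enters the two maps with
    opposite sign, so the mean cancels it while the true temperature remains.
    Masked voxels (NaNs) propagate unless handled by the caller.
    """
    return 0.5 * (np.asarray(temp_pos, float) + np.asarray(temp_neg, float))

# Toy example: true temperature 37.0 degC with a +/-0.4 degC readout-dependent bias.
print(combine_opposed_readouts(37.4, 36.6))   # -> 37.0
```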
ERIC Educational Resources Information Center
Lousada, M.; Jesus, Luis M. T.; Hall, A.; Joffe, V.
2014-01-01
Background: The effectiveness of two treatment approaches (phonological therapy and articulation therapy) for treatment of 14 children, aged 4;0-6;7 years, with phonologically based speech-sound disorder (SSD) has been previously analysed with severity outcome measures (percentage of consonants correct score, percentage occurrence of phonological…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-20
... the Halibut Act. The IFQ Program's principal management measures, with certain exceptions, were: to... Conservation and Management Act (Magnuson-Stevens Act), and other applicable law. DATES: Effective April 20...: Background The IFQ Program, a limited access management system for the fixed gear Pacific halibut...
Higher derivatives in Type II and M-theory on Calabi-Yau threefolds
NASA Astrophysics Data System (ADS)
Grimm, Thomas W.; Mayer, Kilian; Weissenbacher, Matthias
2018-02-01
The four- and five-dimensional effective actions of Calabi-Yau threefold compactifications are derived with a focus on terms involving up to four space-time derivatives. The starting points for these reductions are the ten- and eleven-dimensional supergravity actions supplemented with the known eight-derivative corrections that have been inferred from Type II string amplitudes. The corrected background solutions are determined and the fluctuations of the Kähler structure of the compact space and the form-field background are discussed. It is concluded that the two-derivative effective actions for these fluctuations only take the expected supergravity form if certain additional ten- and eleven-dimensional higher-derivative terms for the form-fields are included. The main results on the four-derivative terms include a detailed treatment of higher-derivative gravity coupled to Kähler structure deformations. This is supplemented by a derivation of the vector sector in reductions to five dimensions. While the general result is only given as an expansion in the fluctuations, a complete treatment of the one-Kähler modulus case is presented for both Type II theories and M-theory.
Compton suppression gamma-counting: The effect of count rate
Millard, H.T.
1984-01-01
Past research has shown that anti-coincidence shielded Ge(Li) spectrometers enhanced the signal-to-background ratios for gamma-photopeaks, which are situated on high Compton backgrounds. Ordinarily, an anti- or non-coincidence spectrum (A) and a coincidence spectrum (C) are collected simultaneously with these systems. To be useful in neutron activation analysis (NAA), the fractions of the photopeak counts routed to the two spectra must be constant from sample to sample, or variations must be corrected quantitatively. Most Compton suppression counting has been done at low count rate, but in NAA applications, count rates may be much higher. To operate over the wider dynamic range, the effect of count rate on the ratio of the photopeak counts in the two spectra (A/C) was studied. It was found that as the count rate increases, A/C decreases for gammas not coincident with other gammas from the same decay. For gammas coincident with other gammas, A/C increases to a maximum and then decreases. These results suggest that calibration curves are required to correct photopeak areas so quantitative data can be obtained at higher count rates. © 1984.
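One plausible way to implement such calibration curves is sketched below; the measured A/C values, the quadratic fit, and the assumption that the summed A+C photopeak counts are rate-independent after dead-time correction are all illustrative rather than taken from the paper.

```python
import numpy as np

# Illustrative calibration data (assumed values): A/C ratio of a photopeak from a
# non-coincident gamma line, measured at several input count rates (kcps).
rate_kcps = np.array([1.0, 5.0, 10.0, 20.0, 40.0, 80.0])
a_over_c  = np.array([4.02, 3.96, 3.88, 3.70, 3.40, 2.95])

cal = np.polyfit(rate_kcps, a_over_c, deg=2)   # smooth calibration curve (assumed quadratic)

def routed_fraction(rate):
    """Fraction of photopeak counts routed to the anti-coincidence spectrum A."""
    r = np.polyval(cal, rate)
    return r / (1.0 + r)

def correct_photopeak_area(area_A, rate, ref_rate=1.0):
    """Rescale an A-spectrum photopeak area measured at `rate` kcps to the value
    expected at `ref_rate`, assuming the summed A+C photopeak counts are
    rate-independent after dead-time correction (an assumption, not established
    by the paper)."""
    total = area_A / routed_fraction(rate)
    return total * routed_fraction(ref_rate)

print(correct_photopeak_area(1.0e4, rate=40.0))
```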
ERIC Educational Resources Information Center
California State Board of Corrections, Sacramento.
This package consists of an information booklet for job candidates preparing to take California's Corrections Officer Examination and a user's manual intended for those who will administer the examination. The candidate information booklet provides background information about the development of the Corrections Officer Examination, describes its…
Exposed and Embedded Corrections in Aphasia Therapy: Issues of Voice and Identity
ERIC Educational Resources Information Center
Simmons-Mackie, Nina; Damico, Jack S.
2008-01-01
Background: Because communication after the onset of aphasia can be fraught with errors, therapist corrections are pervasive in therapy for aphasia. Although corrections are designed to improve the accuracy of communication, some corrections can have social and emotional consequences during interactions. That is, exposure of errors can potentially…
Halin, Niklas
2016-01-01
The purpose of this study was to investigate the distractive effects of background speech, aircraft noise and road traffic noise on text memory and particularly to examine if displaying the texts in a hard-to-read font can shield against the detrimental effects of these types of background sounds. This issue was addressed in an experiment where 56 students read shorter texts about different classes of fictitious creatures (i.e., animals, fishes, birds, and dinosaurs) against each of the aforementioned background sounds and in silence. For half of the participants the texts were displayed in an easy-to-read font (i.e., Times New Roman) and for the other half in a hard-to-read font (i.e., Haettenschweiler). The dependent measure was the proportion of correct answers on the multiple-choice tests that followed each sound condition. Participants' performance in the easy-to-read font condition was significantly impaired by all three background sound conditions compared to silence. In contrast, there were no effects of the three background sound conditions compared to silence in the hard-to-read font condition. These results suggest that an increase in task demand (by displaying the text in a hard-to-read font) shields against various types of distracting background sounds by promoting a more steadfast locus-of-attention and by reducing the processing of background sound. PMID:27555834
Independent effects of colour on object identification and memory.
Lloyd-Jones, Toby J; Nakabayashi, Kazuyo
2009-02-01
We examined the effects of colour on object identification and memory using a study-test priming procedure with a coloured-object decision task at test (i.e., deciding whether an object is correctly coloured). Objects were selected to have a single associated colour and were either correctly or incorrectly coloured. In addition, object shape and colour were either spatially integrated (i.e., colour fell on the object surface) or spatially separated (i.e., colour formed the background to the object). Transforming the colour of an object from study to test (e.g., from a yellow banana to a purple banana) reduced priming of response times, as compared to when the object was untransformed. This utilization of colour information in object memory was not contingent upon colour falling on the object surface or whether the resulting configuration was of a correctly or incorrectly coloured object. In addition, we observed independent effects of colour on response times, whereby coloured-object decisions were more efficient for correctly than for incorrectly coloured objects but only when colour fell on the object surface. These findings provide evidence for two distinct mechanisms of shape-colour binding in object processing.
NASA Technical Reports Server (NTRS)
Spanner, Michael A.; Pierce, Lars L.; Running, Steven W.; Peterson, David L.
1990-01-01
Consideration is given to the effects of canopy closure, understory vegetation, and background reflectance on the relationship between Landsat TM data and the leaf area index (LAI) of temperate coniferous forests in the western U.S. A methodology for correcting TM data for atmospheric conditions and sun-surface-sensor geometry is discussed. Strong inverse curvilinear relationships were found between coniferous forest LAI and TM bands 3 and 5. It is suggested that these inverse relationships are due to increased reflectance of understory vegetation and background in open stands of lower LAI and decreased reflectance of the overstory in closed canopy stands with higher LAI.
Relativistic electron plasma oscillations in an inhomogeneous ion background
NASA Astrophysics Data System (ADS)
Karmakar, Mithun; Maity, Chandan; Chakrabarti, Nikhil
2018-06-01
The combined effect of relativistic electron mass variation and background ion inhomogeneity on the phase mixing process of large amplitude electron oscillations in cold plasmas has been analyzed by using Lagrangian coordinates. An inhomogeneity in the ion density is assumed to be time-independent but spatially periodic, and a periodic perturbation in the electron density is considered as well. An approximate space-time dependent solution is obtained in the weakly relativistic limit by employing the Bogolyubov and Krylov method of averaging. It is shown that the phase mixing process of relativistically corrected electron oscillations is strongly influenced by the presence of a pre-existing ion density ripple in the plasma background.
Parallel Low-Loss Measurement of Multiple Atomic Qubits
NASA Astrophysics Data System (ADS)
Kwon, Minho; Ebert, Matthew F.; Walker, Thad G.; Saffman, M.
2017-11-01
We demonstrate low-loss measurement of the hyperfine ground state of rubidium atoms by state dependent fluorescence detection in a dipole trap array of five sites. The presence of atoms and their internal states are minimally altered by utilizing circularly polarized probe light and a strictly controlled quantization axis. We achieve mean state detection fidelity of 97% without correcting for imperfect state preparation or background losses, and 98.7% when corrected. After state detection and correction for background losses, the probability of atom loss due to the state measurement is <2 % and the initial hyperfine state is preserved with >98 % probability.
Szmacinski, Henryk; Toshchakov, Vladimir; Lakowicz, Joseph R.
2014-01-01
Abstract. Protein-protein interactions in cells are often studied using fluorescence resonance energy transfer (FRET) phenomenon by fluorescence lifetime imaging microscopy (FLIM). Here, we demonstrate approaches to the quantitative analysis of FRET in cell population in a case complicated by a highly heterogeneous donor expression, multiexponential donor lifetime, large contribution of cell autofluorescence, and significant presence of unquenched donor molecules that do not interact with the acceptor due to low affinity of donor-acceptor binding. We applied a multifrequency phasor plot to visualize FRET FLIM data, developed a method for lifetime background correction, and performed a detailed time-resolved analysis using a biexponential model. These approaches were applied to study the interaction between the Toll Interleukin-1 receptor (TIR) domain of Toll-like receptor 4 (TLR4) and the decoy peptide 4BB. TLR4 was fused to Cerulean fluorescent protein (Cer) and 4BB peptide was labeled with Bodipy TMRX (BTX). Phasor displays for multifrequency FLIM data are presented. The analytical procedure for lifetime background correction is described and the effect of correction on FLIM data is demonstrated. The absolute FRET efficiency was determined based on the phasor plot display and multifrequency FLIM data analysis. The binding affinity between TLR4-Cer (donor) and decoy peptide 4BB-BTX (acceptor) was estimated in a heterogeneous HeLa cell population. PMID:24770662
Background information: bias reduction = ( |domain-averaged ensemble mean bias| − |domain-averaged bias-corrected ensemble mean bias| ) / |domain-averaged bias-corrected ensemble mean bias|
A semi-analytic dynamical friction model for cored galaxies
NASA Astrophysics Data System (ADS)
Petts, J. A.; Read, J. I.; Gualandris, A.
2016-11-01
We present a dynamical friction model based on Chandrasekhar's formula that reproduces the fast inspiral and stalling experienced by satellites orbiting galaxies with a large constant density core. We show that the fast inspiral phase does not owe to resonance. Rather, it owes to the background velocity distribution function for the constant density core being dissimilar from the usually assumed Maxwellian distribution. Using the correct background velocity distribution function and our semi-analytic model from previous work, we are able to correctly reproduce the infall rate in both cored and cusped potentials. However, in the case of large cores, our model is no longer able to correctly capture core-stalling. We show that this stalling owes to the tidal radius of the satellite approaching the size of the core. By switching off dynamical friction when rt(r) = r (where rt is the tidal radius at the satellite's position), we arrive at a model which reproduces the N-body results remarkably well. Since the tidal radius can be very large for constant density background distributions, our model recovers the result that stalling can occur for Ms/Menc ≪ 1, where Ms and Menc are the mass of the satellite and the enclosed galaxy mass, respectively. Finally, we include the contribution to dynamical friction that comes from stars moving faster than the satellite. This next-to-leading order effect becomes the dominant driver of inspiral near the core region, prior to stalling.
Blindness to background: an inbuilt bias for visual objects.
O'Hanlon, Catherine G; Read, Jenny C A
2017-09-01
Sixty-eight 2- to 12-year-olds and 30 adults were shown colorful displays on a touchscreen monitor and trained to point to the location of a named color. Participants located targets near-perfectly when presented with four abutting colored patches. When presented with three colored patches on a colored background, toddlers failed to locate targets in the background. Eye tracking demonstrated that the effect was partially mediated by a tendency not to fixate the background. However, the effect was abolished when the targets were named as nouns, whilst the change to nouns had little impact on eye movement patterns. Our results imply a powerful, inbuilt tendency to attend to objects, which may slow the development of color concepts and acquisition of color words. A video abstract of this article can be viewed at: https://youtu.be/TKO1BPeAiOI. [Correction added on 27 January 2017, after first online publication: The video abstract link was added.]. © 2016 John Wiley & Sons Ltd.
Cloud and aerosol optical depths
NASA Technical Reports Server (NTRS)
Pueschel, R. F.; Russell, P. B.; Ackerman, Thomas P.; Colburn, D. C.; Wrigley, R. C.; Spanner, M. A.; Livingston, J. M.
1988-01-01
An airborne Sun photometer was used to measure optical depths in clear atmospheres between the appearances of broken stratus clouds, and the optical depths in the vicinity of smokes. Results show that (human) activities can alter the chemical and optical properties of background atmospheres to affect their spectral optical depths. Effects of water vapor adsorption on aerosol optical depths are apparent, based on data of the water vapor absorption band centered around 940 nm. Smoke optical depths show increases above the background atmosphere by up to two orders of magnitude. When the total optical depths measured through clouds were corrected for molecular scattering and gaseous absorption by subtracting the total optical depths measured through the background atmosphere, the resultant values are lower than those of the background aerosol at short wavelengths. The spectral dependence of these cloud optical depths is neutral, however, in contrast to that of the background aerosol or the molecular atmosphere.
Saur, Sigrun; Frengen, Jomar
2008-07-01
Film dosimetry using radiochromic EBT film in combination with a flatbed charge coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film is dependent on the film's orientation with respect to the scanner, correction matrices for both landscape-oriented and portrait-oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16 x 16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results. For the purpose of dosimetric verification, the calculated dose distribution can be compared with the film-measured dose distribution using a dose constraint of 4% (relative to the measured dose) for doses between 1 and 3 Gy. At lower doses, the dose constraint must be relaxed.
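A minimal sketch of the subtraction-based correction described above: per-pixel correction matrices measured at the nine calibration dose levels are interpolated to the locally measured dose and subtracted. The array layout and the linear interpolation between dose levels are assumptions for illustration.

```python
import numpy as np

# corr_stack: correction matrices (dose error vs. scanner position) measured at the
# nine calibration dose levels, shape (9, ny, nx); dose_levels in Gy. Both are assumed
# to have been built from scans of uniformly irradiated films.
dose_levels = np.array([0.08, 0.25, 0.50, 0.80, 1.20, 1.60, 2.00, 2.50, 2.93])
ny, nx = 64, 64
corr_stack = np.zeros((dose_levels.size, ny, nx))   # placeholder data

def correct_film_dose(dose_map, dose_levels=dose_levels, corr_stack=corr_stack):
    """Subtract a dose- and position-dependent correction matrix from a film dose map.

    For each pixel, the correction is linearly interpolated between the two
    calibration dose levels bracketing the locally measured dose.
    """
    dose_map = np.asarray(dose_map, float)
    d = np.clip(dose_map, dose_levels[0], dose_levels[-1])
    idx = np.clip(np.searchsorted(dose_levels, d) - 1, 0, dose_levels.size - 2)
    d0, d1 = dose_levels[idx], dose_levels[idx + 1]
    w = (d - d0) / (d1 - d0)
    iy, ix = np.indices(dose_map.shape)
    corr = (1 - w) * corr_stack[idx, iy, ix] + w * corr_stack[idx + 1, iy, ix]
    return dose_map - corr

print(correct_film_dose(np.full((ny, nx), 2.0)).shape)
```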
NASA Astrophysics Data System (ADS)
Pietrzyk, Mariusz W.; Manning, David; Donovan, Tim; Dix, Alan
2010-02-01
Aim: To investigate the impact on visual sampling strategy and pulmonary nodule recognition of image-based properties of background locations in dwelled regions where the first overt decision was made. . Background: Recent studies in mammography show that the first overt decision (TP or FP) has an influence on further image reading including the correctness of the following decisions. Furthermore, the correlation between the spatial frequency properties of the local background following decision sites and the first decision correctness has been reported. Methods: Subjects with different radiological experience were eye tracked during detection of pulmonary nodules from PA chest radiographs. Number of outcomes and the overall quality of performance are analysed in terms of the cases where correct or incorrect decisions were made. JAFROC methodology is applied. The spatial frequency properties of selected local backgrounds related to a certain decisions were studied. ANOVA was used to compare the logarithmic values of energy carried by non redundant stationary wavelet packet coefficients. Results: A strong correlation has been found between the number of TP as a first decision and the JAFROC score (r = 0.74). The number of FP as a first decision was found negatively correlated with JAFROC (r = -0.75). Moreover, the differential spatial frequency profiles outcomes depend on the first choice correctness.
Wagner, John H; Miskelly, Gordon M
2003-05-01
The combination of photographs taken at wavelengths at and bracketing the peak of a narrow absorbance band can lead to enhanced visualization of the substance causing the narrow absorbance band. This concept can be used to detect putative bloodstains by division of a linear photographic image taken at or near 415 nm by an image obtained by averaging linear photographs taken at or near 395 and 435 nm. Nonlinear images can also be background corrected by substituting subtraction for the division. This paper details experimental applications and limitations of this technique, including wavelength selection of the illuminant and at the camera. Characterization of a digital camera to be used in such a study is also detailed. Detection limits for blood using the three-wavelength correction method under optimum conditions have been determined to be as low as a 1 in 900 dilution, although on strongly patterned substrates blood diluted more than twenty-fold is difficult to detect. Use of only the 435 nm photograph to estimate the background in the 415 nm image led to a twofold improvement in detection limit on unpatterned substrates compared with the three-wavelength method with the particular camera and lighting system used, but it gave poorer background correction on patterned substrates.
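The three-wavelength enhancement lends itself to a very small sketch: divide the linear 415 nm image by the mean of the 395 and 435 nm images, or subtract for nonlinear images. Variable names and the toy values below are placeholders.

```python
import numpy as np

def enhance_blood(img_415, img_395, img_435, mode="divide", eps=1e-6):
    """Three-wavelength enhancement of the haemoglobin absorption band near 415 nm.

    The background estimate is the average of the flanking 395 nm and 435 nm
    images. Use mode="divide" for linear (raw) images and mode="subtract"
    as the analogue for nonlinear (e.g. gamma-encoded) images.
    """
    i415 = np.asarray(img_415, float)
    bg = 0.5 * (np.asarray(img_395, float) + np.asarray(img_435, float))
    if mode == "divide":
        return i415 / (bg + eps)      # values < 1 indicate extra 415 nm absorption
    return i415 - bg

# Toy 2x2 example: one "stain" pixel absorbs strongly at 415 nm only.
i415 = np.array([[0.80, 0.35], [0.82, 0.81]])
i395 = np.array([[0.82, 0.80], [0.83, 0.82]])
i435 = np.array([[0.81, 0.79], [0.82, 0.80]])
print(enhance_blood(i415, i395, i435))
```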
Lead in Drinking Water in Schools and Non-Residential Buildings.
ERIC Educational Resources Information Center
Environmental Protection Agency, Washington, DC.
This manual demonstrates how drinking water in schools and non-residential buildings can be tested for lead and how contamination problems can be corrected when found. The manual also provides background information concerning the sources and health effects of lead, how lead gets into drinking water, how lead in drinking water is regulated, and…
Micro-Pulse Lidar Signals: Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Welton, Ellsworth J.; Campbell, James R.; Starr, David OC. (Technical Monitor)
2002-01-01
Micro-pulse lidar (MPL) systems are small, autonomous, eye-safe lidars used for continuous observations of the vertical distribution of cloud and aerosol layers. Since the construction of the first MPL in 1993, procedures have been developed to correct for various instrument effects present in MPL signals. The primary instrument effects include afterpulse, laser-detector cross-talk, and overlap (poor focusing at near range, i.e., less than 6 km). The accurate correction of both afterpulse and overlap effects is required to study both clouds and aerosols. Furthermore, the outgoing energy of the laser pulses and the statistical uncertainty of the MPL detector must also be correctly determined in order to assess the accuracy of MPL observations. The uncertainties associated with the afterpulse, overlap, pulse energy, detector noise, and all remaining quantities affecting measured MPL signals are determined in this study. The uncertainties are propagated through the entire MPL correction process to give a net uncertainty on the final corrected MPL signal. The results show that in the near range, the overlap uncertainty dominates. At altitudes above the overlap region, the dominant source of uncertainty is caused by uncertainty in the pulse energy. However, if the laser energy is low, then during mid-day, high solar background levels can significantly reduce the signal-to-noise of the detector. In such a case, the statistical uncertainty of the detector count rate becomes dominant at altitudes above the overlap region.
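A simplified sketch of the correction chain and quadrature error propagation is given below; the functional form signal = (raw − background − afterpulse) · r² / (overlap · energy) is an assumption standing in for the paper's full equations, as are all numerical values.

```python
import numpy as np

def corrected_signal(raw, bg, ap, overlap, energy, r_km):
    """Simplified MPL-style corrected signal:
    (raw - background - afterpulse) * r^2 / (overlap * pulse_energy).
    This functional form is an assumption, not the paper's exact equation set."""
    return (raw - bg - ap) * r_km**2 / (overlap * energy)

def corrected_signal_sigma(raw, bg, ap, overlap, energy, r_km,
                           s_raw, s_bg, s_ap, s_overlap, s_energy):
    """Propagate independent 1-sigma uncertainties through the correction in quadrature."""
    s = corrected_signal(raw, bg, ap, overlap, energy, r_km)
    var = (r_km**2 / (overlap * energy))**2 * (s_raw**2 + s_bg**2 + s_ap**2)
    var += (s / overlap)**2 * s_overlap**2        # d s / d overlap = -s / overlap
    var += (s / energy)**2 * s_energy**2          # d s / d energy  = -s / energy
    return np.sqrt(var)

print(corrected_signal_sigma(raw=5.0, bg=1.2, ap=0.3, overlap=0.8, energy=6.0, r_km=2.0,
                             s_raw=0.1, s_bg=0.05, s_ap=0.02, s_overlap=0.08, s_energy=0.3))
```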
A background correction algorithm for Van Allen Probes MagEIS electron flux measurements
Claudepierre, S. G.; O'Brien, T. P.; Blake, J. B.; ...
2015-07-14
We describe an automated computer algorithm designed to remove background contamination from the Van Allen Probes Magnetic Electron Ion Spectrometer (MagEIS) electron flux measurements. We provide a detailed description of the algorithm with illustrative examples from on-orbit data. We find two primary sources of background contamination in the MagEIS electron data: inner zone protons and bremsstrahlung X-rays generated by energetic electrons interacting with the spacecraft material. Bremsstrahlung X-rays primarily produce contamination in the lower energy MagEIS electron channels (~30–500 keV) and in regions of geospace where multi-MeV electrons are present. Inner zone protons produce contamination in all MagEIS energy channels at roughly L < 2.5. The background-corrected MagEIS electron data produce a more accurate measurement of the electron radiation belts, as most earlier measurements suffer from unquantifiable and uncorrectable contamination in this harsh region of the near-Earth space environment. These background-corrected data will also be useful for spacecraft engineering purposes, providing ground truth for the near-Earth electron environment and informing the next generation of spacecraft design models (e.g., AE9).
Akdeniz, Ceren; Tost, Heike; Streit, Fabian; Haddad, Leila; Wüst, Stefan; Schäfer, Axel; Schneider, Michael; Rietschel, Marcella; Kirsch, Peter; Meyer-Lindenberg, Andreas
2014-06-01
Relative risk for the brain disorder schizophrenia is more than doubled in ethnic minorities, an effect that is evident across countries and linked to socially relevant cues such as skin color, making ethnic minority status a well-established social environmental risk factor. Pathoepidemiological models propose a role for chronic social stress and perceived discrimination for mental health risk in ethnic minorities, but the neurobiology is unexplored. To study neural social stress processing, using functional magnetic resonance imaging, and associations with perceived discrimination in ethnic minority individuals. Cross-sectional design in a university setting using 3 validated paradigms to challenge neural social stress processing and, to probe for specificity, emotional and cognitive brain functions. Healthy participants included those with German lineage (n = 40) and those of ethnic minority (n = 40) from different ethnic backgrounds matched for sociodemographic, psychological, and task performance characteristics. Control comparisons examined stress processing with matched ethnic background of investigators (23 Turkish vs 23 German participants) and basic emotional and cognitive tasks (24 Turkish vs 24 German participants). Blood oxygenation level-dependent response, functional connectivity, and psychological and physiological measures. There were significant increases in heart rate (P < .001), subjective emotional response (self-related emotions, P < .001; subjective anxiety, P = .006), and salivary cortisol level (P = .004) during functional magnetic resonance imaging stress induction. Ethnic minority individuals had significantly higher perceived chronic stress levels (P = .02) as well as increased activation (family-wise error-corrected [FWE] P = .005, region of interest corrected) and increased functional connectivity (PFWE = .01, region of interest corrected) of perigenual anterior cingulate cortex (ACC). The effects were specific to stress and not explained by a social distance effect. Ethnic minority individuals had significant correlations between perceived group discrimination and activation in perigenual ACC (PFWE = .001, region of interest corrected) and ventral striatum (PFWE = .02, whole brain corrected) and mediation of the relationship between perceived discrimination and perigenual ACC-dorsal ACC connectivity by chronic stress (P < .05). Epidemiologists proposed a causal role of social-evaluative stress, but the neural processes that could mediate this susceptibility effect were unknown. Our data demonstrate the potential of investigating associations from epidemiology with neuroimaging, suggest brain effects of social marginalization, and highlight a neural system in which environmental and genetic risk factors for mental illness may converge.
Publisher Correction: Cluster richness-mass calibration with cosmic microwave background lensing
NASA Astrophysics Data System (ADS)
Geach, James E.; Peacock, John A.
2018-03-01
Owing to a technical error, the `Additional information' section of the originally published PDF version of this Letter incorrectly gave J.A.P. as the corresponding author; it should have read J.E.G. This has now been corrected. The HTML version is correct.
NASA Astrophysics Data System (ADS)
Nara, H.; Tanimoto, H.; Tohjima, Y.; Mukai, H.; Nojiri, Y.; Katsumata, K.; Rella, C. W.
2012-11-01
We examined potential interferences from water vapor and atmospheric background gases (N2, O2, and Ar), and biases by isotopologues of target species, on accurate measurement of atmospheric CO2 and CH4 by means of wavelength-scanned cavity ring-down spectroscopy (WS-CRDS). Changes of the background gas mole fractions in the sample air substantially impacted the CO2 and CH4 measurements: variation of CO2 and CH4 due to relative increase of each background gas increased as Ar < O2 < N2, suggesting similar relation for the pressure-broadening effects (PBEs) among the background gas. The pressure-broadening coefficients due to variations in O2 and Ar for CO2 and CH4 are empirically determined from these experimental results. Calculated PBEs using the pressure-broadening coefficients are linearly correlated with the differences between the mole fractions of O2 and Ar and their ambient abundances. Although the PBEs calculation showed that impact of natural variation of O2 is negligible on the CO2 and CH4 measurements, significant bias was inferred for the measurement of synthetic standard gases. For gas standards balanced with purified air, the PBEs were estimated to be marginal (up to 0.05 ppm for CO2 and 0.01 ppb for CH4) although the PBEs were substantial (up to 0.87 ppm for CO2 and 1.4 ppb for CH4) for standards balanced with synthetic air. For isotopic biases on CO2 measurements, we compared experimental results and theoretical calculations, which showed excellent agreement within their uncertainty. We derived instrument-specific water correction functions empirically for three WS-CRDS instruments (Picarro EnviroSense 3000i, G-1301, and G-2301), and evaluated the transferability of the water correction function from G-1301 among these instruments. Although the transferability was not proven, no significant difference was found in the water vapor correction function for the investigated WS-CRDS instruments as well as the instruments reported in the past studies within the typical analytical precision at sufficiently low water concentrations (<0.7% for CO2 and <0.6% for CH4). For accurate measurements of CO2 and CH4 in ambient air, we concluded that WS-CRDS measurements should be performed under complete dehumidification of air samples, or moderate dehumidification followed by application of a water vapor correction function, along with calibration by natural air-based standard gases or purified air-balanced synthetic standard gases with the isotopic correction.
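Empirical water-vapour corrections for CRDS analyzers are commonly written as X_dry = X_wet / (1 + a·H + b·H²); a minimal sketch with placeholder coefficients (not those determined in this study) is shown below.

```python
def dry_mole_fraction(x_wet, h2o_percent, a, b):
    """Empirical water-vapour correction for a CRDS measurement.

    x_wet       : measured (wet) CO2 [ppm] or CH4 [ppb]
    h2o_percent : reported water vapour [%]
    a, b        : instrument-specific dilution/pressure-broadening coefficients,
                  determined empirically; the values used below are placeholders
                  and must be replaced by the user's own calibration.
    """
    return x_wet / (1.0 + a * h2o_percent + b * h2o_percent**2)

# Example with purely illustrative coefficients (not from this study):
print(dry_mole_fraction(395.0, 0.5, a=-0.012, b=-2.0e-4))
```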
On the Limitations of Variational Bias Correction
NASA Technical Reports Server (NTRS)
Moradi, Isaac; Mccarty, Will; Gelaro, Ronald
2018-01-01
Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, forward operator error, and observation error, so all these errors are summed together and counted as observation error. We identify some sources of observation error (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
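A schematic of the predictor-based bias model underlying variational bias correction, fitted here offline by least squares on observation-minus-background departures; in an operational system the coefficients are updated inside the variational analysis, which this sketch does not attempt. The predictors are assumed for illustration.

```python
import numpy as np

def estimate_bias_coefficients(omb, predictors):
    """Fit bias = sum_i beta_i * p_i to O-B departures by least squares.

    omb        : (nobs,) observation-minus-background departures
    predictors : (nobs, npred) predictor matrix (e.g. constant, scan angle,
                 layer thicknesses); the choice of predictors is an assumption.
    """
    beta, *_ = np.linalg.lstsq(predictors, omb, rcond=None)
    return beta

def bias_corrected(obs, predictors, beta):
    """Remove the modelled bias from the observations."""
    return obs - predictors @ beta

# Toy example: constant predictor plus one synthetic predictor.
rng = np.random.default_rng(0)
p = np.column_stack([np.ones(500), rng.normal(size=500)])
omb = p @ np.array([0.8, 0.3]) + rng.normal(scale=0.2, size=500)
print(estimate_bias_coefficients(omb, p))
```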
Kong, Yong-Ku; Lee, Inseok; Jung, Myung-Chul; Song, Young-Woong
2011-05-01
This study evaluated the effects of age (20s and 60s), viewing distance (50 cm, 200 cm), display type (paper, monitor), font type (Gothic, Ming), colour contrast (black letters on white background, white letters on black background) and number of syllables (one, two) on the legibility of Korean characters by using the four legibility measures (minimum letter size for 100% correctness, maximum letter size for 0% correctness, minimum letter size for the least discomfort and maximum letter size for the most discomfort). Ten subjects in each age group read the four letters presented on a slide (letter size varied from 80 pt to 2 pt). Subjects also subjectively rated the reading discomfort of the letters on a 4-point scale (1 = no discomfort, 4 = most discomfort). According to the ANOVA procedure, age, viewing distance and font type significantly affected the four dependent variables (p < 0.05), while the main effect of colour contrast was not statistically significant for any measures. Two-syllable letters had smaller letters than one-syllable letters in the two correctness measures. The younger group could see letter sizes two times smaller than the old group could and the viewing distance of 50 cm showed letters about three times smaller than those at a 200 cm viewing distance. The Gothic fonts were smaller than the Ming fonts. Monitors were smaller than paper for correctness and maximum letter size for the most discomfort. From a comparison of the results for correctness and discomfort, people generally preferred larger letter sizes to those that they could read. The findings of this study may provide basic information for setting a global standard of letter size or font type to improve the legibility of characters written in Korean. STATEMENT OF RELEVANCE: Results obtained in this study will provide basic information and guidelines for setting standards of letter size and font type to improve the legibility of characters written in Korean. Also, the results might offer useful information for people who are working on design of visual displays.
76 FR 56949 - Biomass Crop Assistance Program; Corrections
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-15
.... ACTION: Interim rule; correction. SUMMARY: The Commodity Credit Corporation (CCC) is amending the Biomass... funds in favor of the ``project area'' portion of BCAP. CCC is also correcting errors in the regulation... INFORMATION: Background CCC published a final rule on October 27, 2010 (75 FR 66202-66243) implementing BCAP...
Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M
2010-03-15
A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
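A minimal sketch of the reference-selection idea: for each sample spectrum, pick the reference-run spectrum that is closest in a point-to-point sense over a chosen spectral window and subtract it. The squared-difference metric and the window are assumptions, not necessarily those of the published algorithm.

```python
import numpy as np

def background_correct(sample_spectra, reference_spectra, window):
    """For each sample spectrum, subtract the best-matching reference spectrum.

    sample_spectra    : (n_sample, n_wavenumber) absorbance spectra of the run
    reference_spectra : (n_ref, n_wavenumber) spectra from a blank gradient run
    window            : slice, boolean mask, or index array selecting the spectral
                        region used for the point-to-point comparison (e.g. an
                        eluent band free of analyte absorption; an assumption here).
    """
    corrected = np.empty_like(sample_spectra, dtype=float)
    best_idx = np.empty(len(sample_spectra), dtype=int)
    for i, s in enumerate(sample_spectra):
        # sum of squared point-to-point differences inside the comparison window
        d = np.sum((reference_spectra[:, window] - s[window]) ** 2, axis=1)
        j = int(np.argmin(d))
        best_idx[i] = j
        corrected[i] = s - reference_spectra[j]
    return corrected, best_idx

# Toy usage with random data
rng = np.random.default_rng(1)
ref = rng.normal(size=(50, 400))
sam = ref[10] + 0.01 * rng.normal(size=(5, 400))
corr, idx = background_correct(sam, ref, slice(100, 200))
print(idx)
```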
COBE ground segment gyro calibration
NASA Technical Reports Server (NTRS)
Freedman, I.; Kumar, V. K.; Rae, A.; Venkataraman, R.; Patt, F. S.; Wright, E. L.
1991-01-01
Discussed here is the calibration of the scale factors and rate biases for the Cosmic Background Explorer (COBE) spacecraft gyroscopes, with the emphasis on the adaptation for COBE of an algorithm previously developed for the Solar Maximum Mission. Detailed choice of parameters, convergence, verification, and use of the algorithm in an environment where the reference attitudes are determined from the Sun, Earth, and star observations (via the Diffuse Infrared Background Experiment, DIRBE) are considered. Results of some recent experiments are given. These include tests where the gyro rate data are corrected for the effect of the gyro baseplate temperature on the spacecraft electronics.
Neyman Pearson detection of K-distributed random variables
NASA Astrophysics Data System (ADS)
Tucker, J. Derek; Azimi-Sadjadi, Mahmood R.
2010-04-01
In this paper a new detection method for sonar imagery is developed for K-distributed background clutter. The equation for the log-likelihood is derived and compared to the corresponding counterparts derived under the Gaussian and Rayleigh assumptions. Test results of the proposed method on a data set of synthetic underwater sonar images are also presented. This database contains images with targets of different shapes inserted into backgrounds generated using a correlated K-distributed model. Results illustrating the effectiveness of the K-distributed detector are presented in terms of probability of detection, false alarm, and correct classification rates for various bottom clutter scenarios.
Negative Refraction Angular Characterization in One-Dimensional Photonic Crystals
Lugo, Jesus Eduardo; Doti, Rafael; Faubert, Jocelyn
2011-01-01
Background Photonic crystals are artificial structures that have periodic dielectric components with different refractive indices. Under certain conditions, they abnormally refract the light, a phenomenon called negative refraction. Here we experimentally characterize negative refraction in a one dimensional photonic crystal structure; near the low frequency edge of the fourth photonic bandgap. We compare the experimental results with current theory and a theory based on the group velocity developed here. We also analytically derived the negative refraction correctness condition that gives the angular region where negative refraction occurs. Methodology/Principal Findings By using standard photonic techniques we experimentally determined the relationship between incidence and negative refraction angles and found the negative refraction range by applying the correctness condition. In order to compare both theories with experimental results an output refraction correction was utilized. The correction uses Snell's law and an effective refractive index based on two effective dielectric constants. We found good agreement between experiment and both theories in the negative refraction zone. Conclusions/Significance Since both theories and the experimental observations agreed well in the negative refraction region, we can use both negative refraction theories plus the output correction to predict negative refraction angles. This can be very useful from a practical point of view for space filtering applications such as a photonic demultiplexer or for sensing applications. PMID:21494332
Scalar and tensor perturbations in loop quantum cosmology: high-order corrections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Tao; Wang, Anzhong; Wu, Qiang
2015-10-01
Loop quantum cosmology (LQC) provides promising resolutions to the trans-Planckian issue and initial singularity arising in the inflationary models of general relativity. In general, due to different quantization approaches, LQC involves two types of quantum corrections, the holonomy and inverse-volume, to both the cosmological background evolution and perturbations. In this paper, using the third-order uniform asymptotic approximations, we derive explicitly the observational quantities of the slow-roll inflation in the framework of LQC with these quantum corrections. We calculate the power spectra, spectral indices, and running of the spectral indices for both scalar and tensor perturbations, whereby the tensor-to-scalar ratio is obtained. We expand all the observables at the time when the inflationary mode crosses the Hubble horizon. As the upper error bounds for the uniform asymptotic approximation at the third order are ≲ 0.15%, these results represent the most accurate results obtained so far in the literature. It is also shown that with the inverse-volume corrections, both scalar and tensor spectra exhibit a deviation from the usual shape at large scales. Then, using the Planck, BAO and SN data we obtain new constraints on quantum gravitational effects from LQC corrections, and find that such effects could be within the detection of the forthcoming experiments.
Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation
NASA Astrophysics Data System (ADS)
Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong
2018-04-01
The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.
Nonrelativistic fluids on scale covariant Newton-Cartan backgrounds
NASA Astrophysics Data System (ADS)
Mitra, Arpita
2017-12-01
The nonrelativistic covariant framework for fields is extended to investigate fields and fluids on scale covariant curved backgrounds. The scale covariant Newton-Cartan background is constructed using the localization of space-time symmetries of nonrelativistic fields in flat space. Following this, we provide a Weyl covariant formalism which can be used to study scale invariant fluids. By considering ideal fluids as an example, we describe its thermodynamic and hydrodynamic properties and explicitly demonstrate that it satisfies the local second law of thermodynamics. As a further application, we consider the low energy description of Hall fluids. Specifically, we find that the gauge fields for scale transformations lead to corrections of the Wen-Zee and Berry phase terms contained in the effective action.
Recovery of Background Structures in Nanoscale Helium Ion Microscope Imaging.
Carasso, Alfred S; Vladár, András E
2014-01-01
This paper discusses a two step enhancement technique applicable to noisy Helium Ion Microscope images in which background structures are not easily discernible due to a weak signal. The method is based on a preliminary adaptive histogram equalization, followed by 'slow motion' low-exponent Lévy fractional diffusion smoothing. This combined approach is unexpectedly effective, resulting in a companion enhanced image in which background structures are rendered much more visible, and noise is significantly reduced, all with minimal loss of image sharpness. The method also provides useful enhancements of scanning charged-particle microscopy images obtained by composing multiple drift-corrected 'fast scan' frames. The paper includes software routines, written in Interactive Data Language (IDL),(1) that can perform the above image processing tasks.
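A rough sketch of the two-step enhancement, assuming CLAHE (scikit-image) for the adaptive histogram equalization and an FFT-domain kernel exp(−t·|k|^α) with small α for the Lévy fractional diffusion smoothing; the parameter values are illustrative, not the paper's.

```python
import numpy as np
from skimage import exposure

def levy_fractional_smooth(img, alpha=0.5, t=2.0):
    """'Slow motion' fractional diffusion: multiply the image spectrum by
    exp(-t * |k|**alpha). A small exponent alpha (< 1) gives the heavy-tailed
    Levy-type smoothing kernel."""
    img = np.asarray(img, float)
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    k = np.sqrt(kx**2 + ky**2)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.exp(-t * k**alpha)))

def enhance_him_image(img, clip_limit=0.02, alpha=0.5, t=2.0):
    """Step 1: adaptive histogram equalization (CLAHE); step 2: Levy smoothing.
    clip_limit, alpha and t are illustrative values, not those of the paper."""
    eq = exposure.equalize_adapthist(img, clip_limit=clip_limit)
    return levy_fractional_smooth(eq, alpha=alpha, t=t)

noisy = np.random.default_rng(2).random((256, 256))
print(enhance_him_image(noisy).shape)
```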
CMB-lensing beyond the Born approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marozzi, Giovanni; Fanizza, Giuseppe; Durrer, Ruth
2016-09-01
We investigate the weak lensing corrections to the cosmic microwave background temperature anisotropies considering effects beyond the Born approximation. To this aim, we use the small deflection angle approximation to connect the lensed and unlensed power spectra, via expressions for the deflection angles up to third order in the gravitational potential. While the small deflection angle approximation has the drawback of being reliable only for multipoles ℓ ≲ 2500, it allows us to consistently take into account the non-Gaussian nature of cosmological perturbation theory beyond the linear level. The contribution to the lensed temperature power spectrum coming from the non-Gaussian nature of the deflection angle at higher order is a new effect which has not been taken into account in the literature so far. It turns out to be the leading contribution among the post-Born lensing corrections. On the other hand, the effect is smaller than corrections coming from non-linearities in the matter power spectrum, and its imprint on CMB lensing is too small to be seen in present experiments.
Pregnancy and Parenting Support for Incarcerated Women: Lessons Learned
Shlafer, Rebecca J.; Gerrity, Erica; Duwe, Grant
2017-01-01
Background There are more than 200,000 incarcerated women in U.S. prisons and jails, and it is estimated that 6% to 10% are pregnant. Pregnant incarcerated women experience complex risks that can compromise their health and the health of their offspring. Objectives Identify lessons learned from a community–university pilot study of a prison-based pregnancy and parenting support program. Methods A community–university–corrections partnership was formed to provide education and support to pregnant incarcerated women through a prison-based pilot program. Evaluation data assessed women’s physical and mental health concerns and satisfaction with the program. Between October 2011 and December 2012, 48 women participated. Lessons Learned We learned that providing services for pregnant incarcerated women requires an effective partnership with the Department of Corrections, adaptations to traditional community-based participatory research (CBPR) approaches, and resources that support both direct service and ongoing evaluation. Conclusions Effective services for pregnant incarcerated women can be provided through a successful community– university–corrections partnership. PMID:26548788
NASA Astrophysics Data System (ADS)
Vickers, H.; Baddeley, L.
2011-11-01
RF heating of the F region plasma at high latitudes has long been known to produce electron temperature increases that can vary from tens to hundreds of percent above the background, unperturbed level. In contrast, artificial ionospheric modification experiments conducted using the Space Plasma Exploration by Active Radar (SPEAR) heating facility on Svalbard have often failed to produce obvious enhancements in the electron temperatures when measured using the European Incoherent Scatter Svalbard radar (ESR), colocated with the heater. Contamination of the ESR ion line spectra by the zero-frequency purely growing mode (PGM) feature is known to persist at varying amplitudes throughout SPEAR heating, and such spectral features can lead to significant temperature underestimations when the incoherent scatter spectra are analyzed using conventional methods. In this study, we present the first results of applying a recently developed technique to correct the PGM-contaminated spectra to SPEAR-enhanced ESR spectra and derive an alternative estimate of the SPEAR-heated electron temperature. We discuss how the effectiveness of the spectrum corrections can be affected by the data variance, estimated over the integration period. The subsequent electron temperatures, inferred from corrected spectra, range from a few tens to a few hundred Kelvin above the average background temperature. These temperatures are found to be in reasonable agreement with the theoretical “enhanced” temperature, calculated for the peak of the stationary temperature perturbation profile, when realistic absorption effects are accounted for.
Lewis, Ashley Glen; Schriefers, Herbert; Bastiaansen, Marcel; Schoffelen, Jan-Mathijs
2018-05-21
Reinstatement of memory-related neural activity measured with high temporal precision potentially provides a useful index for real-time monitoring of the timing of activation of memory content during cognitive processing. The utility of such an index extends to any situation where one is interested in the (relative) timing of activation of different sources of information in memory, a paradigm case of which is tracking lexical activation during language processing. Essential for this approach is that memory reinstatement effects are robust, so that their absence (in the average) definitively indicates that no lexical activation is present. We used electroencephalography to test the robustness of a reported subsequent memory finding involving reinstatement of frequency-specific entrained oscillatory brain activity during subsequent recognition. Participants learned lists of words presented on a background flickering at either 6 or 15 Hz to entrain a steady-state brain response. Target words subsequently presented on a non-flickering background that were correctly identified as previously seen exhibited reinstatement effects at both entrainment frequencies. Reliability of these statistical inferences was however critically dependent on the approach used for multiple comparisons correction. We conclude that effects are not robust enough to be used as a reliable index of lexical activation during language processing.
Advanced Demonstration of Motion Correction for Ship-to-Ship Passive Inspections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ziock, Klaus-Peter; Boehnen, Chris Bensing; Ernst, Joseph
2013-09-30
Passive radiation detection is a key tool for detecting illicit nuclear materials. In maritime applications it is most effective against small vessels where attenuation is of less concern. Passive imaging provides: discrimination between localized (threat) and distributed (non-threat) sources, removal of background fluctuations due to nearby shorelines and structures, source localization to an individual craft in crowded waters, and background-subtracted spectra. Unfortunately, imaging methods cannot be easily applied in ship-to-ship inspections because relative motion of the vessels blurs the results over many pixels, significantly reducing sensitivity. This is particularly true for the smaller water craft where passive inspections are most valuable. In this project we performed tests and improved the performance of an instrument (developed earlier under “Motion Correction for Ship-to-Ship Passive Inspections”) that uses automated tracking of a target vessel in visible-light images to generate a 3D radiation map of the target vessel from data obtained using a gamma-ray imager.
Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method
NASA Astrophysics Data System (ADS)
Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin
2015-01-01
The powerful nondestructive characteristics of computed tomography (CT) are attracting more and more research into its use for dimensional metrology, offering a practical alternative to common measurement methods. However, inaccuracy and uncertainty severely limit the further utilization of CT for dimensional metrology due to many factors, among which the beam hardening (BH) effect plays a vital role. This paper mainly focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a punishment term is added to the cost function, enabling more accurate measurement results to be obtained by the simple global threshold method. The proposed method is efficient, and especially suited to the case where there is a large difference in gray value between material and background. Different spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is of general feasibility.
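A hedged sketch of the optimization loop: apply a parametric correction to the projections, reconstruct with filtered back projection, and minimize the gray-level entropy of the reconstruction plus a penalty term. The exponential form used here and all parameter values are assumptions, since the abstract does not give the exact model.

```python
import numpy as np
from scipy.optimize import minimize
from skimage.transform import iradon, radon
from skimage.data import shepp_logan_phantom

def correct_projections(p, a, b):
    """Assumed exponential beam-hardening correction of the raw projections p;
    the paper's exact model is not given in the abstract, so this form
    (p + a*(exp(b*p) - 1)) is only illustrative."""
    return p + a * (np.exp(b * p) - 1.0)

def gray_entropy(vol, nbins=256):
    """Entropy of the normalized gray-level histogram of the reconstruction."""
    hist, _ = np.histogram(vol, bins=nbins, density=True)
    hist = hist[hist > 0]
    return -np.sum(hist * np.log(hist))

def cost(params, sinogram, theta, lam=0.0):
    a, b = params
    recon = iradon(correct_projections(sinogram, a, b), theta=theta, circle=True)
    # optional punishment term to keep contrast/consistency (weight lam is assumed;
    # set lam > 0 to activate it)
    penalty = lam * (recon.max() - recon.min() - 1.0) ** 2
    return gray_entropy(recon) + penalty

def optimize_bh(sinogram, theta):
    res = minimize(cost, x0=[0.0, 0.01], args=(sinogram, theta),
                   method="Nelder-Mead", options={"maxiter": 30})
    return res.x

# Small demo on a downsampled Shepp-Logan phantom (illustrative only).
phantom = shepp_logan_phantom()[::4, ::4]
theta = np.linspace(0.0, 180.0, 60, endpoint=False)
sino = radon(phantom, theta=theta, circle=True)
print(optimize_bh(sino, theta))
```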
ERIC Educational Resources Information Center
Clarke, Jason; Prescott, Katherine; Milne, Rebecca
2013-01-01
Background: The cognitive interview (CI) has been shown to increase correct memory recall of a diverse range of participant types, without an increase in the number of incorrect or confabulated details. However, it has rarely been examined for use with adults with intellectual disability. Measures and Method: This study compared the memory recall…
ERIC Educational Resources Information Center
Pereira, Valerie J.; Sell, Debbie; Tuomainen, Jyrki
2013-01-01
Background: Abnormal facial growth is a well-known sequelae of cleft lip and palate (CLP) resulting in maxillary retrusion and a class III malocclusion. In 10-50% of cases, surgical correction involving advancement of the maxilla typically by osteotomy methods is required and normally undertaken in adolescence when facial growth is complete.…
ERIC Educational Resources Information Center
FESHBACH, NORMA D.
To study the effects of differing teacher reinforcement behavior on students, 21 middle-class and 12 lower-class male ninth- and 10th-grade remedial reading students were shown two films. The first depicted a "positive" teacher who consistently rewarded correct responses while neglecting incorrect ones, and the second showed a…
ERIC Educational Resources Information Center
Krause, Fritz
The effectiveness of a behavior modification program combining cooperative learning with peer and self-evaluation was field tested with a group of 20 students in a 9th-grade class in beginning small engines. The students represented a mix of racial/cultural and economic backgrounds, were of average intelligence, and exhibited a variety of poor…
ERIC Educational Resources Information Center
Kaltakci-Gurel, Derya; Eryilmaz, Ali; McDermott, Lillian Christie
2017-01-01
Background: Correct identification of misconceptions is an important first step in order to gain an understanding of student learning. More recently, four-tier multiple choice tests have been found to be effective in assessing misconceptions. Purpose: The purposes of this study are (1) to develop and validate a four-tier misconception test to…
Analytical-Based Partial Volume Recovery in Mouse Heart Imaging
NASA Astrophysics Data System (ADS)
Dumouchel, Tyler; deKemp, Robert A.
2011-02-01
Positron emission tomography (PET) is a powerful imaging modality that has the ability to yield quantitative images of tracer activity. Physical phenomena such as photon scatter, photon attenuation, random coincidences and spatial resolution limit quantification potential and must be corrected to preserve the accuracy of reconstructed images. This study focuses on correcting the partial volume effects that arise in mouse heart imaging when resolution is insufficient to resolve the true tracer distribution in the myocardium. The correction algorithm is based on fitting 1D profiles through the myocardium in gated PET images to derive myocardial contours along with blood, background and myocardial activity. This information is interpolated onto a 2D grid and convolved with the tomograph's point spread function to derive regional recovery coefficients enabling partial volume correction. The point spread function was measured by placing a line source inside a small animal PET scanner. PET simulations were created based on noise properties measured from a reconstructed PET image and on the digital MOBY phantom. The algorithm can estimate the myocardial activity to within 5% of the truth when different wall thicknesses, backgrounds and noise properties are encountered that are typical of healthy FDG mouse scans. The method also significantly improves partial volume recovery in simulated infarcted tissue. The algorithm offers a practical solution to the partial volume problem without the need for co-registered anatomic images and offers a basis for improved quantitative 3D heart imaging.
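As a rough illustration of the recovery-coefficient idea (not the authors' full 1D profile fitting and 2D interpolation pipeline), the toy sketch below blurs an assumed myocardial wall profile with a Gaussian point spread function, computes the resulting recovery coefficient, and applies it to a measured peak value; the wall thickness, PSF width, and activity levels are made-up numbers.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

dx = 0.1                                     # mm per sample along the profile
x = np.arange(0.0, 20.0, dx)
wall = (x > 9.0) & (x < 11.0)                # assumed 2 mm thick myocardium
background = 10.0                            # blood / background activity (a.u.)
myocardium = 100.0                           # true wall activity (a.u.)
true_profile = np.where(wall, myocardium, background)

psf_fwhm_mm = 1.8                            # assumed scanner resolution
sigma = psf_fwhm_mm / (2.355 * dx)           # FWHM -> sigma, in samples
measured = gaussian_filter1d(true_profile, sigma)

# Recovery coefficient: fraction of background-subtracted wall activity that
# survives the blur at the wall centre (in practice the wall geometry and
# background come from the fitted 1D profiles, not from known truth).
rc = (measured[wall].max() - background) / (myocardium - background)

measured_peak = measured[wall].max()
corrected_wall = background + (measured_peak - background) / rc
```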
Xu, Xiaohong; Tay, Yilin; Sim, Bernice; Yoon, Su-In; Huang, Yihui; Ooi, Jolene; Utami, Kagistia Hana; Ziaei, Amin; Ng, Bryan; Radulescu, Carola; Low, Donovan; Ng, Alvin Yu Jin; Loh, Marie; Venkatesh, Byrappa; Ginhoux, Florent; Augustine, George J; Pouladi, Mahmoud A
2017-03-14
Huntington disease (HD) is a dominant neurodegenerative disorder caused by a CAG repeat expansion in HTT. Here we report correction of HD human induced pluripotent stem cells (hiPSCs) using a CRISPR-Cas9 and piggyBac transposon-based approach. We show that both HD and corrected isogenic hiPSCs can be differentiated into excitable, synaptically active forebrain neurons. We further demonstrate that phenotypic abnormalities in HD hiPSC-derived neural cells, including impaired neural rosette formation, increased susceptibility to growth factor withdrawal, and deficits in mitochondrial respiration, are rescued in isogenic controls. Importantly, using genome-wide expression analysis, we show that a number of apparent gene expression differences detected between HD and non-related healthy control lines are absent between HD and corrected lines, suggesting that these differences are likely related to genetic background rather than HD-specific effects. Our study demonstrates correction of HD hiPSCs and associated phenotypic abnormalities, and the importance of isogenic controls for disease modeling using hiPSCs. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
Assessing Feedback in a Mobile Videogame
Brand, Leah; Beltran, Alicia; Hughes, Sheryl; O'Connor, Teresia; Baranowski, Janice; Nicklas, Theresa; Chen, Tzu-An; Dadabhoy, Hafza R.; Diep, Cassandra S.; Buday, Richard
2016-01-01
Background: Player feedback is an important part of serious games, although there is no consensus regarding its delivery or optimal content. “Mommio” is a serious game designed to help mothers motivate their preschoolers to eat vegetables. The purpose of this study was to assess optimal format and content of player feedback for use in “Mommio.” Materials and Methods: The current study posed 36 potential “Mommio” gameplay feedback statements to 20 mothers using a Web survey and interview. Mothers were asked about the meaning and helpfulness of each feedback statement. Results: Several themes emerged upon thematic analysis, including identifying an effective alternative in the case of corrective feedback, avoiding vague wording, using succinct and correct grammar, avoiding provocation of guilt, and clearly identifying why players' game choice was correct or incorrect. Conclusions: Guidelines are proposed for future feedback statements. PMID:27058403
Number-counts slope estimation in the presence of Poisson noise
NASA Technical Reports Server (NTRS)
Schmitt, Juergen H. M. M.; Maccacaro, Tommaso
1986-01-01
The slope determination of a power-law number-flux relationship is examined for the case of photon-limited sampling. This case is important for high-sensitivity X-ray surveys with imaging telescopes, where the error in an individual source measurement depends on integrated flux and is Poisson, rather than Gaussian, distributed. A bias-free method of slope estimation is developed that takes into account the exact error distribution, the influence of background noise, and the effects of varying limiting sensitivities. It is shown that the resulting bias corrections are quite insensitive to the bias correction procedures applied, as long as only sources with a signal-to-noise ratio of five or greater are considered. However, if sources with a signal-to-noise ratio of five or less are included, the derived bias corrections depend sensitively on the shape of the error distribution.
NASA Astrophysics Data System (ADS)
Lim, Jeong Sik; Park, Miyeon; Lee, Jinbok; Lee, Jeongsoon
2017-12-01
The effect of background gas composition on the measurement of CO2 levels was investigated by wavelength-scanned cavity ring-down spectrometry (WS-CRDS) employing a spectral line centered at the R(1) of the (3 00 1)III ← (0 0 0) band. For this purpose, eight cylinders with various gas compositions were gravimetrically and volumetrically prepared within 2σ = 0.1 %, and these gas mixtures were introduced into the WS-CRDS analyzer calibrated against standards of ambient air composition. Depending on the gas composition, deviations between CRDS-determined and gravimetrically (or volumetrically) assigned CO2 concentrations ranged from -9.77 to 5.36 µmol mol⁻¹; e.g., excess N2 produced a negative deviation, whereas excess Ar produced a positive one. Total pressure-broadening coefficients (TPBCs) obtained from the N2, O2, and Ar composition reduced the deviations to between -0.5 and 0.6 µmol mol⁻¹, whereas considering PBCs induced by N2 alone left deviations of -0.43 to 1.43 µmol mol⁻¹. The use of TPBCs thus allowed the deviations to be corrected to within approximately 0.15 %. Furthermore, the correction shifted CRDS responses linearly over a wide range of TPBCs, from 0.065 to 0.081 cm⁻¹ atm⁻¹. Accurate measurements using optical intensity-based techniques such as WS-CRDS therefore require TPBC-based instrument calibration or standards prepared in the same background composition as ambient air.
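A minimal sketch of the kind of composition-dependent correction described here: the TPBC is taken as a mole-fraction-weighted sum of per-species coefficients, and the CRDS reading is rescaled by the ratio of the sample's TPBC to that of the ambient-air calibration matrix. The coefficient values and the linear rescaling form are illustrative assumptions, not the instrument's actual calibration.

```python
# Per-species pressure-broadening coefficients for the CO2 line (cm^-1 atm^-1).
# The numbers are placeholders for illustration only.
gamma = {"N2": 0.070, "O2": 0.064, "Ar": 0.058}

def total_pbc(mole_fractions):
    """Mole-fraction-weighted total pressure-broadening coefficient (TPBC)."""
    return sum(gamma[g] * x for g, x in mole_fractions.items())

def correct_co2(reading_umol_mol, sample_matrix, calibration_matrix):
    """Rescale a CRDS CO2 reading for a background-gas composition that differs
    from the ambient-air calibration standard, assuming the response scales
    linearly with the TPBC (an assumption, not the instrument's calibration)."""
    return reading_umol_mol * total_pbc(sample_matrix) / total_pbc(calibration_matrix)

ambient_air = {"N2": 0.7808, "O2": 0.2095, "Ar": 0.0093}
excess_n2 = {"N2": 0.9900, "O2": 0.0050, "Ar": 0.0050}
print(correct_co2(400.0, excess_n2, ambient_air))
```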
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Elsayed
Purpose: To characterize and correct for radiation-induced background (RIB) observed in the signals from a class of scanning water tanks. Methods: A method was developed to isolate the RIB through detector measurements in the background-free linac console area. Variation of the RIB against a large number of parameters was characterized, and its impact on basic clinical data for photon and electron beams was quantified. Different methods to minimize and/or correct for the RIB were proposed and evaluated. Results: The RIB is due to the presence of the electrometer and connection box in a low background radiation field (by design). The absolute RIB current with a biased detector is up to 2 pA, independent of the detector size, which is 0.6% and 1.5% of the central axis reference signal for a standard and a mini scanning chamber, respectively. The RIB monotonically increases with field size, is three times smaller for detectors that do not require a bias (e.g., diodes), is up to 80% larger for positive (versus negative) polarity, decreases with increasing photon energy, exhibits a single curve versus dose rate at the electrometer location, and is negligible for electron beams. Data after the proposed field-size correction method agree with point measurements from an independent system to within a few tenths of a percent for output factor, head scatter, depth dose at depth, and out-of-field profile dose. Manufacturer recommendations for electrometer placement are insufficient and sometimes incorrect. Conclusions: RIB in scanning water tanks can have a non-negligible effect on dosimetric data.
Pigeons Exhibit Contextual Cueing to Both Simple and Complex Backgrounds
Wasserman, Edward A.; Teng, Yuejia; Castro, Leyre
2014-01-01
Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of this contextual cueing effect using a novel Cueing-Miscueing design. Pigeons had to peck a target which could appear in one of four possible locations on four possible color backgrounds or four possible color photographs of real-world scenes. On 80% of the trials, each of the contexts was uniquely paired with one of the target locations; on the other 20% of the trials, each of the contexts was randomly paired with the remaining target locations. Pigeons came to exhibit robust contextual cueing when the context preceded the target by 2 s, with reaction times to the target being shorter on correctly-cued trials than on incorrectly-cued trials. Contextual cueing proved to be more robust with photographic backgrounds than with uniformly colored backgrounds. In addition, during the context-target delay, pigeons predominantly pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. These findings confirm the effectiveness of animal models of contextual cueing and underscore the important part played by associative learning in producing the effect. PMID:24491468
Coupé, Veerle M. H.; Knottnerus, Bart J.; Geerlings, Suzanne E.; Moll van Charante, Eric P.; ter Riet, Gerben
2017-01-01
Background Uncomplicated Urinary Tract Infections (UTIs) are common in primary care resulting in substantial costs. Since antimicrobial resistance against antibiotics for UTIs is rising, accurate diagnosis is needed in settings with low rates of multidrug-resistant bacteria. Objective To compare the cost-effectiveness of different strategies to diagnose UTIs in women who contacted their general practitioner (GP) with painful and/or frequent micturition between 2006 and 2008 in and around Amsterdam, The Netherlands. Methods This is a model-based cost-effectiveness analysis using data from 196 women who underwent four tests: history, urine stick, sediment, dipslide, and the gold standard, a urine culture. Decision trees were constructed reflecting 15 diagnostic strategies comprising different parallel and sequential combinations of the four tests. Using the decision trees, for each strategy the costs and the proportion of women with a correct positive or negative diagnosis were estimated. Probabilistic sensitivity analysis was used to estimate uncertainty surrounding costs and effects. Uncertainty was presented using cost-effectiveness planes and acceptability curves. Results Most sequential testing strategies resulted in higher proportions of correctly classified women and lower costs than parallel testing strategies. For different willingness to pay thresholds, the most cost-effective strategies were: 1) performing a dipstick after a positive history for thresholds below €10 per additional correctly classified patient, 2) performing both a history and dipstick for thresholds between €10 and €17 per additional correctly classified patient, 3) performing a dipstick if history was negative, followed by a sediment if the dipstick was negative for thresholds between €17 and €118 per additional correctly classified patient, 4) performing a dipstick if history was negative, followed by a dipslide if the dipstick was negative for thresholds above €118 per additional correctly classified patient. Conclusion Depending on decision makers’ willingness to pay for one additional correctly classified woman, the strategy consisting of performing a history and dipstick simultaneously (ceiling ratios between €10 and €17) or performing a sediment if history and subsequent dipstick are negative (ceiling ratios between €17 and €118) are the most cost-effective strategies to diagnose a UTI. PMID:29186185
NASA Astrophysics Data System (ADS)
Castillo-López, Elena; Dominguez, Jose Antonio; Pereda, Raúl; de Luis, Julio Manuel; Pérez, Ruben; Piña, Felipe
2017-10-01
Accurate determination of water depth is indispensable in multiple aspects of civil engineering (dock construction, dikes, submarine outfalls, trench control, etc.). Different levels of accuracy are required to determine the type of atmospheric correction most appropriate for depth estimation. Accuracy in bathymetric information is highly dependent on the atmospheric correction made to the imagery. The reduction of effects such as glint and cross-track illumination in homogeneous shallow-water areas improves the results of the depth estimations. The aim of this work is to assess the best atmospheric correction method for the estimation of depth in shallow waters, considering that reflectance values cannot be greater than 1.5 % because otherwise the background would not be seen. This paper addresses the use of hyperspectral imagery for quantitative bathymetric mapping and explores one of the most common problems when attempting to extract depth information in conditions of variable water types and bottom reflectances. The current work assesses the accuracy of some classical bathymetric algorithms (Polcyn-Lyzenga, Philpot, Benny-Dawson, Hamilton, principal component analysis) when four different atmospheric correction methods are applied and water depth is derived. No atmospheric correction is valid for all types of coastal waters, but in heterogeneous shallow water the 6S atmospheric correction model offers good results.
Effect of sample stratification on dairy GWAS results
2012-01-01
Background Artificial insemination and genetic selection are major factors contributing to population stratification in dairy cattle. In this study, we analyzed the effect of sample stratification and the effect of stratification correction on the results of a dairy genome-wide association study (GWAS). Three methods for stratification correction were used: the efficient mixed-model association expedited (EMMAX) method accounting for correlation among all individuals, a generalized least squares (GLS) method based on half-sib intraclass correlation, and a principal component analysis (PCA) approach. Results Historical pedigree data revealed that the 1,654 contemporary cows in the GWAS were all related when traced through approximately 10–15 generations of ancestors. Genome and phenotype stratifications had a striking overlap with the half-sib structure. A large elite half-sib family of cows contributed to the detection of favorable alleles that had low frequencies in the general population and high frequencies in the elite cows, and contributed to the detection of X chromosome effects. All three methods for stratification correction reduced the number of significant effects. The EMMAX method had the most severe reduction in the number of significant effects, while the PCA method using 20 principal components and the GLS method had similar significance levels. Removal of the elite cows from the analysis without using stratification correction removed many effects that were also removed by the three methods for stratification correction, indicating that stratification correction could have removed some true effects due to the elite cows. SNP effects with good consensus between different methods and effect size distributions from USDA’s Holstein genomic evaluation included the DGAT1-NIBP region of BTA14 for production traits, a SNP 45 kb upstream from PIGY on BTA6 and two SNPs in NIBP on BTA14 for protein percentage. However, most of these consensus effects had similar frequencies in the elite and average cows. Conclusions Genetic selection and extensive use of artificial insemination contributed to overlapping genome, pedigree and phenotype stratifications. The presence of an elite cluster of cows was related to the detection of rare favorable alleles that had high frequencies in the elite cluster and low frequencies in the remaining cows. Methods for stratification correction could have removed some true effects associated with genetic selection. PMID:23039970
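A minimal sketch of the PCA-based stratification correction (one of the three methods compared): the top principal components of the centered genotype matrix are included as covariates in each single-SNP regression. The cow count and the use of 20 components follow the abstract; all other details (SNP count, phenotype, OLS test) are generic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cows, n_snps = 1654, 5000                    # 1,654 cows as in the study; SNP count arbitrary
G = rng.integers(0, 3, size=(n_cows, n_snps)).astype(float)   # 0/1/2 genotype codes
y = rng.normal(size=n_cows)                    # phenotype placeholder

# Top principal components of the centered genotype matrix
Gc = G - G.mean(axis=0)
U, S, _ = np.linalg.svd(Gc, full_matrices=False)
pcs = U[:, :20] * S[:20]                       # 20 PCs, as in the abstract

def snp_test(genotype, phenotype, covariates):
    """Effect estimate and t statistic for one SNP, adjusting for the
    stratification covariates by ordinary least squares."""
    X = np.column_stack([np.ones_like(genotype), genotype, covariates])
    beta, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
    resid = phenotype - X @ beta
    sigma2 = resid @ resid / (len(phenotype) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], beta[1] / se

effect, t_stat = snp_test(G[:, 0], y, pcs)
```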
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagler, Peter C.; Tucker, Gregory S.; Fixsen, Dale J.
The detection of the primordial B-mode polarization signal of the cosmic microwave background (CMB) would provide evidence for inflation. Yet as has become increasingly clear, the detection of such a faint signal requires an instrument with both wide frequency coverage to reject foregrounds and excellent control over instrumental systematic effects. Using a polarizing Fourier transform spectrometer (FTS) for CMB observations meets both of these requirements. In this work, we present an analysis of instrumental systematic effects in polarizing FTSs, using the Primordial Inflation Explorer (PIXIE) as a worked example. We analytically solve for the most important systematic effects inherent to the FTS—emissive optical components, misaligned optical components, sampling and phase errors, and spin synchronous effects—and demonstrate that residual systematic error terms after corrections will all be at the sub-nK level, well below the predicted 100 nK B-mode signal.
Resuscitator’s perceptions and time for corrective ventilation steps during neonatal resuscitation
Sharma, Vinay; Lakshminrusimha, Satyan; Carrion, Vivien; Mathew, Bobby
2016-01-01
Background The 2010 neonatal resuscitation program (NRP) guidelines incorporate ventilation corrective steps (using the mnemonic MRSOPA) into the resuscitation algorithm. The perception of neonatal providers, the time taken to perform these maneuvers, and the effectiveness of these additional steps have not been evaluated. Methods Using two simulated clinical scenarios of varying degrees of cardiovascular compromise – perinatal asphyxia with (i) bradycardia (heart rate 40 min⁻¹) and (ii) cardiac arrest – 35 NRP-certified providers were evaluated for their preference for performing these corrective measures, the time taken for performing these steps and the time to onset of chest compressions. Results The average time taken to perform ventilation corrective steps (MRSOPA) was 48.9 ± 21.4 s. Providers were less likely to perform corrective steps, proceeding directly to endotracheal intubation, in the scenario of cardiac arrest as compared to a state of bradycardia. Cardiac compressions were initiated significantly sooner in the scenario of cardiac arrest (89 ± 24 s) than in severe bradycardia (122 ± 23 s), p < 0.0001. There were no differences in the time taken to initiation of chest compressions between physicians and mid-level care providers or with the level of experience of the provider. Conclusions Effective ventilation of the lungs with corrective steps using a mask is important in most cases of neonatal resuscitation. Neonatal resuscitators prefer early endotracheal intubation and initiation of chest compressions in the presence of asystolic cardiac arrest. Corrective ventilation steps can potentially postpone initiation of chest compressions and may delay return of spontaneous circulation in the presence of severe cardiovascular compromise. PMID:25796996
Lochbuehler, Kirsten; Tang, Kathy Z.; Souprountchouk, Valentina; Campetti, Dana; Cappella, Joseph N.; Kozlowski, Lynn T.; Strasser, Andrew A.
2016-01-01
Background Tobacco companies have deliberately used explicit and implicit misleading information in marketing campaigns. The aim of the current study was to experimentally investigate whether the editing of explicit and implicit content of a print advertisement improves smokers’ risk beliefs and smokers’ knowledge of explicit and implicit information. Methods Using a 2(explicit/implicit) x 2(accurate/misleading) between-subject design, 203 smokers were randomly assigned to one of four advertisement conditions. The manipulation of graphic content was examined as an implicit factor to convey product harm. The inclusion of a text corrective in the body of the ad was defined as the manipulated explicit factor. Participants’ eye movements and risk beliefs/recall were measured during and after ad exposure, respectively. Results Results indicate that exposure to a text corrective decreases false beliefs about the product (p < .01) and improves correct recall of information provided by the corrective (p < .05). Accurate graphic content did not alter the harmfulness of the product. Independent of condition, smokers who focused longer on the warning label made fewer false inferences about the product (p = .01) and were more likely to correctly recall the warning information (p < .01). Nonetheless, most smokers largely ignored the text warning. Conclusions Embedding a corrective statement in the body of the ad is an effective strategy to convey health information to consumers, which can be mandated under the Tobacco Control Act (2009). Eye-tracking results objectively demonstrate that text-only warnings are not viewed by smokers, thus minimizing their effectiveness for conveying risk information. PMID:27160034
Recovery of Background Structures in Nanoscale Helium Ion Microscope Imaging
Carasso, Alfred S; Vladár, András E
2014-01-01
This paper discusses a two-step enhancement technique applicable to noisy Helium Ion Microscope images in which background structures are not easily discernible due to a weak signal. The method is based on a preliminary adaptive histogram equalization, followed by ‘slow motion’ low-exponent Lévy fractional diffusion smoothing. This combined approach is unexpectedly effective, resulting in a companion enhanced image in which background structures are rendered much more visible and noise is significantly reduced, all with minimal loss of image sharpness. The method also provides useful enhancements of scanning charged-particle microscopy images obtained by composing multiple drift-corrected ‘fast scan’ frames. The paper includes software routines, written in Interactive Data Language (IDL), that can perform the above image processing tasks. PMID:26601050
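A rough sketch of the two-step enhancement, assuming the fractional diffusion step can be realized by attenuating spatial frequencies as exp(-t·|k|^α) with a small exponent α; the exponent, time step, and CLAHE settings below are illustrative choices, not the paper's IDL routines.

```python
import numpy as np
from skimage import exposure, img_as_float

def levy_fractional_diffusion(img, alpha=0.4, t=2.0):
    """Smooth an image by fractional (Levy) diffusion: attenuate spatial
    frequencies by exp(-t * |k|**alpha). A small alpha gives heavy-tailed,
    'slow motion' smoothing that preserves sharp features better than
    Gaussian (alpha = 2) diffusion."""
    ny, nx = img.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    attenuation = np.exp(-t * k**alpha)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * attenuation))

def enhance(img):
    img = img_as_float(img)
    # Step 1: adaptive histogram equalization to pull up weak background structure
    eq = exposure.equalize_adapthist(img, clip_limit=0.02)
    # Step 2: fractional diffusion smoothing to suppress noise with minimal blurring
    return levy_fractional_diffusion(eq)
```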
Background information: bias reduction = (|domain-averaged ensemble mean bias| − |domain-averaged bias-corrected ensemble mean bias|) / |domain-averaged bias-corrected ensemble mean bias| (NAEFS/EMC ensemble products, NCEP, National Weather Service).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saur, Sigrun; Frengen, Jomar; Department of Oncology and Radiotherapy, St. Olavs University Hospital, N-7006 Trondheim
Film dosimetry using radiochromic EBT film in combination with a flatbed charge-coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film depends on the film's orientation with respect to the scanner, correction matrices for both landscape-oriented and portrait-oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16 × 16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results. For the purpose of dosimetric verification, the calculated dose distribution can be compared with the film-measured dose distribution using a dose constraint of 4% (relative to the measured dose) for doses between 1 and 3 Gy. At lower doses, the dose constraint must be relaxed.
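The scanner non-uniformity correction described above can be pictured with the sketch below: per-pixel correction matrices measured at the nine calibration dose levels are interpolated to the approximate local dose and subtracted from the scanned response before the calibration curve is applied. The interpolation scheme, the intermediate dose levels, and the choice of response variable are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Nine calibration dose levels (Gy); only the end points (0.08 and 2.93 Gy)
# come from the abstract, the intermediate values are hypothetical.
dose_levels = np.array([0.08, 0.25, 0.50, 0.80, 1.20, 1.60, 2.00, 2.50, 2.93])

def correct_scan(response, correction_stack, approx_dose):
    """Subtract the position- and dose-dependent scanner non-uniformity.

    correction_stack has shape (9, H, W): the measured non-uniformity of a
    uniformly irradiated film at each calibration dose level. approx_dose is a
    first-pass dose map (e.g. the calibration curve applied to the uncorrected
    response); the correction at each pixel is interpolated between the two
    calibration dose levels that bracket that estimate."""
    corrected = np.empty_like(response, dtype=float)
    for j in range(response.shape[0]):          # plain loops kept for clarity
        for i in range(response.shape[1]):
            corr = np.interp(approx_dose[j, i], dose_levels,
                             correction_stack[:, j, i])
            corrected[j, i] = response[j, i] - corr
    return corrected
```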
Complete NLO corrections to W+W+ scattering and its irreducible background at the LHC
NASA Astrophysics Data System (ADS)
Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu
2017-10-01
The process pp → μ⁺νμe⁺νejj receives several contributions of different orders in the strong and electroweak coupling constants. Using appropriate event selections, this process is dominated by vector-boson scattering (VBS) and has recently been measured at the LHC. It is thus of prime importance to estimate precisely each contribution. In this article we compute for the first time the full NLO QCD and electroweak corrections to VBS and its irreducible background processes with realistic experimental cuts. We do not rely on approximations but use complete amplitudes involving two different orders at tree level and three different orders at one-loop level. Since we take into account all interferences, at NLO level the corrections to the VBS process and to the QCD-induced irreducible background process contribute at the same orders. Hence the two processes cannot be unambiguously distinguished, and all contributions to the μ⁺νμe⁺νejj final state should preferably be measured together.
Esquinas, Pedro L; Uribe, Carlos F; Gonzalez, M; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna
2017-07-20
The main applications of ¹⁸⁸Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone metastases. In order to optimize ¹⁸⁸Re therapies, the accurate determination of radiation dose delivered to tumors and organs at risk is required. Single photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in ¹⁸⁸Re images. In this study, we performed a series of phantom experiments aiming to investigate the accuracy of activity quantification for ¹⁸⁸Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in Air, non-radioactive water (Cold-water) and water with activity (Hot-water). The ordered subset expectation maximization algorithm with clinically available corrections (CT-based attenuation, triple-energy window (TEW) scatter and resolution recovery) was used. For high activities, the dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to this object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background, were compared to their true values. Finally, Monte-Carlo simulations of a commercial γ-camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy (errors <10%) was achieved for the entire phantom, the hot-background activity concentration and for objects in cold background segmented with a 1% threshold. However, the accuracy of activity quantification for objects segmented with the 40% threshold or CT-based methods decreased (errors >15%), mostly due to partial-volume effects. The Monte-Carlo simulations confirmed that TEW scatter correction applied to ¹⁸⁸Re, although practical, yields only approximate estimates of the true scatter.
Three site Higgsless model at one loop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chivukula, R. Sekhar; Simmons, Elizabeth H.; Matsuzaki, Shinya
2007-04-01
In this paper we compute the one loop chiral-logarithmic corrections to all O(p⁴) counterterms in the three site Higgsless model. The calculation is performed using the background field method for both the chiral and gauge fields, and using Landau gauge for the quantum fluctuations of the gauge fields. The results agree with our previous calculations of the chiral-logarithmic corrections to the S and T parameters in 't Hooft-Feynman gauge. The work reported here includes a complete evaluation of all one loop divergences in an SU(2)×U(1) nonlinear sigma model, corresponding to an electroweak effective Lagrangian in the absence of custodial symmetry.
Superhorizon electromagnetic field background from Higgs loops in inflation
NASA Astrophysics Data System (ADS)
Kaya, Ali
2018-03-01
If the Higgs is a spectator scalar, i.e. if it is not directly coupled to the inflaton, superhorizon Higgs modes must have been excited during inflation. Since the Higgs is unstable, its decay into photons is expected to seed superhorizon photon modes. We use in-in perturbation theory to show that this naive physical expectation is indeed fulfilled via loop effects. Specifically, we calculate the first-order Higgs loop correction to the magnetic field power spectrum evaluated at some late time after inflation. It turns out that this loop correction becomes much larger than the tree-level power spectrum at superhorizon scales. This suggests a mechanism to generate cosmologically interesting superhorizon vector modes by scalar-vector interactions.
Photometry of the 'Seyfert Sextet' /VV 115/ and the anonymous galaxy 1558.2 + 2100
NASA Technical Reports Server (NTRS)
Martins, D. H.; Chincarini, G.
1976-01-01
Photometric observations of the Seyfert Sextet (VV 115) are analyzed. Apparent integrated magnitudes are derived relative to the sky brightness, and isophotal maps are given for the field. No evidence for interaction between NGC 6027 and d is found. Luminosity profiles are given for NGC 6027, a, b, and d, with the d profile having been corrected for seeing effects in one dimension. The corrected profile parameters favor the interpretation of d as a highly luminous background galaxy at its cosmological distance. The nearby anonymous galaxy 1558.2 + 2100 is similarly studied, with no clear evidence of photometric peculiarities detected. Its interaction with the Seyfert Sextet appears to be excluded.
PREDICTION METRICS FOR CHEMICAL DETECTION IN LONG-WAVE INFRARED HYPERSPECTRAL IMAGERY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chilton, M.; Walsh, S.J.; Daly, D.S.
2009-01-01
Natural and man-made chemical processes generate gaseous plumes that may be detected by hyperspectral imaging, which produces a matrix of spectra affected by the chemical constituents of the plume, the atmosphere, the bounding background surface and instrument noise. A physics-based model of observed radiance shows that high chemical absorbance and low background emissivity result in a larger chemical signature. Using simulated hyperspectral imagery, this study investigated two metrics which exploited this relationship. The objective was to explore how well the chosen metrics predicted when a chemical would be more easily detected when comparing one background type to another. The two predictor metrics correctly rank ordered the backgrounds for about 94% of the chemicals tested as compared to the background rank orders from Whitened Matched Filtering (a detection algorithm) of the simulated spectra. These results suggest that the metrics provide a reasonable summary of how the background emissivity and chemical absorbance interact to produce the at-sensor chemical signal. This study suggests that similarly effective predictors that account for more general physical conditions may be derived.
Nonlinear responses of chiral fluids from kinetic theory
NASA Astrophysics Data System (ADS)
Hidaka, Yoshimasa; Pu, Shi; Yang, Di-Lun
2018-01-01
The second-order nonlinear responses of inviscid chiral fluids near local equilibrium are investigated by applying the chiral kinetic theory (CKT) incorporating side-jump effects. It is shown that the local equilibrium distribution function can be nontrivially introduced in a comoving frame with respect to the fluid velocity when the quantum corrections in collisions are involved. For the study of anomalous transport, contributions from both quantum corrections in anomalous hydrodynamic equations of motion and those from the CKT and Wigner functions are considered under the relaxation-time (RT) approximation, which result in anomalous charge Hall currents propagating along the cross product of the background electric field and the temperature (or chemical-potential) gradient and of the temperature and chemical-potential gradients. On the other hand, the nonlinear quantum correction on the charge density vanishes in the classical RT approximation, which in fact satisfies the matching condition given by the anomalous equation obtained from the CKT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fröb, Markus B.; Verdaguer, Enric, E-mail: mfroeb@itp.uni-leipzig.de, E-mail: enric.verdaguer@ub.edu
We derive the leading quantum corrections to the gravitational potentials in a de Sitter background, due to the vacuum polarization from loops of conformal fields. Our results are valid for arbitrary conformal theories, even strongly interacting ones, and are expressed using the coefficients b and b' appearing in the trace anomaly. Apart from the de Sitter generalization of the known flat-space results, we find two additional contributions: one which depends on the finite coefficients of terms quadratic in the curvature appearing in the renormalized effective action, and one which grows logarithmically with physical distance. While the first contribution corresponds to a rescaling of the effective mass, the second contribution leads to a faster fall-off of the Newton potential at large distances, and is potentially measurable.
Leon-Bejarano, Maritza; Dorantes-Mendez, Guadalupe; Ramirez-Elias, Miguel; Mendez, Martin O; Alba, Alfonso; Rodriguez-Leyva, Ildefonso; Jimenez, M
2016-08-01
Raman spectroscopy of biological tissue exhibits a fluorescence background, an undesirable effect that generates false Raman intensities. This paper proposes the application of the Empirical Mode Decomposition (EMD) method to baseline correction. EMD is a suitable approach since it is an adaptive signal processing method for nonlinear and non-stationary signal analysis that does not require parameter selection, unlike polynomial methods. EMD performance was assessed on synthetic Raman spectra with different signal-to-noise ratios (SNR). The correlation coefficient between the synthetic Raman spectra and those recovered after EMD denoising was higher than 0.92. Additionally, twenty Raman spectra from skin were used to evaluate EMD performance, and the results were compared with the Vancouver Raman algorithm (VRA). The comparison resulted in a mean square error (MSE) of 0.001554. The high correlation coefficients on synthetic spectra and the low MSE in the comparison between EMD and VRA suggest that EMD could be an effective method to remove the fluorescence background in biological Raman spectra.
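A minimal sketch of EMD-based baseline removal for a Raman spectrum, assuming the PyEMD package (published on PyPI as EMD-signal): the spectrum is decomposed into intrinsic mode functions (IMFs), the slowest IMFs plus the residue are treated as the fluorescence baseline, and that baseline is subtracted. How many IMFs to assign to the baseline is a judgment call, not a parameter reported in the abstract.

```python
import numpy as np
from PyEMD import EMD            # PyEMD / "EMD-signal" package (assumed dependency)

def emd_baseline_correct(spectrum, n_baseline_imfs=2):
    """Remove a smooth fluorescence background from a Raman spectrum by
    subtracting the lowest-frequency IMFs plus the EMD residue."""
    emd = EMD()
    emd.emd(np.asarray(spectrum, dtype=float))
    imfs, residue = emd.get_imfs_and_residue()
    baseline = imfs[-n_baseline_imfs:].sum(axis=0) + residue
    return spectrum - baseline, baseline

# Usage: corrected, baseline = emd_baseline_correct(raman_spectrum)
```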
Adult Smokers' Responses to “Corrective Statements” Regarding Tobacco Industry Deception
Kollath-Cattano, Christy L.; Abad-Vivero, Erika N.; Thrasher, James F.; Bansal-Travers, Maansi; O'Connor, Richard J.; Krugman, Dean M.; Berg, Carla J.; Hardin, James W.
2014-01-01
Background To inform consumers, U.S. Federal Courts have ordered the tobacco industry to disseminate “corrective statements” (CSs) about their deception regarding five topics: smoker health effects, nonsmoker health effects, cigarette addictiveness, design of cigarettes to increase addiction, and relative safety of light cigarettes. Purpose To determine how smokers from diverse backgrounds respond to the final, court-mandated wording of these CSs. Methods Data were analyzed from an online consumer panel of 1,404 adult smokers who evaluated one of five CS topics (n=280–281) by reporting novelty, relevance, anger at the industry, and motivation to quit because of the CS. Logistic and linear regression models assessed main and interactive effects of race/ethnicity, gender, education, and CS topic on these responses. Data were collected in January 2013 and analyzed in March 2013. Results Thirty percent to 54% of participants reported that each CS provided novel information, and novelty was associated with greater relevance, anger at the industry, and motivation to quit because of the message. African Americans and Latinos were more likely than non-Hispanic whites to report that CSs were novel, and they had stronger responses to CSs across all indicators. Compared to men, women reported that CSs were more relevant and motivated them to quit. Conclusions This study suggests that smokers would value and respond to CSs, particularly smokers from groups that suffer from tobacco–related health disparities. PMID:24746372
The use of polymethyl-methacrylate (Artecoll) as an adjunct to facial reconstruction
Mok, David; Schwarz, Jorge
2004-01-01
BACKGROUND: Injectable polymethyl-methacrylate (PMMA) microspheres, or Artecoll, has been used for the last few years in aesthetic surgery as long-term tissue filler for the correction of wrinkles and for lip augmentation. This paper presents three cases of the use of PMMA microsphere injection for reconstructive patients with defects of varying etiologies. These cases provide examples of a novel adjunct to the repertoire of the reconstructive surgeon. OBJECTIVES: To evaluate the effectiveness (short- and long-term) of PMMA injection for the correction of small soft tissue defects of the face. METHODS: Three case histories are presented. They include the origin of the defect; previous reconstructions of the defect; and area, volume, timing and technical particularities of PMMA administration. RESULTS: All three cases showed improvement of the defect with the PMMA injection with respect to both objective evidence and patient satisfaction. The improvements can still be seen after several years. CONCLUSIONS: PMMA microsphere injection can be effectively used to correct selected small facial defects in reconstructive cases and the results are long lasting. PMID:24115873
Software Compensates Electronic-Nose Readings for Humidity
NASA Technical Reports Server (NTRS)
Zhou, Hanying
2007-01-01
A computer program corrects for the effects of humidity on the readouts of an array of chemical sensors (an "electronic nose"). To enable the use of this program, the array must incorporate an independent humidity sensor in addition to sensors designed to detect analytes other than water vapor. The basic principle of the program was described in "Compensating for Effects of Humidity on Electronic Noses" (NPO-30615), NASA Tech Briefs, Vol. 28, No. 6 (June 2004), page 63. To recapitulate: The output of the humidity sensor is used to generate values that are subtracted from the outputs of the other sensors to correct for contributions of humidity to those readings. Hence, in principle, what remains after corrections are the contributions of the analytes only. The outputs of the non-humidity sensors are then deconvolved to obtain the concentrations of the analytes. In addition, the humidity reading is retained as an analyte reading in its own right. This subtraction of the humidity background increases the ability of the software to identify such events as spills in which contaminants may be present in small concentrations and accompanied by large changes in humidity.
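A toy sketch of the humidity-background subtraction described above: each non-humidity channel's humidity sensitivity is characterized beforehand, the humidity channel's reading scaled by those sensitivities is subtracted, and the residuals are attributed to analytes. The sensitivity values and the linear response model are assumptions for illustration, not the program's actual calibration.

```python
import numpy as np

# Pre-characterized humidity sensitivity of each chemical sensor, i.e. its
# response per unit response of the humidity sensor (placeholder values).
humidity_sensitivity = np.array([0.8, 1.3, 0.2, 0.5])

def remove_humidity_background(sensor_readings, humidity_reading):
    """Subtract the humidity contribution from each chemical channel, assuming
    each sensor responds linearly and independently to water vapor."""
    return sensor_readings - humidity_sensitivity * humidity_reading

raw = np.array([2.1, 3.4, 0.9, 1.7])
corrected = remove_humidity_background(raw, humidity_reading=1.5)
# 'corrected' ideally reflects only the analytes; the humidity reading itself
# is retained as an additional analyte channel, as described above.
```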
Cordero, Eliana; Korinth, Florian; Stiebing, Clara; Krafft, Christoph; Schie, Iwan W; Popp, Jürgen
2017-07-27
Raman spectroscopy provides label-free biochemical information from tissue samples without complicated sample preparation. The clinical capability of Raman spectroscopy has been demonstrated in a wide range of in vitro and in vivo applications. However, a challenge for in vivo applications is the simultaneous excitation of auto-fluorescence in the majority of tissues of interest, such as liver, bladder, brain, and others. Raman bands are then superimposed on a fluorescence background, which can be several orders of magnitude larger than the Raman signal. To eliminate the disturbing fluorescence background, several approaches are available. Among instrumentational methods shifted excitation Raman difference spectroscopy (SERDS) has been widely applied and studied. Similarly, computational techniques, for instance extended multiplicative scatter correction (EMSC), have also been employed to remove undesired background contributions. Here, we present a theoretical and experimental evaluation and comparison of fluorescence background removal approaches for Raman spectra based on SERDS and EMSC.
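A compact sketch of EMSC-style background removal, one of the two approaches compared above: each measured spectrum is modeled as a scaled reference Raman spectrum plus a low-order polynomial standing in for the fluorescence background, the model is fit by least squares, and the polynomial part is removed. The polynomial order and the reliance on a reference spectrum are standard EMSC ingredients; the specific choices here are illustrative.

```python
import numpy as np

def emsc_background_removal(spectrum, reference, wavenumber, poly_order=4):
    """Fit spectrum ≈ b * reference + polynomial(wavenumber) by least squares,
    subtract the polynomial (fluorescence-like) part and rescale by b."""
    x = (wavenumber - wavenumber.mean()) / (np.ptp(wavenumber) / 2)  # map to [-1, 1]
    poly_terms = np.vander(x, poly_order + 1)          # polynomial design matrix
    design = np.column_stack([reference, poly_terms])
    coeffs, *_ = np.linalg.lstsq(design, spectrum, rcond=None)
    background = poly_terms @ coeffs[1:]
    return (spectrum - background) / coeffs[0]
```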
A comprehensive numerical analysis of background phase correction with V-SHARP.
Özbay, Pinar Senay; Deistung, Andreas; Feng, Xiang; Nanz, Daniel; Reichenbach, Jürgen Rainer; Schweser, Ferdinand
2017-04-01
Sophisticated harmonic artifact reduction for phase data (SHARP) is a method to remove background field contributions in MRI phase images, which is an essential processing step for quantitative susceptibility mapping (QSM). To perform SHARP, a spherical kernel radius and a regularization parameter need to be defined. In this study, we carried out an extensive analysis of the effect of these two parameters on the corrected phase images and on the reconstructed susceptibility maps. As a result of the dependence of the parameters on acquisition and processing characteristics, we propose a new SHARP scheme with generalized parameters. The new SHARP scheme uses a high-pass filtering approach to define the regularization parameter. We employed the variable-kernel SHARP (V-SHARP) approach, using different maximum radii (R_m) between 1 and 15 mm and varying regularization parameters (f) in a numerical brain model. The local root-mean-square error (RMSE) between the ground-truth, background-corrected field map and the results from SHARP decreased towards the center of the brain. RMSE of susceptibility maps calculated with a spatial domain algorithm was smallest for R_m between 6 and 10 mm and f between 0 and 0.01 mm⁻¹, and for maps calculated with a Fourier domain algorithm for R_m between 10 and 15 mm and f between 0 and 0.0091 mm⁻¹. We demonstrated and confirmed the new parameter scheme in vivo. The novel regularization scheme allows the use of the same regularization parameter irrespective of other imaging parameters, such as image resolution. Copyright © 2016 John Wiley & Sons, Ltd.
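The sketch below shows the basic SHARP operation that the two parameters control: the phase is convolved with a spherical-mean kernel, the result is subtracted to keep only non-harmonic (internal) field content, and the kernel is then deconvolved with small Fourier coefficients truncated below the regularization threshold. This is a single-radius toy without a brain mask, not the V-SHARP implementation or the proposed high-pass parameter scheme.

```python
import numpy as np

def sharp(phase, voxel_size=(1.0, 1.0, 1.0), radius=6.0, threshold=0.05):
    """Single-kernel SHARP background-field removal (toy version)."""
    shape = phase.shape
    # Normalized spherical kernel of the given radius (mm), centered in the volume
    grids = np.meshgrid(*[(np.arange(n) - n // 2) * d
                          for n, d in zip(shape, voxel_size)], indexing="ij")
    sphere = (sum(g**2 for g in grids) <= radius**2).astype(float)
    sphere /= sphere.sum()
    K = np.fft.fftn(np.fft.ifftshift(sphere))

    # (delta - sphere) * phase keeps only the non-harmonic (internal) field
    P = np.fft.fftn(phase)
    high_pass = (1.0 - K) * P

    # Deconvolve, truncating where |1 - K| falls below the regularization threshold
    mask = np.abs(1.0 - K) > threshold
    deconv = np.zeros_like(high_pass)
    deconv[mask] = high_pass[mask] / (1.0 - K)[mask]
    return np.real(np.fft.ifftn(deconv))
```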
NASA Astrophysics Data System (ADS)
Cai, Zhijian; Zou, Wenlong; Wu, Jianhong
2017-10-01
Raman spectroscopy has been extensively used in biochemical testing, explosive detection, food additive analysis and environmental pollutant monitoring. However, fluorescence interference poses a serious problem for applications of portable Raman spectrometers. Currently, baseline correction and shifted-excitation Raman difference spectroscopy (SERDS) are the most prevalent fluorescence-suppression methods. In this paper, we compared the performance of baseline correction and SERDS methods, both experimentally and in simulation. The comparison demonstrates that baseline correction can produce an acceptable fluorescence-removed Raman spectrum if the original Raman signal has a good signal-to-noise ratio, but it cannot recover small Raman signals from a large noise background. With the SERDS method, Raman signals can be clearly extracted even when they are very weak compared to the fluorescence intensity and noise level, and the fluorescence background can be completely rejected. The Raman spectrum recovered by SERDS has a good signal-to-noise ratio. Baseline correction is therefore more suitable for large bench-top Raman systems with better signal quality, while SERDS is more suitable for noisier devices, especially portable Raman spectrometers.
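A toy illustration of the SERDS principle discussed above: two spectra acquired at slightly shifted excitation wavelengths are subtracted, which cancels the excitation-independent fluorescence, and a Raman-like spectrum is recovered from the difference, here by a simple cumulative sum. Practical SERDS processing uses more robust reconstruction than this; the sketch only demonstrates the cancellation step.

```python
import numpy as np

def serds_reconstruct(spectrum_1, spectrum_2):
    """Subtract two shifted-excitation spectra and integrate the difference.

    The fluorescence background, which barely moves with the small excitation
    shift, cancels in the difference; the Raman bands, which shift with the
    excitation, survive as derivative-like features."""
    difference = spectrum_1 - spectrum_2
    difference -= np.median(difference)       # remove any residual offset
    return np.cumsum(difference)
```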
Lippman, Sheri A.; Shade, Starley B.; Hubbard, Alan E.
2011-01-01
Background Intervention effects estimated from non-randomized intervention studies are plagued by biases, yet social or structural intervention studies are rarely randomized. There are underutilized statistical methods available to mitigate biases due to self-selection, missing data, and confounding in longitudinal, observational data, permitting estimation of causal effects. We demonstrate the use of Inverse Probability Weighting (IPW) to evaluate the effect of participating in a combined clinical and social STI/HIV prevention intervention on reduction of incident chlamydia and gonorrhea infections among sex workers in Brazil. Methods We demonstrate the step-by-step use of IPW, including presentation of the theoretical background, data set up, model selection for weighting, application of weights, estimation of effects using varied modeling procedures, and discussion of assumptions for use of IPW. Results 420 sex workers contributed data on 840 incident chlamydia and gonorrhea infections. Participators were compared to non-participators following application of inverse probability weights to correct for differences in covariate patterns between exposed and unexposed participants and between those who remained in the intervention and those who were lost to follow-up. Estimators using four model selection procedures provided estimates of the intervention effect between odds ratio (OR) 0.43 (95% CI: 0.22-0.85) and 0.53 (95% CI: 0.26-1.1). Conclusions After correcting for selection bias, loss to follow-up, and confounding, our analysis suggests a protective effect of participating in the Encontros intervention. Evaluations of behavioral, social, and multi-level interventions to prevent STI can benefit by introduction of weighting methods such as IPW. PMID:20375927
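A compact sketch of the IPW workflow outlined above, assuming a simple one-row-per-participant layout with a participation indicator, an infection outcome, and baseline covariates (the column names are hypothetical); the published analysis additionally weights for loss to follow-up and compares several model-selection procedures, none of which is reproduced here.

```python
import numpy as np
import statsmodels.api as sm

# df: pandas DataFrame, one row per participant, with hypothetical columns
# 'participated' (0/1), 'infection' (0/1) and baseline covariates.
def ipw_effect(df, covariates):
    # 1. Propensity model: probability of participation given baseline covariates
    X = sm.add_constant(df[covariates])
    propensity = sm.Logit(df["participated"], X).fit(disp=0).predict(X)

    # 2. Stabilized inverse-probability-of-participation weights
    p_part = df["participated"].mean()
    weights = np.where(df["participated"] == 1,
                       p_part / propensity,
                       (1 - p_part) / (1 - propensity))

    # 3. Weighted outcome model: marginal odds ratio for participation
    X_out = sm.add_constant(df[["participated"]])
    fit = sm.GLM(df["infection"], X_out,
                 family=sm.families.Binomial(),
                 freq_weights=weights).fit()
    return np.exp(fit.params["participated"])   # odds ratio

# Example call (column names are assumptions):
# odds_ratio = ipw_effect(df, ["age", "education", "baseline_sti"])
```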
NASA Astrophysics Data System (ADS)
Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.
2009-10-01
Due to the inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on the segmentation result. Segmentation and shading correction are coupled together, so we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes, the background and the cells, where the intensity variation within each class is close to zero if there is no shading. Therefore, we make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm to minimize the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only for visual inspection, but also for numerical evaluation. Our proposed method should be useful for further quantitative analysis, especially for protein expression value comparison.
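A simplified sketch of the coupled segmentation/shading-correction loop: threshold the current image into cells and background, estimate a smooth multiplicative shading field from the background class, divide it out, and repeat. The Otsu threshold and the Gaussian-smoothing estimate of the shading field stand in for the paper's fast intra-class-variance minimization and are assumptions, not the authors' algorithm; the additive noise component is ignored here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import threshold_otsu

def correct_shading(image, n_iter=5, sigma=50):
    """Iteratively estimate and remove a multiplicative shading field from a
    fluorescence microscopy image using its cell/background segmentation."""
    corrected = image.astype(float)
    for _ in range(n_iter):
        # Segment cells vs. background on the current estimate
        mask = corrected > threshold_otsu(corrected)
        # Estimate shading from the background class (ideally constant intensity):
        # replace cell pixels with the background mean, then smooth heavily
        bg = np.where(mask, np.nan, corrected)
        bg_filled = np.where(np.isnan(bg), np.nanmean(bg), bg)
        shading = gaussian_filter(bg_filled, sigma)
        shading /= shading.mean()            # keep overall brightness unchanged
        corrected = image / shading          # multiplicative correction
    return corrected, shading
```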
Arkin, Adam P.
2015-01-01
Free-living bacteria are usually thought to have large effective population sizes, and so tiny selective differences can drive their evolution. However, because recombination is infrequent, “background selection” against slightly deleterious alleles should reduce the effective population size (Ne) by orders of magnitude. For example, for a well-mixed population with 10¹² individuals and a typical level of homologous recombination (r/m = 3, i.e., nucleotide changes due to recombination [r] occur at 3 times the mutation rate [m]), we predict that Ne is <10⁷. An argument for high Ne values for bacteria has been the high genetic diversity within many bacterial “species,” but this diversity may be due to population structure: diversity across subpopulations can be far higher than diversity within a subpopulation, which makes it difficult to estimate Ne correctly. Given an estimate of Ne, standard population genetics models imply that selection should be sufficient to drive evolution if Ne × s is >1, where s is the selection coefficient. We found that this remains approximately correct if background selection is occurring or when population structure is present. Overall, we predict that even for free-living bacteria with enormous populations, natural selection is only a significant force if s is above 10⁻⁷ or so. PMID:26670382
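The threshold quoted above follows directly from the standard efficacy condition Ne × s > 1; a two-line check with the abstract's numbers:

```python
Ne = 1e7            # upper bound on the effective population size from the abstract
s_threshold = 1 / Ne
print(s_threshold)  # 1e-07: selection coefficients much below this are effectively neutral
```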
FIRST-ORDER COSMOLOGICAL PERTURBATIONS ENGENDERED BY POINT-LIKE MASSES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eingorn, Maxim, E-mail: maxim.eingorn@gmail.com
2016-07-10
In the framework of the concordance cosmological model, the first-order scalar and vector perturbations of the homogeneous background are derived in the weak gravitational field limit without any supplementary approximations. The sources of these perturbations (inhomogeneities) are presented in the discrete form of a system of separate point-like gravitating masses. The expressions found for the metric corrections are valid at all (sub-horizon and super-horizon) scales and converge at all points except at the locations of the sources. The average values of these metric corrections are zero (thus, first-order backreaction effects are absent). Both the Minkowski background limit and the Newtonian cosmological approximation are reached under certain well-defined conditions. An important feature of the velocity-independent part of the scalar perturbation is revealed: up to an additive constant, this part represents a sum of Yukawa potentials produced by inhomogeneities with the same finite time-dependent Yukawa interaction range. The suggested connection between this range and the homogeneity scale is briefly discussed along with other possible physical implications.
Brane Inflation, Solitons and Cosmological Solutions: I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, P.
2005-01-25
In this paper we study various cosmological solutions for a D3/D7 system directly from M-theory with fluxes and M2-branes. In M-theory, these solutions exist only if we incorporate higher derivative corrections from the curvatures as well as G-fluxes. We take these corrections into account and study a number of toy cosmologies, including one with a novel background for the D3/D7 system whose supergravity solution can be completely determined. Our new background preserves all the good properties of the original model and opens up avenues to investigate cosmological effects from wrapped branes and brane-antibrane annihilation, to name a few. We also discuss in some detail semilocal defects with higher global symmetries, for example exceptional ones, that occur in a slightly different regime of our D3/D7 model. We show that the D3/D7 system does have the required ingredients to realize these configurations as non-topological solitons of the theory. These constructions also allow us to give a physical meaning to the existence of certain underlying homogeneous quaternionic Kähler manifolds.
The effect of illustrations on patient comprehension of medication instruction labels
Hwang, Stephen W; Tram, Carolyn QN; Knarr, Nadia
2005-01-01
Background Labels with special instructions regarding how a prescription medication should be taken or its possible side effects are often applied to pill bottles. The goal of this study was to determine whether the addition of illustrations to these labels affects patient comprehension. Methods Study participants (N = 130) were enrolled by approaching patients at three family practice clinics in Toronto, Canada. Participants were asked to interpret two sets of medication instruction labels, the first with text only and the second with the same text accompanied by illustrations. Two investigators coded participants' responses as incorrect, partially correct, or completely correct. Health literacy levels of participants were measured using a validated instrument, the REALM test. Results All participants gave a completely correct interpretation for three out of five instruction labels, regardless of whether illustrations were present or not. For the two most complex labels, only 34–55% of interpretations of the text-only version were completely correct. The addition of illustrations was associated with improved performance in 5–7% of subjects and worsened performance in 7–9% of subjects. Conclusion The commonly-used illustrations on the medication labels used in this study were of little or no use in improving patients' comprehension of the accompanying written instructions. PMID:15960849
A Binary Offset Effect in CCD Readout and Its Impact on Astronomical Data
NASA Astrophysics Data System (ADS)
Boone, K.; Aldering, G.; Copin, Y.; Dixon, S.; Domagalski, R. S.; Gangler, E.; Pecontal, E.; Perlmutter, S.
2018-06-01
We have discovered an anomalous behavior of CCD readout electronics that affects their use in many astronomical applications. An offset in the digitization of the CCD output voltage that depends on the binary encoding of one pixel is added to pixels that are read out one, two, and/or three pixels later. One result of this effect is the introduction of a differential offset in the background when comparing regions with and without flux from science targets. Conventional data reduction methods do not correct for this offset. We find this effect in 16 of 22 instruments investigated, covering a variety of telescopes and many different front-end electronics systems. The affected instruments include LRIS and DEIMOS on the Keck telescopes, WFC3 UVIS and STIS on HST, MegaCam on CFHT, SNIFS on the UH88 telescope, GMOS on the Gemini telescopes, HSC on Subaru, and FORS on VLT. The amplitude of the introduced offset is up to 4.5 ADU per pixel, and it is not directly proportional to the measured ADU level. We have developed a model that can be used to detect this “binary offset effect” in data, and correct for it. Understanding how data are affected and applying a correction for the effect is essential for precise astronomical measurements.
Improved electron probe microanalysis of trace elements in quartz
Donovan, John J.; Lowers, Heather; Rusk, Brian G.
2011-01-01
Quartz occurs in a wide range of geologic environments throughout the Earth's crust. The concentration and distribution of trace elements in quartz provide information such as temperature and other physical conditions of formation. Trace element analyses with modern electron-probe microanalysis (EPMA) instruments can achieve 99% confidence detection of ~100 ppm with fairly minimal effort for many elements in samples of low to moderate average atomic number such as many common oxides and silicates. However, trace element measurements below 100 ppm in many materials are limited, not only by the precision of the background measurement, but also by the accuracy with which background levels are determined. A new "blank" correction algorithm has been developed and tested on both Cameca and JEOL instruments, which applies a quantitative correction to the emitted X-ray intensities during the iteration of the sample matrix correction based on a zero level (or known trace) abundance calibration standard. This iterated blank correction, when combined with improved background fit models, and an "aggregate" intensity calculation utilizing multiple spectrometer intensities in software for greater geometric efficiency, yields a detection limit of 2 to 3 ppm for Ti and 6 to 7 ppm for Al in quartz at 99% t-test confidence with similar levels for absolute accuracy.
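A heavily simplified illustration of the blank-correction idea (not the published iterated algorithm, which operates inside the matrix-correction loop): the intensity measured on a standard of known zero or trace analyte content defines a bias that is removed from the unknown's intensity. All names and the `sensitivity` parameter are assumptions.

```python
def blank_correct_intensity(i_meas, i_blank_std, c_blank_known=0.0, sensitivity=None):
    """Simplified blank correction: the count rate measured on a 'blank'
    standard with known (zero or trace) analyte content defines a systematic
    background offset subtracted from the unknown's net intensity.
    `sensitivity` is counts per unit concentration, used to credit back any
    known trace content of the blank. Illustrative only."""
    offset = i_blank_std - (sensitivity * c_blank_known if sensitivity else 0.0)
    return max(i_meas - offset, 0.0)

# e.g. a quartz blank reading 12 net counts on the Ti line implies a +12-count
# bias that must be removed before converting the unknown's intensity to ppm.
```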
ON THE PROPER USE OF THE REDUCED SPEED OF LIGHT APPROXIMATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gnedin, Nickolay Y., E-mail: gnedin@fnal.gov
I show that the reduced speed of light (RSL) approximation, when used properly (i.e., as originally designed - only for local sources but not for the cosmic background), remains a highly accurate numerical method for modeling cosmic reionization. Simulated ionization and star formation histories from the “Cosmic Reionization on Computers” project are insensitive to the adopted value of the RSL for as long as that value does not fall below about 10% of the true speed of light. A recent claim of the failure of the RSL approximation in the Illustris reionization model appears to be due to the effective speed of light being reduced in the equation for the cosmic background too and hence illustrates the importance of maintaining the correct speed of light in modeling the cosmic background.
Schaufele, Fred
2013-01-01
Förster resonance energy transfer (FRET) between fluorescent proteins (FPs) provides insights into the proximities and orientations of FPs as surrogates of the biochemical interactions and structures of the factors to which the FPs are genetically fused. As powerful as FRET methods are, technical issues have impeded their broad adoption in the biologic sciences. One hurdle to accurate and reproducible FRET microscopy measurement stems from variable fluorescence backgrounds both within a field and between different fields. Those variations introduce errors into the precise quantification of fluorescence levels on which the quantitative accuracy of FRET measurement is highly dependent. This measurement error is particularly problematic for screening campaigns since minimal well-to-well variation is necessary to faithfully identify wells with altered values. High content screening depends also upon maximizing the numbers of cells imaged, which is best achieved by low magnification high throughput microscopy. But, low magnification introduces flat-field correction issues that degrade the accuracy of background correction to cause poor reproducibility in FRET measurement. For live cell imaging, fluorescence of cell culture media in the fluorescence collection channels for the FPs commonly used for FRET analysis is a high source of background error. These signal-to-noise problems are compounded by the desire to express proteins at biologically meaningful levels that may only be marginally above the strong fluorescence background. Here, techniques are presented that correct for background fluctuations. Accurate calculation of FRET is realized even from images in which a non-flat background is 10-fold higher than the signal. PMID:23927839
Pigeons exhibit contextual cueing to both simple and complex backgrounds.
Wasserman, Edward A; Teng, Yuejia; Castro, Leyre
2014-05-01
Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of this contextual cueing effect using a novel Cueing-Miscueing design. Pigeons had to peck a target which could appear in one of four possible locations on four possible color backgrounds or four possible color photographs of real-world scenes. On 80% of the trials, each of the contexts was uniquely paired with one of the target locations; on the other 20% of the trials, each of the contexts was randomly paired with the remaining target locations. Pigeons came to exhibit robust contextual cueing when the context preceded the target by 2s, with reaction times to the target being shorter on correctly-cued trials than on incorrectly-cued trials. Contextual cueing proved to be more robust with photographic backgrounds than with uniformly colored backgrounds. In addition, during the context-target delay, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. These findings confirm the effectiveness of animal models of contextual cueing and underscore the important part played by associative learning in producing the effect. This article is part of a Special Issue entitled: SQAB 2013: Contextual Con. Copyright © 2014 Elsevier B.V. All rights reserved.
Fourier-space combination of Planck and Herschel images
NASA Astrophysics Data System (ADS)
Abreu-Vicente, J.; Stutz, A.; Henning, Th.; Keto, E.; Ballesteros-Paredes, J.; Robitaille, T.
2017-08-01
Context. Herschel has revolutionized our ability to measure column densities (NH) and temperatures (T) of molecular clouds thanks to its far infrared multiwavelength coverage. However, the lack of a well defined background intensity level in the Herschel data limits the accuracy of the NH and T maps. Aims: We aim to provide a method that corrects the missing Herschel background intensity levels using the Planck model for foreground Galactic thermal dust emission. For the Herschel/PACS data, both the constant-offset as well as the spatial dependence of the missing background must be addressed. For the Herschel/SPIRE data, the constant-offset correction has already been applied to the archival data so we are primarily concerned with the spatial dependence, which is most important at 250 μm. Methods: We present a Fourier method that combines the publicly available Planck model on large angular scales with the Herschel images on smaller angular scales. Results: We have applied our method to two regions spanning a range of Galactic environments: Perseus and the Galactic plane region around l = 11° (HiGal-11). We post-processed the combined dust continuum emission images to generate column density and temperature maps. We compared these to previously adopted constant-offset corrections. We find significant differences (≳20%) over significant (~15%) areas of the maps, at low column densities (NH ≲ 10^22 cm^-2) and relatively high temperatures (T ≳ 20 K). We have also applied our method to synthetic observations of a simulated molecular cloud to validate our method. Conclusions: Our method successfully corrects the Herschel images, including both the constant-offset intensity level and the scale-dependent background variations measured by Planck. Our method improves on the previous constant-offset corrections, which did not account for variations in the background emission levels. The image FITS files used in this paper are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/604/A65
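The Fourier-space combination described above is conceptually similar to interferometric "feathering". A minimal sketch, assuming both maps share one pixel grid and that a Gaussian crossover in spatial frequency is acceptable (the paper's actual weighting and beam handling may differ):

```python
import numpy as np

def feather(herschel, planck_model, fwhm_pix=20.0):
    """Combine a high-resolution map (absolute level unknown) with a
    low-resolution model (correct large-scale level) by weighting their
    Fourier transforms: low spatial frequencies come from the Planck model,
    high spatial frequencies from Herschel. Gaussian crossover, illustrative."""
    ky = np.fft.fftfreq(herschel.shape[0])[:, None]
    kx = np.fft.fftfreq(herschel.shape[1])[None, :]
    k2 = kx**2 + ky**2
    sigma_k = 1.0 / (2.0 * np.pi * fwhm_pix / 2.355)     # crossover scale in cycles/pixel
    w_low = np.exp(-k2 / (2.0 * sigma_k**2))              # ->1 at k=0, ->0 at high k
    combined = (1.0 - w_low) * np.fft.fft2(herschel) + w_low * np.fft.fft2(planck_model)
    return np.fft.ifft2(combined).real
```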
van Rooijen, Dominique C; van de Kamer, Jeroen B; Pool, René; Hulshof, Maarten CCM; Koning, Caro CE; Bel, Arjan
2009-01-01
Background The purpose of this study was to determine the dosimetric effect of on-line position correction for bladder tumor irradiation and to find methods to predict and handle this effect. Methods For 25 patients with unifocal bladder cancer intensity modulated radiotherapy (IMRT) with 5 beams was planned. The requirement for each plan was that 99% of the target volume received 95% of the prescribed dose. Tumor displacements from -2.0 cm to 2.0 cm in each dimension were simulated, using 0.5 cm increments, resulting in 729 simulations per patient. We assumed that on-line correction for the tumor was applied perfectly. We determined the correlation between the change in D99% and the change in path length, which is defined here as the distance from the skin to the isocenter for each beam. In addition the margin needed to avoid underdosage was determined and the probability that an underdosage occurs in a real treatment was calculated. Results Adjustments for tumor displacement with perfect on-line position correction resulted in an altered dose distribution. The altered fraction dose to the target varied from 91.9% to 100.4% of the prescribed dose. The mean D99% (± SD) was 95.8% ± 1.0%. There was a modest linear correlation between the difference in D99% and the change in path length of the beams after correction (R2 = 0.590). The median probability that a systematic underdosage occurs in a real treatment was 0.23% (range: 0 - 24.5%). A margin of 2 mm reduced that probability to < 0.001% in all patients. Conclusion On-line position correction does result in an altered target coverage, due to changes in average path length after position correction. An extra margin can be added to prevent underdosage. PMID:19775479
NASA Technical Reports Server (NTRS)
Dyer, C. S.; Trombka, J. I.; Metzger, A. E.; Seltzer, S. M.; Bielefeld, M. J.; Evans, L. G.
1975-01-01
Since the report of a preliminary analysis of cosmic gamma-ray measurements made during the Apollo 15 mission, an improved calculation of the spallation activation contribution has been made including the effects of short-lived spallation fragments, which can extend the correction to 15 MeV. In addition, a difference between Apollo 15 and 16 data enables an electron bremsstrahlung contribution to be calculated. A high level of activation observed in a crystal returned on Apollo 17 indicates a background contribution from secondary neutrons. These calculations and observations enable an improved extraction of spurious components and suggest important improvements for future detectors.
U(1)_R mediation from the flux compactification in six dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hyun Min
We consider a supersymmetric completion of codimension-two branes with nonzero tension in a 6D gauged supergravity. As a consequence, we obtain the football solution with 4D Minkowski space as a new supersymmetric background that preserves 4D N = 1 SUSY. In the presence of brane multiplets, we derive the 4D effective supergravity action for the football background and show that the remaining modulus can be stabilized by a bulk non-perturbative correction with brane uplifting potentials at a zero vacuum energy. We find that the U(1)_R mediation can be a dominant source of SUSY breaking for a brane scalar with nonzero R charge.
Surgical correction of pectus arcuatum
Ershova, Ksenia; Adamyan, Ruben
2016-01-01
Background Pectus arcuatum is a rare congenital chest wall deformity and methods of surgical correction are debatable. Methods Surgical correction of pectus arcuatum always includes one or more horizontal sternal osteotomies, resection of deformed rib cartilages and finally anterior chest wall stabilization. The study is approved by the institutional ethical committee and has obtained the informed consent from every patient. Results In this video we show our modification of pectus arcuatum correction with only partial sternal osteotomy and further stabilization by vertical parallel titanium plates. Conclusions The reported method is a feasible option for surgical correction of pectus arcuatum. PMID:29078483
77 FR 18914 - National Motor Vehicle Title Information System (NMVTIS): Technical Corrections
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-29
... 1121-AA79 National Motor Vehicle Title Information System (NMVTIS): Technical Corrections AGENCY... (OJP) is promulgating this direct final rule for its National Motor Vehicle Title Information System... INFORMATION CONTACT paragraph. II. Background The National Motor Vehicle Title Information System was...
Techniques for the correction of topographical effects in scanning Auger electron microscopy
NASA Technical Reports Server (NTRS)
Prutton, M.; Larson, L. A.; Poppa, H.
1983-01-01
A number of ratioing methods for correcting Auger images and linescans for topographical contrast are tested using anisotropically etched silicon substrates covered with Au or Ag. Thirteen well-defined angles of incidence are present on each polyhedron produced on the Si by this etching. If N1 electrons are counted at the energy of an Auger peak and N2 are counted in the background above the peak, then N1, N1 - N2, (N1 - N2)/(N1 + N2) are measured and compared as methods of eliminating topographical contrast. The latter method gives the best compensation but can be further improved by using a measurement of the sample absorption current. Various other improvements are discussed.
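The three ratioing schemes compared in the abstract can be computed directly from the peak and background count images; a small sketch (array names assumed):

```python
import numpy as np

def topography_ratios(n1, n2):
    """Compare the ratioing schemes mentioned above for suppressing topographic
    contrast: peak counts N1, background-subtracted N1 - N2, and the normalised
    ratio (N1 - N2)/(N1 + N2). Inputs are same-shaped Auger image arrays."""
    n1 = np.asarray(n1, dtype=float)
    n2 = np.asarray(n2, dtype=float)
    diff = n1 - n2
    norm = np.where(n1 + n2 > 0, diff / np.where(n1 + n2 > 0, n1 + n2, 1.0), 0.0)
    return {"peak": n1, "peak_minus_bg": diff, "normalised": norm}
```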
Blue spectra of Kalb-Ramond axions and fully anisotropic string cosmologies
NASA Astrophysics Data System (ADS)
Giovannini, Massimo
1999-03-01
The inhomogeneities associated with massless Kalb-Ramond axions can be amplified not only in isotropic (four-dimensional) string cosmological models but also in the fully anisotropic case. If the background geometry is isotropic, the axions (which are not part of the homogeneous background) develop outside the horizon, the growing modes leading, ultimately, to logarithmic energy spectra which are ``red'' in frequency and increase at large distance scales. We show that this conclusion can be avoided not only in the case of higher dimensional backgrounds with contracting internal dimensions but also in the case of string cosmological scenarios which are completely anisotropic in four dimensions. In this case the logarithmic energy spectra turn out to be ``blue'' in frequency and, consequently, decreasing at large distance scales. We elaborate on anisotropic dilaton-driven models and we argue that, incidentally, the background models leading to blue (or flat) logarithmic energy spectra for axionic fluctuations are likely to be isotropized by the effect of string tension corrections.
Zhang, Yanbin; Lin, Guanfeng; Wang, Shengru; Zhang, Jianguo; Shen, Jianxiong; Wang, Yipeng; Guo, Jianwei; Yang, Xinyu; Zhao, Lijuan
2016-01-01
Study Design. Retrospective study. Objective. To study the behavior of the unfused thoracic curve in Lenke type 5C during the follow-up and to identify risk factors for its correction loss. Summary of Background Data. Few studies have focused on the spontaneous behaviors of the unfused thoracic curve after selective thoracolumbar or lumbar fusion during the follow-up and the risk factors for spontaneous correction loss. Methods. We retrospectively reviewed 45 patients (41 females and 4 males) with AIS who underwent selective TL/L fusion from 2006 to 2012 in a single institution. The follow-up averaged 36 months (range, 24–105 months). Patients were divided into two groups. Thoracic curves in group A improved or maintained their curve magnitude after spontaneous correction, with a negative or no correction loss during the follow-up. Thoracic curves in group B deteriorated after spontaneous correction with a positive correction loss. Univariate analysis and multivariate analysis were built to identify the risk factors for correction loss of the unfused thoracic curves. Results. The minor thoracic curve was 26° preoperatively. It was corrected to 13° immediately with a spontaneous correction of 48.5%. At final follow-up it was 14° with a correction loss of 1°. Thoracic curves did not deteriorate after spontaneous correction in 23 cases in group A, while 22 cases were identified with thoracic curve progressing in group B. In multivariate analysis, two risk factors were independently associated with thoracic correction loss: higher flexibility and better immediate spontaneous correction rate of thoracic curve. Conclusion. Posterior selective TL/L fusion with pedicle screw constructs is an effective treatment for Lenke 5C AIS patients. Nonstructural thoracic curves with higher flexibility or better immediate correction are more likely to progress during the follow-up and close attentions must be paid to these patients in case of decompensation. Level of Evidence: 4 PMID:27831989
40 CFR 1065.805 - Sampling system.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Sampling system. 1065.805 Section 1065.805 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS... background samples for correcting dilution air for background concentrations of alcohols and carbonyls. (c...
40 CFR 1065.805 - Sampling system.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Sampling system. 1065.805 Section 1065.805 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS... background samples for correcting dilution air for background concentrations of alcohols and carbonyls. (c...
40 CFR 1065.805 - Sampling system.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Sampling system. 1065.805 Section 1065.805 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS... background samples for correcting dilution air for background concentrations of alcohols and carbonyls. (c...
A critique of recent economic evaluations of community water fluoridation
Ko, Lee; Thiessen, Kathleen M
2015-01-01
Background: Although community water fluoridation (CWF) results in a range of potential contaminant exposures, little attention has been given to many of the possible impacts. A central argument for CWF is its cost-effectiveness. The U.S. Government states that $1 spent on CWF saves $38 in dental treatment costs. Objective: To examine the reported cost-effectiveness of CWF. Methods: Methods and underlying data from the primary U.S. economic evaluation of CWF are analyzed and corrected calculations are described. Other recent economic evaluations are also examined. Results: Recent economic evaluations of CWF contain defective estimations of both costs and benefits. Incorrect handling of dental treatment costs and flawed estimates of effectiveness lead to overestimated benefits. The real-world costs to water treatment plants and communities are not reflected. Conclusions: Minimal correction reduced the savings to $3 per person per year (PPPY) for a best-case scenario, but this savings is eliminated by the estimated cost of treating dental fluorosis. PMID:25471729
The effect of finite field size on classification and atmospheric correction
NASA Technical Reports Server (NTRS)
Kaufman, Y. J.; Fraser, R. S.
1981-01-01
The atmospheric effect on the upward radiance of sunlight scattered from the Earth-atmosphere system is strongly influenced by the contrasts between fields and their sizes. For a given atmospheric turbidity, the atmospheric effect on classification of surface features is much stronger for nonuniform surfaces than for uniform surfaces. Therefore, the classification accuracy of agricultural fields and urban areas depends not only on the optical characteristics of the atmosphere, but also on the sizes of the surface fields. In some cases, atmospheric corrections that do not account for the nonuniformity of the surface have only a slight effect on the classification accuracy; in other cases the classification accuracy decreases. The radiances above finite fields were computed to simulate radiances measured by a satellite. A simulation case including 11 agricultural fields and four natural fields (water, soil, savannah, and forest) was used to test the effect of field size, background reflectance, and the optical thickness of the atmosphere on classification accuracy. It is concluded that new atmospheric correction methods, which take into account the finite size of the fields, have to be developed to improve significantly the classification accuracy.
Statistical nature of infrared dynamics on de Sitter background
NASA Astrophysics Data System (ADS)
Tokuda, Junsei; Tanaka, Takahiro
2018-02-01
In this study, we formulate a systematic way of deriving an effective equation of motion (EoM) for long wavelength modes of a massless scalar field with a general potential V(φ) on de Sitter background, and investigate whether or not the effective EoM can be described as a classical stochastic process. Our formulation gives an extension of the usual stochastic formalism to include sub-leading secular growth coming from the nonlinearity of short wavelength modes. Applying our formalism to λφ⁴ theory, we explicitly derive an effective EoM which correctly recovers the next-to-leading secularly growing part at a late time, and show that this effective EoM can be seen as a classical stochastic process. Our extended stochastic formalism can describe all secularly growing terms which appear in all correlation functions with a specific operator ordering. The restriction of the operator ordering will not be a big drawback because the commutator of a light scalar field becomes negligible at large scales owing to the squeezing.
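For orientation, the leading-order stochastic description that this work extends reduces to Starobinsky's Langevin equation for the coarse-grained field; a standard textbook form (not the next-to-leading version derived in the paper) is:

```latex
\begin{equation}
\frac{d\phi}{dN} \;=\; -\,\frac{V'(\phi)}{3H^{2}} \;+\; \xi(N),
\qquad
\langle \xi(N)\,\xi(N')\rangle \;=\; \frac{H^{2}}{4\pi^{2}}\,\delta(N-N'),
\end{equation}
```

where N is the number of e-folds and ξ is the noise sourced by short wavelength modes crossing the coarse-graining scale.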
Spotting effect in microarray experiments
Mary-Huard, Tristan; Daudin, Jean-Jacques; Robin, Stéphane; Bitton, Frédérique; Cabannes, Eric; Hilson, Pierre
2004-01-01
Background Microarray data must be normalized because they suffer from multiple biases. We have identified a source of spatial experimental variability that significantly affects data obtained with Cy3/Cy5 spotted glass arrays. It yields a periodic pattern altering both signal (Cy3/Cy5 ratio) and intensity across the array. Results Using the variogram, a geostatistical tool, we characterized the observed variability, called here the spotting effect because it most probably arises during steps in the array printing procedure. Conclusions The spotting effect is not appropriately corrected by current normalization methods, even by those addressing spatial variability. Importantly, the spotting effect may alter differential and clustering analysis. PMID:15151695
PET attenuation correction for rigid MR Tx/Rx coils from 176Lu background activity
NASA Astrophysics Data System (ADS)
Lerche, Christoph W.; Kaltsas, Theodoris; Caldeira, Liliana; Scheins, Jürgen; Rota Kops, Elena; Tellmann, Lutz; Pietrzyk, Uwe; Herzog, Hans; Shah, N. Jon
2018-02-01
One challenge for PET-MR hybrid imaging is the correction for attenuation of the 511 keV annihilation radiation by the required RF transmit and/or RF receive coils. Although there are strategies for building PET transparent Tx/Rx coils, such optimised coils still cause significant attenuation of the annihilation radiation leading to artefacts and biases in the reconstructed activity concentrations. We present a straightforward method to measure the attenuation of Tx/Rx coils in simultaneous MR-PET imaging based on the natural 176Lu background contained in the scintillator of the PET detector without the requirement of an external CT scanner or PET scanner with transmission source. The method was evaluated on a prototype 3T MR-BrainPET produced by Siemens Healthcare GmbH, both with phantom studies and with true emission images from patient/volunteer examinations. Furthermore, the count rate stability of the PET scanner and the x-ray properties of the Tx/Rx head coil were investigated. Even without energy extrapolation from the two dominant γ energies of 176Lu to 511 keV, the presented method for attenuation correction, based on the measurement of 176Lu background attenuation, shows slightly better performance than the coil attenuation correction currently used. The coil attenuation correction currently used is based on an external transmission scan with rotating 68Ge sources acquired on a Siemens ECAT HR + PET scanner. However, the main advantage of the presented approach is its straightforwardness and ready availability without the need for additional accessories.
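A minimal sketch of the underlying idea, per line of response: attenuation correction factors from the ratio of a coil-free ("blank") 176Lu background acquisition to one acquired with the coil in place. Variable names, the count threshold, and the absence of energy extrapolation to 511 keV are assumptions of this illustration, not the authors' pipeline.

```python
import numpy as np

def coil_acf_from_lu_background(blank_sino, coil_sino, min_counts=50):
    """Attenuation correction factors (ACFs) for a rigid MR coil estimated from
    the intrinsic 176Lu background: ACF = blank / transmission per sinogram bin.
    Bins with too few counts fall back to 1 (no correction)."""
    blank = np.asarray(blank_sino, dtype=float)
    trans = np.asarray(coil_sino, dtype=float)
    acf = np.ones_like(blank)
    ok = (trans >= min_counts) & (blank > 0)
    acf[ok] = blank[ok] / trans[ok]
    return np.clip(acf, 1.0, None)   # attenuation can only reduce counts
```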
Cost-effective Diagnostic Checklists for Meningitis in Resource Limited Settings
Durski, Kara N.; Kuntz, Karen M.; Yasukawa, Kosuke; Virnig, Beth A.; Meya, David B.; Boulware, David R.
2013-01-01
Background Checklists can standardize patient care, reduce errors, and improve health outcomes. For meningitis in resource-limited settings, with high patient loads and limited financial resources, CNS diagnostic algorithms may be useful to guide diagnosis and treatment. However, the cost-effectiveness of such algorithms is unknown. Methods We used decision analysis methodology to evaluate the costs, diagnostic yield, and cost-effectiveness of diagnostic strategies for adults with suspected meningitis in resource limited settings with moderate/high HIV prevalence. We considered three strategies: 1) comprehensive “shotgun” approach of utilizing all routine tests; 2) “stepwise” strategy with tests performed in a specific order with additional TB diagnostics; 3) “minimalist” strategy of sequential ordering of high-yield tests only. Each strategy resulted in one of four meningitis diagnoses: bacterial (4%), cryptococcal (59%), TB (8%), or other (aseptic) meningitis (29%). In model development, we utilized prevalence data from two Ugandan sites and published data on test performance. We validated the strategies with data from Malawi, South Africa, and Zimbabwe. Results The current comprehensive testing strategy resulted in 93.3% correct meningitis diagnoses costing $32.00/patient. A stepwise strategy had 93.8% correct diagnoses costing an average of $9.72/patient, and a minimalist strategy had 91.1% correct diagnoses costing an average of $6.17/patient. The incremental cost effectiveness ratio was $133 per additional correct diagnosis for the stepwise over minimalist strategy. Conclusions Through strategically choosing the order and type of testing coupled with disease prevalence rates, algorithms can deliver more care more efficiently. The algorithms presented herein are generalizable to East Africa and Southern Africa. PMID:23466647
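The quoted incremental cost-effectiveness ratio follows directly from the per-patient costs and diagnostic accuracies reported above; the small difference from the stated $133 presumably reflects rounding of the reported figures.

```python
# ICER of the stepwise vs minimalist strategy, using the per-patient costs
# and diagnostic accuracies quoted in the abstract.
cost_step, acc_step = 9.72, 0.938
cost_min, acc_min = 6.17, 0.911
icer = (cost_step - cost_min) / (acc_step - acc_min)
print(f"ICER ≈ ${icer:.0f} per additional correct diagnosis")  # ≈ $131, quoted as $133
```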
Completely automated open-path FT-IR spectrometry.
Griffiths, Peter R; Shao, Limin; Leytem, April B
2009-01-01
Atmospheric analysis by open-path Fourier-transform infrared (OP/FT-IR) spectrometry has been possible for over two decades but has not been widely used because of the limitations of the software of commercial instruments. In this paper, we describe the current state-of-the-art of the hardware and software that constitutes a contemporary OP/FT-IR spectrometer. We then describe advances that have been made in our laboratory that have enabled many of the limitations of this type of instrument to be overcome. These include not having to acquire a single-beam background spectrum that compensates for absorption features in the spectra of atmospheric water vapor and carbon dioxide. Instead, an easily measured "short path-length" background spectrum is used for calculation of each absorbance spectrum that is measured over a long path-length. To accomplish this goal, the algorithm used to calculate the concentrations of trace atmospheric molecules was changed from classical least-squares regression (CLS) to partial least-squares regression (PLS). For calibration, OP/FT-IR spectra are measured in pristine air over a wide variety of path-lengths, temperatures, and humidities, ratioed against a short-path background, and converted to absorbance; the reference spectrum of each analyte is then multiplied by randomly selected coefficients and added to these background spectra. Automatic baseline correction for small molecules with resolved rotational fine structure, such as ammonia and methane, is effected using wavelet transforms. A novel method of correcting for the effect of the nonlinear response of mercury cadmium telluride detectors is also incorporated. Finally, target factor analysis may be used to detect the onset of a given pollutant when its concentration exceeds a certain threshold. In this way, the concentration of atmospheric species has been obtained from OP/FT-IR spectra measured at intervals of 1 min over a period of many hours with no operator intervention.
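A rough sketch of the calibration strategy described (reference spectrum scaled by random coefficients and added to measured clean-air backgrounds, then PLS regression); the use of scikit-learn and all shapes and parameter values are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def build_plsr(background_spectra, analyte_reference, n_components=8, n_synth=500, rng=None):
    """Train a PLS model on synthetic spectra: the analyte reference spectrum,
    scaled by random coefficients, is added to measured pristine-air background
    absorbance spectra spanning different path-lengths, temperatures and
    humidities. Returns the fitted model."""
    rng = np.random.default_rng(rng)
    idx = rng.integers(0, len(background_spectra), size=n_synth)
    coeffs = rng.uniform(0.0, 1.0, size=n_synth)             # arbitrary concentration scale
    X = background_spectra[idx] + coeffs[:, None] * analyte_reference[None, :]
    pls = PLSRegression(n_components=n_components)
    pls.fit(X, coeffs)
    return pls

# prediction sketch: conc_like = pls.predict(new_absorbance_spectrum[None, :])[0, 0]
```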
On the proper use of the reduced speed of light approximation
Gnedin, Nickolay Y.
2016-12-07
I show that the Reduced Speed of Light (RSL) approximation, when used properly (i.e. as originally designed - only for the local sources but not for the cosmic background), remains a highly accurate numerical method for modeling cosmic reionization. Simulated ionization and star formation histories from the "Cosmic Reionization On Computers" (CROC) project are insensitive to the adopted value of the reduced speed of light for as long as that value does not fall below about 10% of the true speed of light. Here, a recent claim of the failure of the RSL approximation in the Illustris reionization model appears to be due to the effective speed of light being reduced in the equation for the cosmic background too, and, hence, illustrates the importance of maintaining the correct speed of light in modeling the cosmic background.
Quantitation of tumor uptake with molecular breast imaging.
Bache, Steven T; Kappadath, S Cheenu
2017-09-01
We developed scatter and attenuation-correction techniques for quantifying images obtained with Molecular Breast Imaging (MBI) systems. To investigate scatter correction, energy spectra of a 99mTc point source were acquired with 0-7-cm-thick acrylic to simulate scatter between the detector heads. System-specific scatter correction factor, k, was calculated as a function of thickness using a dual energy window (DEW) technique. To investigate attenuation correction, a 7-cm-thick rectangular phantom containing 99mTc-water simulating breast tissue and fillable spheres simulating tumors was imaged. Six spheres 10-27 mm in diameter were imaged with sphere-to-background ratios (SBRs) of 3.5, 2.6, and 1.7 and located at depths of 0.5, 1.5, and 2.5 cm from the center of the water bath for 54 unique tumor scenarios (3 SBRs × 6 sphere sizes × 3 depths). Phantom images were also acquired in-air under scatter- and attenuation-free conditions, which provided ground truth counts. To estimate true counts, T, from each tumor, the geometric mean (GM) of the counts within a prescribed region of interest (ROI) from the two projection images was calculated as T = √(C1C2)·e^(μt/2)·F, where C1 and C2 are the counts within the square ROI circumscribing each sphere on detectors 1 and 2, μ is the linear attenuation coefficient of water, t is detector separation, and the factor F accounts for background activity. Four unique F definitions (standard GM, background-subtraction GM, MIRD Primer 16 GM, and a novel "volumetric GM") were investigated. Error in T was calculated as the percentage difference with respect to in-air. Quantitative accuracy using the different GM definitions was calculated as a function of SBR, depth, and sphere size. Sensitivity of quantitative accuracy to ROI size was investigated. We developed an MBI simulation to investigate the robustness of our corrections for various ellipsoidal tumor shapes and detector separations. Scatter correction factor k varied slightly (0.80-0.95) over a compressed breast thickness range of 6-9 cm. Corrected energy spectra recovered general characteristics of scatter-free spectra. Quantitatively, photopeak counts were recovered to <10% compared to in-air conditions after scatter correction. After GM attenuation correction, mean errors (95% confidence interval, CI) for all 54 imaging scenarios were 149% (-154% to +455%), -14.0% (-38.4% to +10.4%), 16.8% (-14.7% to +48.2%), and 2.0% (-14.3 to +18.3%) for the standard GM, background-subtraction GM, MIRD 16 GM, and volumetric GM, respectively. Volumetric GM was less sensitive to SBR and sphere size, while all GM methods were insensitive to sphere depth. Simulation results showed that the volumetric GM method produced a mean error within 5% over all compressed breast thicknesses (3-14 cm), and that the use of an estimated radius for nonspherical tumors increases the 95% CI to at most ±23%, compared with ±16% for spherical tumors. Using the DEW scatter- and our volumetric GM attenuation-correction methodology yielded accurate estimates of tumor counts in MBI over various tumor sizes, shapes, depths, background uptake, and compressed breast thicknesses. Accurate tumor uptake can be converted to radiotracer uptake concentration, allowing three patient-specific metrics to be calculated for quantifying absolute uptake and relative uptake change for assessment of treatment response. © 2017 American Association of Physicists in Medicine.
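A direct implementation of the conjugate-view geometric-mean estimate as reconstructed above, T = √(C1C2)·e^(μt/2)·F; the attenuation coefficient value and function signature are illustrative assumptions.

```python
import numpy as np

MU_WATER_511KEV = 0.096  # cm^-1, approximate narrow-beam value for water at 511 keV

def geometric_mean_counts(c1, c2, detector_separation_cm,
                          background_factor=1.0, mu=MU_WATER_511KEV):
    """Conjugate-view (geometric mean) estimate of true tumor counts,
    T = sqrt(C1*C2) * exp(mu*t/2) * F. `background_factor` stands in for the
    various F definitions compared in the abstract."""
    return np.sqrt(c1 * c2) * np.exp(mu * detector_separation_cm / 2.0) * background_factor
```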
Jaeger, Christian; Hemmann, Felix
2014-01-01
Elimination of Artifacts in NMR SpectroscopY (EASY) is a simple but very effective tool to remove simultaneously any real NMR probe background signal, any spectral distortions due to deadtime ringdown effects and -specifically- severe acoustic ringing artifacts in NMR spectra of low-gamma nuclei. EASY enables and maintains quantitative NMR (qNMR) as only a single pulse (preferably 90°) is used for data acquisition. After the acquisition of the first scan (it contains the wanted NMR signal and the background/deadtime/ringing artifacts) the same experiment is repeated immediately afterwards before the T1 waiting delay. This second scan contains only the background/deadtime/ringing parts. Hence, the simple difference of both yields clean NMR line shapes free of artefacts. In this Part I various examples for complete (1)H, (11)B, (13)C, (19)F probe background removal due to construction parts of the NMR probes are presented. Furthermore, (25)Mg EASY of Mg(OH)2 is presented and this example shows how extremely strong acoustic ringing can be suppressed (more than a factor of 200) such that phase and baseline correction for spectra acquired with a single pulse is no longer a problem. EASY is also a step towards deadtime-free data acquisition as these effects are also canceled completely. EASY can be combined with any other NMR experiment, including 2D NMR, if baseline distortions are a big problem. © 2013 Published by Elsevier Inc.
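The core of the EASY scheme is a simple subtraction of the immediately repeated acquisition; a minimal sketch (array names assumed):

```python
import numpy as np

def easy_difference(scan_with_signal, scan_background_only):
    """EASY-style artifact removal: the second acquisition, taken immediately
    after the first and before the T1 recovery delay, contains only probe
    background, deadtime ringdown and acoustic ringing; subtracting it leaves
    the wanted NMR signal. FIDs are complex arrays of equal length."""
    return np.asarray(scan_with_signal) - np.asarray(scan_background_only)
```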
WE-AB-204-10: Evaluation of a Novel Dedicated Breast PET System (Mammi-PET)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Z; Swanson, T; O’Connor, M
2015-06-15
Purpose: To evaluate the performance characteristics of a novel dedicated breast PET system (Mammi-PET, Oncovision). The system has 2 detector rings giving axial/transaxial field of view of 8/17 cm. Each ring consists of 12 monolithic LYSO modules coupled to PSPMTs. Methods: Uniformity, sensitivity, energy and spatial resolution were measured according to NEMA standards. Count rate performance was investigated using a source of F-18 (1384uCi) decayed over 5 half-lives. A prototype PET phantom was imaged for 20 min to evaluate image quality, recovery coefficients and partial volume effects. Under an IRB-approved protocol, 11 patients who just underwent whole body PET/CT exams were imaged prone with the breast pendulant at 5–10 minutes/breast. Image quality was assessed with and without scatter/attenuation correction and using different reconstruction algorithms. Results: Integral/differential uniformity were 9.8%/6.0% respectively. System sensitivity was 2.3% on axis, 2.2% and 2.8% at 3.8 cm and 7.8 cm off-axis. Mean energy resolution of all modules was 23.3%. Spatial resolution (FWHM) was 1.82 mm and 2.90 mm on axis and 5.8 cm off axis. Three cylinders (14 mm diameter) in the PET phantom were filled with activity concentration ratios of 4:1, 3:1, and 2:1 relative to the background. Measured cylinder to background ratios were 2.6, 1.8 and 1.5 (without corrections) and 3.6, 2.3 and 1.5 (with attenuation/scatter correction). Five cylinders (14, 10, 6, 4 and 2 mm diameter) each with an activity ratio of 4:1 were measured and showed recovery coefficients of 1, 0.66, 0.45, 0.18 and 0.18 (without corrections), and 1, 0.53, 0.30, 0.13 and 0 (with attenuation/scatter correction). Optimal phantom image quality was obtained with 3D MLEM algorithm, >20 iterations and without attenuation/scatter correction. Conclusion: The MAMMI system demonstrated good performance characteristics. Further work is needed to determine the optimal reconstruction parameters for qualitative and quantitative applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priel, Nadav; Landsman, Hagar; Manfredini, Alessandro
We propose a safeguard procedure for statistical inference that provides universal protection against mismodeling of the background. The method quantifies and incorporates the signal-like residuals of the background model into the likelihood function, using information available in a calibration dataset. This prevents possible false discovery claims that may arise through unknown mismodeling, and corrects the bias in limit setting created by overestimated or underestimated background. We demonstrate how the method removes the bias created by an incomplete background model using three realistic case studies.
Generalized Doppler and aberration kernel for frequency-dependent cosmological observables
NASA Astrophysics Data System (ADS)
Yasini, Siavash; Pierpaoli, Elena
2017-11-01
We introduce a frequency-dependent Doppler and aberration transformation kernel for the harmonic multipoles of a general cosmological observable with spin weight s, Doppler weight d and arbitrary frequency spectrum. In the context of cosmic microwave background (CMB) studies, the frequency-dependent formalism allows one to correct for the motion-induced aberration and Doppler effects on individual frequency maps with different masks. It also permits deboosting of background radiations with non-blackbody frequency spectra, like extragalactic foregrounds and CMB spectra with primordial spectral distortions. The formalism can also be used to correct individual E and B polarization modes and account for motion-induced E/B mixing of polarized observables with d ≠ 1 at different frequencies. We apply the generalized aberration kernel to polarized and unpolarized specific intensity at 100 and 217 GHz and show that the motion-induced effects typically increase with the frequency of observation. In all-sky CMB experiments, the frequency dependence of the motion-induced effects for a blackbody spectrum is overall negligible. However, in a cut-sky analysis, ignoring the frequency dependence can lead to percent level error in the polarized and unpolarized power spectra over all angular scales. In the specific cut-sky used in our analysis (b > 45°, f_sky ≃ 14%), and for the dipole-inferred velocity β = 0.00123 typically attributed to our peculiar motion, the Doppler and aberration effects can change polarized and unpolarized power spectra of specific intensity in the CMB rest frame by 1-2%, but we find the polarization cross-leakage between E and B modes to be negligible.
2013-01-01
Background High resolution melting analysis (HRM) is a rapid and cost-effective technique for the characterisation of PCR amplicons. Because the reverse genetics of segmented influenza A viruses allows the generation of numerous influenza A virus reassortants within a short time, methods for the rapid selection of the correct recombinants are very useful. Methods PCR primer pairs covering the single nucleotide polymorphism (SNP) positions of two different influenza A H5N1 strains were designed. Reassortants of the two different H5N1 isolates were used as a model to prove the suitability of HRM for the selection of the correct recombinants. Furthermore, two different cycler instruments were compared. Results Both cycler instruments generated comparable average melting peaks, which allowed the easy identification and selection of the correct cloned segments or reassorted viruses. Conclusions HRM is a highly suitable method for the rapid and precise characterisation of cloned influenza A genomes. PMID:24028349
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tiller, Britta
Higher order QCD corrections to W and Z boson production not only manifest themselves in the generation of high transverse momenta of the weak bosons, but these QCD effects become directly visible in the production of jets in association with the weak bosons. Studying these processes is not only interesting from the perspective of testing perturbative QCD, but also for constraining a major background to many Standard Model (SM) or non-SM physics signals, e.g. top pair and single top production, searches for the Higgs boson, leptoquarks and supersymmetric particles. This thesis describes a measurement of Z/γ* + jets production in p$\bar{p}$ collisions at √s = 1.96 TeV in the decay channel Z/γ* → μ+μ-. An integrated luminosity of L ≈ 1 fb⁻¹ collected by the D0 detector at the Tevatron between August 2002 and February 2006 has been used. Differential production cross sections as a function of the transverse energy of the first, second and third leading jet are measured. The distributions are corrected for acceptance and migration effects back to hadron level using an iterative unfolding method. Comparisons of the measured cross sections to event generators, which include part of the higher order corrections, are presented.
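The abstract names an iterative unfolding method without specifying it; a generic D'Agostini-style iterative Bayesian unfolding, shown here only to illustrate the idea of correcting for migration and acceptance, might look like this:

```python
import numpy as np

def iterative_unfold(measured, response, prior=None, n_iter=4):
    """Iterative (D'Agostini-style) unfolding sketch. `response[i, j]` is the
    probability that a hadron-level event in truth bin j is reconstructed in
    detector bin i (its column sum is the efficiency). This is a generic
    illustration, not the analysis code used for the measurement."""
    measured = np.asarray(measured, dtype=float)
    response = np.asarray(response, dtype=float)
    n_truth = response.shape[1]
    truth = np.full(n_truth, measured.sum() / n_truth) if prior is None else np.asarray(prior, float)
    eff = response.sum(axis=0)                           # efficiency per truth bin
    for _ in range(n_iter):
        folded = response @ truth                        # expected detector-level yield
        folded[folded == 0] = 1e-12
        # Bayes update: share each measured bin among truth bins, then correct for efficiency
        truth = (response * truth).T @ (measured / folded) / np.clip(eff, 1e-12, None)
    return truth
```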
Collaboration enhances later individual memory for emotional material.
Bärthel, Gwennis A; Wessel, Ineke; Huntjens, Rafaële J C; Verwoerd, Johan
2017-05-01
Research on collaborative remembering suggests that collaboration hampers group memory (i.e., collaborative inhibition), yet enhances later individual memory. Studies examining collaborative effects on memory for emotional stimuli are scarce, especially concerning later individual memory. In the present study, female undergraduates watched an emotional movie and recalled it either collaboratively (n = 60) or individually (n = 60), followed by an individual free recall test and a recognition test. We replicated the standard collaborative inhibition effect. Further, in line with the literature, the collaborative condition displayed better post-collaborative individual memory. More importantly, in post-collaborative free recall, the centrality of the information to the movie plot did not play an important role. Recognition rendered slightly different results. Although collaboration rendered more correct recognition for more central details, it did not enhance recognition of background details. Secondly, the collaborative and individual conditions did not differ with respect to overlap of unique correct items in free recall. Yet, during recognition former collaborators more unanimously endorsed correct answers, as well as errors. Finally, extraversion, neuroticism, social anxiety, and depressive symptoms did not moderate the influence of collaboration on memory. Implications for the fields of forensic and clinical psychology are discussed.
Atmospheric turbulence compensation with laser phase shifting interferometry
NASA Astrophysics Data System (ADS)
Rabien, S.; Eisenhauer, F.; Genzel, R.; Davies, R. I.; Ott, T.
2006-04-01
Laser guide stars with adaptive optics allow astronomical image correction in the absence of a natural guide star. Single guide star systems with a star created in the earth's sodium layer can be used to correct the wavefront in the near infrared spectral regime for 8-m class telescopes. For possible future telescopes of larger sizes, or for correction at shorter wavelengths, the use of a single guide star is ultimately limited by focal anisoplanatism that arises from the finite height of the guide star. To overcome this limitation we propose to overlap coherently pulsed laser beams that are expanded over the full aperture of the telescope, traveling upwards along the same path which light from the astronomical object travels downwards. Imaging the scattered light from the resultant interference pattern with a camera gated to a certain height above the telescope, and using phase shifting interferometry we have found a method to retrieve the local wavefront gradients. By sensing the backscattered light from two different heights, one can fully remove the cone effect, which can otherwise be a serious handicap to the use of laser guide stars at shorter wavelengths or on larger telescopes. Using two laser beams multiconjugate correction is possible, resulting in larger corrected fields. With a proper choice of laser, wavefront correction could be expanded to the visible regime and, due to the lack of a cone effect, the method is applicable to any size of telescope. Finally the position of the laser spot could be imaged from the side of the main telescope against a bright background star to retrieve tip-tilt information, which would greatly improve the sky coverage of the system.
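Phase retrieval from phase-shifted frames is standard; for example, with four frames taken at π/2 steps the wrapped phase follows from a two-argument arctangent (the proposal's exact shifting scheme may differ):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Standard four-bucket phase-shifting interferometry: with frames taken at
    phase shifts of 0, pi/2, pi and 3*pi/2, the wrapped fringe phase is
    atan2(I4 - I2, I1 - I3). Local wavefront gradients then follow from finite
    differences of the unwrapped phase map."""
    return np.arctan2(np.asarray(i4, float) - np.asarray(i2, float),
                      np.asarray(i1, float) - np.asarray(i3, float))
```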
Meta-analysis of alcohol price and income elasticities – with corrections for publication bias
2013-01-01
Background This paper contributes to the evidence-base on prices and alcohol use by presenting meta-analytic summaries of price and income elasticities for alcohol beverages. The analysis improves on previous meta-analyses by correcting for outliers and publication bias. Methods Adjusting for outliers is important to avoid assigning too much weight to studies with very small standard errors or large effect sizes. Trimmed samples are used for this purpose. Correcting for publication bias is important to avoid giving too much weight to studies that reflect selection by investigators or others involved with publication processes. Cumulative meta-analysis is proposed as a method to avoid or reduce publication bias, resulting in more robust estimates. The literature search obtained 182 primary studies for aggregate alcohol consumption, which exceeds the database used in previous reviews and meta-analyses. Results For individual beverages, corrected price elasticities are smaller (less elastic) by 28-29 percent compared with consensus averages frequently used for alcohol beverages. The average price and income elasticities are: beer, -0.30 and 0.50; wine, -0.45 and 1.00; and spirits, -0.55 and 1.00. For total alcohol, the price elasticity is -0.50 and the income elasticity is 0.60. Conclusions These new results imply that attempts to reduce alcohol consumption through price or tax increases will be less effective or more costly than previously claimed. PMID:23883547
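A minimal sketch of the cumulative meta-analysis idea used to probe publication bias: studies are pooled in publication order and drift in the running estimate is inspected. Fixed-effect inverse-variance weighting is assumed here for brevity; the paper's trimming and random-effects choices are not reproduced.

```python
import numpy as np

def cumulative_meta(estimates, std_errors, years):
    """Inverse-variance cumulative meta-analysis: studies are added in order of
    publication and the pooled elasticity is recomputed at each step. Drift in
    the later steps is one diagnostic of publication or selection bias."""
    order = np.argsort(years)
    est = np.asarray(estimates, float)[order]
    w = 1.0 / np.asarray(std_errors, float)[order] ** 2
    pooled = np.cumsum(w * est) / np.cumsum(w)
    return list(zip(np.asarray(years)[order], pooled))
```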
Lewis, Dawna; Schmid, Kendra; O'Leary, Samantha; Spalding, Jody; Heinrichs-Graham, Elizabeth; High, Robin
2016-10-01
This study examined the effects of stimulus type and hearing status on speech recognition and listening effort in children with normal hearing (NH) and children with mild bilateral hearing loss (MBHL) or unilateral hearing loss (UHL). Children (5-12 years of age) with NH (Experiment 1) and children (8-12 years of age) with MBHL, UHL, or NH (Experiment 2) performed consonant identification and word and sentence recognition in background noise. Percentage correct performance and verbal response time (VRT) were assessed (onset time, total duration). In general, speech recognition improved as signal-to-noise ratio (SNR) increased both for children with NH and children with MBHL or UHL. The groups did not differ on measures of VRT. Onset times were longer for incorrect than for correct responses. For correct responses only, there was a general increase in VRT with decreasing SNR. Findings indicate poorer sentence recognition in children with NH and MBHL or UHL as SNR decreases. VRT results suggest that greater effort was expended when processing stimuli that were incorrectly identified. Increasing VRT with decreasing SNR for correct responses also supports greater effort in poorer acoustic conditions. The absence of significant hearing status differences suggests that VRT was not differentially affected by MBHL, UHL, or NH for children in this study.
CMB weak-lensing beyond the Born approximation: a numerical approach
NASA Astrophysics Data System (ADS)
Fabbian, Giulio; Calabrese, Matteo; Carbone, Carmelita
2018-02-01
We perform a complete study of the gravitational lensing effect beyond the Born approximation on the Cosmic Microwave Background (CMB) anisotropies using a multiple-lens raytracing technique through cosmological N-body simulations of the DEMNUni suite. The impact of second-order effects accounting for the non-linear evolution of large-scale structures is evaluated propagating for the first time the full CMB lensing jacobian together with the light rays trajectories. We carefully investigate the robustness of our approach against several numerical effects in the raytracing procedure and in the N-body simulation itself, and find no evidence of large contaminations. We discuss the impact of beyond-Born corrections on lensed CMB observables, and compare our results with recent analytical predictions that appeared in the literature, finding a good agreement, and extend these results to smaller angular scales. We measure the gravitationally-induced CMB polarization rotation that appears in the geodesic equation at second order, and compare this result with the latest analytical predictions. We then present the detection prospect of beyond-Born effects with the future CMB-S4 experiment. We show that corrections to the temperature power spectrum can be measured only if a good control of the extragalactic foregrounds is achieved. Conversely, the beyond-Born corrections on E and B-modes power spectra will be much more difficult to detect.
Optimization of cDNA microarrays procedures using criteria that do not rely on external standards
Bruland, Torunn; Anderssen, Endre; Doseth, Berit; Bergum, Hallgeir; Beisvag, Vidar; Lægreid, Astrid
2007-01-01
Background The measurement of gene expression using microarray technology is a complicated process in which a large number of factors can be varied. Due to the lack of standard calibration samples such as are used in traditional chemical analysis, it may be difficult to evaluate whether changes made to the microarray procedure actually improve the identification of truly differentially expressed genes. The purpose of the present work is to report the optimization of several steps in the microarray process both in laboratory practices and in data processing using criteria that do not rely on external standards. Results We performed a cDNA microarray experiment including RNA from samples with high expected differential gene expression termed "high contrasts" (rat cell lines AR42J and NRK52E) compared to self-self hybridization, and optimized a pipeline to maximize the number of genes found to be differentially expressed in the "high contrasts" RNA samples by estimating the false discovery rate (FDR) using a null distribution obtained from the self-self experiment. The proposed high-contrast versus self-self method (HCSSM) requires only four microarrays per evaluation. The effects of blocking reagent dose, filtering, and background correction methodologies were investigated. In our experiments a dose of 250 ng LNA (locked nucleic acid) dT blocker, no background correction and weight-based filtering gave the largest number of differentially expressed genes. The choice of background correction method had a stronger impact on the estimated number of differentially expressed genes than the choice of filtering method. Cross platform microarray (Illumina) analysis was used to validate that the increase in the number of differentially expressed genes found by HCSSM was real. Conclusion The results show that HCSSM can be a useful and simple approach to optimize microarray procedures without including external standards. Our optimizing method is highly applicable to both long oligo-probe microarrays which have become commonly used for well characterized organisms such as man, mouse and rat, as well as to cDNA microarrays which are still of importance for organisms with incomplete genome sequence information such as many bacteria, plants and fish. PMID:17949480
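A small sketch of the HCSSM-style FDR estimate: the self-self hybridization supplies the null distribution of the per-gene statistic, from which the expected number of false positives above a cutoff is scaled. The choice of statistic and threshold is an assumption of this illustration.

```python
import numpy as np

def estimate_fdr(stats_high_contrast, stats_self_self, threshold):
    """Empirical FDR at a given cutoff: the self-self experiment provides a
    null distribution of the per-gene statistic (e.g. |M| or a moderated t),
    so the expected number of false positives above the threshold is scaled
    from the null and divided by the observed number of discoveries."""
    null = np.asarray(stats_self_self, float)
    obs = np.asarray(stats_high_contrast, float)
    n_disc = max((obs >= threshold).sum(), 1)
    expected_fp = (null >= threshold).mean() * obs.size
    return min(expected_fp / n_disc, 1.0)

# usage sketch: pick the threshold maximizing discoveries subject to estimate_fdr(...) <= 0.05
```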
Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville; ...
2016-03-03
Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise, and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. Furthermore the reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.
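One generic way to implement such a per-ray background correction is to fit and subtract a smooth function of range in signal-free gates before recomputing the signal-to-noise ratio; the polynomial form and masking used below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def correct_background(profile, ranges, signal_mask, deg=2):
    """Per-ray background correction sketch: fit a low-order polynomial to the
    raw lidar signal in gates flagged as signal-free and subtract it, so that
    residual instrumental structure does not bias the signal-to-noise ratio."""
    bg = np.asarray(profile, float)
    r = np.asarray(ranges, float)
    clean = ~np.asarray(signal_mask, bool)           # gates with no cloud/aerosol
    coeffs = np.polyfit(r[clean], bg[clean], deg)
    return bg - np.polyval(coeffs, r)
```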
Attentiveness of pediatricians to primary immunodeficiency disorders
2012-01-01
Background Primary immunodeficiency (PID) is a cluster of serious disorders that requires special alertness on the part of the medical staff for prompt diagnosis and management of the patient. This study explored PID knowledge and experience among pediatricians of wide educational backgrounds, practicing in the United Arab Emirates (UAE). Method A self-administered questionnaire was used to determine the competency of pediatricians in their knowledge of PID disorders. This study questionnaire included questions on PID signs and symptoms, syndromes associated with immunodeficiency, screening tests, interpreting laboratory tests and case management. The participants were 263 pediatricians of diverse education working in the 27 governmental hospitals in all regions of UAE. Results The overall performance of the pediatricians did not differ based on their age, gender, origin of certification, rank, or years of experience. Of the 50 questions, 20% of pediatricians answered correctly <60% of the questions, 76% answered correctly 60 to 79% of the questions, and 4% answered correctly ≥80% of the questions. Seventeen of the 19 PID signs and symptoms were identified by 55 to 97%. Four of 5 syndromes associated with immunodeficiency were identified by 50 to 90%. Appropriate screening tests were chosen by 64 to 96%. Attention to the laboratory reference range values as function of patient age was notably limited. Conclusions There was a noteworthy deficiency in PID work-up. Therefore, implementing effective educational strategies is needed to improve the competency of pediatricians to diagnose and manage PID disorders. PMID:22846098
Barkovskaya, M Sh; Bogomolov, A G; Knauer, N Yu; Rubtsov, N B; Kozlov, V A
2017-04-01
Telomere length is an important indicator of proliferative cell history and potential. Decreasing telomere length in the cells of an immune system can indicate immune aging in immune-mediated and chronic inflammatory diseases. Quantitative fluorescent in situ hybridization (Q-FISH) of a labeled (C3TA2)3 peptide nucleic acid probe onto fixed metaphase cells followed by digital image microscopy allows the evaluation of telomere length in the arms of individual chromosomes. Computer-assisted analysis of microscopic images can provide quantitative information on the number of telomeric repeats in individual telomeres. We developed new software to estimate telomere length. The MeTeLen software contains new options that can be used to solve some Q-FISH and microscopy problems, including correction of irregular light effects and elimination of background fluorescence. The identification and description of chromosomes and chromosome regions are essential to the Q-FISH technique. To improve the quality of cytogenetic analysis after Q-FISH, we optimized the temperature and time of DNA-denaturation to get better DAPI-banding of metaphase chromosomes. MeTeLen was tested by comparing telomere length estimations for sister chromatids, background fluorescence estimations, and correction of nonuniform light effects. The application of the developed software for analysis of telomere length in patients with rheumatoid arthritis was demonstrated.
Ship Effect Neutron Measurements And Impacts On Low-Background Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguayo Navarrete, Estanislao; Kouzes, Richard T.; Siciliano, Edward R.
2013-10-01
The primary particles entering the upper atmosphere as cosmic rays create showers in the atmosphere that include a broad spectrum of secondary neutrons, muons and protons. These cosmic-ray secondaries interact with materials at the surface of the Earth, yielding prompt backgrounds in radiation detection systems, as well as inducing long-lived activities through spallation events, dominated by the higher-energy neutron secondaries. For historical reasons, the multiple neutrons produced in spallation cascade events are referred to as "ship effect" neutrons. Quantifying the background from cosmic ray induced activities is important to low-background experiments, such as neutrino-less double beta decay. Since direct measurements of the effects of shielding on the cosmic-ray neutron spectrum are not available, Monte Carlo modeling is used to compute such effects. However, there are large uncertainties (orders of magnitude) in the possible cross-section libraries and the cosmic-ray neutron spectrum for the energy range needed in such calculations. The measurements reported here were initiated to validate results from Monte Carlo models through experimental measurements in order to provide some confidence in the model results. The results indicate that the models provide the correct trends of neutron production with increasing density, but there is substantial disagreement between the model and experimental results for the lower-density materials of Al, Fe and Cu.
Cosmic Strings Stabilized by Quantum Fluctuations
NASA Astrophysics Data System (ADS)
Weigel, H.
2017-03-01
Fermion quantum corrections to the energy of cosmic strings are computed. A number of rather technical tools are needed to formulate this correction, and isospin and gauge invariance are employed to verify consistency of these tools. These corrections must also be included when computing the energy of strings that are charged by populating fermion bound states in its background. It is found that charged strings are dynamically stabilized in theories similar to the standard model of particle physics.
Observational constraints on loop quantum cosmology.
Bojowald, Martin; Calcagni, Gianluca; Tsujikawa, Shinji
2011-11-18
In the inflationary scenario of loop quantum cosmology in the presence of inverse-volume corrections, we give analytic formulas for the power spectra of scalar and tensor perturbations convenient to compare with observations. Since inverse-volume corrections can provide strong contributions to the running spectral indices, inclusion of terms higher than the second-order runnings in the power spectra is crucially important. Using the recent data of cosmic microwave background and other cosmological experiments, we place bounds on the quantum corrections.
ERIC Educational Resources Information Center
Swank, Jacqueline M.; Gagnon, Joseph C.
2017-01-01
Background: Mental health screening and assessment is crucial within juvenile correctional facilities (JC). However, limited information is available about the current screening and assessment procedures specifically within JC. Objective: The purpose of the current study was to obtain information about the mental health screening and assessment…
NASA Technical Reports Server (NTRS)
Mullally, Fergal
2017-01-01
We present an automated method of identifying background eclipsing binaries masquerading as planet candidates in the Kepler planet candidate catalogs. We codify the manual vetting process for Kepler Objects of Interest (KOIs) described in Bryson et al. (2013) with a series of measurements and tests that can be performed algorithmically. We compare our automated results with a sample of manually vetted KOIs from the catalog of Burke et al. (2014) and find excellent agreement. We test the performance on a set of simulated transits and find our algorithm correctly identifies simulated false positives approximately 50% of the time, and correctly identifies 99% of simulated planet candidates.
Quantum gravitational contributions to the cosmic microwave background anisotropy spectrum.
Kiefer, Claus; Krämer, Manuel
2012-01-13
We derive the primordial power spectrum of density fluctuations in the framework of quantum cosmology. For this purpose we perform a Born-Oppenheimer approximation to the Wheeler-DeWitt equation for an inflationary universe with a scalar field. In this way, we first recover the scale-invariant power spectrum that is found as an approximation in the simplest inflationary models. We then obtain quantum gravitational corrections to this spectrum and discuss whether they lead to measurable signatures in the cosmic microwave background anisotropy spectrum. The nonobservation so far of such corrections translates into an upper bound on the energy scale of inflation.
Self-force correction to geodetic spin precession in Kerr spacetime
NASA Astrophysics Data System (ADS)
Akcay, Sarp
2017-08-01
We present an expression for the gravitational self-force correction to the geodetic spin precession of a spinning compact object with small, but non-negligible, mass in a bound, equatorial orbit around a Kerr black hole. We consider only conservative backreaction effects due to the mass of the compact object (m1), thus neglecting the effects of its spin s1 on its motion; i.e., we impose s1 ≪ G m1^2/c and m1 ≪ m2, where m2 is the mass parameter of the background Kerr spacetime. We encapsulate the correction to the spin precession in ψ, the ratio of the accumulated spin-precession angle to the total azimuthal angle over one radial orbit in the equatorial plane. Our formulation considers the gauge-invariant O(m1) part of the correction to ψ, denoted by Δψ, and is a generalization of the results of Akcay et al. [Classical Quantum Gravity 34, 084001 (2017), 10.1088/1361-6382/aa61d6] to Kerr spacetime. Additionally, we compute the zero-eccentricity limit of Δψ and show that this quantity differs from the circular-orbit Δψ_circ by a gauge-invariant quantity containing the gravitational self-force correction to general relativistic periapsis advance in Kerr spacetime. Our result for Δψ is expressed in a manner that readily accommodates numerical/analytical self-force computations, e.g., in the radiation gauge, and paves the way for the computation of a new eccentric-orbit Kerr gauge invariant beyond the generalized redshift.
49 CFR Appendix E to Part 227 - Use of Insert Earphones for Audiometric Testing
Code of Federal Regulations, 2010 CFR
2010-10-01
... RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION OCCUPATIONAL NOISE EXPOSURE Pt. 227, App. E Appendix.... B. Technicians who conduct audiometric tests must be trained to insert the earphones correctly into... audiometer. IV. Background Noise Levels Testing shall be conducted in a room where the background ambient...
Analytic Scattering and Refraction Models for Exoplanet Transit Spectra
NASA Astrophysics Data System (ADS)
Robinson, Tyler D.; Fortney, Jonathan J.; Hubbard, William B.
2017-12-01
Observations of exoplanet transit spectra are essential to understanding the physics and chemistry of distant worlds. The effects of opacity sources and many physical processes combine to set the shape of a transit spectrum. Two such key processes—refraction and cloud and/or haze forward-scattering—have seen substantial recent study. However, models of these processes are typically complex, which prevents their incorporation into observational analyses and standard transit spectrum tools. In this work, we develop analytic expressions that allow for the efficient parameterization of forward-scattering and refraction effects in transit spectra. We derive an effective slant optical depth that includes a correction for forward-scattered light, and present an analytic form of this correction. We validate our correction against a full-physics transit spectrum model that includes scattering, and we explore the extent to which the omission of forward-scattering effects may bias models. Also, we verify a common analytic expression for the location of a refractive boundary, which we express in terms of the maximum pressure probed in a transit spectrum. This expression is designed to be easily incorporated into existing tools, and we discuss how the detection of a refractive boundary could help indicate the background atmospheric composition by constraining the bulk refractivity of the atmosphere. Finally, we show that opacity from Rayleigh scattering and collision-induced absorption will outweigh the effects of refraction for Jupiter-like atmospheres whose equilibrium temperatures are above 400-500 K.
Tokamak-independent software analysis suite for multi-spectral line-polarization MSE diagnostics
Scott, S. D.; Mumgaard, R. T.
2016-07-20
A tokamak-independent analysis suite has been developed to process data from Motional Stark Effect (mse) diagnostics. The software supports multi-spectral line-polarization mse diagnostics which simultaneously measure emission at the mse σ and π lines as well as at two "background" wavelengths that are displaced from the mse spectrum by a few nanometers. This analysis accurately estimates the amplitude of partially polarized background light at the σ and π wavelengths even in situations where the background light changes rapidly in time and space, a distinct improvement over traditional "time-interpolation" background estimation. The signal amplitude at many frequencies is computed using a numerical-beat algorithm which allows the retardance of the mse photo-elastic modulators (pem's) to be monitored during routine operation. It also allows the use of summed intensities at multiple frequencies in the calculation of polarization direction, which increases the effective signal strength and reduces sensitivity to pem retardance drift. The software allows the polarization angles to be corrected for calibration drift using a system that illuminates the mse diagnostic with polarized light at four known polarization angles within ten seconds of a plasma discharge. As a result, the software suite is modular, parallelized, and portable to other facilities.
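One plausible way to exploit the two off-line background channels is a per-time linear interpolation in wavelength, sketched below; the wavelength values and the linear model are assumptions for illustration and may differ from the estimator actually implemented in the suite.

```python
# Illustrative sketch: estimate background at the sigma and pi wavelengths by a
# straight-line fit through the two measured background channels at each time.
# The wavelength values below are placeholders, not the diagnostic's actual lines.
import numpy as np

def interpolated_background(bg1, bg2, wl_bg=(652.0, 668.0), wl_sigma=659.0, wl_pi=661.0):
    bg1, bg2 = np.asarray(bg1, float), np.asarray(bg2, float)
    slope = (bg2 - bg1) / (wl_bg[1] - wl_bg[0])
    bg_sigma = bg1 + slope * (wl_sigma - wl_bg[0])
    bg_pi = bg1 + slope * (wl_pi - wl_bg[0])
    return bg_sigma, bg_pi
```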
Kip, Michelle Ma; Schop, Annemarie; Stouten, Karlijn; Dekker, Soraya; Dinant, Geert-Jan; Koffijberg, Hendrik; Bindels, Patrick Je; IJzerman, Maarten J; Levin, Mark-David; Kusters, Ron
2018-01-01
Background Establishing the underlying cause of anaemia in general practice is a diagnostic challenge. Currently, general practitioners individually determine which laboratory tests to request (routine work-up) in order to diagnose the underlying cause. However, an extensive work-up (consisting of 14 tests) increases the proportion of patients correctly diagnosed. This study investigates the cost-effectiveness of this extensive work-up. Methods A decision-analytic model was developed, incorporating all societal costs from the moment a patient presents to a general practitioner with symptoms suggestive of anaemia (aged ≥ 50 years), until the patient was (correctly) diagnosed and treated in primary care, or referred to (and diagnosed in) secondary care. Model inputs were derived from an online survey among general practitioners, expert estimates and published data. The primary outcome measure was expressed as incremental cost per additional patient diagnosed with the correct underlying cause of anaemia in either work-up. Results The probability of general practitioners diagnosing the correct underlying cause increased from 49.6% (95% CI: 44.8% to 54.5%) in the routine work-up to 56.0% (95% CI: 51.2% to 60.8%) in the extensive work-up (i.e. +6.4% [95% CI: -0.6% to 13.1%]). Costs are expected to increase slightly from €842/patient (95% CI: €704 to €994) to €845/patient (95% CI: €711 to €994), i.e. +€3/patient (95% CI: €-35 to €40) in the extensive work-up, indicating incremental costs of €43 per additional patient correctly diagnosed. Conclusions The extensive laboratory work-up is more effective for diagnosing the underlying cause of anaemia by general practitioners, at a minimal increase in costs. As accompanying benefits in terms of quality of life and reduced productivity losses could not be captured in this analysis, the extensive work-up is likely cost-effective.
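For readers unfamiliar with the outcome measure, the snippet below shows how "incremental cost per additional patient correctly diagnosed" is formed from the quantities quoted above; the rounded point estimates give a value near, but not identical to, the reported €43, which presumably reflects the unrounded model outputs.

```python
# Worked definition of the incremental cost-effectiveness ratio (ICER) using
# the rounded point estimates quoted in the abstract (illustrative only).
delta_cost = 845.0 - 842.0         # incremental cost per patient, in euros
delta_correct = 0.560 - 0.496      # incremental probability of a correct diagnosis
print(delta_cost / delta_correct)  # roughly 47 euros per additional correct diagnosis
```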
Imprint of thawing scalar fields on the large scale galaxy overdensity
NASA Astrophysics Data System (ADS)
Dinda, Bikash R.; Sen, Anjan A.
2018-04-01
We investigate the observed galaxy power spectrum for the thawing class of scalar field models, taking into account various general relativistic corrections that occur on very large scales. We consider the full general relativistic perturbation equations for the matter as well as the dark energy fluid. We form a single autonomous system of equations containing both the background and the perturbed equations of motion, which we subsequently solve for different scalar field potentials. First we study the percentage deviation from the ΛCDM model for different cosmological parameters as well as in the observed galaxy power spectra on different scales in scalar field models for various choices of scalar field potentials. Interestingly, the difference in background expansion results in an enhancement of power over ΛCDM on small scales, whereas the inclusion of general relativistic (GR) corrections results in a suppression of power relative to ΛCDM on large scales. This can be useful to distinguish scalar field models from ΛCDM with future optical/radio surveys. We also compare the observed galaxy power spectra for tracking and thawing types of scalar field using some particular choices for the scalar field potentials. We show that thawing and tracking models can have large differences in observed galaxy power spectra on large scales and for smaller redshifts due to different GR effects. But on smaller scales and for larger redshifts, the difference is small and is mainly due to the difference in background expansion.
NASA Astrophysics Data System (ADS)
Shypailo, R. J.; Ellis, K. J.
2011-05-01
During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.
Lühr, B; Scheller, J; Meyer, P; Kramer, W
1998-02-01
We have analysed the correction of defined mismatches in wild-type and msh2, msh3, msh6 and msh3 msh6 mutants of Saccharomyces cerevisiae in two different yeast strain backgrounds by transformation with plasmid heteroduplex DNA constructs. Ten different base/base mismatches, two single-nucleotide loops and a 38-nucleotide loop were tested. Repair of all types of mismatches was severely impaired in msh2 and msh3 msh6 mutants. In msh6 mutants, repair efficiency of most base/base mismatches was reduced to a similar extent as in msh3 msh6 double mutants. G/T and A/C mismatches, however, displayed residual repair in msh6 mutants in one strain background, implying a role for Msh3p in recognition of base/base mismatches. Furthermore, the efficiency of repair of base/base mismatches was considerably reduced in msh3 mutants in one strain background, indicating a requirement for MSH3 for fully efficient mismatch correction. Also the efficiency of repair of the 38-nucleotide loop was reduced in msh3 mutants, and to a lesser extent in msh6 mutants. The single-nucleotide loop with an unpaired A was less efficiently repaired in msh3 mutants and that with an unpaired T was less efficiently corrected in msh6 mutants, indicating non-redundant functions for the two proteins in the recognition of single-nucleotide loops.
Inverse Compton Scattering in Mildly Relativistic Plasma
NASA Technical Reports Server (NTRS)
Molnar, S. M.; Birkinshaw, M.
1998-01-01
We investigated the effect of inverse Compton scattering in mildly relativistic static and moving plasmas with low optical depth using Monte Carlo simulations, and calculated the Sunyaev-Zel'dovich effect in the cosmic background radiation. Our semi-analytic method is based on a separation of photon diffusion in frequency and real space. We use Monte Carlo simulation to derive the intensity and frequency of the scattered photons for a monochromatic incoming radiation. The outgoing spectrum is determined by integrating over the spectrum of the incoming radiation using the intensity to determine the correct weight. This method makes it possible to study the emerging radiation as a function of frequency and direction. As a first application we have studied the effects of finite optical depth and gas infall on the Sunyaev-Zel'dovich effect (not possible with the extended Kompaneets equation) and discuss the parameter range in which the Boltzmann equation and its expansions can be used. For high temperature clusters (k_B T_e ≳ 15 keV), relativistic corrections based on a fifth order expansion of the extended Kompaneets equation seriously underestimate the Sunyaev-Zel'dovich effect at high frequencies. The contribution from plasma infall is less important for reasonable velocities. We give a convenient analytical expression for the dependence of the cross-over frequency on temperature, optical depth, and gas infall speed. Optical depth effects are often more important than relativistic corrections, and should be taken into account for high-precision work, but are smaller than the typical kinematic effect from cluster radial velocities.
Finite temperature corrections to tachyon mass in intersecting D-branes
NASA Astrophysics Data System (ADS)
Sethi, Varun; Chowdhury, Sudipto Paul; Sarkar, Swarnendu
2017-04-01
We continue with the analysis of finite temperature corrections to the Tachyon mass in intersecting branes which was initiated in [1]. In this paper we extend the computation to the case of intersecting D3 branes by considering a setup of two intersecting branes in flat-space background. A holographic model dual to BCS superconductor consisting of intersecting D8 branes in D4 brane background was proposed in [2]. The background considered here is a simplified configuration of this dual model. We compute the one-loop Tachyon amplitude in the Yang-Mills approximation and show that the result is finite. Analyzing the amplitudes further we numerically compute the transition temperature at which the Tachyon becomes massless. The analytic expressions for the one-loop amplitudes obtained here reduce to those for intersecting D1 branes obtained in [1] as well as those for intersecting D2 branes.
A Voice Enabled Procedure Browser for the International Space Station
NASA Technical Reports Server (NTRS)
Rayner, Manny; Chatzichrisafis, Nikos; Hockey, Beth Ann; Farrell, Kim; Renders, Jean-Michel
2005-01-01
Clarissa, an experimental voice enabled procedure browser that has recently been deployed on the International Space Station (ISS), is to the best of our knowledge the first spoken dialog system in space. This paper gives background on the system and the ISS procedures, then discusses the research developed to address three key problems: grammar-based speech recognition using the Regulus toolkit; SVM based methods for open microphone speech recognition; and robust side-effect free dialogue management for handling undos, corrections and confirmations.
Daytime adaptive optics for deep space optical communications
NASA Technical Reports Server (NTRS)
Wilson, Keith; Troy, M.; Srinivasan, M.; Platt, B.; Vilnrotter, V.; Wright, M.; Garkanian, V.; Hemmati, H.
2003-01-01
The deep space optical communications subsystem offers a higher bandwidth communications link in a smaller, lower-mass, lower-power package than does RF. To demonstrate the benefit of this technology to deep space communications, NASA plans to launch an optical telecommunications package on the 2009 Mars Telecommunications Orbiter spacecraft. Current performance goals are 30 Mbps from opposition, and 1 Mbps near conjunction (~3 degrees Sun-Earth-Probe angle). Yet, near conjunction the background noise from the day sky will degrade the performance of the optical link. Spectral and spatial filtering and higher modulation formats can mitigate the effects of background sky. Narrowband spectral filters can result in loss of link margin, and higher modulation formats require higher transmitted peak powers. In contrast, spatial filtering at the receiver has the potential of being lossless while providing the required sky background rejection. Adaptive optics techniques can correct wavefront aberrations caused by atmospheric turbulence and enable near-diffraction-limited performance of the receiving telescope. Such performance facilitates spatial filtering, and allows the receiver field-of-view, and hence the noise from the sky background, to be reduced.
Wang, Zhu-lou; Zhang, Wan-jie; Li, Chen-xi; Chen, Wen-liang; Xu, Ke-xin
2015-02-01
There are several challenges in near-infrared non-invasive blood glucose measurement, such as the low signal-to-noise ratio of the instrument, unstable measurement conditions, and the unpredictable and irregular changes of the measured object. It is therefore difficult to extract information on blood glucose concentration accurately from such complicated signals. A reference measurement is usually considered as a way to eliminate the effect of background changes, but there is no reference substance that changes synchronously with the analyte. After many years of research, our research group has proposed the floating reference method, which has succeeded in eliminating the spectral effects induced by instrument drift and by variations in the measured object's background. Our studies indicate, however, that the reference point changes with measurement location and wavelength, so the effects of the floating reference method should be verified comprehensively. In this paper, keeping things simple, a Monte Carlo simulation employing Intralipid solutions with concentrations of 5% and 10% is performed to verify the effectiveness of the floating reference method in eliminating the consequences of light source drift. The light source drift is introduced by varying the incident photon number. The effectiveness of the floating reference method, with corresponding reference points at different wavelengths, in eliminating the variations caused by light source drift is estimated. The comparison of the prediction abilities of the calibration models with and without this method shows that the RMSEPs are decreased by about 98.57% (5% Intralipid) and 99.36% (10% Intralipid). The results indicate that the floating reference method has an obvious effect in eliminating background changes.
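A heavily simplified sketch of using a reference wavelength to remove common-mode drift is given below: the signal change observed at the (glucose-insensitive) reference point is attributed to source or background drift and subtracted across the spectrum. This is my own illustration of the general idea, not the floating reference processing used in the study.

```python
# Simplified illustration (not the study's algorithm): subtract the signal
# change seen at a glucose-insensitive reference wavelength from the whole
# spectrum, treating it as common-mode drift.
import numpy as np

def drift_corrected(delta_spectrum, wavelengths, reference_wavelength):
    ref_index = int(np.argmin(np.abs(np.asarray(wavelengths) - reference_wavelength)))
    delta_spectrum = np.asarray(delta_spectrum, float)
    return delta_spectrum - delta_spectrum[..., ref_index, None]
```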
The Industrial Energy Consumers of America (IECA) joins the U.S. Chamber of Commerce in its request for correction of information developed by the Environmental Protection Agency (EPA) in a background technical support document titled Greenhouse Gas Emissions Reporting from the Petroleum and Natural Gas Industry
Practitioner Review: Use of Antiepileptic Drugs in Children
ERIC Educational Resources Information Center
Guerrini, Renzo; Parmeggiani, Lucio
2006-01-01
Background: The aim in treating epilepsy is to minimise or control seizures with full respect of quality-of-life issues, especially of cognitive functions. Optimal treatment first demands a correct recognition of the major type of seizures, followed by a correct diagnosis of the type of epilepsy or of the specific syndrome. Methods: Review of data…
75 FR 44901 - Extended Carryback of Losses to or From a Consolidated Group; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-30
.... 7805 * * * 0 Par. 2. Section 1.1502-21T(b)(3)(v) is amended by revising paragraphs (B), (C)(1), (C)(2...: Grid Glyer, (202) 622-7930 (not a toll-free number). SUPPLEMENTARY INFORMATION: Background The final... in 26 CFR Part 1 Income taxes, Reporting and recordkeeping requirements. Correction of Publication 0...
ERIC Educational Resources Information Center
McCray, Erica D.; Ribuffo, Cecelia; Lane, Holly; Murphy, Kristin M.; Gagnon, Joseph C.; Houchins, David E.; Lambert, Richard G.
2018-01-01
Background: The well-documented statistics regarding the academic struggles of incarcerated youth are disconcerting, and efforts to improve reading performance among this population are greatly needed. There is a dearth of research that provides rich and detailed accounts of reading intervention implementation in the juvenile corrections setting.…
Lindhardt, T B; Hesse, B; Gadsbøll, N
1997-01-01
The purpose of this study was to determine the accuracy of determinations of left ventricular ejection fraction (LVEF) by a nonimaging miniature nuclear detector system (Cardioscint) and to evaluate the feasibility of long-term LVEF monitoring in patients admitted to the coronary care unit, with special reference to the blood-labeling technique. Cardioscint LVEF values were compared with measurements of LVEF by conventional gamma camera radionuclide ventriculography in 33 patients with a wide range of LVEF values. In 21 of the 33 patients, long-term monitoring was carried out for 1 to 4 hours (mean 186 minutes), with three different kits: one for in vivo and two for in vitro red blood cell labeling. The stability of the labeling was assessed by determination of the activity of blood samples taken during the first 24 hours after blood labeling. The agreement between Cardioscint LVEF and gamma camera LVEF was good with automatic background correction (r = 0.82; regression equation y = 1.04x + 3.88) but poor with manual background correction (r = 0.50; y = 0.88x - 0.55). The agreement was highest in patients without wall motion abnormalities. The long-term monitoring showed no difference between morning and afternoon Cardioscint LVEF values. Short-lasting fluctuations in LVEFs greater than 10 EF units were observed in the majority of the patients. After 24 hours, the mean reduction in the physical decay-corrected count rate of the blood samples was most pronounced for the two in vitro blood-labeling kits (57% +/- 9% and 41% +/- 3%) and less for the in vivo blood-labeling kit (32% +/- 26%). This "biologic decay" had a marked influence on the Cardioscint monitoring results, demanding frequent background correction. A fairly accurate estimate of LVEF can be obtained with the nonimaging Cardioscint system, and continuous bedside LVEF monitoring can proceed for hours with little inconvenience to the patients. Instability of the red blood cell labeling during long-term monitoring necessitates frequent background correction.
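The role of the background term can be seen in the standard background-corrected ejection-fraction formula used in radionuclide ventriculography, sketched below; this is the textbook relation, not code from the Cardioscint system, and the counts are made-up numbers.

```python
# Standard background-corrected ejection fraction from end-diastolic (ED) and
# end-systolic (ES) counts; illustrative values only.
def lvef(ed_counts, es_counts, background_counts):
    return (ed_counts - es_counts) / (ed_counts - background_counts)

print(lvef(12000, 7000, 3000))  # about 0.56, i.e. an LVEF of roughly 56%
```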
NASA Astrophysics Data System (ADS)
Jentzen, Walter
2010-04-01
The use of recovery coefficients (RCs) in 124I PET lesion imaging is a simple method to correct the imaged activity concentration (AC) primarily for the partial-volume effect and, to a minor extent, for the prompt gamma coincidence effect. The aim of this phantom study was to experimentally investigate a number of various factors affecting the 124I RCs. Three RC-based correction approaches were considered. These approaches differ with respect to the volume of interest (VOI) drawn, which determines the imaged AC and the RCs: a single voxel VOI containing the maximum value (maximum RC), a spherical VOI with a diameter of the scanner resolution (resolution RC) and a VOI equaling the physical object volume (isovolume RC). Measurements were performed using mainly a stand-alone PET scanner (EXACT HR+) and a latest-generation PET/CT scanner (BIOGRAPH mCT). The RCs were determined using a cylindrical phantom containing spheres or rotational ellipsoids and were derived from images acquired with a reference acquisition protocol. For each type of RC, the influence of the following factors on the RC was assessed: object shape, background activity spill in and iterative image reconstruction parameters. To evaluate the robustness of the RC-based correction approaches, the percentage deviation between RC-corrected and true ACs was determined from images acquired with a clinical acquisition protocol of different AC regimes. The observed results of the shape and spill-in effects were compared with simulation data derived from a convolution-based model. The study demonstrated that the shape effect was negligible and, therefore, was in agreement with theoretical expectations. In contradiction to the simulation results, the observed spill-in effect was unexpectedly small. To avoid variations in the determination of RCs due to reconstruction parameter changes, image reconstruction with a pixel length of about one-third or less of the scanner resolution and an OSEM 1 × 32 algorithm or one with somewhat higher number of effective iterations are recommended. Using the clinical acquisition protocol, the phantom study indicated that the resolution- or isovolume-based recovery-correction approaches appeared to be more appropriate to recover the ACs from patient data; however, the application of the three RC-based correction approaches to small lesions containing low ACs was, in particular, associated with large underestimations. The phantom study had several limitations, which were discussed in detail.
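As a sketch of how an RC-based correction is typically applied, the snippet below divides the imaged activity concentration by a recovery coefficient interpolated from a phantom-derived table; the table values are invented placeholders, not the RCs measured in this study.

```python
# Illustrative recovery-coefficient correction: AC_true is approximated as
# AC_imaged / RC(volume). The volume/RC table is a made-up placeholder.
import numpy as np

phantom_volumes_ml = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
isovolume_rc       = np.array([0.25, 0.40, 0.55, 0.72, 0.83, 0.90])

def corrected_activity_concentration(imaged_ac, lesion_volume_ml):
    rc = np.interp(lesion_volume_ml, phantom_volumes_ml, isovolume_rc)
    return imaged_ac / rc

print(corrected_activity_concentration(50.0, 2.0))  # toy numbers
```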
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pourmoghaddas, Amir, E-mail: apour@ottawaheart.ca; Wells, R. Glenn
Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors, which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail (an increased number of unscattered photons detected with reduced energy). Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to 99mTc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 99mTc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical 99mTc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic "lesion" was placed inside of the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.). A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE Healthcare), followed by a CT scan for attenuation correction (AC). For each experiment, separate images were created including reconstruction with no corrections (NC), with AC, with attenuation and dual-energy window (DEW) scatter correction (ACSC), with attenuation and partial volume correction (PVC) applied (ACPVC), and with attenuation, scatter, and PVC applied (ACSCPVC). The DEW SC method used was modified to account for the presence of the low-energy tail. Results: T-tests showed that the mean error in absolute activity measurement was reduced significantly for AC and ACSC compared to NC for both (hot and cold) datasets (p < 0.001) and that ACSC, ACPVC, and ACSCPVC show significant reductions in mean differences compared to AC (p ≤ 0.001) without increasing the uncertainty (p > 0.4). The effect of SC and PVC was significant in reducing errors over AC in both datasets (p < 0.001 and p < 0.01, respectively), resulting in a mean error of 5% ± 4%. Conclusions: Quantitative measurements of cardiac 99mTc activity are achievable using attenuation and scatter corrections with the authors' dedicated cardiac SPECT camera. Partial volume corrections offer improvements in measurement accuracy in AC images and ACSC images with elevated background activity; however, these improvements are not significant in ACSC images with low background activity.
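For context, a conventional dual-energy-window scatter estimate has the simple form sketched below (primary counts approximated as photopeak counts minus k times the scatter-window counts, with k often taken near 0.5 for 99mTc); the CZT-specific modification used in the study to handle the low-energy tail is not reproduced here.

```python
# Conventional DEW scatter estimate (illustrative, not the modified CZT version):
# primary counts are approximated as photopeak counts minus k * scatter-window counts.
def dew_primary(photopeak_counts, scatter_window_counts, k=0.5):
    return max(photopeak_counts - k * scatter_window_counts, 0.0)

print(dew_primary(10000.0, 3000.0))  # -> 8500.0 estimated primary counts
```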
Hua, Håkan; Emilsson, Magnus; Kähäri, Kim; Widén, Stephen; Möller, Claes; Lyxell, Björn
2014-10-01
Health care professionals frequently meet employees with hearing impairment (HI) who experience difficulties at work. There are indications that the majority of these difficulties might be related to the presence of background noise. Moreover, research has also shown that high-level noise has a more detrimental effect on cognitive performance and self-rated disturbance in individuals with HI than low-level noise. The purpose of this study was to examine the impact of different types of background noise on cognitive performance and perceived disturbance (PD) in employees with aided HI and normal hearing. A mixed factorial design was conducted to examine the effect of noise in four experimental conditions. A total of 40 participants (21 men and 19 women) were recruited to take part in the study. The study sample consisted of employees with HI (n = 20) and normal hearing (n = 20). The group with HI had a mild-moderate sensorineural HI, and they were all frequent hearing-aid users. The current study was conducted by using four general work-related tasks (mental arithmetic, orthographic decoding, phonological decoding, and serial recall) in four different background conditions: (1) quiet, (2) office noise at 56 dBA, (3) daycare noise at 73.5 dBA, and (4) traffic noise at 72.5 dBA. Reaction time and the proportion of correct answers in the working tasks were used as outcome measures of cognitive performance. The Borg CR-10 scale was used to assess PD. Data collection occurred on two separate sessions, completed within 4 wk of each other. All tasks and experimental conditions were used in a counterbalanced order. Two-way analysis of variance with repeated measures was performed to analyze the results. To examine interaction effects, pairwise t-tests were used. Pearson correlation coefficients between reaction time and proportion of correct answers, and cognitive performance and PD were also calculated to examine the possible correlation between the different variables. No significant between-group or within-group differences in cognitive performance were observed across the four background conditions. Ratings of PD showed that both groups rated PD according to noise level, where higher noise level generated a higher PD. The present findings also demonstrated that the group with HI was more disturbed by higher than lower levels of noise (i.e., traffic and daycare setting compared with office setting). This pattern was observed consistently throughout four working tasks where the group with HI reported a significantly greater PD in the daycare and traffic settings compared with office noise. The present results demonstrate that background noise does not impair cognitive performance in nonauditory tasks in employees with HI and normal hearing, but that PD is affected to a greater extent in employees with HI during higher levels of background noise exposure. In addition, this study also supports previous studies regarding the detrimental effects that high-level noise has on employees with HI. Therefore, we emphasize the need of both self-rated and cognitive measurements in hearing care and occupational health services for both employees with normal hearing and HI. American Academy of Audiology.
2010-01-01
Background The effectiveness of malaria chemoprophylaxis is limited by the lack of compliance, whose determinants are not well known. Methods Compliance with malaria chemoprophylaxis was estimated and analysed using validated questionnaires administered before and after short-term missions (about four months) in five tropical African countries to 2,093 French soldiers from 19 military companies involved in a prospective cohort study. "Correct compliance" was defined as "no missed doses" of daily drug intake during the entire mission and was analysed using a multiple mixed-effect logistic regression model. Results The averaged prevalence rate of correct compliance was 46.2%, ranging from 9.6% to 76.6% according to the companies. Incorrect compliance was significantly associated with eveningness (p = 0.028), a medical history of clinical malaria (p < 0.001) and a perceived mosquito attractiveness inferior or superior to the others (p < 0.007). Correct compliance was significantly associated with the systematic use of protective measures against mosquito bites (p < 0.001), the type of military operations (combat vs. training activities, p < 0.001) and other individual factors (p < 0.05). Conclusions The identification of circumstances and profiles of persons at higher risk of lack of compliance would pave the way to specifically targeted strategies aimed to improve compliance with malaria chemoprophylaxis and, therefore, its effectiveness. PMID:20128921
Consistency relations in effective field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munshi, Dipak; Regan, Donough, E-mail: D.Munshi@sussex.ac.uk, E-mail: D.Regan@sussex.ac.uk
The consistency relations in large scale structure relate the lower-order correlation functions with their higher-order counterparts. They are a direct outcome of the underlying symmetries of a dynamical system and can be tested using data from future surveys such as Euclid. Using techniques from standard perturbation theory (SPT), previous studies of consistency relations have concentrated on the continuity-momentum (Euler)-Poisson system of an ideal fluid. We investigate the consistency relations in effective field theory (EFT), which adjusts the SPT predictions to account for the departure from the ideal fluid description on small scales. We provide detailed results for the 3D density contrast δ as well as the scaled divergence of velocity θ̄. Assuming a ΛCDM background cosmology, we find the correction to SPT results becomes important at k ≳ 0.05 h/Mpc and that the suppression from EFT to SPT results, which scales as the square of the wave number k, can reach 40% of the total at k ≈ 0.25 h/Mpc at z = 0. We have also investigated whether effective field theory corrections to models of primordial non-Gaussianity can alter the squeezed limit behaviour, finding the results to be rather insensitive to these counterterms. In addition, we present the EFT corrections to the squeezed limit of the bispectrum in redshift space, which may be of interest for tests of theories of modified gravity.
Occupations at Case Closure for Vocational Rehabilitation Applicants with Criminal Backgrounds
ERIC Educational Resources Information Center
Whitfield, Harold Wayne
2009-01-01
The purpose of this study was to identify industries that hire persons with disabilities and criminal backgrounds. The researcher obtained data on 1,355 applicants for vocational rehabilitation services who were living in adult correctional facilities at the time of application. Service-based industries hired the most ex-inmates with disabilities…
Graviton propagator from background-independent quantum gravity.
Rovelli, Carlo
2006-10-13
We study the graviton propagator in Euclidean loop quantum gravity. We use spin foam, boundary-amplitude, and group-field-theory techniques. We compute a component of the propagator to first order, under some approximations, obtaining the correct large-distance behavior. This indicates a way for deriving conventional spacetime quantities from a background-independent theory.
A two-dimensional ACAR study of untwinned YBa2Cu3O(7-x)
NASA Astrophysics Data System (ADS)
Smedskjaer, L. C.; Bansil, A.
1991-12-01
We have carried out 2D-ACAR measurements on an untwinned single crystal of YBa2Cu3O7−x as a function of temperature, for five temperatures ranging from 30 K to 300 K. We show that these temperature-dependent 2D-ACAR spectra can be described to a good approximation as a superposition of two temperature-independent spectra with temperature-dependent weighting factors. We show further how the data can be used to correct for the 'background' in the experimental spectrum. Such a 'background corrected' spectrum is in remarkable accord with the corresponding band theory predictions, and displays, in particular, clear signatures of the electron ridge Fermi surface.
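A minimal sketch of the two-component description follows: each measured spectrum is modelled as a weighted sum of two temperature-independent component spectra, and the temperature-dependent weights come out of a linear least-squares fit. The choice of components and the fitting details are assumptions made for illustration only.

```python
# Sketch: fit S_T as w1*A + w2*B for temperature-independent components A, B.
import numpy as np

def component_weights(spectrum, comp_a, comp_b):
    design = np.column_stack([np.ravel(comp_a), np.ravel(comp_b)])
    weights, *_ = np.linalg.lstsq(design, np.ravel(spectrum), rcond=None)
    return weights  # (w1, w2)

# Toy usage with invented 1-D "spectra".
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([4.0, 3.0, 2.0, 1.0])
print(component_weights(0.7 * a + 0.3 * b, a, b))  # close to [0.7, 0.3]
```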
Revised radiometric calibration technique for LANDSAT-4 Thematic Mapper data
NASA Technical Reports Server (NTRS)
Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.
1984-01-01
Depending on detector number, there are random fluctuations in the background level for spectral band 1 of magnitudes ranging from 2 to 3.5 digital numbers (DN). Similar variability is observed in all the other reflective bands, but with smaller magnitude in the range 0.5 to 2.5 DN. Observations of background reference levels show that line dependent variations in raw TM image data and in the associated calibration data can be measured and corrected within an operational environment by applying simple offset corrections on a line-by-line basis. The radiometric calibration procedure defined by the Canadian Center for Remote Sensing was revised accordingly in order to prevent striping in the output product.
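The line-by-line offset correction described above can be pictured with the sketch below, which estimates each scan line's background level from dark reference samples and subtracts it; the use of a dark-pixel region and the array layout are assumptions of the illustration, not details of the CCRS procedure.

```python
# Illustrative line-by-line offset correction: subtract each line's mean
# background level, estimated here from an assumed dark reference region.
import numpy as np

def correct_line_offsets(image_dn, dark_columns=slice(0, 16)):
    """image_dn: 2-D array (lines x samples) of raw digital numbers."""
    image_dn = np.asarray(image_dn, dtype=float)
    line_offsets = image_dn[:, dark_columns].mean(axis=1, keepdims=True)
    return image_dn - line_offsets
```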
Image Processing of Porous Silicon Microarray in Refractive Index Change Detection.
Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi
2017-06-08
A new method for extracting dots from the reflected-light image of a porous silicon (PSi) microarray is proposed in this paper. The method consists of three parts: pretreatment, tilt correction and spot segmentation. First, based on the characteristics of different components in HSV (Hue, Saturation, Value) space, a special pretreatment is proposed for the reflected light image to obtain the contour edges of the array cells in the image. Second, through the geometric relationship of the target object between the initial external rectangle and the minimum bounding rectangle (MBR), a new tilt correction algorithm based on the MBR is proposed to adjust the image. Third, based on the specific requirements of reflected light image segmentation, the array cells in the corrected image are segmented into dots that are as large as possible, with equal spacing between dots. Experimental results show that the pretreatment part of this method can effectively avoid the influence of complex background and complete the binarization processing of the image. The tilt correction algorithm has a shorter computation time, which makes it highly suitable for tilt correction of reflected light images. The segmentation algorithm arranges the dots in a regular pattern and excludes the edges and the bright spots. This method could be utilized for fast, accurate and automatic dot extraction from PSi microarray reflected light images.
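To illustrate the tilt-correction step, the sketch below estimates a skew angle from a minimum bounding rectangle using OpenCV and rotates the image accordingly; the paper derives the angle from its own MBR construction, so this is only a generic stand-in (and note that OpenCV's angle convention varies between versions).

```python
# Generic MBR-based deskew sketch (not the paper's algorithm): estimate the
# rotation angle from cv2.minAreaRect and rotate the image back.
import cv2
import numpy as np

def deskew(binary_image):
    """binary_image: uint8 array with foreground pixels > 0."""
    points = cv2.findNonZero(binary_image)
    (cx, cy), (w, h), angle = cv2.minAreaRect(points)
    if w < h:                 # normalise the angle convention (version-dependent)
        angle -= 90.0
    rows, cols = binary_image.shape
    rotation = cv2.getRotationMatrix2D((cols / 2.0, rows / 2.0), angle, 1.0)
    return cv2.warpAffine(binary_image, rotation, (cols, rows))
```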
Enomoto, Yukiko; Yamauchi, Keita; Asano, Takahiko; Otani, Katharina; Iwama, Toru
2018-01-01
Background and purpose C-arm cone-beam computed tomography (CBCT) has the drawback that image quality is degraded by artifacts caused by implanted metal objects. We evaluated whether metal artifact reduction (MAR) prototype software can improve the subjective image quality of CBCT images of patients with intracranial aneurysms treated with coils or clips. Materials and methods Forty-four patients with intracranial aneurysms implanted with coils (40 patients) or clips (four patients) underwent one CBCT scan, from which uncorrected and MAR-corrected CBCT image datasets were reconstructed. Three blinded readers evaluated the image quality of the image sets using a four-point scale (1: Excellent, 2: Good, 3: Poor, 4: Bad). The median scores of the three readers for uncorrected and MAR-corrected images were compared with the paired Wilcoxon signed-rank test, and inter-reader agreement of change scores was assessed by weighted kappa statistics. The readers also recorded new clinical findings, such as intracranial hemorrhage, air, or surrounding anatomical structures, on MAR-corrected images. Results The image quality of MAR-corrected CBCT images was significantly improved compared with the uncorrected CBCT images (p < 0.001). Additional clinical findings were seen on CBCT images of 70.4% of patients after MAR correction. Conclusion MAR software improved the image quality of CBCT images degraded by metal artifacts.
Open-path FTIR data reduction algorithm with atmospheric absorption corrections: the NONLIN code
NASA Astrophysics Data System (ADS)
Phillips, William; Russwurm, George M.
1999-02-01
This paper describes the progress made to date in developing, testing, and refining a data reduction computer code, NONLIN, that alleviates many of the difficulties experienced in the analysis of open-path FTIR data. Among the problems that currently affect FTIR open-path data quality are: the inability to obtain a true I0, or background, spectrum; spectral interferences of atmospheric gases such as water vapor and carbon dioxide; and matching the spectral resolution and shift of the reference spectra to a particular field instrument. This algorithm is based on a non-linear fitting scheme and is therefore not constrained by many of the assumptions required for the application of linear methods such as classical least squares (CLS). As a result, a more realistic mathematical model of the spectral absorption measurement process can be employed in the curve fitting process. Applications of the algorithm have proven successful in circumventing open-path data reduction problems. However, recent studies, by one of the authors, of the temperature and pressure effects on atmospheric absorption indicate there exist temperature and water partial pressure effects that should be incorporated into the NONLIN algorithm for accurate quantification of gas concentrations. This paper investigates the sources of these phenomena. As a result of this study, a partial pressure correction has been employed in the NONLIN computer code. Two typical field spectra are examined to determine what effect the partial pressure correction has on gas quantification.
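As a toy illustration of the non-linear fitting idea (not the NONLIN code itself), the sketch below models a measured single-beam spectrum as a slowly varying baseline multiplied by Beer-Lambert absorption from a single reference band and fits the concentration with scipy's curve_fit; the reference band, baseline model, and noise level are all invented.

```python
# Toy non-linear fit of a concentration through a Beer-Lambert model with a
# slowly varying baseline; illustrative only, not the NONLIN algorithm.
import numpy as np
from scipy.optimize import curve_fit

wavenumbers = np.linspace(900.0, 1100.0, 400)
ref_band = np.exp(-0.5 * ((wavenumbers - 1000.0) / 8.0) ** 2)   # invented reference band

def model(nu, conc, b0, b1):
    baseline = b0 + b1 * (nu - nu.mean())                        # slowly varying background
    return baseline * np.exp(-conc * ref_band)

true_params = (0.8, 1.0, 1e-4)
measured = model(wavenumbers, *true_params)
measured = measured + np.random.default_rng(1).normal(0.0, 0.002, wavenumbers.size)

popt, _ = curve_fit(model, wavenumbers, measured, p0=(0.1, 0.9, 0.0))
print(popt)  # fitted concentration and baseline coefficients
```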
van der Linde, H J; Van Deuren, B; Teisman, A; Towart, R; Gallacher, D J
2008-01-01
Background and purpose: Body core temperature (Tc) changes affect the QT interval, but correction for this has not been systematically investigated. It may be important to correct QT intervals for drug-induced changes in Tc. Experimental approach: Anaesthetized beagle dogs were artificially cooled (34.2 °C) or warmed (42.1 °C). The relationship between corrected QT intervals (QTcV; QT interval corrected according to the Van de Water formula) and Tc was analysed. This relationship was also examined in conscious dogs where Tc was increased by exercise. Key results: When QTcV intervals were plotted against changes in Tc, linear correlations were observed in all individual dogs. The slopes did not significantly differ between cooling (−14.85 ± 2.08) or heating (−13.12 ± 3.46) protocols. We propose a correction formula to compensate for the influence of Tc changes and standardize the QTcV duration to 37.5 °C: QTcVcT (QTcV corrected for changes in core temperature) = QTcV − 14 × (37.5 − Tc). Furthermore, cooled dogs were re-warmed (from 34.2 to 40.0 °C) and marked QTcV shortening (−29%) was induced. After Tc correction using the above formula, this decrease was abolished. In these re-warmed dogs, we observed significant increases in T-wave amplitude and in serum [K+] levels. No arrhythmias or increase in pro-arrhythmic biomarkers were observed. In exercising dogs, the above formula completely compensated QTcV for the temperature increase. Conclusions and implications: This study shows the importance of correcting QTcV intervals for changes in Tc, to avoid misleading interpretations of apparent QTcV interval changes. We recommend that all ICH S7A conscious animal safety studies should routinely measure core body temperature and correct QTcV appropriately, if body temperature and heart rate changes are observed. PMID:18574451
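Written out as code, the proposed standardisation to 37.5 °C is simply the relation below; the example input values are illustrative.

```python
# QTcVcT = QTcV - 14 * (37.5 - Tc): standardise a QTcV interval (ms) to 37.5 degrees C.
def qtcv_temperature_corrected(qtcv_ms, tc_celsius):
    return qtcv_ms - 14.0 * (37.5 - tc_celsius)

print(qtcv_temperature_corrected(250.0, 34.2))  # e.g. 250 ms measured at 34.2 C -> 203.8 ms
```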
Riding on irrelevant operators
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Rham, Claudia; Ribeiro, Raquel H., E-mail: Claudia.deRham@case.edu, E-mail: RaquelHRibeiro@case.edu
2014-11-01
We investigate the stability of a class of derivative theories known as P(X) and Galileons against corrections generated by quantum effects. We use an exact renormalisation group approach to argue that these theories are stable under quantum corrections at all loops in regions where the kinetic term is large compared to the strong coupling scale. This is the regime of interest for screening or Vainshtein mechanisms, and in inflationary models that rely on large kinetic terms. Next, we clarify the role played by the symmetries. While symmetries protect the form of the quantum corrections, theories equipped with more symmetries do not necessarily have a broader range of scales for which they are valid. We show this by deriving explicitly the regime of validity of the classical solutions for P(X) theories including Dirac-Born-Infeld (DBI) models, both for generic and for specific background field configurations. Indeed, we find that despite the existence of an additional symmetry, the DBI effective field theory has a regime of validity similar to an arbitrary P(X) theory. We explore the implications of our results for both early and late universe contexts. Conversely, when applied to static and spherical screening mechanisms, we deduce that the regime of validity of typical power-law P(X) theories is much larger than that of DBI.
New window into stochastic gravitational wave background.
Rotti, Aditya; Souradeep, Tarun
2012-11-30
A stochastic gravitational wave background (SGWB) would gravitationally lens the cosmic microwave background (CMB) photons. We correct the results provided in existing literature for modifications to the CMB polarization power spectra due to lensing by gravitational waves. Weak lensing by gravitational waves distorts all four CMB power spectra; however, its effect is most striking in the mixing of power between the E mode and B mode of CMB polarization. This suggests the possibility of using measurements of the CMB angular power spectra to constrain the energy density (Ω(GW)) of the SGWB. Using current data sets (QUAD, WMAP, and ACT), we find that the most stringent constraints on the present Ω(GW) come from measurements of the angular power spectra of CMB temperature anisotropies. In the near future, more stringent bounds on Ω(GW) can be expected with improved upper limits on the B modes of CMB polarization. Any detection of B modes of CMB polarization above the expected signal from large scale structure lensing could be a signal for a SGWB.
McGee, Monnie; Chen, Zhongxue
2006-01-01
There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
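For orientation, the background-adjustment step described above is usually written as the closed-form conditional expectation below; this sketch assumes the parameter estimates (mu, sigma, alpha) are already available and is not necessarily the exact code examined by the authors.

```python
import numpy as np
from scipy.stats import norm

def rma_background_correct(pm, mu, sigma, alpha):
    """E[signal | observed PM] under the exponential (rate alpha) signal plus
    normal (mu, sigma) background convolution model used by RMA."""
    pm = np.asarray(pm, dtype=float)
    a = pm - mu - sigma ** 2 * alpha
    b = sigma
    num = norm.pdf(a / b) - norm.pdf((pm - a) / b)
    den = norm.cdf(a / b) + norm.cdf((pm - a) / b) - 1.0
    return a + b * num / den

# Hypothetical PM intensities and parameter estimates:
print(rma_background_correct([120.0, 300.0], mu=100.0, sigma=20.0, alpha=0.01))
```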
Effects of age, gender and educational background on strength of motivation for medical school.
Kusurkar, Rashmi; Kruitwagen, Cas; ten Cate, Olle; Croiset, Gerda
2010-08-01
The aim of this study was to determine the effects of selection, educational background, age and gender on strength of motivation to attend and pursue medical school. Graduate entry (GE) medical students (having Bachelor's degree in Life Sciences or related field) and Non-Graduate Entry (NGE) medical students (having only completed high school), were asked to fill out the Strength of Motivation for Medical School (SMMS) questionnaire at the start of medical school. The questionnaire measures the willingness of the medical students to pursue medical education even in the face of difficulty and sacrifice. GE students (59.64 ± 7.30) had higher strength of motivation as compared to NGE students (55.26 ± 8.33), so did females (57.05 ± 8.28) as compared to males (54.30 ± 8.08). 7.9% of the variance in the SMMS scores could be explained with the help of a linear regression model with age, gender and educational background/selection as predictor variables. Age was the single largest predictor. Maturity, taking developmental differences between sexes into account, was used as a predictor to correct for differences in the maturation of males and females. Still, the gender differences prevailed, though they were reduced. Pre-entrance educational background and selection also predicted the strength of motivation, but the effect of the two was confounded. Strength of motivation appears to be a dynamic entity, changing primarily with age and maturity and to a small extent with gender and experience.
Zγ production at NNLO including anomalous couplings
NASA Astrophysics Data System (ADS)
Campbell, John M.; Neumann, Tobias; Williams, Ciaran
2017-11-01
In this paper we present a next-to-next-to-leading order (NNLO) QCD calculation of the processes pp → l⁺l⁻γ and pp → νν̄γ that we have implemented in MCFM. Our calculation includes QCD corrections at NNLO both for the Standard Model (SM) and additionally in the presence of Zγγ and ZZγ anomalous couplings. We compare our implementation, obtained using the jettiness slicing approach, with a previous SM calculation and find broad agreement. Focusing on the sensitivity of our results to the slicing parameter, we show that using our setup we are able to compute NNLO cross sections with numerical uncertainties of about 0.1%, which is small compared to residual scale uncertainties of a few percent. We study potential improvements using two different jettiness definitions and the inclusion of power corrections. At √s = 13 TeV we present phenomenological results and consider Zγ as a background to H → Zγ production. We find that, with typical cuts, the inclusion of NNLO corrections represents a small effect and loosens the extracted limits on anomalous couplings by about 10%.
Efficient genomic correction methods in human iPS cells using CRISPR-Cas9 system.
Li, Hongmei Lisa; Gee, Peter; Ishida, Kentaro; Hotta, Akitsu
2016-05-15
Precise gene correction using the CRISPR-Cas9 system in human iPS cells holds great promise for various applications, such as the study of gene functions, disease modeling, and gene therapy. In this review article, we summarize methods for effective editing of genomic sequences of iPS cells based on our experiences correcting dystrophin gene mutations with the CRISPR-Cas9 system. Designing specific sgRNAs as well as having efficient transfection methods and proper detection assays to assess genomic cleavage activities are critical for successful genome editing in iPS cells. In addition, because iPS cells are fragile by nature when dissociated into single cells, a step-by-step confirmation during the cell recovery process is recommended to obtain an adequate number of genome-edited iPS cell clones. We hope that the techniques described here will be useful for researchers from diverse backgrounds who would like to perform genome editing in iPS cells. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Virtual k-Space Modulation Optical Microscopy
NASA Astrophysics Data System (ADS)
Kuang, Cuifang; Ma, Ye; Zhou, Renjie; Zheng, Guoan; Fang, Yue; Xu, Yingke; Liu, Xu; So, Peter T. C.
2016-07-01
We report a novel superresolution microscopy approach for imaging fluorescence samples. The reported approach, termed virtual k-space modulation optical microscopy (VIKMOM), is able to improve the lateral resolution by a factor of 2, reduce the background level, improve the optical sectioning effect and correct for unknown optical aberrations. In the acquisition process of VIKMOM, we used a scanning confocal microscope setup with a 2D detector array to capture sample information at each scanned x-y position. In the recovery process of VIKMOM, we first modulated the captured data by virtual k-space coding and then employed a ptychography-inspired procedure to recover the sample information and correct for unknown optical aberrations. We demonstrated the performance of the reported approach by imaging fluorescent beads, fixed bovine pulmonary artery endothelial (BPAE) cells, and living human astrocytes (HA). As the VIKMOM approach is fully compatible with conventional confocal microscope setups, it may provide a turn-key solution for imaging biological samples with ~100 nm lateral resolution, in two or three dimensions, with improved optical sectioning capabilities and aberration correction.
Correcting geometric and photometric distortion of document images on a smartphone
NASA Astrophysics Data System (ADS)
Simon, Christian; Williem; Park, In Kyu
2015-01-01
A set of document image processing algorithms for improving the optical character recognition (OCR) capability of smartphone applications is presented. The scope of the problem covers the geometric and photometric distortion correction of document images. The proposed framework was developed to satisfy industrial requirements. It is implemented on an off-the-shelf smartphone with limited resources in terms of speed and memory. Geometric distortions, i.e., skew and perspective distortion, are corrected by sending horizontal and vertical vanishing points toward infinity in a downsampled image. Photometric distortion includes image degradation from moiré pattern noise and specular highlights. Moiré pattern noise is removed using low-pass filters with different sizes independently applied to the background and text region. The contrast of the text in a specular highlighted area is enhanced by locally enlarging the intensity difference between the background and text while the noise is suppressed. Intensive experiments indicate that the proposed methods show a consistent and robust performance on a smartphone with a runtime of less than 1 s.
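The rectification step can be pictured with the hedged sketch below: once the vanishing-point analysis has located the four page corners, a homography maps them to an upright rectangle. The OpenCV calls and all coordinates are illustrative assumptions, not the authors' on-device implementation.

```python
import cv2
import numpy as np

def rectify(image, src_corners, dst_corners, out_size=(500, 800)):
    """Warp a perspective-distorted page onto an upright rectangle."""
    H = cv2.getPerspectiveTransform(src_corners, dst_corners)
    return cv2.warpPerspective(image, H, out_size)

# Hypothetical corner coordinates (pixels) found from the vanishing points:
src = np.float32([[105, 80], [610, 95], [630, 870], [90, 850]])
dst = np.float32([[0, 0], [500, 0], [500, 800], [0, 800]])
# rectified = rectify(cv2.imread("document.jpg"), src, dst)  # hypothetical file name
```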
An efficient empirical Bayes method for genomewide association studies.
Wang, Q; Wei, J; Pan, Y; Xu, S
2016-08-01
Linear mixed model (LMM) is one of the most popular methods for genomewide association studies (GWAS). Numerous forms of LMM have been developed; however, there are two major issues in GWAS that have not been fully addressed before. The two issues are (i) the genomic background noise and (ii) low statistical power after Bonferroni correction. We proposed an empirical Bayes (EB) method by assigning each marker effect a normal prior distribution, resulting in shrinkage estimates of marker effects. We found that such a shrinkage approach can selectively shrink marker effects and reduce the noise level to zero for majority of non-associated markers. In the meantime, the EB method allows us to use an 'effective number of tests' to perform Bonferroni correction for multiple tests. Simulation studies for both human and pig data showed that EB method can significantly increase statistical power compared with the widely used exact GWAS methods, such as GEMMA and FaST-LMM-Select. Real data analyses in human breast cancer identified improved detection signals for markers previously known to be associated with breast cancer. We therefore believe that EB method is a valuable tool for identifying the genetic basis of complex traits. © 2015 Blackwell Verlag GmbH.
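As a rough illustration of the shrinkage idea (not the authors' exact EB estimator), assigning each marker effect a normal prior leads to estimates that are pulled toward zero by a factor set by the noise-to-prior variance ratio; the toy data below are hypothetical.

```python
import numpy as np

def shrinkage_marker_effects(X, y, sigma2_e, sigma2_b):
    """Posterior-mean (shrinkage) estimates of single-marker effects under a
    N(0, sigma2_b) prior, scanning markers one at a time."""
    y = y - y.mean()
    lam = sigma2_e / sigma2_b                     # shrinkage from the prior
    effects = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        x = X[:, j] - X[:, j].mean()
        effects[j] = x @ y / (x @ x + lam)        # shrunk towards zero
    return effects

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 1000)).astype(float)   # 0/1/2 genotypes
y = 0.8 * X[:, 10] + rng.normal(0.0, 1.0, 200)            # marker 10 is causal
print(np.argmax(np.abs(shrinkage_marker_effects(X, y, 1.0, 0.1))))  # expect 10
```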
Releasing effects in flame photometry: Determination of calcium
Dinnin, J.I.
1960-01-01
Strontium, lanthanum, neodymium, samarium, and yttrium completely release the flame emission of calcium from the depressive effects of sulfate, phosphate, and aluminate. Magnesium, beryllium, barium, and scandium release most of the calcium emission. These cations, when present in high concentration, preferentially form compounds with the depressing anions when the solution is evaporated rapidly in the flame. The mechanism of the interference and releasing effects is explained on the basis of the chemical equilibria in the evaporating droplets of solution and is shown to depend upon the nature of the compounds present in the aqueous phase of the solution. The need for background correction techniques is stressed. The releasing effect is used in the determination of calcium in silicate rocks without the need for separations.
QED loop effects in the spacetime background of a Schwarzschild black hole
NASA Astrophysics Data System (ADS)
Emelyanov, Viacheslav A.
2017-12-01
The black-hole evaporation implies that the quantum-field propagators in a local Minkowski frame acquire a correction, which gives rise to this process. The modification of the propagators causes, in turn, non-trivial local effects due to the radiative/loop diagrams in non-linear QFTs. In particular, there should be imprints of the evaporation in QED, if one goes beyond the tree-level approximation. Of special interest in this respect is the region near the black-hole horizon, which, already at tree level, appears to show highly non-classical features, e.g., negative energy density and energy flux into the black hole.
2009-01-01
Background Early developmental interventions to prevent the high rate of neurodevelopmental problems in very preterm children, including cognitive, motor and behavioral impairments, are urgently needed. These interventions should be multi-faceted and include modules for caregivers given their high rates of mental health problems. Methods/Design We have designed a randomized controlled trial to assess the effectiveness of a preventative care program delivered at home over the first 12 months of life for infants born very preterm (<30 weeks of gestational age) and their families, compared with standard medical follow-up. The aim of the program, delivered over nine sessions by a team comprising a physiotherapist and psychologist, is to improve infant development (cognitive, motor and language), behavioral regulation, caregiver-child interactions and caregiver mental health at 24 months' corrected age. The infants will be stratified by severity of brain white matter injury (assessed by magnetic resonance imaging) at term equivalent age, and then randomized. At 12 months' corrected age interim outcome measures will include motor development assessed using the Alberta Infant Motor Scale and the Neurological Sensory Motor Developmental Assessment. Caregivers will also complete a questionnaire at this time to obtain information on behavior, parenting, caregiver mental health, and social support. The primary outcomes are at 24 months' corrected age and include cognitive, motor and language development assessed with the Bayley Scales of Infant and Toddler Development (Bayley-III). Secondary outcomes at 24 months include caregiver-child interaction measured using an observational task, and infant behavior, parenting, caregiver mental health and social support measured via standardized parental questionnaires. Discussion This paper presents the background, study design and protocol for a randomized controlled trial in very preterm infants utilizing a preventative care program in the first year after discharge home designed to improve cognitive, motor and behavioral outcomes of very preterm children and caregiver mental health at two-years' corrected age. Clinical Trial Registration Number ACTRN12605000492651 PMID:19954550
NASA Astrophysics Data System (ADS)
Nara, H.; Tanimoto, H.; Tohjima, Y.; Mukai, H.; Nojiri, Y.; Katsumata, K.; Rella, C.
2012-07-01
We examined potential interferences from water vapor and atmospheric background gases (N2, O2, and Ar), and biases by isotopologues of target species, on accurate measurement of atmospheric CO2 and CH4 by means of wavelength-scanned cavity ring-down spectroscopy (WS-CRDS). Variations in the composition of the background gas substantially impacted the CO2 and CH4 measurements: the measured amounts of CO2 and CH4 decreased with increasing N2 mole fraction, but increased with increasing O2 and Ar, suggesting that the pressure-broadening effects (PBEs) increased as Ar < O2 < N2. Using these experimental results, we inferred PBEs for the measurement of synthetic standard gases. The PBEs were negligible (up to 0.05 ppm for CO2 and 0.01 ppb for CH4) for gas standards balanced with purified air, although the PBEs were substantial (up to 0.87 ppm for CO2 and 1.4 ppb for CH4) for standards balanced with synthetic air. For isotopic biases on CO2 measurements, we compared experimental results and theoretical calculations, which showed excellent agreement within their uncertainty. We derived empirical correction functions for water vapor for three WS-CRDS instruments (Picarro EnviroSense 3000i, G-1301, and G-2301). Although the transferability of the functions was not clear, no significant difference was found in the water vapor correction values among these instruments within the typical analytical precision at sufficiently low water concentrations (< 0.3%V for CO2 and < 0.4%V for CH4). For accurate measurements of CO2 and CH4 in ambient air, we concluded that WS-CRDS measurements should be performed under complete dehumidification of air samples, or moderate dehumidification followed by application of a water vapor correction function, along with calibration by natural air-based standard gases or purified air-balanced synthetic standard gases with isotopic correction.
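Empirical water vapor corrections for CRDS analyzers are commonly expressed as a quadratic dilution/broadening factor; the sketch below shows only that general form, with placeholder coefficients, and is not the specific functions derived in this study.

```python
def dry_mole_fraction(x_wet, h2o_percent, a, b):
    """Apply a quadratic empirical water vapor correction of the usual form
    x_dry = x_wet / (1 + a*H + b*H**2), with H the reported water vapor in %v.
    The coefficients a and b must be determined per instrument."""
    return x_wet / (1.0 + a * h2o_percent + b * h2o_percent ** 2)

# Placeholder coefficients and readings, for illustration only:
print(dry_mole_fraction(400.25, 1.2, a=-0.012, b=-2.7e-4))  # CO2 in ppm
```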
"Hook"-calibration of GeneChip-microarrays: theory and algorithm.
Binder, Hans; Preibisch, Stephan
2008-08-29
The improvement of microarray calibration methods is an essential prerequisite for quantitative expression analysis. This issue requires the formulation of an appropriate model describing the basic relationship between the probe intensity and the specific transcript concentration in a complex environment of competing interactions, the estimation of the magnitude of these effects and their correction using the intensity information of a given chip, and, finally, the development of practicable algorithms which judge the quality of a particular hybridization and estimate the expression degree from the intensity values. We present the so-called hook-calibration method, which co-processes the log-difference (delta) and log-sum (sigma) of the perfect match (PM) and mismatch (MM) probe intensities. The MM probes are utilized as an internal reference which is subjected to the same hybridization law as the PM, however with modified characteristics. After sequence-specific affinity correction, the method fits the Langmuir adsorption model to the smoothed delta-versus-sigma plot. The geometrical dimensions of this so-called hook curve characterize the particular hybridization in terms of simple geometric parameters which provide information about the mean non-specific background intensity, the saturation value, the mean PM/MM sensitivity gain and the fraction of absent probes. This graphical summary spans a metrics system for expression estimates in natural units such as the mean binding constants and the occupancy of the probe spots. The method is single-chip based, i.e. it separately uses the intensities of each selected chip. The hook method corrects the raw intensities for the non-specific background hybridization in a sequence-specific manner, for the potential saturation of the probe spots with bound transcripts and for the sequence-specific binding of specific transcripts. The obtained chip characteristics, in combination with the sensitivity-corrected probe-intensity values, provide expression estimates scaled in natural units which are given by the binding constants of the particular hybridization.
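For orientation, the hook coordinates themselves are simple to compute; this hedged sketch stops at the smoothed delta-versus-sigma curve and omits the sequence-specific affinity correction and the Langmuir-model fit that the method performs on top of it.

```python
import numpy as np

def hook_coordinates(pm, mm, window=101):
    """Delta/sigma coordinates of the hook plot:
    delta = log10(PM) - log10(MM), sigma = 0.5*(log10(PM) + log10(MM)),
    with delta smoothed as a function of sigma by a moving average."""
    pm, mm = np.asarray(pm, float), np.asarray(mm, float)
    delta = np.log10(pm) - np.log10(mm)
    sigma = 0.5 * (np.log10(pm) + np.log10(mm))
    order = np.argsort(sigma)
    kernel = np.ones(window) / window
    return sigma[order], np.convolve(delta[order], kernel, mode="same")
```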
NASA Technical Reports Server (NTRS)
Susko, M.
1979-01-01
The purpose of this experimental research was to compare Marshall Space Flight Center's electrets with Thiokol's fixed flow air samplers during the Space Shuttle Solid Rocket Booster Demonstration Model-3 static test firing on October 19, 1978. The measurement of rocket exhaust effluents by Thiokol's samplers and MSFC's electrets indicated that the firing of the Solid Rocket Booster had no significant effect on the quality of the air sampled. The highest measurement by Thiokol's samplers was obtained at Plant 3 (site 11), approximately 8 km at a 113 degree heading from the static test stand. At sites 11, 12, and 5, Thiokol's fixed flow air samplers measured 0.0048, 0.00016, and 0.00012 mg/m³ of Cl. Alongside the fixed flow measurements, the electret counts from X-ray spectroscopy were 685, 894, and 719 counts. After background corrections, the counts were 334, 543, and 368, or an average of 415 counts. An additional electret, E20, which was the only measurement device at a site approximately 20 km northeast of the test site where no power was available, registered 901 counts. After background correction, the count was 550. Again, these data indicate that there was no measurement of significant rocket exhaust effluents at the test site.
NASA Astrophysics Data System (ADS)
Kandori, Ryo; Tamura, Motohide; Nagata, Tetsuya; Tomisaka, Kohji; Kusakabe, Nobuhiko; Nakajima, Yasushi; Kwon, Jungmi; Nagayama, Takahiro; Tatematsu, Ken’ichi
2018-04-01
The relationship between dust polarization and extinction was determined for the cold dense starless molecular cloud core FeSt 1-457 based on the background star polarimetry of dichroic extinction at near-infrared wavelengths. Owing to the known (three-dimensional) magnetic field structure, the observed polarizations from the core were corrected by considering (a) the subtraction of the ambient polarization component, (b) the depolarization effect of inclined distorted magnetic fields, and (c) the magnetic inclination angle of the core. After these corrections, a linear relationship between polarization and extinction was obtained for the core in the range up to A_V ≈ 20 mag. The initial polarization versus extinction diagram changed dramatically after the corrections of (a) to (c), with the correlation coefficient being refined from 0.71 to 0.79. These corrections should affect the theoretical interpretation of the observational data. The slope of the finally obtained polarization-extinction relationship is P_H/E_(H-Ks) = 11.00 ± 0.72 % mag⁻¹, which is close to the statistically estimated upper limit of the interstellar polarization efficiency. This consistency suggests that the upper limit of interstellar polarization efficiency might be determined by the observational viewing angle toward polarized astronomical objects.
High throughput film dosimetry in homogeneous and heterogeneous media for a small animal irradiator
Wack, L.; Ngwa, W.; Tryggestad, E.; Tsiamas, P.; Berbeco, R.; Ng, S.K.; Hesser, J.
2013-01-01
Purpose We have established a high-throughput Gafchromic film dosimetry protocol for narrow kilovoltage beams in homogeneous and heterogeneous media for small-animal radiotherapy applications. The kV beam characterization is based on extensive Gafchromic film dosimetry data acquired in homogeneous and heterogeneous media. An empirical model is used for parameterization of depth and off-axis dependence of measured data. Methods We have modified previously published methods of film dosimetry to suit the specific tasks of the study. Unlike film protocols used in previous studies, our protocol employs simultaneous multichannel scanning and analysis of up to nine Gafchromic films per scan. A scanner and background correction were implemented to improve accuracy of the measurements. Measurements were taken in homogeneous and inhomogeneous phantoms at 220 kVp and a field size of 5 × 5 mm². The results were compared against Monte Carlo simulations. Results Dose differences caused by variations in background signal were effectively removed by the corrections applied. Measurements in homogeneous phantoms were used to empirically characterize beam data in homogeneous and heterogeneous media. Film measurements in inhomogeneous phantoms and their empirical parameterization differed by about 2%–3%. The model differed from MC by about 1% (water, lung) to 7% (bone). Good agreement was found for measured and modelled off-axis ratios. Conclusions EBT2 films are a valuable tool for characterization of narrow kV beams, though care must be taken to eliminate disturbances caused by varying background signals. The usefulness of the empirical beam model in interpretation and parameterization of film data was demonstrated. PMID:23510532
NASA Astrophysics Data System (ADS)
Hervo, Maxime; Poltera, Yann; Haefele, Alexander
2016-07-01
Imperfections in a lidar's overlap function lead to artefacts in the background, range and overlap-corrected lidar signals. These artefacts can erroneously be interpreted as an aerosol gradient or, in extreme cases, as a cloud base leading to false cloud detection. A correct specification of the overlap function is hence crucial in the use of automatic elastic lidars (ceilometers) for the detection of the planetary boundary layer or of low cloud. In this study, an algorithm is presented to correct such artefacts. It is based on the assumption of a homogeneous boundary layer and a correct specification of the overlap function down to a minimum range, which must be situated within the boundary layer. The strength of the algorithm lies in a sophisticated quality-check scheme which allows the reliable identification of favourable atmospheric conditions. The algorithm was applied to 2 years of data from a CHM15k ceilometer from the company Lufft. Backscatter signals corrected for background, range and overlap were compared using the overlap function provided by the manufacturer and the one corrected with the presented algorithm. Differences between corrected and uncorrected signals reached up to 45 % in the first 300 m above ground. The amplitude of the correction turned out to be temperature dependent and was larger for higher temperatures. A linear model of the correction as a function of the instrument's internal temperature was derived from the experimental data. Case studies and a statistical analysis of the strongest gradient derived from corrected signals reveal that the temperature model is capable of a high-quality correction of overlap artefacts, in particular those due to diurnal variations. The presented correction method has the potential to significantly improve the detection of the boundary layer with gradient-based methods because it removes false candidates and hence simplifies the attribution of the detected gradients to the planetary boundary layer. A particularly significant benefit can be expected for the detection of shallow stable layers typical of night-time situations. The algorithm is completely automatic and does not require any on-site intervention but requires the definition of an adequate instrument-specific configuration. It is therefore suited for use in large ceilometer networks.
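A minimal sketch of the temperature model described above, assuming the correction amplitude enters as a multiplicative factor on the overlap function (the exact functional form used operationally may differ):

```python
import numpy as np

def fit_temperature_model(internal_temps, correction_amplitudes):
    """Fit correction(T) = a + b*T from the quality-checked cases."""
    b, a = np.polyfit(internal_temps, correction_amplitudes, 1)
    return a, b

def overlap_corrected_signal(signal, overlap, internal_temp, a, b):
    """Apply the temperature-dependent relative correction to the
    manufacturer-provided overlap function before dividing it out."""
    corrected_overlap = overlap * (1.0 + a + b * internal_temp)
    return signal / corrected_overlap
```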
Quantum Yang-Mills Dark Energy
NASA Astrophysics Data System (ADS)
Pasechnik, Roman
2016-02-01
In this short review, I discuss basic qualitative characteristics of quantum non-Abelian gauge dynamics in the non-stationary background of the expanding Universe in the framework of the standard Einstein-Yang-Mills formulation. A brief outlook of existing studies of cosmological Yang-Mills fields and their properties will be given. Quantum effects have a profound impact on the gauge field-driven cosmological evolution. In particular, a dynamical formation of the spatially-homogeneous and isotropic gauge field condensate may be responsible for both early and late-time acceleration, as well as for dynamical compensation of non-perturbative quantum vacua contributions to the ground state of the Universe. The main properties of such a condensate in the effective QCD theory at the flat Friedmann-Lemaître-Robertson-Walker (FLRW) background will be discussed within and beyond perturbation theory. Finally, a phenomenologically consistent dark energy can be induced dynamically as a remnant of the QCD vacua compensation arising from leading-order graviton-mediated corrections to the QCD ground state.
[Microhemocirculation and its correction in duodenal ulcer during period of rehabilitation].
Parpibaeva, D A; Zakirkhodzhaev, Sh Ia; Sagatov, T A; Shakirova, D T; Narziev, N M
2009-01-01
The aim of this research was to study the morphological and functional state of the microcirculatory bed in duodenal ulcer during the rehabilitation period, against a background of standard antiulcer therapy (group 1) or of additional treatment with Vazonit (group 2), under clinical conditions. EDU in animals results in marked microcirculatory disturbances in the duodenum, depending on the stage of development of the ulcerative process. Hypoxia appears to be a significant factor, associated with capillary stasis and venous congestion. Impaired blood flow in the organ leads to metabolic damage of tissue structures. The results obtained provide evidence of significant correction of the state of the microcirculatory bed and improvement of regeneration and reparation processes. Vazonit reduces the microcirculatory disorders, improves the rheological properties of blood, and restores the macro- and microangiopathic changes of the hemocirculatory bed.
Demonstration of electronic design automation flow for massively parallel e-beam lithography
NASA Astrophysics Data System (ADS)
Brandt, Pieter; Belledent, Jérôme; Tranquillin, Céline; Figueiro, Thiago; Meunier, Stéfanie; Bayle, Sébastien; Fay, Aurélien; Milléquant, Matthieu; Icard, Beatrice; Wieland, Marco
2014-07-01
For proximity effect correction in 5 keV e-beam lithography, three elementary building blocks exist: dose modulation, geometry (size) modulation, and background dose addition. Combinations of these three methods are quantitatively compared in terms of throughput impact and process window (PW). In addition, overexposure in combination with negative bias results in PW enhancement at the cost of throughput. In proximity effect correction by overexposure (PEC-OE), the entire layout is set to a fixed dose and geometry sizes are adjusted. In PEC-dose-to-size (PEC-DTS), both dose and geometry sizes are locally optimized. In PEC-background (PEC-BG), a background is added to correct the long-range part of the point spread function. In single e-beam tools (Gaussian or shaped-beam), throughput heavily depends on the number of shots. In raster scan tools such as MAPPER Lithography's FLX 1200 (MATRIX platform) this is not the case; instead of pattern density, the maximum local dose on the wafer limits throughput. The smallest considered half-pitch is 28 nm, which may be considered the 14-nm node for Metal-1 and the 10-nm node for the Via-1 layer, achieved in a single exposure with e-beam lithography. For typical 28-nm-hp Metal-1 layouts, it was shown that dose latitudes (size of process window) of around 10% are realizable with available PEC methods. For 28-nm-hp Via-1 layouts this is even higher at 14% and up. When the layouts do not reach the highest densities (up to 10:1 in this study), PEC-BG and PEC-OE provide the capability to trade throughput for dose latitude. At the highest densities, PEC-DTS is required for proximity correction, as this method adjusts both geometry edges and doses and will reduce the dose at the densest areas. For 28-nm-hp line critical dimension (CD), hole & dot CD, and line-end edge placement error, the data path errors are typically 0.9, 1.0, and 0.7 nm (3σ) or below, respectively. There is no clear data path performance difference between the investigated PEC methods. After the simulations, the methods were successfully validated in exposures on a MAPPER pre-alpha tool. The 28-nm half-pitch Metal-1 and Via-1 layouts show good performance in resist, in agreement with the simulation results. Exposures of soft-edge stitched layouts show that beam-to-beam position errors up to ±7 nm specified for FLX 1200 have no noticeable impact on CD. The research leading to these results has been performed in the frame of the industrial collaborative consortium IMAGINE.
Acquisition and processing of data for isotope-ratio-monitoring mass spectrometry
NASA Technical Reports Server (NTRS)
Ricci, M. P.; Merritt, D. A.; Freeman, K. H.; Hayes, J. M.
1994-01-01
Methods are described for continuous monitoring of signals required for precise analyses of ¹³C, ¹⁸O, and ¹⁵N in gas streams containing varying quantities of CO2 and N2. The quantitative resolution (i.e. maximum performance in the absence of random errors) of these methods is adequate for determination of isotope ratios with an uncertainty of one part in 10⁵; the precision actually obtained is often better than one part in 10⁴. This report describes data-processing operations including definition of beginning and ending points of chromatographic peaks and quantitation of background levels, allowance for effects of chromatographic separation of isotopically substituted species, integration of signals related to specific masses, correction for effects of mass discrimination, recognition of drifts in mass spectrometer performance, and calculation of isotopic delta values. Characteristics of a system allowing off-line revision of parameters used in data reduction are described and an algorithm for identification of background levels in complex chromatograms is outlined. Effects of imperfect chromatographic resolution are demonstrated and discussed and an approach to deconvolution of signals from coeluting substances is described.
NASA Astrophysics Data System (ADS)
D'Alessandro, Giuseppe; de Bernardis, Paolo; di Tano, Silvio; Masi, Silvia; Mele, Lorenzo
2017-09-01
The spectroscopic measurement of the Cosmic Microwave Background at mm and sub-mm wavelengths has received significant attention recently, aimed at measuring tiny spectral distortions of the Cosmic Microwave Background (CMB) relevant for cosmology. Several experiments, including OLIMPO (Masi et al. 2003), PRISM (André et al., 2014), MILLIMETRON (Smirnov and Baryshev, 2012), and PIXIE (Kogut and Fixsen, 2011), are based on a Martin-Puplett Fourier-transform spectrometer. Its differential capabilities are the key to success in these difficult measurements. The polarizing beam splitter is the optical core of a Martin-Puplett interferometer. In this paper we analyze, analytically and experimentally, the systematic effects induced by a beam splitter orientation different from the canonical 45°. These effects are potentially important for the delicate measurements of CMB spectral distortions. We find an analytical formula describing the effect, and verify experimentally, in the range 150-600 GHz, that our formula correctly describes the results (with a C.L. of 88%). We also demonstrate that the rotation of the beam splitter does not induce distortions in the measured spectra.
NASA Astrophysics Data System (ADS)
Morrow, Andrew N.; Matthews, Kenneth L., II; Bujenovic, Steven
2008-03-01
Positron emission tomography (PET) and computed tomography (CT) together are a powerful diagnostic tool, but imperfect image quality allows false positive and false negative diagnoses to be made by any observer despite experience and training. This work investigates the effects of PET acquisition mode, reconstruction method and a standard uptake value (SUV) correction scheme on the classification of lesions as benign or malignant in PET/CT images of an anthropomorphic phantom. The scheme accounts for the partial volume effect (PVE) and PET resolution. The observer draws a region of interest (ROI) around the lesion using the CT dataset. A simulated homogeneous PET lesion of the same shape as the drawn ROI is blurred with the point spread function (PSF) of the PET scanner to estimate the PVE, providing a scaling factor to produce a corrected SUV. Computer simulations showed that the accuracy of the corrected PET values depends on variations in the CT-drawn boundary and the position of the lesion with respect to the PET image matrix, especially for smaller lesions. Correction accuracy was affected slightly by mismatch between the simulation PSF and the actual scanner PSF. The receiver operating characteristic (ROC) study resulted in several observations. Using observer-drawn ROIs, scaled tumor-background ratios (TBRs) represented actual TBRs more accurately than unscaled TBRs. For the PET images, 3D OSEM outperformed 2D OSEM, 3D OSEM outperformed 3D FBP, and 2D OSEM outperformed 2D FBP. The correction scheme significantly increased sensitivity and slightly increased accuracy for all acquisition and reconstruction modes at the cost of a small decrease in specificity.
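A hedged sketch of the partial-volume scaling described above: a unit-activity lesion with the CT-drawn ROI shape is blurred with the scanner PSF and the resulting recovery coefficient rescales the measured SUV. The Gaussian PSF, voxel size and spherical ROI below are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def corrected_suv(measured_suv, roi_mask, psf_sigma_vox):
    """Rescale a measured SUV by the recovery coefficient obtained from
    blurring the ROI-shaped unit lesion with the scanner PSF."""
    lesion = roi_mask.astype(float)                  # unit activity, zero background
    blurred = gaussian_filter(lesion, sigma=psf_sigma_vox)
    recovery = blurred[roi_mask].mean()              # fraction of signal retained in the ROI
    return measured_suv / recovery

# Hypothetical 1-cm-radius spherical ROI on a 2-mm grid with a 6-mm FWHM PSF:
z, y, x = np.ogrid[-20:21, -20:21, -20:21]
mask = (x**2 + y**2 + z**2) <= 5**2
print(corrected_suv(3.2, mask, psf_sigma_vox=6.0 / 2.355 / 2.0))
```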
Nakamura, Akihiro; Tanizaki, Yasuo; Takeuchi, Miho; Ito, Shigeru; Sano, Yoshitaka; Sato, Mayumi; Kanno, Toshihiko; Okada, Hiroyuki; Torizuka, Tatsuo; Nishizawa, Sadahiko
2014-06-01
While point spread function (PSF)-based positron emission tomography (PET) reconstruction effectively improves the spatial resolution and image quality of PET, it may damage its quantitative properties by producing edge artifacts, or Gibbs artifacts, which appear to cause overestimation of regional radioactivity concentration. In this report, we investigated how edge artifacts produce negative effects on the quantitative properties of PET. Experiments with a National Electrical Manufacturers Association (NEMA) phantom, containing radioactive spheres of a variety of sizes and background filled with cold air or water, or radioactive solutions, showed that profiles modified by edge artifacts were reproducible regardless of background μ values, and the effects of edge artifacts increased with increasing sphere-to-background radioactivity concentration ratio (S/B ratio). Profiles were also affected by edge artifacts in complex fashion in response to variable combinations of sphere sizes and S/B ratios; and central single-peak overestimation up to 50% was occasionally noted in relatively small spheres with high S/B ratios. Effects of edge artifacts were obscured in spheres with low S/B ratios. In patient images with a variety of focal lesions, areas of higher radioactivity accumulation were generally more enhanced by edge artifacts, but the effects were variable depending on the size of and accumulation in the lesion. PET images generated using PSF-based reconstruction are therefore not appropriate for the evaluation of SUV.
Anomaly-corrected supersymmetry algebra and supersymmetric holographic renormalization
NASA Astrophysics Data System (ADS)
An, Ok Song
2017-12-01
We present a systematic approach to supersymmetric holographic renormalization for a generic 5D N=2 gauged supergravity theory with matter multiplets, including its fermionic sector, with all gauge fields consistently set to zero. We determine the complete set of supersymmetric local boundary counterterms, including the finite counterterms that parameterize the choice of supersymmetric renormalization scheme. This allows us to derive holographically the superconformal Ward identities of a 4D superconformal field theory on a generic background, including the Weyl and super-Weyl anomalies. Moreover, we show that these anomalies satisfy the Wess-Zumino consistency condition. The super-Weyl anomaly implies that the fermionic operators of the dual field theory, such as the supercurrent, do not transform as tensors under rigid supersymmetry on backgrounds that admit a conformal Killing spinor, and their anticommutator with the conserved supercharge contains anomalous terms. This property is explicitly checked for a toy model. Finally, using the anomalous transformation of the supercurrent, we obtain the anomaly-corrected supersymmetry algebra on curved backgrounds admitting a conformal Killing spinor.
LWIR pupil imaging and prospects for background compensation
NASA Astrophysics Data System (ADS)
LeVan, Paul; Sakoglu, Ünal; Stegall, Mark; Pierce, Greg
2015-08-01
A previous paper described LWIR pupil imaging with a sensitive, low-flux focal plane array, and the behavior of this type of system for higher flux operations as understood at the time. We continue this investigation, and report on a more detailed characterization of the system over a broad range of pixel fluxes. This characterization is then shown to enable non-uniformity correction over the flux range, using a standard approach. Since many commercial tracking platforms include a "guider port" that accepts pulse width modulation (PWM) error signals, we have also investigated a variation on the use of this port to "dither" the tracking platform in synchronization with the continuous collection of infrared images. The resulting capability has a broad range of applications, extending from generating scene motion in the laboratory for quantifying the performance of real-time, scene-based non-uniformity correction approaches, to effecting subtraction of bright backgrounds by alternating the viewing aspect between a point source and adjacent, source-free backgrounds.
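The abstract calls the non-uniformity correction a "standard approach" without specifying it; a common choice is a two-point (gain/offset) calibration, sketched here under that assumption.

```python
import numpy as np

def two_point_nuc(raw, flat_low, flat_high):
    """Two-point non-uniformity correction: per-pixel gain and offset derived
    from frames viewing uniform low- and high-flux references.
    (Dead or saturated pixels would need masking before the division.)"""
    gain = (flat_high.mean() - flat_low.mean()) / (flat_high - flat_low)
    offset = flat_low.mean() - gain * flat_low
    return gain * raw + offset
```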
Yu, Yong-Jie; Xia, Qiao-Ling; Wang, Sheng; Wang, Bing; Xie, Fu-Wei; Zhang, Xiao-Bing; Ma, Yun-Ming; Wu, Hai-Long
2014-09-12
Peak detection and background drift correction (BDC) are the key stages in using chemometric methods to analyze chromatographic fingerprints of complex samples. This study developed a novel chemometric strategy for simultaneous automatic chromatographic peak detection and BDC. A robust statistical method was used for intelligent estimation of instrumental noise level coupled with first-order derivative of chromatographic signal to automatically extract chromatographic peaks in the data. A local curve-fitting strategy was then employed for BDC. Simulated and real liquid chromatographic data were designed with various kinds of background drift and degree of overlapped chromatographic peaks to verify the performance of the proposed strategy. The underlying chromatographic peaks can be automatically detected and reasonably integrated by this strategy. Meanwhile, chromatograms with BDC can be precisely obtained. The proposed method was used to analyze a complex gas chromatography dataset that monitored quality changes in plant extracts during storage procedure. Copyright © 2014 Elsevier B.V. All rights reserved.
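A hedged sketch of the two stages described above, with illustrative thresholds: the noise level is estimated robustly from the first derivative, points where the derivative exceeds a multiple of that noise are flagged as peak regions, and a low-order baseline is fitted to the remaining points (the published method uses a local curve-fitting strategy rather than a single global polynomial).

```python
import numpy as np

def detect_peaks_and_correct_drift(signal, poly_order=3, k=5.0):
    """Flag peak regions from the first derivative and subtract a baseline
    fitted to the peak-free points."""
    d = np.diff(signal, prepend=signal[0])
    noise = 1.4826 * np.median(np.abs(d - np.median(d)))   # robust MAD noise estimate
    in_peak = np.abs(d) > k * noise
    # widen the flags slightly so peak tails are excluded from the baseline fit
    in_peak = np.convolve(in_peak, np.ones(9), mode="same") > 0
    x = np.arange(signal.size)
    baseline = np.polyval(np.polyfit(x[~in_peak], signal[~in_peak], poly_order), x)
    return in_peak, signal - baseline
```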
[An unpublished contribution of Melanie Klein "On Reassurance"].
Frank, Claudia; Klein, Melanie
2005-01-01
Melanie Klein's unpublished paper on reassurance is presented in German translation. The author shows that it was a contribution to Glover's investigation on psychoanalytic technique in the 1930s. The paper is discussed against the background of the technical discussions conducted in London at that time (e. g. M. Schmideberg, J. Strachey) and of Klein's relevant publications. Although Klein consistently considered "correct" interpretation to be the most effective means of reassurance, she occasionally also accepted a non-interpreting approach. In this respect the paper presented here goes further than any other of her writings.
Invariant measure of the one-loop quantum gravitational backreaction on inflation
NASA Astrophysics Data System (ADS)
Miao, S. P.; Tsamis, N. C.; Woodard, R. P.
2017-06-01
We use dimensional regularization in pure quantum gravity on a de Sitter background to evaluate the one-loop expectation value of an invariant operator which gives the local expansion rate. We show that the renormalization of this nonlocal composite operator can be accomplished using the counterterms of a simple local theory of gravity plus matter, at least at one-loop order. This renormalization completely absorbs the one-loop correction, which accords with the prediction that the lowest secular backreaction should be a two-loop effect.
NASA Astrophysics Data System (ADS)
Tang, Xian-Zhu; McDevitt, C. J.; Guo, Zehua; Berk, H. L.
2014-03-01
Inertial confinement fusion requires an imploded target in which a central hot spot is surrounded by a cold and dense pusher. The hot spot/pusher interface can take complicated shape in three dimensions due to hydrodynamic mix. It is also a transition region where the Knudsen and inverse Knudsen layer effect can significantly modify the fusion reactivity in comparison with the commonly used value evaluated with background Maxwellians. Here, we describe a hybrid model that couples the kinetic correction of fusion reactivity to global hydrodynamic implosion simulations. The key ingredient is a non-perturbative treatment of the tail ions in the interface region where the Gamow ion Knudsen number approaches or surpasses order unity. The accuracy of the coupling scheme is controlled by the precise criteria for matching the non-perturbative kinetic model to perturbative solutions in both configuration space and velocity space.
NASA Astrophysics Data System (ADS)
Ren, Wenyi; Cao, Qizhi; Wu, Dan; Jiang, Jiangang; Yang, Guoan; Xie, Yingge; Wang, Guodong; Zhang, Sheqi
2018-01-01
Many observers using interference imaging spectrometers are plagued by the fringe-like pattern (FP) that occurs at optical wavelengths in the red and near-infrared region. It complicates data processing steps such as spectrum calibration and information retrieval. An adaptive method based on bi-dimensional empirical mode decomposition was developed to suppress the nonlinear FP in a polarization interference imaging spectrometer. The FP and the corrected interferogram were separated effectively, and the stripes introduced by the CCD mosaic were suppressed. Nonlinear interferogram background removal and spectrum distortion correction were implemented as well. The approach provides an alternative way to adaptively suppress the nonlinear FP without prior experimental data or knowledge, and is potentially a powerful tool in Fourier transform spectroscopy, holographic imaging, optical measurement based on moiré fringes, and related fields.
Entanglement entropy of ABJM theory and entropy of topological black hole
NASA Astrophysics Data System (ADS)
Nian, Jun; Zhang, Xinyu
2017-07-01
In this paper we discuss the supersymmetric localization of the 4D N = 2 off-shell gauged supergravity on the background of the AdS4 neutral topological black hole, which is the gravity dual of the ABJM theory defined on the boundary S¹ × H². We compute the large-N expansion of the supergravity partition function. The result gives the black hole entropy with the logarithmic correction, which matches the previous result of the entanglement entropy of the ABJM theory up to some stringy effects. Our result is consistent with the previous on-shell one-loop computation of the logarithmic correction to black hole entropy. It provides an explicit example of the identification of the entanglement entropy of the boundary conformal field theory with the bulk black hole entropy beyond the leading order given by the classical Bekenstein-Hawking formula, which consequently tests the AdS/CFT correspondence at the subleading order.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orbaker, Douglas Andrew
We present a measurement of forward-backward asymmetries in top-antitop quark pairs produced in proton-antiproton collisions decaying via the lepton+jets channel. Using data recorded by the D0 experiment at the Fermilab Tevatron collider and corresponding to an integrated luminosity of 5.4 fb⁻¹, we measure the forward-backward asymmetry in top-antitop quark events to be (9.2 ± 3.7)%, after background processes have been subtracted. After correcting for the effects of acceptance and detector reconstruction, we measure an asymmetry of (19.6 ± 6.5)%. In addition, we measure an acceptance-corrected asymmetry based on the lepton from top-antitop quark decay of (15.2 ± 4.0)%. We compare these results to predictions from the MC@NLO next-to-leading-order QCD simulation.
Color standardization in whole slide imaging using a color calibration slide
Bautista, Pinky A.; Hashimoto, Noriaki; Yagi, Yukako
2014-01-01
Background: Color consistency in histology images is still an issue in digital pathology. Different imaging systems reproduced the colors of a histological slide differently. Materials and Methods: Color correction was implemented using the color information of the nine color patches of a color calibration slide. The inherent spectral colors of these patches along with their scanned colors were used to derive a color correction matrix whose coefficients were used to convert the pixels’ colors to their target colors. Results: There was a significant reduction in the CIELAB color difference, between images of the same H & E histological slide produced by two different whole slide scanners by 3.42 units, P < 0.001 at 95% confidence level. Conclusion: Color variations in histological images brought about by whole slide scanning can be effectively normalized with the use of the color calibration slide. PMID:24672739
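A minimal sketch of deriving and applying a color correction matrix from the nine patches, using ordinary least squares with an affine term; this illustrates the general approach rather than the exact published pipeline.

```python
import numpy as np

def fit_color_correction(scanned_rgb, target_rgb):
    """Fit a 4x3 affine color-correction matrix mapping scanned patch colors
    (n x 3) to their target colors (n x 3)."""
    A = np.hstack([scanned_rgb, np.ones((scanned_rgb.shape[0], 1))])
    M, *_ = np.linalg.lstsq(A, target_rgb, rcond=None)
    return M

def apply_color_correction(image_rgb, M):
    pixels = image_rgb.reshape(-1, 3).astype(float)
    pixels = np.hstack([pixels, np.ones((pixels.shape[0], 1))]) @ M
    return np.clip(pixels, 0, 255).reshape(image_rgb.shape)
```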
Object tracking algorithm based on the color histogram probability distribution
NASA Astrophysics Data System (ADS)
Li, Ning; Lu, Tongwei; Zhang, Yanduo
2018-04-01
To address tracking failures caused by target occlusion and by distractor objects in the background that resemble the target, and to reduce the influence of illumination changes, this paper uses the HSV and YCbCr color channels to correct the updated target center, and continuously updates the image threshold to improve self-adaptive target detection. Clustering the initial obstacles gives a rough range, which shortens the threshold range and maximizes the chance of detecting the target. To improve the accuracy of the detector, a Kalman filter is added to estimate the target state region. A direction predictor based on a Markov model is further added to achieve target state estimation under background color interference and to enhance the ability of the detector to distinguish similar objects. The experimental results show that the improved algorithm is more accurate and processes frames faster.
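The state-estimation step can be pictured with a constant-velocity Kalman filter acting on the detected target centre; this sketch uses assumed noise settings and is not the paper's exact configuration.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = np.eye(4) * 1e-2    # process noise (assumed)
R = np.eye(2) * 4.0     # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle for state x = (cx, cy, vx, vy) given a new
    detection z = (cx, cy) from the colour-histogram detector."""
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (z - H @ x)                        # update with the detection
    P = (np.eye(4) - K @ H) @ P
    return x, P
```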
Renzette, Nicholas; Kowalik, Timothy F; Jensen, Jeffrey D
2016-01-01
A central focus of population genetics has been examining the contribution of selective and neutral processes in shaping patterns of intraspecies diversity. In terms of selection specifically, surveys of higher organisms have shown considerable variation in the relative contributions of background selection and genetic hitchhiking in shaping the distribution of polymorphisms, although these analyses have rarely been extended to bacteria and viruses. Here, we study the evolution of a ubiquitous, viral pathogen, human cytomegalovirus (HCMV), by analysing the relationship among intraspecies diversity, interspecies divergence and rates of recombination. We show that there is a strong correlation between diversity and divergence, consistent with expectations of neutral evolution. However, after correcting for divergence, there remains a significant correlation between intraspecies diversity and recombination rates, with additional analyses suggesting that this correlation is largely due to the effects of background selection. In addition, a small number of loci, centred on long noncoding RNAs, also show evidence of selective sweeps. These data suggest that HCMV evolution is dominated by neutral mechanisms as well as background selection, expanding our understanding of linked selection to a novel class of organisms. © 2015 John Wiley & Sons Ltd.
Xu, Yihua; Pitot, Henry C
2006-03-01
In the studies of quantitative stereology of rat hepatocarcinogenesis, we have used image analysis technology (automatic particle analysis) to obtain data such as liver tissue area, size and location of altered hepatic focal lesions (AHF), and nuclei counts. These data are then used for three-dimensional estimation of AHF occurrence and nuclear labeling index analysis. These are important parameters for quantitative studies of carcinogenesis, for screening and classifying carcinogens, and for risk estimation. To take such measurements, structures or cells of interest should be separated from the other components based on the difference of color and density. Common background problems seen on the captured sample image such as uneven light illumination or color shading can cause severe problems in the measurement. Two application programs (BK_Correction and Pixel_Separator) have been developed to solve these problems. With BK_Correction, common background problems such as incorrect color temperature setting, color shading, and uneven light illumination background, can be corrected. With Pixel_Separator different types of objects can be separated from each other in relation to their color, such as seen with different colors in immunohistochemically stained slides. The resultant images of such objects separated from other components are then ready for particle analysis. Objects that have the same darkness but different colors can be accurately differentiated in a grayscale image analysis system after application of these programs.
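The abstract does not spell out the algorithm inside BK_Correction; one common way to remove uneven illumination and color shading, sketched here purely as an assumption, is to divide each channel by a heavily smoothed estimate of its background.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_shading(image_rgb, sigma=101):
    """Flat-field style correction: divide each channel by its blurred
    background estimate and rescale to the channel mean.
    Illustrative only; not the BK_Correction implementation."""
    img = image_rgb.astype(float)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        background = gaussian_filter(img[:, :, c], sigma=sigma)
        out[:, :, c] = img[:, :, c] / np.maximum(background, 1e-6) * img[:, :, c].mean()
    return np.clip(out, 0, 255)
```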
Energy spectrum of argon ions emitted from Filippov type Sahand plasma focus.
Mohammadnejad, M; Pestehe, S J; Mohammadi, M A
2013-07-01
The energy and flux of the argon ions produced in Sahand plasma focus have been measured by employing a well-designed Faraday cup. The secondary electron emission effects on the ion signals are simulated and the dimensions of Faraday cup are optimized to minimize these effects. The measured ion energy spectrum is corrected for the ion energy loss and charge exchange in the background gas. The effects of the capacitor bank voltage and working gas pressure on the ion energy spectrum are also investigated. It has been shown that the emitted ion number per energy increases as the capacitor bank voltage increases. Decreasing the working gas pressure leads to the increase in the number of emitted ion per energy.
Robust finger vein ROI localization based on flexible segmentation.
Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun
2013-10-24
Finger veins have been proved to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of a finger vein identification system. To address this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.
[A correct understanding of preservatives in eye drops].
Liu, Zuguo; Huang, Caihong
2015-09-01
Eye drops are the most commonly used preparations in ophthalmology. Preservatives are usually added to protect eye drops in multi-dose containers against pathogenic organisms and to increase the solubility of the drugs. Ophthalmologists have paid a lot of attention to the preservatives in eye drops because they remain one of the main causes of ocular surface damage and may even lead to serious visual impairment in patients who use eye drops inappropriately. However, it should be noted that the dangers of preservatives have become overstated nowadays. It is necessary to evaluate the effects of preservatives in ophthalmic preparations comprehensively, so that ophthalmologists can guide patients to select preservative-containing eye drops correctly and avoid dangerous side effects, according to the patient's eye disease, tear function and ocular surface status, cultural background and financial situation, the cost, benefit and convenience of drug use, and other factors. The direction of future development in this field is to establish clinical guidelines for the use of eye drops containing preservatives, carry out continuing education courses on preservatives, and develop ideal preservatives.
RSA and its Correctness through Modular Arithmetic
NASA Astrophysics Data System (ADS)
Meelu, Punita; Malik, Sitender
2010-11-01
To ensure the security of business applications, the business sector uses public key cryptographic systems (PKCS). An RSA system generally belongs to the category of PKCS for both encryption and authentication. This paper gives an introduction to RSA through its encryption and decryption schemes and the mathematical background, which includes theorems for combining modular equations and the correctness of RSA. In short, this paper explains some of the mathematical concepts that RSA is based on and then provides a complete proof that RSA works correctly. The correctness of RSA can be proved through the combined process of encryption and decryption, based on the Chinese Remainder Theorem (CRT) and Euler's theorem. However, there is no mathematical proof that RSA is secure; everyone takes that on trust.
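As a minimal, hedged illustration of the correctness argument summarized above (decryption inverts encryption because e·d ≡ 1 mod φ(n), which Euler's theorem and the CRT turn into m^(ed) ≡ m mod n), here is a toy RSA round trip in Python; the primes, exponent and message are illustrative choices, not values from the paper.

```python
# Toy RSA round trip: key generation, encryption, decryption. Illustrative numbers only.
def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    g, x, _ = egcd(a, m)
    if g != 1:
        raise ValueError("no modular inverse")
    return x % m

p, q = 61, 53                    # small illustrative primes
n, phi = p * q, (p - 1) * (q - 1)
e = 17                           # public exponent, coprime to phi
d = modinv(e, phi)               # private exponent: e*d = 1 (mod phi)

m = 65                           # plaintext, 0 <= m < n
c = pow(m, e, n)                 # encryption: c = m^e mod n
assert pow(c, d, n) == m         # decryption recovers m, as Euler/CRT guarantee
print(n, e, d, c)
```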
Delegation in Correctional Nursing Practice.
Tompkins, Frances
2016-07-01
Correctional nurses face daily challenges as a result of their work environment. Common challenges include availability of resources for appropriate care delivery, negotiating with custody staff for access to patients, adherence to scope of practice standards, and working with a varied staffing mix. Professional correctional nurses must consider the educational backgrounds and competency of other nurses and assistive personnel in planning for care delivery. Budgetary constraints and varied staff preparation can be a challenge for the professional nurse. Adequate care planning requires understanding the educational level and competency of licensed and unlicensed staff. Delegation is the process of assessing patient needs and transferring responsibility for care to appropriately educated and competent staff. Correctional nurses can benefit from increased knowledge about delegation. © The Author(s) 2016.
Optimization of cDNA microarrays procedures using criteria that do not rely on external standards.
Bruland, Torunn; Anderssen, Endre; Doseth, Berit; Bergum, Hallgeir; Beisvag, Vidar; Laegreid, Astrid
2007-10-18
The measurement of gene expression using microarray technology is a complicated process in which a large number of factors can be varied. Due to the lack of standard calibration samples such as are used in traditional chemical analysis, it may be difficult to evaluate whether changes made to the microarray procedure actually improve the identification of truly differentially expressed genes. The purpose of the present work is to report the optimization of several steps in the microarray process, both in laboratory practices and in data processing, using criteria that do not rely on external standards. We performed a cDNA microarray experiment including RNA from samples with high expected differential gene expression termed "high contrasts" (rat cell lines AR42J and NRK52E) compared to self-self hybridization, and optimized a pipeline to maximize the number of genes found to be differentially expressed in the "high contrasts" RNA samples by estimating the false discovery rate (FDR) using a null distribution obtained from the self-self experiment. The proposed high-contrast versus self-self method (HCSSM) requires only four microarrays per evaluation. The effects of blocking reagent dose, filtering, and background correction methodologies were investigated. In our experiments a dose of 250 ng LNA (locked nucleic acid) dT blocker, no background correction and weight-based filtering gave the largest number of differentially expressed genes. The choice of background correction method had a stronger impact on the estimated number of differentially expressed genes than the choice of filtering method. Cross-platform microarray (Illumina) analysis was used to validate that the increase in the number of differentially expressed genes found by HCSSM was real. The results show that HCSSM can be a useful and simple approach to optimize microarray procedures without including external standards. Our optimization method is highly applicable both to long oligo-probe microarrays, which have become commonly used for well characterized organisms such as man, mouse and rat, and to cDNA microarrays, which are still of importance for organisms with incomplete genome sequence information such as many bacteria, plants and fish.
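A hedged sketch of the core statistical idea above, estimating an FDR for the high-contrast comparison from a self-self empirical null, might look like the following Python fragment; the simulated statistics and threshold values are stand-ins, not the paper's data or pipeline.

```python
import numpy as np

# Statistics from a self-self hybridization serve as an empirical null, so the
# FDR of a cutoff applied to the high-contrast comparison can be estimated
# without external standards. Simulated values only.
rng = np.random.default_rng(0)
null_stats = np.abs(rng.normal(0.0, 1.0, 10000))                        # self-self: no true change
contrast_stats = np.abs(np.concatenate([rng.normal(0.0, 1.0, 9000),
                                         rng.normal(3.0, 1.0, 1000)]))  # some genuine changes

def estimated_fdr(threshold):
    # expected false calls (scaled from the null) over observed calls
    expected_false = np.mean(null_stats >= threshold) * contrast_stats.size
    called = int(np.sum(contrast_stats >= threshold))
    return expected_false / max(called, 1), called

for t in (2.0, 2.5, 3.0):
    fdr, n_called = estimated_fdr(t)
    print(f"threshold {t}: {n_called} genes called, estimated FDR {fdr:.3f}")
```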
Shinozaki, Kazuma; Zack, Jason W.; Richards, Ryan M.; ...
2015-07-22
The rotating disk electrode (RDE) technique is being extensively used as a screening tool to estimate the activity of novel PEMFC electrocatalysts synthesized in lab-scale (mg) quantities. Discrepancies in measured activity attributable to glassware and electrolyte impurity levels, as well as conditioning, protocols and corrections are prevalent in the literature. Moreover, the electrochemical response to a broad spectrum of commercially sourced perchloric acid and the effect of acid molarity on impurity levels and solution resistance were also assessed. Our findings reveal that an area specific activity (SA) exceeding 2.0 mA/cm2 (20 mV/s, 25°C, 100 kPa, 0.1 M HClO4) for polished poly-Pt is an indicator of impurity levels that do not impede the accurate measurement of the ORR activity of Pt based catalysts. After exploring various conditioning protocols to approach maximum utilization of the electrochemical area (ECA) and peak ORR activity without introducing catalyst degradation, an investigation of measurement protocols for ECA and ORR activity was conducted. Down-selected protocols were based on the criteria of reproducibility, duration of experiments, impurity effects and magnitude of pseudo-capacitive background correction. In sum, statistical reproducibility of ORR activity for poly-Pt and Pt supported on high surface area carbon was demonstrated.
What is the lifetime risk of developing cancer?: the effect of adjusting for multiple primaries
Sasieni, P D; Shelton, J; Ormiston-Smith, N; Thomson, C S; Silcocks, P B
2011-01-01
Background: The 'lifetime risk' of cancer is generally estimated by combining current incidence rates with current all-cause mortality ('current probability' method) rather than by describing the experience of a birth cohort. As individuals may get more than one type of cancer, what is generally estimated is the average (mean) number of cancers over a lifetime. This is not the same as the probability of getting cancer. Methods: We describe a method for estimating lifetime risk that corrects for the inclusion of multiple primary cancers in the incidence rates routinely published by cancer registries. The new method applies cancer incidence rates to the estimated probability of being alive without a previous cancer. The new method is illustrated using data from the Scottish Cancer Registry and is compared with 'gold-standard' estimates that use (unpublished) data on first primaries. Results: The effect of this correction is to make the estimated 'lifetime risk' smaller. The new estimates are extremely similar to those obtained using incidence based on first primaries. The usual 'current probability' method considerably overestimates the lifetime risk of all cancers combined, although the correction for any single cancer site is minimal. Conclusion: Estimation of the lifetime risk of cancer should either be based on first primaries or should use the new method. PMID:21772332
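The correction described above, applying incidence to the probability of being alive without a previous cancer rather than merely alive, can be sketched as a simple life-table loop; the age-band rates below are invented for illustration and do not reproduce the registry calculation.

```python
import numpy as np

# Life-table style sketch: apply first-primary incidence to the probability of
# being alive *and free of a prior cancer*, so multiple primaries do not
# inflate the lifetime risk. Per-year rates are illustrative, not registry data.
ages = np.arange(0, 90, 5)                                   # 5-year age bands
incidence = np.linspace(1e-4, 2e-2, ages.size)               # first-primary incidence per year
mortality = np.linspace(5e-4, 1.5e-1, ages.size)             # all-cause mortality per year

alive_cancer_free = 1.0
lifetime_risk = 0.0
for inc, mort in zip(incidence, mortality):
    for _ in range(5):                                       # five one-year steps per band
        lifetime_risk += alive_cancer_free * inc
        alive_cancer_free *= (1.0 - inc) * (1.0 - mort)

print(f"estimated lifetime risk of a first cancer: {lifetime_risk:.1%}")
```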
Flight Calibration of the LROC Narrow Angle Camera
NASA Astrophysics Data System (ADS)
Humm, D. C.; Tschimmel, M.; Brylow, S. M.; Mahanti, P.; Tran, T. N.; Braden, S. E.; Wiseman, S.; Danton, J.; Eliason, E. M.; Robinson, M. S.
2016-04-01
Characterization and calibration are vital for instrument commanding and image interpretation in remote sensing. The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) takes 500 Mpixel greyscale images of lunar scenes at 0.5 meters/pixel. It uses two nominally identical line scan cameras for a larger crosstrack field of view. Stray light, spatial crosstalk, and nonlinearity were characterized using flight images of the Earth and the lunar limb. These are important for imaging shadowed craters, studying ˜1 meter size objects, and photometry respectively. Background, nonlinearity, and flatfield corrections have been implemented in the calibration pipeline. An eight-column pattern in the background is corrected. The detector is linear for DN = 600--2000 but a signal-dependent additive correction is required and applied for DN<600. A predictive model of detector temperature and dark level was developed to command dark level offset. This avoids images with a cutoff at DN=0 and minimizes quantization error in companding. Absolute radiometric calibration is derived from comparison of NAC images with ground-based images taken with the Robotic Lunar Observatory (ROLO) at much lower spatial resolution but with the same photometric angles.
ForCent model development and testing using the Enriched Background Isotope Study experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parton, W.J.; Hanson, P. J.; Swanston, C.
The ForCent forest ecosystem model was developed by making major revisions to the DayCent model including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool 14C signature (Δ14C) data from the Enriched Background Isotope Study 14C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the 14C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ14C data, and with soil respiration Δ14C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study 14C experimental treatments on soil respiration Δ14C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.
ForCent Model Development and Testing using the Enriched Background Isotope Study (EBIS) Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parton, William; Hanson, Paul J; Swanston, Chris
The ForCent forest ecosystem model was developed by making major revisions to the DayCent model including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool 14C signature (Δ14C) data from the Enriched Background Isotope Study 14C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the 14C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ14C data, and with soil respiration Δ14C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study 14C experimental treatments on soil respiration Δ14C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.
Improving Precision, Maintaining Accuracy, and Reducing Acquisition Time for Trace Elements in EPMA
NASA Astrophysics Data System (ADS)
Donovan, J.; Singer, J.; Armstrong, J. T.
2016-12-01
Trace element precision in electron probe microanalysis (EPMA) is limited by intrinsic random variation in the x-ray continuum. Traditionally we characterize background intensity by measuring on either side of the emission line and interpolating the intensity underneath the peak to obtain the net intensity. Alternatively, we can measure the background intensity at the on-peak spectrometer position using a number of standard materials that do not contain the element of interest. This so-called mean atomic number (MAN) background calibration (Donovan et al., 2016) uses a set of standard measurements, covering an appropriate range of average atomic number, to iteratively estimate the continuum intensity for the unknown composition (and hence average atomic number). We will demonstrate that, at least for materials with a relatively simple matrix such as SiO2, TiO2, ZrSiO4, etc., where one may obtain a matrix-matched standard for use in the so-called "blank correction", we can obtain trace element accuracy comparable to traditional off-peak methods, and with improved precision, in about half the time. Reference: Donovan, Singer and Armstrong, "A New EPMA Method for Fast Trace Element Analysis in Simple Matrices", American Mineralogist, v101, p1839-1853, 2016. Figure 1. Uranium concentration line profiles from quantitative x-ray maps (20 keV, 100 nA, 5 µm beam size and 4000 msec per pixel), for both off-peak and MAN background methods without (a), and with (b), the blank correction applied. We see precision significantly improved compared with traditional off-peak measurements while, in this case, the blank correction provides a small but discernible improvement in accuracy.
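A hedged sketch of the MAN idea described above: calibrate on-peak continuum intensity against mean atomic number using element-free standards, then subtract the predicted continuum from the unknown's on-peak counts. The standard values, fit order and count rates are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

# MAN background calibration sketch: fit on-peak continuum intensity from
# element-free standards against their mean atomic number (Z-bar), then predict
# and subtract the continuum for an unknown from its estimated Z-bar.
std_zbar = np.array([10.8, 12.0, 14.1, 16.4, 20.2, 26.0])
std_background = np.array([2.1, 2.6, 3.4, 4.3, 5.9, 8.4])    # counts/s/nA at the peak position

coeffs = np.polyfit(std_zbar, std_background, 2)              # low-order fit of continuum vs Z-bar

def man_background(zbar):
    return np.polyval(coeffs, zbar)

unknown_zbar = 15.3                                           # from the (iterated) matrix composition
on_peak = 9.7                                                 # measured on-peak intensity of the unknown
net = on_peak - man_background(unknown_zbar)                  # net trace-element intensity
print(f"predicted continuum {man_background(unknown_zbar):.2f}, net signal {net:.2f}")
```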
The gamma ray north-south effect
NASA Technical Reports Server (NTRS)
White, R. S.; O'Neill, T. J.; Tumer, O. T.; Zych, A. D.
1988-01-01
Theoretical calculations are presented that explain the balloon observations by O'Neill et al. (1987) of a strong north-south anisotropy of atmospheric gamma rays over the Southern Hemisphere, and to predict the north-south ratios. It is shown that the gamma rays that originate at the longest distances from the telescopes give the largest north-south ratios. Comparisons are made of the experimental north-south ratios measured on balloons launched from Alice Springs, Australia, and from Palestine, Texas, U.S., and predictions are made for ratios at other geomagnetic latitudes and longitudes. It is pointed out that observers who measure backgrounds for celestial sources may be misled unless they correct for the north-south effect.
DOE Office of Scientific and Technical Information (OSTI.GOV)
T. M. Fitzmaurice
2001-04-01
The purpose of this Closure Report (CR) is to provide documentation of the completed corrective action at the Test Cell A Leachfield System and to provide data confirming the corrective action. The Test Cell A Leachfield System is identified in the Federal Facility Agreement and Consent Order (FFACO) of 1996 as Corrective Action Unit (CAU) 261. Remediation of CAU 261 is required under the FFACO (1996). CAU 261 is located in Area 25 of the Nevada Test Site (NTS), which is approximately 140 kilometers (87 miles) northwest of Las Vegas, Nevada (Figure 1). CAU 261 consists of two Corrective Action Sites (CASs): CAS 25-05-01, Leachfield; and CAS 25-05-07, Acid Waste Leach Pit (AWLP) (Figures 2 and 3). Test Cell A was operated during the 1960s and 1970s to support the Nuclear Rocket Development Station. Various operations within Building 3124 at Test Cell A resulted in liquid waste releases to the Leachfield and the AWLP. The following existing site conditions were reported in the Corrective Action Decision Document (CADD) (U.S. Department of Energy, Nevada Operations Office [DOE/NV], 1999): Soil in the leachfield was found to exceed the Nevada Division of Environmental Protection (NDEP) Action Level for petroleum hydrocarbons, the U.S. Environmental Protection Agency (EPA) preliminary remediation goals for semivolatile organic compounds, and background concentrations for strontium-90; Soil below the sewer pipe and approximately 4.5 meters (m) (15 feet [ft]) downstream of the initial outfall was found to exceed background concentrations for cesium-137 and strontium-90; Sludge in the leachfield septic tank was found to exceed the NDEP Action Level for petroleum hydrocarbons and to contain americium-241, cesium-137, uranium-234, uranium-238, potassium-40, and strontium-90; No constituents of concern (COC) were identified at the AWLP. The NDEP-approved CADD (DOE/NV, 1999) recommended Corrective Action Alternative 2, "Closure of the Septic Tank and Distribution Box, Partial Excavation, and Administrative Controls." The corrective action was performed following the NDEP-approved Corrective Action Plan (CAP) (DOE/NV, 2000).
Evaluation of an improved fiberoptics luminescence skin monitor with background correction.
Vo-Dinh, T
1987-06-01
In this work, an improved version of a fiberoptics luminescence monitor, the prototype luminoscope II, is evaluated for in situ quantitative measurements. The instrument was developed to detect traces of luminescing organic contaminants on skin. An electronic background-nulling system was designed and incorporated into the instrument to compensate for various skin background emissions. A dose-response curve for a coal liquid spotted on mouse skin was established. The results illustrated the usefulness of the instrument for in vivo detection of organic materials on laboratory mouse skin.
Effectiveness of health management departments of universities that train health managers in Turkey.
Karagoz, Sevgul; Balci, Ali
2007-01-01
This research aimed to examine the effectiveness of the health management departments of universities which train health managers in Turkey. The study compares - for lecturers and students - nine variables of organisational effectiveness. These nine dimensions are derived from Cameron (1978; 1981; 1986). Factor analysis was used to validate the scale developed by the researcher. For internal consistency and reliability, the Cronbach Alpha reliability coefficient and item total correlation were applied. A questionnaire was administered to a total of 207 people in health management departments in Turkey. In the analysis of the data, descriptive statistics and the t-test were used. According to our research findings, at individual university level, lecturers found their departments more effective than did their students. The highest effectiveness was perceived at Baskent University, a private university. The best outcome was achieved for 'organisational health', and 'the ability to acquire resources' achieved the lowest outcome. Effectiveness overall was found to be moderate. Copyright (c) 2006 John Wiley & Sons, Ltd.
Non-Gaussian microwave background fluctuations from nonlinear gravitational effects
NASA Technical Reports Server (NTRS)
Salopek, D. S.; Kunstatter, G. (Editor)
1991-01-01
Whether the statistics of primordial fluctuations for structure formation are Gaussian or otherwise may be determined if the Cosmic Background Explorer (COBE) Satellite makes a detection of the cosmic microwave-background temperature anisotropy ΔT_CMB/T_CMB. Non-Gaussian fluctuations may be generated in the chaotic inflationary model if two scalar fields interact nonlinearly with gravity. Theoretical contour maps are calculated for the resulting Sachs-Wolfe temperature fluctuations at large angular scales (greater than 3 degrees). In the long-wavelength approximation, one can confidently determine the nonlinear evolution of quantum noise with gravity during the inflationary epoch because: (1) different spatial points are no longer in causal contact; and (2) quantum gravity corrections are typically small; it is sufficient to model the system using classical random fields. If the potential for two scalar fields V(φ_1, φ_2) possesses a sharp feature, then non-Gaussian fluctuations may arise. An explicit model is given where cold spots in ΔT_CMB/T_CMB maps are suppressed as compared to the Gaussian case. The fluctuations are essentially scale-invariant.
SHAPE Selection (SHAPES) enrich for RNA structure signal in SHAPE sequencing-based probing data
Poulsen, Line Dahl; Kielpinski, Lukasz Jan; Salama, Sofie R.; Krogh, Anders; Vinther, Jeppe
2015-01-01
Selective 2′ Hydroxyl Acylation analyzed by Primer Extension (SHAPE) is an accurate method for probing of RNA secondary structure. In existing SHAPE methods, the SHAPE probing signal is normalized to a no-reagent control to correct for the background caused by premature termination of the reverse transcriptase. Here, we introduce a SHAPE Selection (SHAPES) reagent, N-propanone isatoic anhydride (NPIA), which retains the ability of SHAPE reagents to accurately probe RNA structure, but also allows covalent coupling between the SHAPES reagent and a biotin molecule. We demonstrate that SHAPES-based selection of cDNA–RNA hybrids on streptavidin beads effectively removes the large majority of background signal present in SHAPE probing data and that sequencing-based SHAPES data contain the same amount of RNA structure data as regular sequencing-based SHAPE data obtained through normalization to a no-reagent control. Moreover, the selection efficiently enriches for probed RNAs, suggesting that the SHAPES strategy will be useful for applications with high-background and low-probing signal such as in vivo RNA structure probing. PMID:25805860
Tu, Li-ping; Chen, Jing-bo; Hu, Xiao-juan; Zhang, Zhi-feng
2016-01-01
Background and Goal. The application of digital image processing techniques and machine learning methods in tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied nowadays. However, it is difficult for the outcomes to generalize because of lack of color reproducibility and image standardization. Our study aims at the exploration of tongue colors classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures taken by the TDA-1 tongue imaging device in TIFF format through ICC profile correction. Then we compare the mean value of L*a*b* of different tongue colors and evaluate the effect of the tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. Random forest method has a better performance than SVM in classification. SMOTE algorithm can increase classification accuracy by solving the imbalance of the varied color samples. Conclusions. At the premise of standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible. PMID:28050555
Linguistic Factors Influencing Speech Audiometric Assessment
Krijger, Stefanie; Meeuws, Matthias; De Ceulaer, Geert
2016-01-01
In speech audiometric testing, hearing performance is typically measured by calculating the number of correct repetitions of a speech stimulus. We investigate to what extent the repetition accuracy of Dutch speech stimuli presented against a background noise is influenced by nonauditory processes. We show that variation in verbal repetition accuracy is partially explained by morpholexical and syntactic features of the target language. Verbs, prepositions, conjunctions, determiners, and pronouns yield significantly lower correct repetitions than nouns, adjectives, or adverbs. The reduced repetition performance for verbs and function words is probably best explained by the similarities in the perceptual nature of verbal morphology and function words in Dutch. For sentences, an overall negative effect of syntactic complexity on speech repetition accuracy was found. The lowest number of correct repetitions was obtained with passive sentences, reflecting the cognitive cost of processing a noncanonical sentence structure. Taken together, these findings may have important implications for the audiological practice. In combination with hearing loss, linguistic complexity may increase the cognitive demands to process sentences in noise, leading to suboptimal functional hearing in day-to-day listening situations. Using test sentences with varying degrees of syntactic complexity may therefore provide useful information to measure functional hearing benefits. PMID:27830152
Nowotny, Kathryn M.
2014-01-01
This study examines race/ethnic disparities in treatment for drug dependent inmates in state correctional facilities. The data come from the 2004 Survey of Inmates in State Correctional Facilities. Fixed effects logistic regression is used to analyze treatment outcomes for 5,180 inmates housed within 286 prisons. The analysis accounts for differences in background characteristics (i.e., age, gender, marital status, foreign born status, veteran status), socioeconomic characteristics (i.e., education, employment prior to incarceration), mental health (i.e., diagnosis with a serious mental illness), and incarceration experiences (i.e., current conviction, previous incarceration episodes, time served, additional sentencing requirements, external social support, disciplinary violations). The findings identify a remarkable unmet need among drug dependent inmates in that less than one-half of drug dependent inmates had received any type of treatment in prison at the time of the interview with the most common treatment type being self-help groups. Compared to whites, drug dependent Latino inmates have significantly lower odds of utilizing treatment, yet there are no significant black-white disparities found. Implications for drug treatment within prisons are discussed. PMID:25270722
Moazzami, Zeinab; Dehdari, Tahere; Taghdisi, Mohammad Hosein; Soltanian, Alireza
2016-01-01
Background: One of the preventive strategies for chronic low back pain among operating room nurses is instructing proper body mechanics and postural behavior, for which the use of the Transtheoretical Model (TTM) has been recommended. Methods: Eighty two nurses who were in the contemplation and preparation stages for adopting correct body posture were randomly selected (control group = 40, intervention group = 42). TTM variables and body posture were measured at baseline and again after 1 and 6 months after the intervention. A four-week ergonomics educational intervention based on TTM variables was designed and conducted for the nurses in the intervention group. Results: Following the intervention, a higher proportion of nurses in the intervention group moved into the action stage (p < 0.05). Mean scores of self-efficacy, pros, experimental processes and correct body posture were also significantly higher in the intervention group (p < 0.05). No significant differences were found in the cons and behavioral processes, except for self-liberation, between the two groups (p > 0.05) after the intervention. Conclusions: The TTM provides a suitable framework for developing stage-based ergonomics interventions for postural behavior. PMID:26925897
Goldindec: A Novel Algorithm for Raman Spectrum Baseline Correction
Liu, Juntao; Sun, Jianyang; Huang, Xiuzhen; Li, Guojun; Liu, Binqiang
2016-01-01
Raman spectra have been widely used in biology, physics, and chemistry and have become an essential tool for the studies of macromolecules. Nevertheless, the raw Raman signal is often obscured by a broad background curve (or baseline) due to the intrinsic fluorescence of the organic molecules, which leads to unpredictable negative effects in quantitative analysis of Raman spectra. Therefore, it is essential to correct this baseline before analyzing raw Raman spectra. Polynomial fitting has proven to be the most convenient and simplest method and has high accuracy. In polynomial fitting, the cost function used and its parameters are crucial. This article proposes a novel iterative algorithm named Goldindec, freely available for noncommercial use as noted in text, with a new cost function that not only conquers the influence of great peaks but also solves the problem of low correction accuracy when there is a high peak number. Goldindec automatically generates parameters from the raw data rather than by empirical choice, as in previous methods. Comparisons with other algorithms on the benchmark data show that Goldindec has a higher accuracy and computational efficiency, and is hardly affected by great peaks, peak number, and wavenumber. PMID:26037638
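As context for the abstract above, a generic iterative polynomial baseline fit (the family of methods Goldindec refines with its own cost function and automatic parameter choice) can be sketched as follows; this is not the Goldindec algorithm itself, and the toy spectrum is invented.

```python
import numpy as np

# Generic iterative polynomial baseline fit: fit a polynomial, clip the working
# spectrum to the fit so peaks stop pulling the baseline up, and repeat.
# Degree, iteration count and the toy spectrum are illustrative choices.
def iterative_poly_baseline(x, y, degree=5, n_iter=50):
    xs = (x - x.mean()) / x.std()              # scale x for a well-conditioned fit
    work = y.astype(float).copy()
    for _ in range(n_iter):
        coeffs = np.polyfit(xs, work, degree)
        baseline = np.polyval(coeffs, xs)
        work = np.minimum(work, baseline)      # suppress points above the current fit
    return baseline

x = np.linspace(400, 1800, 1400)               # wavenumber axis (cm^-1)
fluorescence = 0.002 * (x - 400) + 5 * np.exp(-((x - 1100) / 600) ** 2)
peaks = 3 * np.exp(-((x - 1001) / 6) ** 2) + 2 * np.exp(-((x - 1450) / 8) ** 2)
spectrum = fluorescence + peaks

corrected = spectrum - iterative_poly_baseline(x, spectrum)
print(f"residual background near 800 cm^-1: {corrected[np.argmin(np.abs(x - 800))]:.3f}")
```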
No association between oxytocin or prolactin gene variants and childhood-onset mood disorders
Strauss, John S.; Freeman, Natalie L.; Shaikh, Sajid A.; Vetró, Ágnes; Kiss, Enikő; Kapornai, Krisztina; Daróczi, Gabriella; Rimay, Timea; Kothencné, Viola Osváth; Dombovári, Edit; Kaczvinszk, Emília; Tamás, Zsuzsa; Baji, Ildikó; Besny, Márta; Gádoros, Julia; DeLuca, Vincenzo; George, Charles J.; Dempster, Emma; Barr, Cathy L.; Kovacs, Maria; Kennedy, James L.
2010-01-01
Background Oxytocin (OXT) and prolactin (PRL) are neuropeptide hormones that interact with the serotonin system and are involved in the stress response and social affiliation. In human studies, serum OXT and PRL levels have been associated with depression and related phenotypes. Our purpose was to determine if single nucleotide polymorphisms (SNPs) at the loci for OXT, PRL and their receptors, OXTR and PRLR, were associated with childhood-onset mood disorders (COMD). Methods Using 678 families in a family-based association design, we genotyped sixteen SNPs at OXT, PRL, OXTR and PRLR to test for association with COMD. Results No significant associations were found for SNPs in the OXTR, PRL, or PRLR genes. Two of three SNPs 3' of the OXT gene were associated with COMD (p ≤ 0.02), significant after spectral decomposition, but were not significant after additionally correcting for the number of genes tested. Supplementary analyses of parent-of-origin and proband sex effects for OXT SNPs by Fisher’s Exact test were not significant after Bonferroni correction. Conclusions We have examined sixteen OXT and PRL system gene variants, with no evidence of statistically significant association after correction for multiple tests. PMID:20547007
Qi, Zhen; Tu, Li-Ping; Chen, Jing-Bo; Hu, Xiao-Juan; Xu, Jia-Tuo; Zhang, Zhi-Feng
2016-01-01
Background and Goal. The application of digital image processing techniques and machine learning methods in tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied nowadays. However, it is difficult for the outcomes to generalize because of lack of color reproducibility and image standardization. Our study aims at the exploration of tongue colors classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures taken by the TDA-1 tongue imaging device in TIFF format through ICC profile correction. Then we compare the mean value of L*a*b* of different tongue colors and evaluate the effect of the tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. Random forest method has a better performance than SVM in classification. SMOTE algorithm can increase classification accuracy by solving the imbalance of the varied color samples. Conclusions. At the premise of standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible.
Scalar Contribution to the Graviton Self-Energy During Inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Sohyun
2012-01-01
We use dimensional regularization to evaluate the one loop contribution to the graviton self-energy from a massless, minimally coupled scalar on a locally de Sitter background. For noncoincident points our result agrees with the stress tensor correlators obtained recently by Perez-Nadal, Roura and Verdaguer. We absorb the ultraviolet divergences using the R^2 and C^2 counterterms first derived by ’t Hooft and Veltman, and we take the D = 4 limit of the finite remainder. The renormalized result is expressed as the sum of two transverse, 4th order differential operators acting on nonlocal, de Sitter invariant structure functions. In this form it can be used to quantum-correct the linearized Einstein equations so that one can study how the inflationary production of infrared scalars affects the propagation of dynamical gravitons and the force of gravity. We have seen that they have no effect on the propagation of dynamical gravitons. Our computation motivates a conjecture for the first correction to the vacuum state wave functional of gravitons. We comment as well on performing the same analysis for the more interesting contribution from inflationary gravitons, and on inferring one loop corrections to the force of gravity.
Primordial power spectrum features and consequences
NASA Astrophysics Data System (ADS)
Goswami, G.
2014-03-01
The present Cosmic Microwave Background (CMB) temperature and polarization anisotropy data is consistent with not only a power law scalar primordial power spectrum (PPS) with a small running but also with the scalar PPS having very sharp features. This has motivated inflationary models with such sharp features. Recently, even the possibility of having nulls in the power spectrum (at certain scales) has been considered. The existence of these nulls has been shown in linear perturbation theory. What shall be the effect of higher order corrections on such nulls? Inspired by this question, we have attempted to calculate quantum radiative corrections to the Fourier transform of the 2-point function in a toy field theory and address the issue of how these corrections to the power spectrum behave in models in which the tree-level power spectrum has a sharp dip (but not a null). In particular, we have considered the possibility of the relative enhancement of radiative corrections in a model in which the tree-level spectrum goes through a dip in power at a certain scale. The mode functions of the field (whose power spectrum is to be evaluated) are chosen such that they undergo the kind of dynamics that leads to a sharp dip in the tree level power spectrum. Next, we have considered the situation in which this field has quartic self interactions, and found one loop correction in a suitably chosen renormalization scheme. Thus, we have attempted to answer the following key question in the context of this toy model (which is as important in the realistic case): In the chosen renormalization scheme, can quantum radiative corrections be enhanced relative to tree-level power spectrum at scales, at which sharp dips appear in the tree-level spectrum?
Concentrating Solar Power Projects - Holaniku at Keahole Point
Status: Currently Non-Operational. Start Year: 2009. Technology: Parabolic trough. Country: United States.
Magnusson, P; Bäck, S A; Olsson, L E
1999-11-01
MR image nonuniformity can vary significantly with the spin-echo pulse sequence repetition time. When MR images with different nonuniformity shapes are used in a T1 calculation, the resulting T1 image becomes nonuniform. As shown in this work, the uniformity TR-dependence of the spin-echo pulse sequence is a critical property for T1 measurements in general and for ferrous sulfate dosimeter gel (FeGel) applications in particular. The purpose was to study the characteristics of the MR image plane nonuniformity in FeGel evaluation. This included studies of the possibility of decreasing nonuniformities by selecting uniformity-optimized repetition times, studies of the transmitted and received RF fields, and studies of the effectiveness of the correction methods of background subtraction and quotient correction. A pronounced MR image nonuniformity variation with repetition and T1 relaxation time was observed, and was found to originate from nonuniform RF transmission in combination with the inherent differences in T1 relaxation for different repetition times. Neither the T1 calculation itself, nor the uniformity-optimized repetition times, nor any of the correction methods studied could sufficiently correct the nonuniformities observed in the T1 images. The nonuniformities were found to vary considerably less with inversion time for the inversion-recovery pulse sequence than with repetition time for the spin-echo pulse sequence, resulting in considerably lower T1 image nonuniformity levels.
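For orientation, the two-point spin-echo T1 calculation that these nonuniformities propagate into can be sketched as below, assuming the usual saturation-recovery signal model S(TR) ∝ M0(1 - exp(-TR/T1)) so that T1 follows from the ratio of two repetition times; the TR values and pixel intensities are illustrative, not values from the study.

```python
import numpy as np
from scipy.optimize import brentq

# Two-point spin-echo T1 estimate: with S(TR) proportional to
# M0 * (1 - exp(-TR/T1)), the TE/T2 factor cancels in the ratio of two
# acquisitions at different repetition times.
TR1, TR2 = 200.0, 1200.0          # repetition times (ms)
S1, S2 = 310.0, 820.0             # signal in the same voxel at TR1 and TR2

def ratio_residual(t1):
    model = (1 - np.exp(-TR1 / t1)) / (1 - np.exp(-TR2 / t1))
    return model - S1 / S2

T1 = brentq(ratio_residual, 1.0, 5000.0)   # bracket chosen generously, in ms
print(f"estimated T1 = {T1:.0f} ms")
```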
Does Human Milk Modulate Body Composition in Late Preterm Infants at Term-Corrected Age?
Giannì, Maria Lorella; Consonni, Dario; Liotto, Nadia; Roggero, Paola; Morlacchi, Laura; Piemontese, Pasqua; Menis, Camilla; Mosca, Fabio
2016-10-23
(1) Background: Late preterm infants account for the majority of preterm births and are at risk of altered body composition. Because body composition modulates later health outcomes and human milk is recommended as the normal method for infant feeding, we sought to investigate whether human milk feeding in early life can modulate body composition development in late preterm infants; (2) Methods: Neonatal, anthropometric and feeding data of 284 late preterm infants were collected. Body composition was evaluated at term-corrected age by air displacement plethysmography. The effect of human milk feeding on fat-free mass and fat mass content was evaluated using multiple linear regression analysis; (3) Results: Human milk was fed to 68% of the infants. According to multiple regression analysis, being fed any human milk at discharge and at term-corrected age and being fed exclusively human milk at term-corrected age were positively associated with fat-free mass content (β = -47.9, 95% confidence interval (CI) = -95.7; -0.18; p = 0.049; β = -89.6, 95% CI = -131.5; -47.7; p < 0.0001; β = -104.1, 95% CI = -151.4; -56.7, p < 0.0001); (4) Conclusion: Human milk feeding appears to be associated with fat-free mass deposition in late preterm infants. Healthcare professionals should direct efforts toward promoting and supporting breastfeeding in these vulnerable infants.
XAP, a program for deconvolution and analysis of complex X-ray spectra
Quick, James E.; Haleby, Abdul Malik
1989-01-01
The X-ray analysis program (XAP) is a spectral-deconvolution program written in BASIC and specifically designed to analyze complex spectra produced by energy-dispersive X-ray analytical systems (EDS). XAP compensates for spectrometer drift, utilizes digital filtering to remove background from spectra, and solves for element abundances by least-squares, multiple-regression analysis. Rather than base analyses on only a few channels, broad spectral regions of a sample are reconstructed from standard reference spectra. The effects of this approach are (1) elimination of tedious spectrometer adjustments, (2) removal of background independent of sample composition, and (3) automatic correction for peak overlaps. Although the program was written specifically to operate a KEVEX 7000 X-ray fluorescence analytical system, it could be adapted (with minor modifications) to analyze spectra produced by scanning electron microscopes and electron microprobes, and X-ray diffractometer patterns obtained from whole-rock powders.
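A hedged sketch of the fitting strategy described above, reconstructing broad spectral regions as a linear combination of standard reference spectra and solving by least squares so peak overlaps are handled jointly; the synthetic spectra and the use of non-negative least squares are assumptions for illustration, not the XAP/KEVEX implementation.

```python
import numpy as np
from scipy.optimize import nnls

# Model a background-subtracted EDS spectrum as a non-negative linear
# combination of standard reference spectra over a broad channel window and
# solve by least squares. Spectra are synthetic placeholders.
channels = np.arange(2048)

def line(center, width=12.0):
    # Gaussian stand-in for a reference emission line shape.
    return np.exp(-0.5 * ((channels - center) / width) ** 2)

standards = np.column_stack([line(400), line(430), line(900)])   # first two overlap
true_abundance = np.array([2.0, 0.8, 1.5])
sample = standards @ true_abundance
sample += np.random.default_rng(1).normal(0, 0.01, channels.size)

abundance, residual_norm = nnls(standards, sample)
print(np.round(abundance, 2), round(residual_norm, 3))
```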
NASA Astrophysics Data System (ADS)
Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi
2016-06-01
A growing body of research on statistical applications for characterization of atmospheric aerosol Fourier transform infrared (FT-IR) samples collected on polytetrafluoroethylene (PTFE) filters (e.g., Russell et al., 2011; Ruthenburg et al., 2014) and a rising interest in analyzing FT-IR samples collected by air quality monitoring networks call for an automated PTFE baseline correction solution. The existing polynomial technique (Takahama et al., 2013) is not scalable to a project with a large number of aerosol samples because it contains many parameters and requires expert intervention. Therefore, the question of how to develop an automated method for baseline correcting hundreds to thousands of ambient aerosol spectra given the variability in both environmental mixture composition and PTFE baselines remains. This study approaches the question by detailing the statistical protocol, which allows for the precise definition of analyte and background subregions, applies nonparametric smoothing splines to reproduce sample-specific PTFE variations, and integrates performance metrics from atmospheric aerosol and blank samples alike in the smoothing parameter selection. Referencing 794 atmospheric aerosol samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011, we start by identifying key FT-IR signal characteristics, such as non-negative absorbance or analyte segment transformation, to capture sample-specific transitions between background and analyte. While referring to qualitative properties of PTFE background, the goal of smoothing splines interpolation is to learn the baseline structure in the background region to predict the baseline structure in the analyte region. We then validate the model by comparing smoothing splines baseline-corrected spectra with uncorrected and polynomial baseline (PB)-corrected equivalents via three statistical applications: (1) clustering analysis, (2) functional group quantification, and (3) thermal optical reflectance (TOR) organic carbon (OC) and elemental carbon (EC) predictions. The discrepancy rate for a four-cluster solution is 10 %. For all functional groups but carboxylic COH the discrepancy is ≤ 10 %. Performance metrics obtained from TOR OC and EC predictions (R2 ≥ 0.94, bias ≤ 0.01 µg m-3, and error ≤ 0.04 µg m-3) are on a par with those obtained from uncorrected and PB-corrected spectra. The proposed protocol leads to visually and analytically similar estimates to those generated by the polynomial method. More importantly, the automated solution allows us and future users to evaluate its analytical reproducibility while minimizing reducible user bias. We anticipate the protocol will enable FT-IR researchers and data analysts to quickly and reliably analyze a large amount of data and connect them to a variety of available statistical learning methods to be applied to analyte absorbances isolated in atmospheric aerosol samples.
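A minimal sketch of the smoothing-spline idea, assuming simple hand-picked background windows and a discrepancy-style smoothing target rather than the paper's metric-driven parameter selection: fit the spline on background subregions only and evaluate it across the analyte region to predict and subtract the PTFE baseline.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Fit a smoothing spline to background (analyte-free) wavenumber windows of a
# synthetic FT-IR spectrum, then predict and subtract the baseline across the
# analyte region. Windows, noise level and smoothing target are illustrative.
rng = np.random.default_rng(2)
wavenumber = np.linspace(4000, 1500, 2600)                     # descending, as scanned
baseline_true = 0.02 + 1e-8 * (wavenumber - 2700) ** 2         # smooth PTFE-like curve
analyte = 0.05 * np.exp(-((wavenumber - 2920) / 30) ** 2)      # e.g., aliphatic C-H band
spectrum = baseline_true + analyte + rng.normal(0, 5e-4, wavenumber.size)

background = (wavenumber > 3600) | ((wavenumber < 2500) & (wavenumber > 1600))
# UnivariateSpline needs increasing x, so reverse the masked arrays;
# s is a discrepancy-style target: (number of points) * (noise variance).
spline = UnivariateSpline(wavenumber[background][::-1],
                          spectrum[background][::-1],
                          s=background.sum() * (5e-4) ** 2)
corrected = spectrum - spline(wavenumber)
print(f"max baseline error: {np.abs(spline(wavenumber) - baseline_true).max():.4f}")
```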
Normalization, bias correction, and peak calling for ChIP-seq
Diaz, Aaron; Park, Kiyoub; Lim, Daniel A.; Song, Jun S.
2012-01-01
Next-generation sequencing is rapidly transforming our ability to profile the transcriptional, genetic, and epigenetic states of a cell. In particular, sequencing DNA from the immunoprecipitation of protein-DNA complexes (ChIP-seq) and methylated DNA (MeDIP-seq) can reveal the locations of protein binding sites and epigenetic modifications. These approaches contain numerous biases which may significantly influence the interpretation of the resulting data. Rigorous computational methods for detecting and removing such biases are still lacking. Also, multi-sample normalization still remains an important open problem. This theoretical paper systematically characterizes the biases and properties of ChIP-seq data by comparing 62 separate publicly available datasets, using rigorous statistical models and signal processing techniques. Statistical methods for separating ChIP-seq signal from background noise, as well as correcting enrichment test statistics for sequence-dependent and sonication biases, are presented. Our method effectively separates reads into signal and background components prior to normalization, improving the signal-to-noise ratio. Moreover, most peak callers currently use a generic null model which suffers from low specificity at the sensitivity level requisite for detecting subtle, but true, ChIP enrichment. The proposed method of determining a cell type-specific null model, which accounts for cell type-specific biases, is shown to be capable of achieving a lower false discovery rate at a given significance threshold than current methods. PMID:22499706
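As a generic, hedged illustration of the background-versus-signal logic discussed above (not the authors' cell-type-specific null model), one can estimate a Poisson background rate from background-dominated bins and flag enriched bins with a Benjamini-Hochberg correction; the counts below are simulated.

```python
import numpy as np
from scipy.stats import poisson

# Treat genome-wide background coverage as Poisson with a rate taken from
# background-dominated bins, then flag bins whose ChIP counts are unlikely
# under that rate, controlling the FDR with Benjamini-Hochberg.
rng = np.random.default_rng(3)
chip_counts = np.concatenate([rng.poisson(5, 9800),     # background bins
                              rng.poisson(40, 200)])    # truly enriched bins

background_rate = np.median(chip_counts)                # background-dominated estimate
pvals = poisson.sf(chip_counts - 1, background_rate)    # P(X >= observed count)

order = np.argsort(pvals)                               # Benjamini-Hochberg at 5 %
adjusted = pvals[order] * pvals.size / (np.arange(pvals.size) + 1)
enriched = np.zeros(pvals.size, dtype=bool)
passing = np.nonzero(adjusted <= 0.05)[0]
if passing.size:
    enriched[order[:passing.max() + 1]] = True
print(f"{int(enriched.sum())} bins called enriched out of {pvals.size}")
```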
Retrieval of background surface reflectance with BRD components from pre-running BRDF
NASA Astrophysics Data System (ADS)
Choi, Sungwon; Lee, Kyeong-Sang; Jin, Donghyun; Lee, Darae; Han, Kyung-Soo
2016-10-01
Many countries launch satellites to observe the Earth's surface, and surface reflectance has become a core parameter for land and climate studies. Observing surface reflectance from satellites, however, has weaknesses: limited temporal resolution and a strong dependence on view and solar angles. These bidirectional effects introduce noise into reflectance time series and can lead to errors when determining surface reflectance, so a correction model that normalizes the sensor data is needed. The Bidirectional Reflectance Distribution Function (BRDF) improves accuracy by accounting for isotropic, geometric, and volumetric scattering. In this study we apply a BRDF model in two steps to retrieve the Background Surface Reflectance (BSR). In the first step (pre-running BRDF), the BRDF model is fitted to observed SPOT/VEGETATION (VGT-S1) surface reflectance and the corresponding angular data to retrieve the Bidirectional Reflectance Distribution (BRD) coefficients that describe the three scattering components. In the second step, the BRDF model is applied in the opposite direction, using the BRD coefficients and the angular data, to retrieve the BSR. The resulting BSR is very similar to the VGT-S1 reflectance and shows adequate values: the highest BSR reflectance does not exceed 0.4 in the blue channel, 0.45 in the red channel, and 0.55 in the NIR channel. For validation we compare the reflectance of clear-sky pixels identified with the SPOT/VGT status map; the bias between BSR and VGT-S1 ranges from 0.0116 to 0.0158 and the RMSE from 0.0459 to 0.0545, which we consider reasonable agreement. A weakness of this study is the presence of missing pixels in the BSR, which occur where too few observations are available to retrieve the BRD coefficients. If these missing pixels are filled, the BSR should retrieve surface products with higher accuracy and become useful for downstream products that depend on surface reflectance, such as cloud masking and aerosol retrieval.
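A hedged sketch of the kernel-driven inversion implied above: model each observation as R = f_iso + f_vol·K_vol + f_geo·K_geo, solve the three BRD coefficients by least squares from multi-angular reflectances, and evaluate the model at a reference geometry. The kernel values and reflectances are placeholders; a real run would compute Ross-Thick/Li-Sparse kernels from the SPOT/VEGETATION sun and view angles.

```python
import numpy as np

# Kernel-driven BRDF inversion sketch: R = f_iso + f_vol*K_vol + f_geo*K_geo,
# with the three BRD coefficients solved by least squares and the model
# re-evaluated at a reference geometry to give a normalized reflectance.
k_vol = np.array([0.05, -0.02, 0.10, 0.07, -0.01, 0.12])      # volumetric kernel per observation
k_geo = np.array([-1.10, -0.80, -1.40, -1.25, -0.90, -1.50])  # geometric kernel per observation
reflectance = np.array([0.21, 0.19, 0.24, 0.22, 0.18, 0.25])  # observed reflectances

design = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
(f_iso, f_vol, f_geo), *_ = np.linalg.lstsq(design, reflectance, rcond=None)

k_vol_ref, k_geo_ref = 0.03, -1.0                             # kernels at the reference geometry
r_normalized = f_iso + f_vol * k_vol_ref + f_geo * k_geo_ref
print(f"f_iso={f_iso:.3f}, f_vol={f_vol:.3f}, f_geo={f_geo:.3f}, R_ref={r_normalized:.3f}")
```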
Crowdsourcing Participatory Evaluation of Medical Pictograms Using Amazon Mechanical Turk
Willis, Matt; Sun, Peiyuan; Wang, Jun
2013-01-01
Background Consumer and patient participation proved to be an effective approach for medical pictogram design, but it can be costly and time-consuming. We proposed and evaluated an inexpensive approach that crowdsourced the pictogram evaluation task to Amazon Mechanical Turk (MTurk) workers, who are usually referred to as the “turkers”. Objective To answer two research questions: (1) Is the turkers’ collective effort effective for identifying design problems in medical pictograms? and (2) Do the turkers’ demographic characteristics affect their performance in medical pictogram comprehension? Methods We designed a Web-based survey (open-ended tests) to ask 100 US turkers to type in their guesses of the meaning of 20 US pharmacopeial pictograms. Two judges independently coded the turkers’ guesses into four categories: correct, partially correct, wrong, and completely wrong. The comprehensibility of a pictogram was measured by the percentage of correct guesses, with each partially correct guess counted as 0.5 correct. We then conducted a content analysis on the turkers’ interpretations to identify misunderstandings and assess whether the misunderstandings were common. We also conducted a statistical analysis to examine the relationship between turkers’ demographic characteristics and their pictogram comprehension performance. Results The survey was completed within 3 days of our posting the task to the MTurk, and the collected data are publicly available in the multimedia appendix for download. The comprehensibility for the 20 tested pictograms ranged from 45% to 98%, with an average of 72.5%. The comprehensibility scores of 10 pictograms were strongly correlated to the scores of the same pictograms reported in another study that used oral response–based open-ended testing with local people. The turkers’ misinterpretations shared common errors that exposed design problems in the pictograms. Participant performance was positively correlated with their educational level. Conclusions The results confirmed that crowdsourcing can be used as an effective and inexpensive approach for participatory evaluation of medical pictograms. Through Web-based open-ended testing, the crowd can effectively identify problems in pictogram designs. The results also confirmed that education has a significant effect on the comprehension of medical pictograms. Since low-literate people are underrepresented in the turker population, further investigation is needed to examine to what extent turkers’ misunderstandings overlap with those elicited from low-literate people. PMID:23732572
Impact of Aerosols on Scene Collection and Scene Correction
2009-03-01
...the atmosphere on the way to the satellite. In order for a satellite-borne sensor to distinguish a target from its background, the difference between ... the target and background top-of-the-atmosphere radiance (ΔL_T) must be greater than the sensor radiance sensitivity (ΔL_s). The difference ... northwesterly, with prevailing surface visibilities between four and seven miles in dust, sand, or haze. Stronger flow over northern Saudi Arabia can loft
Electromagnetic fields with vanishing quantum corrections
NASA Astrophysics Data System (ADS)
Ortaggio, Marcello; Pravda, Vojtěch
2018-04-01
We show that a large class of null electromagnetic fields are immune to any modifications of Maxwell's equations in the form of arbitrary powers and derivatives of the field strength. These are thus exact solutions to virtually any generalized classical electrodynamics containing both non-linear terms and higher derivatives, including, e.g., non-linear electrodynamics as well as QED- and string-motivated effective theories. This result holds not only in a flat or (anti-)de Sitter background, but also in a larger subset of Kundt spacetimes, which allow for the presence of aligned gravitational waves and pure radiation.
NASA Technical Reports Server (NTRS)
Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.
1999-01-01
Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.
Precise predictions for V+jets dark matter backgrounds
NASA Astrophysics Data System (ADS)
Lindert, J. M.; Pozzorini, S.; Boughezal, R.; Campbell, J. M.; Denner, A.; Dittmaier, S.; Gehrmann-De Ridder, A.; Gehrmann, T.; Glover, N.; Huss, A.; Kallweit, S.; Maierhöfer, P.; Mangano, M. L.; Morgan, T. A.; Mück, A.; Petriello, F.; Salam, G. P.; Schönherr, M.; Williams, C.
2017-12-01
High-energy jets recoiling against missing transverse energy (MET) are powerful probes of dark matter at the LHC. Searches based on large MET signatures require a precise control of the Z(νν̄)+jet background in the signal region. This can be achieved by taking accurate data in control regions dominated by Z(ℓ⁺ℓ⁻)+jet, W(ℓν)+jet and γ+jet production, and extrapolating to the Z(νν̄)+jet background by means of precise theoretical predictions. In this context, recent advances in perturbative calculations open the door to significant sensitivity improvements in dark matter searches. In this spirit, we present a combination of state-of-the-art calculations for all relevant V+jets processes, including throughout NNLO QCD corrections and NLO electroweak corrections supplemented by Sudakov logarithms at two loops. Predictions at parton level are provided together with detailed recommendations for their usage in experimental analyses based on the reweighting of Monte Carlo samples. Particular attention is devoted to the estimate of theoretical uncertainties in the framework of dark matter searches, where subtle aspects such as correlations across different V+jet processes play a key role. The anticipated theoretical uncertainty in the Z(νν̄)+jet background is at the few percent level up to the TeV range.
Richetto, Juliet; Labouesse, Marie A.; Poe, Michael M.; Cook, James M.; Grace, Anthony A.; Riva, Marco A.
2015-01-01
Background: Impaired γ-aminobutyric acid (GABA) signaling may contribute to the emergence of cognitive deficits and subcortical dopaminergic hyperactivity in patients with schizophrenia and related psychotic disorders. Against this background, it has been proposed that pharmacological interventions targeting GABAergic dysfunctions may prove useful in correcting such cognitive impairments and dopaminergic imbalances. Methods: Here, we explored possible beneficial effects of the benzodiazepine-positive allosteric modulator SH-053-2’F-S-CH3, with partial selectivity at the α2, α3, and α5 subunits of the GABAA receptor in an immune-mediated neurodevelopmental disruption model. The model is based on prenatal administration of the viral mimetic polyriboinosinic-polyribocytidilic acid [poly(I:C)] in mice, which is known to capture various GABAergic, dopamine-related, and cognitive abnormalities implicated in schizophrenia and related disorders. Results: Real-time polymerase chain reaction analyses confirmed the expected alterations in GABAA receptor α subunit gene expression in the medial prefrontal cortices and ventral hippocampi of adult poly(I:C) offspring relative to control offspring. Systemic administration of SH-053-2’F-S-CH3 failed to normalize the poly(I:C)-induced deficits in working memory and social interaction, but instead impaired performance in these cognitive and behavioral domains both in control and poly(I:C) offspring. In contrast, SH-053-2’F-S-CH3 was highly effective in mitigating the poly(I:C)-induced amphetamine hypersensitivity phenotype without causing side effects in control offspring. Conclusions: Our preclinical data suggest that benzodiazepine-like positive allosteric modulators with activity at the α2, α3, and α5 subunits of the GABAA receptor may be particularly useful in correcting pathological overactivity of the dopaminergic system, but they may be ineffective in targeting multiple pathological domains that involve the co-existence of psychotic, social, and cognitive dysfunctions. PMID:25636893
Statistical significance versus clinical relevance.
van Rijn, Marieke H C; Bech, Anneke; Bouyer, Jean; van den Brand, Jan A J G
2017-04-01
In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P < 0.05 means that the null hypothesis is false, and P ≥ 0.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample. In other words, the P-value informs about the likelihood of the data given the null hypothesis and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
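A minimal simulation, not taken from the paper, illustrates the interpretation quoted above: when the null hypothesis is true, roughly 5% of identically repeated studies produce P < 0.05.

# Minimal simulation (not from the paper) of the interpretation quoted above:
# when the null hypothesis is true, about 5% of identically repeated studies
# yield P < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_arm = 10_000, 50
false_positives = 0
for _ in range(n_studies):
    a = rng.normal(0.0, 1.0, n_per_arm)   # both arms drawn from the same population,
    b = rng.normal(0.0, 1.0, n_per_arm)   # i.e. the null hypothesis is true
    _, p = stats.ttest_ind(a, b)
    false_positives += p < 0.05
print(false_positives / n_studies)        # close to 0.05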
Accurate Measurement of Small Airways on Low-Dose Thoracic CT Scans in Smokers
Conradi, Susan H.; Atkinson, Jeffrey J.; Zheng, Jie; Schechtman, Kenneth B.; Senior, Robert M.; Gierada, David S.
2013-01-01
Background: Partial volume averaging and tilt relative to the scan plane on transverse images limit the accuracy of airway wall thickness measurements on CT scan, confounding assessment of the relationship between airway remodeling and clinical status in COPD. The purpose of this study was to assess the effect of partial volume averaging and tilt corrections on airway wall thickness measurement accuracy and on relationships between airway wall thickening and clinical status in COPD. Methods: Airway wall thickness measurements in 80 heavy smokers were obtained on transverse images from low-dose CT scan using the open-source program Airway Inspector. Measurements were corrected for partial volume averaging and tilt effects using an attenuation- and geometry-based algorithm and compared with functional status. Results: The algorithm reduced wall thickness measurements of smaller airways to a greater degree than larger airways, increasing the overall range. When restricted to analyses of airways with an inner diameter < 3.0 mm, for a theoretical airway of 2.0 mm inner diameter, the wall thickness decreased from 1.07 ± 0.07 to 0.29 ± 0.10 mm, and the square root of the wall area decreased from 3.34 ± 0.15 to 1.58 ± 0.29 mm, comparable to histologic measurement studies. Corrected measurements had higher correlation with FEV1, differed more between BMI, airflow obstruction, dyspnea, and exercise capacity (BODE) index scores, and explained a greater proportion of FEV1 variability in multivariate models. Conclusions: Correcting for partial volume averaging improves accuracy of airway wall thickness estimation, allowing direct measurement of the small airways to better define their role in COPD. PMID:23172175
Nishikawa, Tomofumi; Okamura, Tomonori; Nakayama, Hirofumi; Miyamatsu, Naomi; Morimoto, Akiko; Toyoda, Kazunori; Suzuki, Kazuo; Toyota, Akihiro; Hata, Takashi; Yamaguchi, Takenori
2016-01-01
Background An immediate ambulance call offers the greatest opportunity for acute stroke therapy. Effectively using ambulance services requires strengthening the association between knowledge of early stroke symptoms and intention to call an ambulance at stroke onset, and encouraging the public to use ambulance services. Methods The present study utilized data from the Acquisition of Stroke Knowledge (ASK) study, which administered multiple-choice, mail-in surveys regarding awareness of early stroke symptoms and response to a stroke attack before and after a 2-year stroke education campaign in two areas subject to intensive and moderate intervention, as well as in a control area, in Japan. In these three areas, 3833 individuals (1680, 1088 and 1065 participants in intensive intervention, moderate intervention, and control areas, respectively), aged 40 to 74 years, who responded appropriately to each survey were included in the present study. Results After the intervention, the number of correctly identified symptoms significantly associated with intention to call an ambulance (P < 0.05) increased (eg, from 4 to 5 correctly identified symptoms), without increasing choice of decoy symptoms in the intensive intervention area. Meanwhile, in other areas, rate of identification of not only correct symptoms but also decoy symptoms associated with intention to call an ambulance increased. Furthermore, the association between improvement in the knowledge of stroke symptoms and intention to call an ambulance was observed only in the intensive intervention area (P = 0.009). Conclusions Our results indicate that intensive interventions are useful for strengthening the association between correct knowledge of early stroke symptoms and intention to call an ambulance, without strengthening the association between incorrect knowledge and intention to call an ambulance. PMID:26441211
[Efficiency evaluation of capsaicinoids to discriminate bio-waste oils from edible vegetable oils].
Mao, Lisha; Liu, Honghe; Kang, Li; Jiang, Jie; Liao, Shicheng; Liu, Guihua; Deng, Pingjian
2014-07-01
To evaluate the efficiency of capsaicinoids in discriminating bio-waste oils from edible vegetable oils, 14 raw vegetable oils, 24 fried waste oils, 34 kitchen-waste oils, 32 edible non-peanut vegetable oils, 32 edible peanut oils, 16 edible oils with added flavoring, and 11 refined bio-waste oils were prepared and examined for capsaicinoids, including capsaicin, dihydrocapsaicin, and nonylic acid vanillylamide. The detection results were statistically tested by sample category to assess the effectiveness of capsaicinoids in identifying bio-waste oils. As an indicator, capsaicin showed high detection sensitivity and the highest efficiency in correctly discriminating kitchen-waste oils and refined bio-waste oils from edible non-peanut vegetable oils, with identification accuracy rates of 100% and 90.1%, respectively. A background level of capsaicin is present in peanut oil. CONCLUSION Capsaicin added during cooking is retained through the refining process and can hardly be removed. Provided the background interference is fully eliminated, capsaicinoids can effectively discriminate bio-waste oils from edible vegetable oils when used in combination.
A Bayesian Method for Identifying Contaminated Detectors in Low-Level Alpha Spectrometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maclellan, Jay A.; Strom, Daniel J.; Joyce, Kevin E.
2011-11-02
Analyses used for radiobioassay and other radiochemical tests are normally designed to meet specified quality objectives, such as relative bias, precision, and minimum detectable activity (MDA). In the case of radiobioassay analyses for alpha emitting radionuclides, a major determiner of the process MDA is the instrument background. Alpha spectrometry detectors are often restricted to only a few counts over multi-day periods in order to meet required MDAs for nuclides such as plutonium-239 and americium-241. A detector background criterion is often set empirically based on experience, or frequentist or classical statistics are applied to the calculated background count necessary to meet a required MDA. An acceptance criterion for the detector background is set at the multiple of the estimated background standard deviation above the assumed mean that provides an acceptably small probability of observation if the mean and standard deviation estimate are correct. The major problem with this method is that the observed background counts used to estimate the mean, and thereby the standard deviation when a Poisson distribution is assumed, are often in the range of zero to three counts. At those expected count levels it is impossible to obtain a good estimate of the true mean from a single measurement. As an alternative, Bayesian statistical methods allow calculation of the expected detector background count distribution based on historical counts from new, uncontaminated detectors. This distribution can then be used to identify detectors showing an increased probability of contamination. The effect of varying the assumed range of background counts (i.e., the prior probability distribution) from new, uncontaminated detectors will be discussed.
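The following sketch illustrates the general Bayesian idea described in the abstract; the specific prior, its parameters, the toy counts and the flagging threshold are assumptions and may differ from the authors' procedure. Historical counts from clean detectors define a Gamma-Poisson (negative-binomial) posterior predictive distribution, and a detector is flagged when its observed background count is improbably large under that distribution.

# Hedged sketch of the general idea (the paper's exact prior and thresholds may
# differ): pool historical background counts from new, uncontaminated detectors
# into a Gamma-Poisson (negative-binomial) predictive distribution, then flag
# any detector whose new background count has a small predictive tail probability.
import numpy as np
from scipy import stats

historical_counts = np.array([0, 1, 0, 2, 1, 0, 0, 3, 1, 0])  # toy clean-detector data
# Conjugate Gamma(a0, b0) prior on the Poisson mean, updated with the history.
a0, b0 = 0.5, 0.5                       # weak prior (an assumption)
a = a0 + historical_counts.sum()
b = b0 + len(historical_counts)

def tail_prob(k_obs):
    """P(count >= k_obs) under the posterior predictive (negative binomial)."""
    return stats.nbinom.sf(k_obs - 1, a, b / (b + 1.0))

for k in (2, 5, 10):
    flag = "flag for contamination" if tail_prob(k) < 0.01 else "consistent with background"
    print(k, round(float(tail_prob(k)), 4), flag)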
Liu, Yang; Chiaromonte, Francesca; Ross, Howard; Malhotra, Raunaq; Elleder, Daniel; Poss, Mary
2015-06-30
Infection with feline immunodeficiency virus (FIV) causes an immunosuppressive disease whose consequences are less severe if cats are co-infected with an attenuated FIV strain (PLV). We use virus diversity measurements, which reflect replication ability and the virus response to various conditions, to test whether diversity of virulent FIV in lymphoid tissues is altered in the presence of PLV. Our data consisted of the 3' half of the FIV genome from three tissues of animals infected with FIV alone, or with FIV and PLV, sequenced by 454 technology. Since rare variants dominate virus populations, we had to carefully distinguish sequence variation from errors due to experimental protocols and sequencing. We considered an exponential-normal convolution model used for background correction of microarray data, and modified it to formulate an error correction approach for minor allele frequencies derived from high-throughput sequencing. Similar to accounting for over-dispersion in counts, this accounts for error-inflated variability in frequencies - and quite effectively reproduces empirically observed distributions. After obtaining error-corrected minor allele frequencies, we applied ANalysis Of VAriance (ANOVA) based on a linear mixed model and found that conserved sites and transition frequencies in FIV genes differ among tissues of dual and single infected cats. Furthermore, analysis of minor allele frequencies at individual FIV genome sites revealed 242 sites significantly affected by infection status (dual vs. single) or infection status by tissue interaction. All together, our results demonstrated a decrease in FIV diversity in bone marrow in the presence of PLV. Importantly, these effects were weakened or undetectable when error correction was performed with other approaches (thresholding of minor allele frequencies; probabilistic clustering of reads). We also queried the data for cytidine deaminase activity on the viral genome, which causes an asymmetric increase in G to A substitutions, but found no evidence for this host defense strategy. Our error correction approach for minor allele frequencies (more sensitive and computationally efficient than other algorithms) and our statistical treatment of variation (ANOVA) were critical for effective use of high-throughput sequencing data in understanding viral diversity. We found that co-infection with PLV shifts FIV diversity from bone marrow to lymph node and spleen.
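For reference, the exponential-normal ("normexp"-style) convolution correction that the abstract adapts to minor allele frequencies has a closed-form conditional expectation. The sketch below uses an assumed parameterization and toy values; in practice the background mean, background standard deviation and signal mean would be estimated from the data, and the authors' adaptation to sequencing frequencies may differ in detail.

# Sketch of the standard exponential-normal ("normexp"-style) convolution
# correction. The parameterization and the toy values below are assumptions; in
# practice mu, sigma and alpha would be estimated (e.g. by maximum likelihood).
import numpy as np
from scipy.stats import norm

def normexp_correct(x, mu, sigma, alpha):
    """E[signal | observed x] for observed = signal + background,
    with signal ~ Exponential(mean alpha) and background ~ Normal(mu, sigma^2)."""
    mu_sx = x - mu - sigma**2 / alpha
    z = mu_sx / sigma
    # mean of a Normal(mu_sx, sigma) truncated to the positive half-line
    return mu_sx + sigma * norm.pdf(z) / norm.cdf(z)

observed = np.array([0.001, 0.003, 0.01, 0.05])   # toy "frequencies" (assumed scale)
print(normexp_correct(observed, mu=0.002, sigma=0.001, alpha=0.01))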
Average luminosity distance in inhomogeneous universes
NASA Astrophysics Data System (ADS)
Kostov, Valentin Angelov
Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic), thus it is more directly applicable to our observations. Unlike previous studies, the averaging is exact, non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. For voids aligned in a certain direction, there is a cumulative gravitational lensing correction to the distance modulus that increases with redshift. That correction is present even for small voids and depends on the density contrast of the voids, not on their radius. Averaging over all directions destroys the cumulative correction even in a non-randomized simple cubic lattice of voids. Despite the well known argument for photon flux conservation, the average distance modulus correction at low redshifts is not zero due to the peculiar velocities. A formula for the maximum possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (1) have approximately constant densities in their interior and walls, (2) are not in a deep nonlinear regime. The actual average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximum. That is traced to cancelations between the corrections coming from the fronts and backs of different voids at the same redshift from the observer. The calculated correction at low redshifts allows one to readily predict the redshift at which the averaged fluctuation in the Hubble diagram is below a required precision and suggests a method to extract the background Hubble constant from low redshift data without the need to correct for peculiar velocities.
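For reference (standard definitions, not specific to this paper), the quantity being averaged is the distance modulus, and the correction discussed above is its difference from the homogeneous Einstein-de Sitter value:

\[
\mu(z) \;=\; 5\log_{10}\!\frac{d_L(z)}{10\ \mathrm{pc}},
\qquad
\Delta\mu(z) \;=\; \mu_{\mathrm{inhom}}(z) - \mu_{\mathrm{EdS}}(z),
\]

so a positive average Δμ corresponds to sources appearing, on average, dimmer than in the homogeneous background.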
2010-01-01
Background This paper addresses the statistical use of accessibility and availability indices and the effect of study boundaries on these measures. The measures are evaluated via an extensive simulation based on cluster models for local outlet density. We define outlet to mean either food retail store (convenience store, supermarket, gas station) or restaurant (limited service or full service restaurants). We designed a simulation whereby a cluster outlet model is assumed in a large study window and an internal subset of that window is constructed. We performed simulations on various criteria including one scenario representing an urban area with 2000 outlets as well as a non-urban area simulated with only 300 outlets. A comparison is made between estimates obtained with the full study area and estimates using only the subset area. This allows the study of the effect of edge censoring on accessibility measures. Results The results suggest that considerable bias is found at the edges of study regions, in particular for accessibility measures. Edge effects are smaller for availability measures (when not smoothed) and also for short range accessibility. Conclusions It is recommended that any study utilizing these measures should correct for edge effects. The use of edge correction via guard areas is recommended and the avoidance of large range distance-based accessibility measures is also proposed. PMID:20663199
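A minimal sketch of the guard-area correction recommended above (not the paper's code; the accessibility measure, radius and toy data are hypothetical): a distance-based accessibility value is reported only for locations whose search radius lies entirely inside the study window, so the measure is never censored by the boundary.

# Illustrative sketch of the guard-area idea: accessibility (here, hypothetically,
# the count of outlets within radius R) is reported only for locations at least R
# away from the study-window boundary, so the measure is never edge-censored.
import numpy as np

def accessibility_with_guard(points, outlets, R, xmin, xmax, ymin, ymax):
    """Count outlets within radius R of each point; NaN inside the guard band."""
    points, outlets = np.asarray(points, float), np.asarray(outlets, float)
    d = np.linalg.norm(points[:, None, :] - outlets[None, :, :], axis=2)
    counts = (d <= R).sum(axis=1).astype(float)
    inside = ((points[:, 0] >= xmin + R) & (points[:, 0] <= xmax - R) &
              (points[:, 1] >= ymin + R) & (points[:, 1] <= ymax - R))
    counts[~inside] = np.nan          # censored: too close to the boundary
    return counts

rng = np.random.default_rng(1)
outlets = rng.uniform(0, 10, size=(300, 2))        # toy outlet locations
homes = rng.uniform(0, 10, size=(5, 2))
print(accessibility_with_guard(homes, outlets, R=1.0, xmin=0, xmax=10, ymin=0, ymax=10))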
2013-01-01
Background In adult correctional facilities, correctional officers (COs) are responsible for the safety and security of the facility in addition to aiding in offender rehabilitation and preventing recidivism. COs experience higher rates of job stress and burnout that stem from organizational stressors, leading to negative outcomes for not only the CO but the organization as well. Effective interventions could aim at targeting organizational stressors in order to reduce these negative outcomes as well as COs’ job stress and burnout. This paper fills a gap in the organizational stress literature among COs by systematically reviewing the relationship between organizational stressors and CO stress and burnout in adult correctional facilities. In doing so, the present review identifies areas that organizational interventions can target in order to reduce CO job stress and burnout. Methods A systematic search of the literature was conducted using Medline, PsycINFO, Criminal Justice Abstracts, and Sociological Abstracts. All retrieved articles were independently screened based on criteria developed a priori. All included articles underwent quality assessment. Organizational stressors were categorized according to Cooper and Marshall’s (1976) model of job stress. Results The systematic review yielded 8 studies that met all inclusion and quality assessment criteria. The five categories of organizational stressors among correctional officers are: stressors intrinsic to the job, role in the organization, rewards at work, supervisory relationships at work and the organizational structure and climate. The organizational structure and climate was demonstrated to have the most consistent relationship with CO job stress and burnout. Conclusions The results of this review indicate that the organizational structure and climate of correctional institutions has the most consistent relationship with COs’ job stress and burnout. Limitations of the studies reviewed include the cross-sectional design and the use of varying measures for organizational stressors. The results of this review indicate that interventions should aim to improve the organizational structure and climate of the correctional facility by improving communication between management and COs. PMID:23356379
76 FR 42732 - Importer of Controlled Substances; Notice of Registration
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-19
... the Correction to Notice of Application pertaining to Rhodes Technologies, 72 FR 2417 (2007), comments... state and local laws, and a review of the company's background and history. Therefore, pursuant to 21 U...
A robust method for removal of glint effects from satellite ocean colour imagery
NASA Astrophysics Data System (ADS)
Singh, R. K.; Shanmugam, P.
2014-12-01
Removal of the glint effects from satellite imagery for accurate retrieval of water-leaving radiances is a complicated problem since its contribution in the measured signal is dependent on many factors such as viewing geometry, sun elevation and azimuth, illumination conditions, wind speed and direction, and the water refractive index. To simplify the situation, existing glint correction models describe the extent of the glint-contaminated region and its contribution to the radiance essentially as a function of the wind speed and sea surface slope that often lead to a tremendous loss of information with a considerable scientific and financial impact. Even with the glint-tilting capability of modern sensors, glint contamination is severe on the satellite-derived ocean colour products in the equatorial and sub-tropical regions. To rescue a significant portion of data presently discarded as "glint contaminated" and improving the accuracy of water-leaving radiances in the glint contaminated regions, we developed a glint correction algorithm which is dependent only on the satellite derived Rayleigh Corrected Radiance and absorption by clear waters. The new algorithm is capable of achieving meaningful retrievals of ocean radiances from the glint-contaminated pixels unless saturated by strong glint in any of the wavebands. It takes into consideration the combination of the background absorption of radiance by water and the spectral glint function, to accurately minimize the glint contamination effects and produce robust ocean colour products. The new algorithm is implemented along with an aerosol correction method and its performance is demonstrated for many MODIS-Aqua images over the Arabian Sea, one of the regions that are heavily affected by sunglint due to their geographical location. The results with and without sunglint correction are compared indicating major improvements in the derived products with sunglint correction. When compared to the results of an existing model in the SeaDAS processing system, the new algorithm has the best performance in terms of yielding physically realistic water-leaving radiance spectra and improving the accuracy of the ocean colour products. Validation of MODIS-Aqua derived water-leaving radiances with in-situ data also corroborates the above results. Unlike the standard models, the new algorithm performs well in variable illumination and wind conditions and does not require any auxiliary data besides the Rayleigh-corrected radiance itself. Exploitation of signals observed by sensors looking within regions affected by bright white sunglint is possible with the present algorithm when the requirement of a stable response over a wide dynamical range for these sensors is fulfilled.
The 2-d CCD Data Reduction Cookbook
NASA Astrophysics Data System (ADS)
Davenhall, A. C.; Privett, G. J.; Taylor, M. B.
This cookbook presents simple recipes and scripts for reducing direct images acquired with optical CCD detectors. Using these recipes and scripts you can correct un-processed images obtained from CCDs for various instrumental effects to retrieve an accurate picture of the field of sky observed. The recipes and scripts use standard software available at all Starlink sites. The topics covered include: creating and applying bias and flat-field corrections, registering frames and creating a stack or mosaic of registered frames. Related auxiliary tasks, such as converting between different data formats, displaying images and calculating image statistics are also presented. In addition to the recipes and scripts, sufficient background material is presented to explain the procedures and techniques used. The treatment is deliberately practical rather than theoretical, in keeping with the aim of providing advice on the actual reduction of observations. Additional material outlines some of the differences between using conventional optical CCDs and the similar arrays used to observe at infrared wavelengths.
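As a compact illustration of the basic calibration steps the cookbook covers (bias subtraction and flat-fielding), here is a minimal numpy sketch; the cookbook's own recipes use Starlink software rather than Python, and the master frames below are simply median combinations of toy exposures.

# Minimal numpy sketch of the basic CCD calibration; the actual recipes use
# Starlink software rather than Python. Master frames are assumed to be median
# combinations of the individual bias and flat exposures.
import numpy as np

def calibrate(raw, bias_frames, flat_frames):
    """Apply bias subtraction and flat-field correction to a raw CCD image."""
    master_bias = np.median(np.stack(bias_frames), axis=0)
    master_flat = np.median(np.stack(flat_frames), axis=0) - master_bias
    master_flat /= np.median(master_flat)          # normalise to unit median
    return (raw - master_bias) / master_flat

# Toy frames just to show the call signature
rng = np.random.default_rng(2)
bias = [rng.normal(300, 5, (64, 64)) for _ in range(5)]
flat = [rng.normal(20000, 150, (64, 64)) + 300 for _ in range(5)]
science = rng.normal(1200, 30, (64, 64)) + 300
print(calibrate(science, bias, flat).mean())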
Electroconvulsive therapy use in adolescents: a systematic review
2013-01-01
Background Considered as a moment of psychological vulnerability, adolescence is remarkably a risky period for the development of psychopathologies, when the choice of the correct therapeutic approach is crucial for achieving remission. One of the researched therapies in this case is electroconvulsive therapy (ECT). The present study reviews the recent and classical aspects regarding ECT use in adolescents. Methods Systematic review, performed in November 2012, conformed to the PRISMA statement. Results From the 212 retrieved articles, only 39 were included in the final sample. The reviewed studies bring indications of ECT use in adolescents, evaluate the efficiency of this therapy regarding remission, and explore the potential risks and complications of the procedure. Conclusions ECT use in adolescents is considered a highly efficient option for treating several psychiatric disorders, achieving high remission rates, and presenting few and relatively benign adverse effects. Risks can be mitigated by the correct use of the technique and are considered minimal when compared to the efficiency of ECT in treating psychopathologies. PMID:23718899
Pavement crack detection combining non-negative feature with fast LoG in complex scene
NASA Astrophysics Data System (ADS)
Wang, Wanli; Zhang, Xiuhua; Hong, Hanyu
2015-12-01
Pavement crack detection is affected by much interference in realistic situations, such as shadows, road signs, oil stains, and salt-and-pepper noise. Because of these unfavorable factors, existing crack detection methods have difficulty distinguishing cracks from the background correctly. How to extract crack information effectively is the key problem for a road crack detection system. To solve this problem, a novel method for pavement crack detection that combines a non-negative feature with a fast LoG filter is proposed. The two key novelties and benefits of this new approach are that 1) image pixel gray-value compensation is used to acquire a uniform image, and 2) a non-negative feature is combined with a fast LoG filter to extract crack information. The image preprocessing results demonstrate that the method is indeed able to homogenize the crack image more accurately than existing methods. A large number of experimental results demonstrate that the proposed approach can detect crack regions more correctly than traditional methods.
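The abstract does not give the details of the gray-value compensation or the fast LoG implementation, so the sketch below shows only generic building blocks under stated assumptions: illumination compensation approximated by large-kernel background subtraction, followed by a standard Laplacian-of-Gaussian response in which thin dark cracks appear as strong positive values.

# Generic building blocks only: the paper's "fast LoG" and its non-negative
# feature are not specified in the abstract. This sketch shows a standard
# Laplacian-of-Gaussian crack response after a simple illumination (gray-value)
# compensation implemented as large-kernel background subtraction (an assumption).
import numpy as np
from scipy import ndimage

def crack_response(image, sigma_log=2.0, sigma_bg=25.0):
    """Return a LoG response map; thin dark cracks give strong positive values."""
    img = image.astype(float)
    background = ndimage.gaussian_filter(img, sigma_bg)      # slowly varying illumination
    compensated = img - background                           # gray-value compensation
    log = ndimage.gaussian_laplace(compensated, sigma_log)   # LoG filtering
    return np.clip(log, 0.0, None)                           # keep the crack-like polarity

toy = np.full((100, 100), 180.0)
toy[50, 10:90] = 60.0                                        # a synthetic dark crack
resp = crack_response(toy)
print(resp.max(), resp[50, 50] > resp[10, 10])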
An unbiased view of X-ray obscuration amongst active galactic nuclei with NuLANDS
NASA Astrophysics Data System (ADS)
Boorman, Peter Gregory; Gandhi, Poshak; Stern, Daniel; Harrison, Fiona; NuSTAR Obscured AGN Team
2018-01-01
Nearly all active galactic nuclei (AGN) are obscured in X-rays behind column densities of NH ≥ 10^22 cm^-2. Hard X-ray studies have proven very effective to quantify the levels of obscuration amongst AGN, up to and just above the Compton-thick limit (NH ˜ 1.5 × 10^24 cm^-2). However, Compton-thick sources with NH values beyond this limit are typically missed in hard X-ray all-sky surveys such as Swift/BAT, requiring many studies to apply considerable bias corrections to account for the loss. Incorrectly quantifying the heavily obscured AGN population can have a dramatic effect on synthesis models designed to fit the Cosmic X-ray Background spectrum, due to their significant contribution to the peak flux of the background at ~30 keV. This is what motivated the NuSTAR Local AGN NH Distribution Survey (NuLANDS) - a NuSTAR 1 Ms legacy survey of an obscuration-independent, infrared selected sample of AGN, undetected by BAT and unobserved by NuSTAR before - a considerable number of which are predicted to be heavily obscured. NuSTAR is the first true X-ray focusing instrument capable of spectral analysis > 10 keV, and as such can and will place robust constraints on the NH values of these elusive AGN. In this poster, I will present the first results from NuLANDS, including multiple newly identified Compton-thick AGN, previously undetected in the Swift/BAT 70-month catalog. I will further highlight the exciting prospects for the complete NuLANDS sample, with the ultimate goal of constructing a representative NH distribution of AGN in the local Universe, requiring minimal bias corrections.
The intuitive use of laryngeal airway tools by first year medical students.
Bickenbach, Johannes; Schälte, Gereon; Beckers, Stefan; Fries, Michael; Derwall, Matthias; Rossaint, Rolf
2009-09-22
Providing a secured airway is of paramount importance in cardiopulmonary resuscitation. Although tracheal intubation is still seen as the gold standard, this technique is reserved for experienced healthcare professionals. Compared to bag-valve facemask ventilation, however, the insertion of a laryngeal mask airway offers the opportunity to ventilate the patient effectively and can also be placed easily by lay responders. It might therefore be inserted without detailed background knowledge. The purpose of the study was to investigate the intuitive use of airway devices by first-year medical students as well as the effect of a simple but well-directed training programme; retention of skills was re-evaluated six months thereafter. The insertion of an LMA-Classic and an LMA-Fastrach performed by inexperienced medical students was compared in an airway model, and the improvement in their performance after a training programme of two hours overall was examined. Prior to any instruction, mean time to correct placement was 55.5 +/- 29.6 s for the LMA-Classic and 38.1 +/- 24.9 s for the LMA-Fastrach. Following training, time to correct placement decreased significantly to 22.9 +/- 13.5 s for the LMA-Classic and 22.9 +/- 19.0 s for the LMA-Fastrach, respectively (p < 0.05). After six months, the results before (55.6 +/- 29.9 vs 43.1 +/- 34.7 s) and after a further training period (23.5 +/- 13.2 vs 26.6 +/- 21.6 s, p < 0.05) remained comparable to the initial assessments. Untrained laypersons are able to use different airway devices in a manikin and may therefore provide a secured airway even without detailed background knowledge about the tool. Minimal theoretical instruction and practical skill training can improve their performance significantly. However, refreshing these skills seems justified after six months.
Physics of pure and non-pure positron emitters for PET: a review and a discussion.
Conti, Maurizio; Eriksson, Lars
2016-12-01
With the increased interest in new PET tracers, gene-targeted therapy, immunoPET, and theranostics, other radioisotopes will be increasingly used in clinical PET scanners, in addition to (18)F. Some of the most interesting radioisotopes with prospective use in the new fields are not pure short-range β(+) emitters but can be associated with gamma emissions in coincidence with the annihilation radiation (prompt gamma), gamma-gamma cascades, intense Bremsstrahlung radiation, high-energy positrons that may escape out of the patient skin, and high-energy gamma rays that result in some e (+)/e (-) pair production. The high level of sophistication in data correction and excellent quantitative accuracy that has been reached for (18)F in recent years can be questioned by these effects. In this work, we review the physics and the scientific literature and evaluate the effect of these additional phenomena on the PET data for each of a series of radioisotopes: (11)C, (13)N, (15)O, (18)F, (64)Cu, (68)Ga, (76)Br, (82)Rb, (86)Y, (89)Zr, (90)Y, and (124)I. In particular, we discuss the present complications arising from the prompt gammas, and we review the scientific literature on prompt gamma correction. For some of the radioisotopes considered in this work, prompt gamma correction is definitely needed to assure acceptable image quality, and several approaches have been proposed in recent years. Bremsstrahlung photons and (176)Lu background were also evaluated.
Impact of cause of death adjudication on the results of the European prostate cancer screening trial
Walter, Stephen D; de Koning, Harry J; Hugosson, Jonas; Talala, Kirsi; Roobol, Monique J; Carlsson, Sigrid; Zappa, Marco; Nelen, Vera; Kwiatkowski, Maciej; Páez, Álvaro; Moss, Sue; Auvinen, Anssi
2017-01-01
Background: The European Randomised Study of Prostate Cancer Screening has shown a 21% relative reduction in prostate cancer mortality at 13 years. The causes of death can be misattributed, particularly in elderly men with multiple comorbidities, and therefore accurate assessment of the underlying cause of death is crucial for valid results. To address potential unreliability of end-point assessment, and its possible impact on mortality results, we analysed the study outcome adjudication data in six countries. Methods: Latent class statistical models were formulated to compare the accuracy of individual adjudicators, and to assess whether accuracy differed between the trial arms. We used the model to assess whether correcting for adjudication inaccuracies might modify the study results. Results: There was some heterogeneity in adjudication accuracy of causes of death, but no consistent differential accuracy by trial arm. Correcting the estimated screening effect for misclassification did not alter the estimated mortality effect of screening. Conclusions: Our findings were consistent with earlier reports on the European screening trial. Observer variation, while demonstrably present, is unlikely to have materially biased the main study results. A bias in assigning causes of death that might have explained the mortality reduction by screening can be effectively ruled out. PMID:27855442
Electroweak Sudakov Corrections to New Physics Searches at the LHC
NASA Astrophysics Data System (ADS)
Chiesa, Mauro; Montagna, Guido; Barzè, Luca; Moretti, Mauro; Nicrosini, Oreste; Piccinini, Fulvio; Tramontano, Francesco
2013-09-01
We compute the one-loop electroweak Sudakov corrections to the production process Z(νν̄)+n jets, with n=1, 2, 3, in pp collisions at the LHC. It represents the main irreducible background to new physics searches at the energy frontier. The results are obtained at the leading and next-to-leading logarithmic accuracy by implementing the general algorithm of Denner and Pozzorini in the event generator for multiparton processes alpgen. For the standard selection cuts used by the ATLAS and CMS Collaborations, we show that the Sudakov corrections to the relevant observables can grow up to -40% at √s = 14 TeV. We also include the contribution due to undetected real radiation of massive gauge bosons, to show to what extent the partial cancellation with the large negative virtual corrections takes place in realistic event selections.
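Schematically (the coefficients are process- and observable-dependent; see the Denner-Pozzorini algorithm cited above), the one-loop electroweak Sudakov correction has the double- and single-logarithmic form

\[
\frac{\delta\sigma}{\sigma} \;\sim\; -\,\frac{\alpha}{4\pi}\left[c_2 \ln^2\!\frac{\hat{s}}{M_W^2} + c_1 \ln\!\frac{\hat{s}}{M_W^2}\right],
\]

so the (negative) corrections grow logarithmically with the partonic energy and become largest in the TeV-scale tails probed by these searches.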
The Top-of-Instrument corrections for nuclei with AMS on the Space Station
NASA Astrophysics Data System (ADS)
Ferris, N. G.; Heil, M.
2018-05-01
The Alpha Magnetic Spectrometer (AMS) is a large acceptance, high precision magnetic spectrometer on the International Space Station (ISS). The top-of-instrument correction for nuclei flux measurements with AMS accounts for backgrounds due to the fragmentation of nuclei with higher charge. Upon entry in the detector, nuclei may interact with AMS materials and split into fragments of lower charge based on their cross-section. The redundancy of charge measurements along the particle trajectory with AMS allows for the determination of inelastic interactions and for the selection of high purity nuclei samples with small uncertainties. The top-of-instrument corrections for nuclei with 2 < Z ≤ 6 are presented.
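The sketch below is a schematic illustration only, not the AMS analysis: with assumed fragmentation probabilities P(Z'→Z) for the material above the detector, the contribution of higher-charge nuclei is subtracted working downward from the highest charge, and the result is corrected for the fraction of nuclei of each charge that survive without fragmenting.

# Schematic illustration only (not the AMS analysis): measured counts at charge Z
# include nuclei of higher charge Z' that fragmented to Z in the material above
# the detector. With assumed fragmentation probabilities P[Zprime][Z], the
# top-of-instrument counts are estimated by subtracting those contributions,
# working downward from the highest charge.
def top_of_instrument_counts(measured, frag_prob):
    """measured: {Z: counts at detector}; frag_prob: {Zprime: {Z: probability}}."""
    corrected = {}
    for z in sorted(measured, reverse=True):                 # highest charge first
        background = sum(corrected.get(zp, 0.0) * frag_prob.get(zp, {}).get(z, 0.0)
                         for zp in corrected)
        survival = 1.0 - sum(frag_prob.get(z, {}).values())  # fraction not fragmenting
        corrected[z] = (measured[z] - background) / survival
    return corrected

# Toy numbers (purely illustrative)
measured = {6: 1000.0, 5: 320.0, 4: 210.0}
frag_prob = {6: {5: 0.03, 4: 0.02}, 5: {4: 0.025}}
print(top_of_instrument_counts(measured, frag_prob))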
Solar cell angle of incidence corrections
NASA Technical Reports Server (NTRS)
Burger, Dale R.; Mueller, Robert L.
1995-01-01
Literature on solar array angle of incidence corrections was found to be sparse and contained no tabular data for support. This lack along with recent data on 27 GaAs/Ge 4 cm by 4 cm cells initiated the analysis presented in this paper. The literature cites seven possible contributors to angle of incidence effects: cosine, optical front surface, edge, shadowing, UV degradation, particulate soiling, and background color. Only the first three are covered in this paper due to lack of sufficient data. The cosine correction is commonly used but is not sufficient when the incident angle is large. Fresnel reflection calculations require knowledge of the index of refraction of the coverglass front surface. The absolute index of refraction for the coverglass front surface was not known nor was it measured due to lack of funds. However, a value for the index of refraction was obtained by examining how the prediction errors varied with different assumed indices and selecting the best fit to the set of measured values. Corrections using front surface Fresnel reflection along with the cosine correction give very good predictive results when compared to measured data, except there is a definite trend away from predicted values at the larger incident angles. This trend could be related to edge effects and is illustrated by a use of a box plot of the errors and by plotting the deviation of the mean against incidence angle. The trend is for larger deviations at larger incidence angles and there may be a fourth order effect involved in the trend. A chi-squared test was used to determine if the measurement errors were normally distributed. At 10 degrees the chi-squared test failed, probably due to the very small numbers involved or a bias from the measurement procedure. All other angles showed a good fit to the normal distribution with increasing goodness-of-fit as the angles increased which reinforces the very small numbers hypothesis. The contributed data only went to 65 degrees from normal which prevented any firm conclusions about extreme angle effects although a trend in the right direction was seen. Measurement errors were estimated and found to be consistent with the conclusions that were drawn. A controlled experiment using coverglasses and cells from the same lots and extending to larger incidence angles would probably lead to further insight into the subject area.
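A minimal sketch of the cosine-plus-front-surface-Fresnel correction described above, assuming unpolarized light, a single air/coverglass interface, and an illustrative refractive index (the paper infers an effective index by best fit rather than by direct measurement):

# Sketch of the cosine-plus-Fresnel angle-of-incidence correction, for
# unpolarized light and a single air/coverglass interface. The refractive index
# below is illustrative only.
import numpy as np

def relative_output(theta_deg, n_glass=1.5, n_air=1.0):
    """Predicted output ratio I(theta)/I(0): cos(theta) times the Fresnel
    transmission at theta, normalised to the transmission at normal incidence."""
    ti = np.radians(theta_deg)
    tt = np.arcsin(np.clip(n_air * np.sin(ti) / n_glass, -1.0, 1.0))  # Snell's law
    rs = (n_air * np.cos(ti) - n_glass * np.cos(tt)) / (n_air * np.cos(ti) + n_glass * np.cos(tt))
    rp = (n_glass * np.cos(ti) - n_air * np.cos(tt)) / (n_glass * np.cos(ti) + n_air * np.cos(tt))
    T = 1.0 - 0.5 * (rs**2 + rp**2)                 # unpolarized transmission
    T0 = 1.0 - ((n_air - n_glass) / (n_air + n_glass))**2
    return np.cos(ti) * T / T0

for angle in (0, 30, 60, 65):
    print(angle, round(float(relative_output(angle)), 4))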
Al Ben Ali, Abdulaziz; Kang, Kiho; Finkelman, Matthew D; Zandparsa, Roya; Hirayama, Hiroshi
2014-04-01
The purpose of this study was to compare the effect of variations in translucency and background on color differences (ΔE) for different shades of computer-aided design and computer-aided manufacturing (CAD/CAM) lithium disilicate glass ceramics. A pilot study suggested n = 10 as an appropriate sample size for the number of lithium disilicate glass ceramic cylinders per group. High-transparency (HT) and low-transparency (LT) cylinders (diameter, 12 mm; length, 13 mm) were fabricated in three ceramic shades (BL1, A2, C3) using CAD/CAM technology and were cut into specimen disks (thickness, 1.2 mm; diameter, 12 mm) for placement on Natural Die (ND1 and ND4) backgrounds. Four combinations of translucency and background color were evaluated in terms of color differences for the three ceramic shades: group 1 (HT ND1, reference), group 2 (HT ND4), group 3 (LT ND1), and group 4 (LT ND4). A spectrophotometer was used to measure the color differences. Nonparametric tests (Kruskal-Wallis tests) were used to evaluate the color differences among the tested groups, and Mann-Whitney U tests with Bonferroni correction were used as post hoc tests. Furthermore, for each ceramic shade, the HT groups were compared to the LT groups using the Mann-Whitney U test. Significant differences were present among the tested groups of the same ceramic shade (p < 0.001). The highest ΔE values were observed in the HT ND4 group for BL1, while the lowest ΔE values were found in the LT ND1 group for both A2 and C3. Further, the HT groups and the groups with a darker background (ND4) showed increased ΔE values compared with the other groups (p < 0.001). Within the limitations of this study, the results suggested that the translucency and background color significantly influenced the lithium disilicate glass ceramic color among the BL1, A2, and C3 ceramic shades. Changing the underlying color from a lighter background to a darker background resulted in increased color differences. © 2013 by the American College of Prosthodontists.
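The abstract does not state which ΔE formula the spectrophotometer software used; for reference, the classic CIELAB color difference is

\[
\Delta E^{*}_{ab} \;=\; \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}},
\]

where ΔL*, Δa*, and Δb* are the differences in lightness and the two chroma coordinates between a specimen/background combination and the reference group.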
2012-01-01
Background Production of correctly disulfide bonded proteins to high yields remains a challenge. Recombinant protein expression in Escherichia coli is the popular choice, especially within the research community. While there is an ever growing demand for new expression strains, few strains are dedicated to post-translational modifications, such as disulfide bond formation. Thus, new protein expression strains must be engineered and the parameters involved in producing disulfide bonded proteins must be understood. Results We have engineered a new E. coli protein expression strain named SHuffle, dedicated to producing correctly disulfide bonded active proteins to high yields within its cytoplasm. This strain is based on the trxB gor suppressor strain SMG96 where its cytoplasmic reductive pathways have been diminished, allowing for the formation of disulfide bonds in the cytoplasm. We have further engineered a major improvement by integrating into its chromosome a signal sequenceless disulfide bond isomerase, DsbC. We probed the redox state of DsbC in the oxidizing cytoplasm and evaluated its role in assisting the formation of correctly folded multi-disulfide bonded proteins. We optimized protein expression conditions, varying temperature, induction conditions, strain background and the co-expression of various helper proteins. We found that temperature has the biggest impact on improving yields and that the E. coli B strain background of this strain was superior to the K12 version. We also discovered that auto-expression of substrate target proteins using this strain resulted in higher yields of active pure protein. Finally, we found that co-expression of mutant thioredoxins and PDI homologs improved yields of various substrate proteins. Conclusions This work is the first extensive characterization of the trxB gor suppressor strain. The results presented should help researchers design the appropriate protein expression conditions using SHuffle strains. PMID:22569138
Doppler tracking in time-dependent cosmological spacetimes
NASA Astrophysics Data System (ADS)
Giulini, Domenico; Carrera, Matteo
I will discuss the theoretical problems associated with Doppler tracking in time dependent background geometries, where ordinary Newtonian kinematics fails. A derivation of an exact general-relativistic formula for the two-way Doppler tracking of a spacecraft in homogeneous and isotropic Friedmann-Lemaitre-Robertson-Walker (FLRW) spacetimes is presented, as well as a controlled approximation in McVittie spacetimes representing an FLRW background with a single spherically-symmetric inhomogeneity (e.g. a single star or black hole). The leading-order corrections of the acceleration as compared to the Newtonian expression are calculated, which are due to retardation and cosmological expansion and which in the Solar System turn out to be significantly below the scale (nanometer per square-second) set by the Pioneer Anomaly. Last, but not least, I discuss kinematical ambiguities connected with notions of "simultaneity" and "spatial distance", which, in principle, also lead to tracking corrections.
NASA Astrophysics Data System (ADS)
Bruni, Marco; Thomas, Daniel B.; Wands, David
2014-02-01
We present the first calculation of an intrinsically relativistic quantity, the leading-order correction to Newtonian theory, in fully nonlinear cosmological large-scale structure studies. Traditionally, nonlinear structure formation in standard ΛCDM cosmology is studied using N-body simulations, based on Newtonian gravitational dynamics on an expanding background. When one derives the Newtonian regime in a way that is a consistent approximation to the Einstein equations, the first relativistic correction to the usual Newtonian scalar potential is a gravitomagnetic vector potential, giving rise to frame dragging. At leading order, this vector potential does not affect the matter dynamics, thus it can be computed from Newtonian N-body simulations. We explain how we compute the vector potential from simulations in ΛCDM and examine its magnitude relative to the scalar potential, finding that the power spectrum of the vector potential is of the order of 10^-5 times the scalar power spectrum over the range of nonlinear scales we consider. On these scales the vector potential is up to two orders of magnitude larger than the value predicted by second-order perturbation theory extrapolated to the same scales. We also discuss some possible observable effects and future developments.
Cosmology of the closed string tachyon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swanson, Ian
2008-09-15
The spacetime physics of bulk closed string tachyon condensation is studied at the level of a two-derivative effective action. We derive the unique perturbative tachyon potential consistent with a full class of linearized tachyonic deformations of supercritical string theory. The solutions of interest deform a general linear dilaton background by the insertion of purely exponential tachyon vertex operators. In spacetime, the evolution of the tachyon drives an accelerated contraction of the universe and, absent higher-order corrections, the theory collapses to a cosmological singularity in finite time, at arbitrarily weak string coupling. When the tachyon exhibits a null symmetry, the worldsheet dynamics is known to be exact and well defined at tree level. We prove that if the two-derivative effective action is free of nongravitational singularities, higher-order corrections always resolve the spacetime curvature singularity of the null tachyon. The resulting theory provides an explicit mechanism by which tachyon condensation can generate or terminate the flow of cosmological time in string theory. Additional particular solutions can resolve an initial singularity with a tachyonic phase at weak coupling, or yield solitonic configurations that localize the universe along spatial directions.
Lefave, Melissa; Harrell, Brad; Wright, Molly
2016-06-01
The purpose of this project was to assess the ability of anesthesiologists, nurse anesthetists, and registered nurses to correctly identify anatomic landmarks of cricoid pressure and apply the correct amount of force. The project included an educational intervention with one group pretest-post-test design. Participants demonstrated cricoid pressure on a laryngotracheal model. After an educational intervention video, participants were asked to repeat cricoid pressure on the model. Participants with a nurse anesthesia background applied more appropriate force pretest than other participants; however, post-test results, while improved, showed no significant difference among providers. Participant identification of the correct anatomy of the cricoid cartilage and application of correct force were significantly improved after education. This study revealed that participants lacked prior knowledge of correct cricoid anatomy and pressure as well as the ability to apply correct force to the laryngotracheal model before an educational intervention. The intervention used in this study proved successful in educating health care providers. Copyright © 2016 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.
Star formation history from the cosmic infrared background anisotropies
NASA Astrophysics Data System (ADS)
Maniyar, A. S.; Béthermin, M.; Lagache, G.
2018-06-01
We present a linear clustering model of cosmic infrared background (CIB) anisotropies at large scales that is used to measure the cosmic star formation rate density up to redshift 6, the effective bias of the CIB, and the mass of dark matter halos hosting dusty star-forming galaxies. This is achieved using the Planck CIB auto- and cross-power spectra (between different frequencies) and CIB × CMB (cosmic microwave background) lensing cross-spectra measurements, as well as external constraints (e.g. on the CIB mean brightness). We recovered an obscured star formation history which agrees well with the values derived from infrared deep surveys and we confirm that the obscured star formation dominates the unobscured formation up to at least z = 4. The obscured and unobscured star formation rate densities are compatible at 1σ at z = 5. We also determined the evolution of the effective bias of the galaxies emitting the CIB and found a rapid increase from 0.8 at z = 0 to 8 at z = 4. At 2 < z < 4, this effective bias is similar to that of galaxies at the knee of the mass functions and submillimetre galaxies. This effective bias is the weighted average of the true bias with the corresponding emissivity of the galaxies. The halo mass corresponding to this bias is thus not exactly the mass contributing the most to the star formation density. Correcting for this, we obtained a value of log(Mh/M⊙) = 12.77 (+0.128/−0.125) for the mass of the typical dark matter halo contributing to the CIB at z = 2. Finally, using a Fisher matrix analysis we also computed how the uncertainties on the cosmological parameters affect the recovered CIB model parameters, and find that the effect is negligible.
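The statement that the effective bias is the emissivity-weighted average of the halo bias can be written schematically as follows (notation assumed here; the paper's linear model may differ in detail), with dn/dM the halo mass function, b(M, z) the halo bias, and j(M, z) the mean infrared emissivity of halos of mass M:

\[
b_{\mathrm{eff}}(z) \;=\; \frac{\displaystyle\int \mathrm{d}M\, \frac{\mathrm{d}n}{\mathrm{d}M}\, b(M,z)\, j(M,z)}{\displaystyle\int \mathrm{d}M\, \frac{\mathrm{d}n}{\mathrm{d}M}\, j(M,z)},
\]

which is why the halo mass associated with this effective bias need not coincide with the mass that contributes most to the star formation rate density.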
Novel Principles and Techniques to Create a Natural Design in Female Hairline Correction Surgery
2015-01-01
Abstract Background: Female hairline correction surgery is becoming increasingly popular. However, no guidelines or methods of female hairline design have been introduced to date. Methods: The purpose of this study was to create an initial framework based on the novel principles of female hairline design and then use artistic ability and experience to fine tune this framework. An understanding of the concept of 5 areas (frontal area, frontotemporal recess area, temporal peak, infratemple area, and sideburns) and 5 points (C, A, B, T, and S) is required for female hairline correction surgery (the 5A5P principle). The general concepts of female hairline correction surgery and natural design methods are, herein, explained with a focus on the correlations between these 5 areas and 5 points. Results: A natural and aesthetic female hairline can be created with application of the above-mentioned concepts. Conclusion: The 5A5P principle of forming the female hairline is very useful in female hairline correction surgery. PMID:26894014
Health Numeracy: The Importance of Domain in Assessing Numeracy
Levy, Helen; Ubel, Peter A.; Dillard, Amanda J.; Weir, David R.; Fagerlin, Angela
2014-01-01
Background Existing research concludes that measures of general numeracy can be used to predict individuals’ ability to assess health risks. We posit that the domain in which questions are posed affects the ability to perform mathematical tasks, raising the possibility of a separate construct of “health numeracy” that is distinct from general numeracy. Objective To determine whether older adults’ ability to perform simple math depends on domain. Design Community-based participants completed four math questions posed in three different domains: a health domain, a financial domain, and a pure math domain. Participants 962 individuals aged 55 and older, representative of the community-dwelling U.S. population over age 54. Results We found that respondents performed significantly worse when questions were posed in the health domain (54 percent correct) than in either the pure math domain (66 percent correct) or the financial domain (63 percent correct). Limitations Our experimental measure of numeracy consisted of only four questions, and it is possible that the apparent effect of domain is specific to the mathematical tasks that these questions require. Conclusions These results suggest that health numeracy is strongly related to general numeracy but that the two constructs may not be the same. Further research is needed into how different aspects of general numeracy and health numeracy translate into actual medical decisions. PMID:23824401
Treatment of impulsive aggression in correctional settings.
Shelton, Deborah; Sampl, Susan; Kesten, Karen L; Zhang, Wanli; Trestman, Robert L
2009-01-01
This article reports the implementation of Dialectical Behavioral Therapy-Corrections Modified (DBT-CM) for difficult-to-manage, impulsive and/or aggressive correctional populations. Participants were English-speaking women (n = 18) and men (n = 45) of diverse cultural backgrounds between the ages of 16 and 59 years old retained in state-run prisons in Connecticut. Following consent and a psychological assessment battery, twice-weekly DBT-CM groups were held over 16 weeks, followed by random assignment to a DBT coaching or case management condition, with sessions taking place individually for eight weeks. Data analysis. A mixed effects regression model was used to test the hypotheses: participants will show decreased aggression, impulsivity, and psychopathology, as well as improved coping, after completing the DBT-CM groups; and will show greater reduction in targeted behaviors than those receiving case management at the six month and 12 month follow-up assessment periods. Significant reduction in targeted behavior was found from baseline to the end of the 16-week DBT-CM skills treatment groups. Both case management and DBT coaching showed significant effects at the 12 month follow-up. A significant difference was found for adult men and women. The study supports the value of DBT-CM for management of aggressive behaviors in prison settings. (c) 2009 John Wiley & Sons, Ltd.
Class III correction using an inter-arch spring-loaded module
2014-01-01
Background A retrospective study was conducted to determine the cephalometric changes in a group of Class III patients treated with the inter-arch spring-loaded module (CS2000®, Dynaflex, St. Ann, MO, USA). Methods Thirty Caucasian patients (15 males, 15 females) with an average pre-treatment age of 9.6 years were treated consecutively with this appliance and compared with a control group of subjects from the Bolton-Brush Study who were matched in age, gender, and craniofacial morphology to the treatment group. Lateral cephalograms were taken before treatment and after removal of the CS2000® appliance. The treatment effects of the CS2000® appliance were calculated by subtracting the changes due to growth (control group) from the treatment changes. Results All patients were improved to a Class I dental arch relationship with a positive overjet. Significant sagittal, vertical, and angular changes were found between the pre- and post-treatment radiographs. With an average treatment time of 1.3 years, the maxillary base moved forward by 0.8 mm, while the mandibular base moved backward by 2.8 mm together with improvements in the ANB and Wits measurements. The maxillary incisor moved forward by 1.3 mm and the mandibular incisor moved forward by 1.0 mm. The maxillary molar moved forward by 1.0 mm while the mandibular molar moved backward by 0.6 mm. The average overjet correction was 3.9 mm and 92% of the correction was due to skeletal contribution and 8% was due to dental contribution. The average molar correction was 5.2 mm and 69% of the correction was due to skeletal contribution and 31% was due to dental contribution. Conclusions Mild to moderate Class III malocclusion can be corrected using the inter-arch spring-loaded appliance with minimal patient compliance. The overjet correction was contributed by forward movement of the maxilla, backward and downward movement of the mandible, and proclination of the maxillary incisors. The molar relationship was corrected by mesialization of the maxillary molars, distalization of the mandibular molars together with a rotation of the occlusal plane. PMID:24934153
Miwa, Kenta; Umeda, Takuro; Murata, Taisuke; Wagatsuma, Kei; Miyaji, Noriaki; Terauchi, Takashi; Koizumi, Mitsuru; Sasaki, Masayuki
2016-02-01
Overcorrection of scatter caused by patient motion during whole-body PET/computed tomography (CT) imaging can induce the appearance of photopenic artifacts in the PET images. The present study aimed to quantify the accuracy of scatter limitation correction (SLC) for eliminating photopenic artifacts. This study analyzed photopenic artifacts in (18)F-fluorodeoxyglucose ((18)F-FDG) PET/CT images acquired from 12 patients and from a National Electrical Manufacturers Association phantom with two peripheral plastic bottles that simulated the human body and arms, respectively. The phantom comprised a sphere (diameter, 10 or 37 mm) containing fluorine-18 solutions with target-to-background ratios of 2, 4, and 8. The plastic bottles were moved 10 cm posteriorly between the CT and PET acquisitions. All PET data were reconstructed using model-based scatter correction (SC), no scatter correction (NSC), and SLC, and the presence or absence of artifacts on the PET images was visually evaluated. The SC and SLC images were also semiquantitatively evaluated using standardized uptake values (SUVs). Photopenic artifacts were not recognizable in any NSC or SLC image from all 12 patients in the clinical study. The SUVmax values of mismatched SLC PET/CT images were almost equal to those of matched SC and SLC PET/CT images. Applying NSC and SLC substantially eliminated the photopenic artifacts seen on SC PET images in the phantom study. SLC improved the activity concentration of the sphere for all target-to-background ratios. The highest %errors of the 10- and 37-mm spheres were 93.3 and 58.3%, respectively, for mismatched SC, and 73.2 and 22.0%, respectively, for mismatched SLC. Photopenic artifacts caused by SC errors induced by CT and PET image misalignment were corrected using SLC, indicating that this method is useful and practical for clinical qualitative and quantitative PET/CT assessment.
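The recovery errors quoted above correspond to a simple percent-error comparison between measured and true sphere activity concentrations, sketched below; the function name and the numbers are placeholders, not the study's data.

```python
# Minimal sketch: percent error of a recovered activity concentration
# relative to the known (true) concentration in a phantom sphere.
# The numbers below are hypothetical placeholders, not the study's data.

def percent_error(measured_bq_ml: float, true_bq_ml: float) -> float:
    """Return the recovery error in percent."""
    return 100.0 * abs(measured_bq_ml - true_bq_ml) / true_bq_ml

if __name__ == "__main__":
    true_conc = 20.0      # kBq/mL, hypothetical filled-sphere concentration
    measured_sc = 38.7    # hypothetical mismatched SC reconstruction
    measured_slc = 24.4   # hypothetical mismatched SLC reconstruction
    print(f"SC  %error: {percent_error(measured_sc, true_conc):.1f}")
    print(f"SLC %error: {percent_error(measured_slc, true_conc):.1f}")
```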
Context-Sensitive Spelling Correction of Consumer-Generated Content on Health Care
Chen, Rudan; Zhao, Xianyang; Xu, Wei; Cheng, Wenqing; Lin, Simon
2015-01-01
Background Consumer-generated content, such as postings on social media websites, can serve as an ideal source of information for studying health care from a consumer’s perspective. However, consumer-generated content on health care topics often contains spelling errors, which, if not corrected, will be obstacles for downstream computer-based text analysis. Objective In this study, we proposed a framework with a spelling correction system designed for consumer-generated content and a novel ontology-based evaluation system which was used to efficiently assess the correction quality. Additionally, we emphasized the importance of context sensitivity in the correction process, and demonstrated why correction methods designed for electronic medical records (EMRs) failed to perform well with consumer-generated content. Methods First, we developed our spelling correction system based on Google Spell Checker. The system processed postings acquired from MedHelp, a biomedical bulletin board system (BBS), and saved misspelled words (eg, sertaline) and corresponding corrected words (eg, sertraline) into two separate sets. Second, to reduce the number of words needing manual examination in the evaluation process, we respectively matched the words in the two sets with terms in two biomedical ontologies: RxNorm and Systematized Nomenclature of Medicine -- Clinical Terms (SNOMED CT). The ratio of words which could be matched and appropriately corrected was used to evaluate the correction system’s overall performance. Third, we categorized the misspelled words according to the types of spelling errors. Finally, we calculated the ratio of abbreviations in the postings, which remarkably differed between EMRs and consumer-generated content and could largely influence the overall performance of spelling checkers. Results An uncorrected word and the corresponding corrected word was called a spelling pair, and the two words in the spelling pair were its members. In our study, there were 271 spelling pairs detected, among which 58 (21.4%) pairs had one or two members matched in the selected ontologies. The ratio of appropriate correction in the 271 overall spelling errors was 85.2% (231/271). The ratio of that in the 58 spelling pairs was 86% (50/58), close to the overall ratio. We also found that linguistic errors took up 31.4% (85/271) of all errors detected, and only 0.98% (210/21,358) of words in the postings were abbreviations, which was much lower than the ratio in the EMRs (33.6%). Conclusions We conclude that our system can accurately correct spelling errors in consumer-generated content. Context sensitivity is indispensable in the correction process. Additionally, it can be confirmed that consumer-generated content differs from EMRs in that consumers seldom use abbreviations. Also, the evaluation method, taking advantage of biomedical ontology, can effectively estimate the accuracy of the correction system and reduce manual examination time. PMID:26232246
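The ontology-based evaluation step can be sketched as follows; the term set and spelling pairs are toy placeholders, whereas the actual system matched against the full RxNorm and SNOMED CT vocabularies.

```python
# Sketch of the ontology-based evaluation idea: a spelling pair is "matched"
# if either member appears in a biomedical term set, and overall accuracy is
# the fraction of pairs whose correction is judged appropriate.
# Term set and pairs below are toy placeholders, not RxNorm/SNOMED CT.

ontology_terms = {"sertraline", "ibuprofen", "hypertension"}  # stand-in term set

spelling_pairs = [
    ("sertaline", "sertraline", True),    # (misspelled, corrected, correction_ok)
    ("ibuprofin", "ibuprofen", True),
    ("hedache",   "headache",  True),
    ("diabtes",   "diabetis",  False),
]

matched = [p for p in spelling_pairs
           if p[0] in ontology_terms or p[1] in ontology_terms]
appropriate = [p for p in spelling_pairs if p[2]]

print(f"matched pairs: {len(matched)}/{len(spelling_pairs)}")
print(f"appropriate corrections: {len(appropriate)}/{len(spelling_pairs)} "
      f"({100 * len(appropriate) / len(spelling_pairs):.1f}%)")
```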
Treatment of Male Sexual Offenders in a Correctional Facility.
ERIC Educational Resources Information Center
Whitford, Robert W.
1987-01-01
Provides some background and treatment perspectives for counselors and psychologists who treat or contemplate treatment of adult male sexual offenders in prison settings. Discusses identification, assessment, amenability to treatment, assessment instruments, and treatment of sexual offenders. (ABL)
Concentrating Solar Power Projects - NOOR II
Status: Operational. Start year: 2018. Break ground: May 2015. Start production: January 10, 2018. PPA/tariff rate: 1.36 MAD per kWh.
NASA Astrophysics Data System (ADS)
Cremaschini, Claudio; Stuchlík, Zdeněk
2018-05-01
A test fluid composed of relativistic collisionless neutral particles in the background of the Kerr metric is expected to generate non-isotropic equilibrium configurations in which the corresponding stress-energy tensor exhibits pressure and temperature anisotropies. This arises as a consequence of the constraints placed on single-particle dynamics by Killing tensor symmetries, leading to a peculiar non-Maxwellian functional form of the kinetic distribution function describing the continuum system. Based on this outcome, in this paper the generation of a Kerr-like metric by collisionless N-body systems of neutral matter orbiting in the field of a rotating black hole is reported. The result is obtained in the framework of covariant kinetic theory by solving the Einstein equations with an analytical perturbative treatment in which the gravitational field is decomposed into a prescribed background metric tensor, described by the Kerr solution, plus a self-field correction. The latter is generated by the uncharged fluid at equilibrium and satisfies the linearized Einstein equations with the non-isotropic stress-energy tensor as source term. It is shown that the resulting self-metric is again of Kerr type, providing a mechanism of magnification of the background metric tensor and its qualitative features.
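The perturbative decomposition described above can be summarized schematically as follows; this is a generic linearized form written for illustration, not the authors' detailed kinetic construction.

```latex
% Schematic form of the perturbative treatment (illustrative; G = c = 1):
% prescribed Kerr background plus a small self-field correction h_{\mu\nu}
% sourced by the equilibrium stress-energy tensor of the collisionless fluid.
\[
  g_{\mu\nu} = g^{\mathrm{Kerr}}_{\mu\nu} + h_{\mu\nu},
  \qquad |h_{\mu\nu}| \ll 1,
  \qquad
  G^{(1)}_{\mu\nu}[h] = 8\pi\, T_{\mu\nu}[f],
\]
% where T_{\mu\nu}[f] denotes the second momentum moment of the
% non-Maxwellian kinetic distribution function f of the orbiting particles.
```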
General relativistic corrections in density-shear correlations
NASA Astrophysics Data System (ADS)
Ghosh, Basundhara; Durrer, Ruth; Sellentin, Elena
2018-06-01
We investigate the corrections which relativistic light-cone computations induce on the correlation of the tangential shear with galaxy number counts, also known as galaxy-galaxy lensing. The standard approach to galaxy-galaxy lensing treats the number density of sources in a foreground bin as observable, whereas it is in reality unobservable due to the presence of relativistic corrections. We find that already in the redshift range covered by the DES first year data, these currently neglected relativistic terms lead to a systematic correction of up to 50% in the density-shear correlation function for the highest redshift bins. This correction is dominated by the fact that a redshift bin of number counts not only lenses sources in a background bin, but is itself again lensed by all masses between the observer and the counted source population. Relativistic corrections are currently ignored in standard galaxy-galaxy analyses, and the additional lensing of the counted source population is only included in the error budget (via the covariance matrix). At higher redshifts and larger scales, however, these relativistic and lensing corrections become increasingly important, and we argue that it is then more efficient, and also cleaner, to account for them directly in the density-shear correlations.
Liew, Steven; Scamp, Terrence; de Maio, Mauricio; Halstead, Michael; Johnston, Nicole; Silberberg, Michael; Rogers, John D.
2016-01-01
Background There is increasing interest among patients and plastic surgeons for alternatives to rhinoplasty, a common surgical procedure performed in Asia. Objectives To evaluate the safety, efficacy, and longevity of a hyaluronic acid filler in the correction of aesthetically detracting or deficient features of the Asian nose. Methods Twenty-nine carefully screened Asian patients had their noses corrected with the study filler (Juvéderm VOLUMA [Allergan plc, Dublin, Ireland] with lidocaine injectable gel), reflecting individualized treatment goals and utilizing a standardized injection procedure, and were followed for over 12 months. Results A clinically meaningful correction (≥1 grade improvement on the Assessment of Aesthetic Improvement Scale) was achieved in 27 (93.1%) patients at the first follow-up visit. This was maintained in 28 (96.6%) patients at the final visit, based on the independent assessments of a central non-injecting physician and the patients. At this final visit, 23 (79.3%) patients were satisfied or very satisfied with the study filler and 25 (86.2%) would recommend it to others. In this small series of patients, there were no serious adverse events (AEs), with all treatment-related AEs being mild to moderate, transient injection site reactions, unrelated to the study filler. Conclusions Using specific eligibility criteria, individualized treatment goals, and a standardized injection procedure, the study filler corrected aesthetically detracting or deficient features of the Asian nose, with the therapeutic effects lasting for over 12 months, consistent with a high degree of patient satisfaction. This study supports the safety and efficacy of this HA filler for specific nose augmentation procedures in selected Asian patients. Level of Evidence: 3 Therapeutic PMID:27301371
Rajkumar, D. S. R.; Faitelson, A. V.; Gudyrev, O. S.; Dubrovin, G. M.; Pokrovski, M. V.; Ivanov, A. V.
2013-01-01
In an experiment on white Wistar female rats (222 animals), the osteoprotective effects of enalapril and losartan were studied in experimental models of osteoporosis and osteoporotic fractures. It was shown that after ovariectomy, rats develop endothelial dysfunction of the microcirculation vessels of bone tissue, resulting in osteoporosis and delayed consolidation of experimental fractures. Enalapril and losartan prevented the reduction of microcirculation in bone, which was reflected in slower thinning of bone trabeculae, prevention of microfractures, and improved quality of experimental fracture healing. PMID:23401845
Survey of background scattering from materials found in small-angle neutron scattering.
Barker, J G; Mildner, D F R
2015-08-01
Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300-700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed.
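As one concrete example of the sample-thickness trade-off mentioned above, a common single-scattering rule of thumb is that the scattered signal scales roughly as t·exp(-Σt) for a sample of thickness t and total removal cross section Σ, peaking at t ≈ 1/Σ (transmission near 1/e). The sketch below illustrates this; the cross-section value is a hypothetical placeholder, not a number from the survey.

```python
# Sketch: scattered signal vs. sample thickness under the simple model
# signal ∝ t * exp(-Sigma * t)  (single scattering, beam attenuation only).
# Sigma below is a hypothetical placeholder, not a measured value.
import numpy as np

sigma = 7.0                            # total removal cross section, 1/cm (hypothetical)
t = np.linspace(0.005, 1.0, 1000)      # candidate thicknesses, cm

signal = t * np.exp(-sigma * t)
t_opt = t[np.argmax(signal)]

print(f"optimum thickness ~ {t_opt:.3f} cm (analytic 1/Sigma = {1/sigma:.3f} cm)")
print(f"transmission at optimum ~ {np.exp(-sigma * t_opt):.2f} (expected ~1/e)")
```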
2011-01-01
Background Bracing is an effective strategy for scoliosis treatment, but there is no consensus on the best type of brace, nor on the way in which it should act on the spine to achieve good correction. The aim of this paper is to present the family of SPoRT (Symmetric, Patient-oriented, Rigid, Three-dimensional, active) braces: Sforzesco (the first introduced), Sibilla and Lapadula. Methods The Sforzesco brace was developed following specific principles of correction. Due to its overall symmetry, the brace provides space over pathological depressions and pushes over elevations. Correction is reached through construction of the envelope, pushes, escapes, stops, and drivers. The real novelty is the drivers, introduced for the first time with the Sforzesco brace; they make it possible to achieve the main action of the brace: a three-dimensional elongation pushing the spine in a down-up direction. Brace prescription is made plane by plane: frontal (on the "slopes", another novelty of this concept, i.e. the laterally flexed sections of the spine), horizontal, and sagittal. The brace is built by modelling the trunk shape obtained either from a plaster cast mould or by CAD-CAM construction. Brace checking is essential, since SPoRT braces are adjustable and customisable according to each individual curve pattern. Treatment time and duration are individually tailored (18-23 hours per day until Risser 3, then gradual reduction). SEAS (Scientific Exercises Approach to Scoliosis) exercises are a key factor in achieving success. Results The Sforzesco brace has been shown to be more effective than the Lyon brace (matched case/control), as effective as the Risser plaster cast (prospective cohort with retrospective controls), more effective than the Risser cast + Lyon brace in treating curves over 45 degrees Cobb (prospective cohort), and able to improve aesthetic appearance (prospective cohort). Conclusions The SPoRT concept of bracing (three-dimensional elongation pushing in a down-up direction) is different from the other corrective systems: 3-point, traction, postural, and movement-based. The Sforzesco brace, being comparable to casting, may be the best brace for the worst cases. PMID:21554719
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, Morgan N.; Arkin, Adam P.
Free-living bacteria are usually thought to have large effective population sizes, and so tiny selective differences can drive their evolution. However, because recombination is infrequent, “background selection” against slightly deleterious alleles should reduce the effective population size (Ne) by orders of magnitude. For example, for a well-mixed population with 10^12 individuals and a typical level of homologous recombination (r/m = 3, i.e., nucleotide changes due to recombination [r] occur at 3 times the mutation rate [m]), we predict that Ne is < 10^7. An argument for high Ne values for bacteria has been the high genetic diversity within many bacterial “species,” but this diversity may be due to population structure: diversity across subpopulations can be far higher than diversity within a subpopulation, which makes it difficult to estimate Ne correctly. Given an estimate of Ne, standard population genetics models imply that selection should be sufficient to drive evolution if Ne × s is > 1, where s is the selection coefficient. We found that this remains approximately correct if background selection is occurring or when population structure is present. Overall, we predict that even for free-living bacteria with enormous populations, natural selection is only a significant force if s is above 10^-7 or so. Because bacteria form huge populations with trillions of individuals, the simplest theoretical prediction is that the better allele at a site would predominate even if its advantage was just 10^-9 per generation. In other words, virtually every nucleotide would be at the local optimum in most individuals. A more sophisticated theory considers that bacterial genomes have millions of sites each and selection events on these many sites could interfere with each other, so that only larger effects would be important. However, bacteria can exchange genetic material, and in principle, this exchange could eliminate the interference between the evolution of the sites. In conclusion, we used simulations to confirm that during multisite evolution with realistic levels of recombination, only larger effects are important. We propose that advantages of less than 10^-7 are effectively neutral.
NASA Astrophysics Data System (ADS)
Cuesta-Lazaro, Carolina; Quera-Bofarull, Arnau; Reischke, Robert; Schäfer, Björn Malte
2018-06-01
When the gravitational lensing of the large-scale structure is calculated from a cosmological model, a few assumptions enter: (i) one assumes that the photons follow unperturbed background geodesics, which is usually referred to as the Born approximation, (ii) the lenses move slowly, (iii) the source-redshift distribution is evaluated relative to the background quantities, and (iv) the lensing effect is linear in the gravitational potential. Even though these approximations are individually small, their effects could add up, especially since they include local effects such as the Sachs-Wolfe effect and peculiar motion, but also non-local ones like the Born approximation and the integrated Sachs-Wolfe effect. In this work, we address all of the points mentioned and perturbatively calculate the effect of each on a tomographic cosmic shear power spectrum individually, as well as all cross-correlations. Our findings show that each effect is at least 4-5 orders of magnitude below the leading-order lensing signal. Finally, we sum up all effects to estimate the overall impact on parameter estimation by a future cosmological weak-lensing survey such as Euclid in a w cold dark matter cosmology with parameters Ωm, σ8, ns, h, w0, and wa, using five tomographic bins. We consistently find a parameter bias of 10^-5, which is therefore completely negligible for all practical purposes, confirming that other effects such as intrinsic alignments, magnification bias, and uncertainties in the redshift distribution will be the dominant systematic sources in future surveys.
Reexamination of the effective fine structure constant of graphene as measured in graphite
Gan, Yu; de la Pena Munoz, Gilberto; Kogar, Anshul; ...
2016-05-24
Here we present a refined and improved study of the influence of screening on the effective fine structure constant of graphene, α*, as measured in graphite using inelastic x-ray scattering. This followup to our previous study [J. P. Reed et al., Science 330, 805 (2010)] was carried out with two times better energy resolution, five times better momentum resolution, and an improved experimental setup with lower background. We compare our results to random-phase approximation (RPA) calculations and evaluate the relative importance of interlayer hopping, excitonic corrections, and screening from high energy excitations involving the sigma bands. We find that the static, limiting value of α* falls in the range 0.25-0.35, which is higher than our previous result of 0.14, but still below the value expected from RPA. We show the reduced value is not a consequence of interlayer hopping effects, which were ignored in our previous analysis, but of a combination of excitonic effects in the π → π* particle-hole continuum, and background screening from the σ-bonded electrons. We find that σ-band screening is extremely strong at distances of less than a few nanometers, and should be highly effective at screening out short-distance, Hubbard-like interactions in graphene as well as other carbon allotropes.
Global mapping of the surface of Titan through the haze with VIMS onboard Cassini
NASA Astrophysics Data System (ADS)
Le Mouélic, Stéphane; Cornet, Thomas; Rodriguez, Sébastien; Sotin, Christophe; Barnes, Jason W.; Brown, Robert H.; Lasue, Jérémie; Baines, K. H.; Buratti, Bonnie; Clark, Roger Nelson; Nicholson, Philip D.
2016-10-01
The Visual and Infrared Mapping Spectrometer (VIMS) onboard Cassini observes the surface of Titan through the atmosphere in seven narrow spectral windows in the infrared at 0.93, 1.08, 1.27, 1.59, 2.01, 2.68-2.78, and 4.9-5.1 microns. We have produced a global hyperspectral mosaic at 32 pixels per degree of the complete VIMS data set of Titan between the T0 (July 2004) and T120 (June 2016) flybys. We merged all the data cubes sorted by increasing spatial resolution, with the high-resolution images on top of the mosaic and the low-resolution images used as background. One of the main challenges in producing global spectral composition maps is to remove the seams between individual frames taken throughout the entire mission. These seams are mainly due to the widely varying viewing angles between data acquired during the different Titan flybys. These angles induce significant surface photometric effects and a strongly varying atmospheric (absorption and scattering) contribution, the atmospheric scattering becoming increasingly important at shorter wavelengths. We have implemented a series of empirical corrections to homogenize the maps, correcting at first order for photometric and atmospheric scattering effects. Recently, the VIMS IR wavelength calibration has been observed to drift by a total of a few nm toward longer wavelengths, almost continuously over the course of the mission. Although minor at first order, this drift affects the homogeneity of the maps when fitting images taken at the beginning of the mission to images taken near the end, in particular when using channels in the narrowest atmospheric spectral windows. A correction scheme has been implemented to account for this subtle effect.
NASA Technical Reports Server (NTRS)
Kim, Mijin; Kim, Jhoon; Wong, Man Sing; Yoon, Jongmin; Lee, Jaehwa; Wu, Dong L.; Chan, P.W.; Nichol, Janet E.; Chung, Chu-Yong; Ou, Mi-Lim
2014-01-01
Despite continuous efforts to retrieve aerosol optical depth (AOD) using a conventional 5-channel meteorological imager in geostationary orbit, the accuracy in urban areas has been poorer than in other areas, primarily due to complex urban surface properties and mixed aerosol types from different emission sources. The two largest error sources in aerosol retrieval have been aerosol type selection and surface reflectance. In selecting the aerosol type from a single visible channel, the season-dependent aerosol optical properties were adopted from long-term measurements of Aerosol Robotic Network (AERONET) sun-photometers. With the aerosol optical properties obtained from the AERONET inversion data, look-up tables were calculated using a radiative transfer code: the Second Simulation of the Satellite Signal in the Solar Spectrum (6S). Surface reflectance was estimated using the clear-sky composite method, a widely used technique for geostationary retrievals. Over East Asia, the AOD retrieved from the Meteorological Imager showed good agreement, although the values were affected by cloud contamination errors. However, the conventional retrieval of the AOD over Hong Kong was largely underestimated due to the lack of information on the aerosol type and surface properties. To detect spatial and temporal variation of aerosol type over the area, the critical reflectance method, a technique to retrieve single scattering albedo (SSA), was applied. Additionally, the background aerosol effect was corrected to improve the accuracy of the surface reflectance over Hong Kong. The AOD retrieved from a modified algorithm was compared to the collocated data measured by AERONET in Hong Kong. The comparison showed that the new aerosol type selection using the critical reflectance and the corrected surface reflectance significantly improved the accuracy of AODs in Hong Kong areas, with a correlation coefficient increase from 0.65 to 0.76 and a regression line change from τMI [basic algorithm] = 0.41 τAERONET + 0.16 to τMI [new algorithm] = 0.70 τAERONET + 0.01.
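The validation statistics quoted above (correlation coefficient and regression line against AERONET) correspond to an ordinary least-squares comparison of collocated AOD values, sketched below with synthetic placeholder data.

```python
# Sketch of the validation step: compare satellite-retrieved AOD with
# collocated AERONET AOD via a linear fit and a correlation coefficient.
# The data below are synthetic placeholders, not the study's retrievals.
import numpy as np

rng = np.random.default_rng(0)
aod_aeronet = rng.uniform(0.05, 1.2, size=200)                     # reference AOD
aod_retrieved = 0.70 * aod_aeronet + 0.01 + rng.normal(0, 0.05, 200)

slope, intercept = np.polyfit(aod_aeronet, aod_retrieved, 1)
r = np.corrcoef(aod_aeronet, aod_retrieved)[0, 1]

print(f"tau_retrieved = {slope:.2f} * tau_AERONET + {intercept:.2f},  R = {r:.2f}")
```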
Automatic correction of dental artifacts in PET/MRI
Ladefoged, Claes N.; Andersen, Flemming L.; Keller, Sune. H.; Beyer, Thomas; Law, Ian; Højgaard, Liselotte; Darkner, Sune; Lauze, Francois
2015-01-01
Abstract. A challenge when using current magnetic resonance (MR)-based attenuation correction in positron emission tomography/MR imaging (PET/MRI) is that the MR images can have a signal void around dental fillings that is segmented as artificial air-regions in the attenuation map. For artifacts connected to the background, we propose an extension to an existing active contour algorithm to delineate the outer contour using the non-attenuation-corrected PET image and the original attenuation map. We propose a combination of two different methods for differentiating the artifacts within the body from the anatomical air-regions: first, using a template of artifact regions, and second, representing the artifact regions with a combination of active shape models and k-nearest-neighbors. The accuracy of the combined method has been evaluated using 25 18F-fluorodeoxyglucose PET/MR patients. Results showed that the approach was able to correct an average of 97±3% of the artifact areas. PMID:26158104
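A minimal sketch of the k-nearest-neighbors idea for labelling candidate air-regions, assuming each region has already been reduced to a small feature vector; the features, training values, and labels are illustrative assumptions, not the authors' actual model (which also uses active shape models and a template).

```python
# Sketch: classify candidate air-regions in the attenuation map as
# "dental artifact" vs "anatomical air" with k-NN on simple region
# features. Feature choice and training data are illustrative only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# feature vector per region: [volume_ml, z_position_mm, mean_NAC_PET_intensity]
X_train = np.array([
    [2.0,  40.0,  0.90],   # artifact near the teeth (label 1)
    [3.5,  35.0,  1.10],
    [15.0, -60.0, 0.10],   # anatomical air, e.g. trachea (label 0)
    [25.0, -80.0, 0.05],
])
y_train = np.array([1, 1, 0, 0])

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

candidate = np.array([[2.8, 38.0, 1.0]])   # hypothetical new region
print("artifact" if clf.predict(candidate)[0] == 1 else "anatomical air")
```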
Iodice, Giorgio; d'Antò, Vincenzo; Riccitiello, Francesco; Pellegrino, Gioacchino; Valletta, Rosa
2014-01-01
Background. This case report describes the orthodontic treatment of a woman, aged 17 years, with a permanent dentition, brachyfacial typology, Angle Class I, with full impaction of two canines (13, 33) and severe ectopy of the maxillary left canine. Her main complaint was the position of the ectopic teeth. Methods. Straightwire fixed appliances, together with cantilever mechanics, were used to correct the impaired occlusion and to obtain ideal torque control. Results and Conclusion. The treatment objectives were achieved in 26 months of treatment. The impactions were fully corrected with optimal torque. The cantilever mechanics succeeded in repositioning the teeth in a short period of time. After treatment, the dental alignment was stable. PMID:25140261
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casadio, Roberto; Orlandi, Alessio; Kühnel, Florian, E-mail: casadio@bo.infn.it, E-mail: florian.kuhnel@fysik.su.se, E-mail: aorlandi@bo.infn.it
Following a new quantum cosmological model proposed by Dvali and Gomez, we quantitatively investigate possible modifications to the Hubble parameter and the ensuing corrections to the cosmic microwave background spectrum. In this model, scalar and tensor perturbations are generated by the quantum depletion of the background inflaton and graviton condensates, respectively. We show how the inflaton mass affects the power spectra and the tensor-to-scalar ratio. Masses approaching the Planck scale would lead to strong deviations, while standard spectra are recovered for an inflaton mass much smaller than the Planck mass.
The effects of center of rotation errors on cardiac SPECT imaging
NASA Astrophysics Data System (ADS)
Bai, Chuanyong; Shao, Ling; Ye, Jinghan; Durbin, M.
2003-10-01
In SPECT imaging, center of rotation (COR) errors lead to the misalignment of projection data and can potentially degrade the quality of the reconstructed images. In this work, we study the effects of COR errors on cardiac SPECT imaging using simulation, point source, cardiac phantom, and patient studies. For simulation studies, we generate projection data using a uniform MCAT phantom first without modeling any physical effects (NPH), then with the modeling of detector response effect (DR) alone. We then corrupt the projection data with simulated sinusoid and step COR errors. For other studies, we introduce sinusoid COR errors to projection data acquired on SPECT systems. An OSEM algorithm is used for image reconstruction without detector response correction, but with nonuniform attenuation correction when needed. The simulation studies show that, when COR errors increase from 0 to 0.96 cm: 1) sinusoid COR errors in axial direction lead to intensity decrease in the inferoapical region; 2) step COR errors in axial direction lead to intensity decrease in the distal anterior region. The intensity decrease is more severe in images reconstructed from projection data with NPH than with DR; and 3) the effects of COR errors in transaxial direction seem to be insignificant. In other studies, COR errors slightly degrade point source resolution; COR errors of 0.64 cm or above introduce visible but insignificant nonuniformity in the images of uniform cardiac phantom; COR errors up to 0.96 cm in transaxial direction affect the lesion-to-background contrast (LBC) insignificantly in the images of cardiac phantom with defects, and COR errors up to 0.64 cm in axial direction only slightly decrease the LBC. For the patient studies with COR errors up to 0.96 cm, images have the same diagnostic/prognostic values as those without COR errors. This work suggests that COR errors of up to 0.64 cm are not likely to change the clinical applications of cardiac SPECT imaging when using iterative reconstruction algorithm without detector response correction.
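A minimal sketch of how a sinusoidal COR error of a given amplitude can be imposed on simulated projection data by shifting each projection by an angle-dependent offset; the sinogram, pixel size, and amplitude below are placeholders, not the MCAT simulation used in the study.

```python
# Sketch: impose a sinusoidal center-of-rotation (COR) error on a set of
# SPECT projections by shifting each 1-D projection along its bin direction
# by an angle-dependent offset. Projection data and values are placeholders.
import numpy as np
from scipy.ndimage import shift

n_views, n_bins = 120, 128
pixel_size_cm = 0.32
projections = np.random.rand(n_views, n_bins)      # placeholder projection data

amplitude_cm = 0.64                                 # hypothetical COR error amplitude
angles = np.linspace(0, 2 * np.pi, n_views, endpoint=False)
offsets_pix = (amplitude_cm / pixel_size_cm) * np.sin(angles)

corrupted = np.array([shift(projections[i], offsets_pix[i], order=1, mode="nearest")
                      for i in range(n_views)])
print(corrupted.shape)
```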
Ho, Kai-Yu; Epstein, Ryan; Garcia, Ron; Riley, Nicole; Lee, Szu-Ping
2017-02-01
Study Design Controlled laboratory study. Background Although it has been theorized that patellofemoral joint (PFJ) taping can correct patellar malalignment, the effects of PFJ taping techniques on patellar alignment and contact area have not yet been studied during weight bearing. Objective To examine the effects of 2 taping approaches (Kinesio and McConnell) on PFJ alignment and contact area. Methods Fourteen female subjects with patellofemoral pain and PFJ malalignment participated. Each subject underwent a pretaping magnetic resonance imaging (MRI) scan session and 2 MRI scan sessions after the application of the 2 taping techniques, which aimed to correct lateral patellar displacement. Subjects were asked to report their pain level prior to each scan session. During MRI assessment, subjects were loaded with 25% of body weight on their involved/more symptomatic leg at 0°, 20°, and 40° of knee flexion. The outcome measures included patellar lateral displacement (bisect-offset [BSO] index), mediolateral patellar tilt angle, patellar height (Insall-Salvati ratio), contact area, and pain. Patellofemoral joint alignment and contact area were compared among the 3 conditions (no tape, Kinesio, and McConnell) at 3 knee angles using a 2-factor, repeated-measures analysis of variance. Pain was compared among the 3 conditions using the Friedman test and post hoc Wilcoxon signed-rank tests. Results Our data did not reveal any significant effects of either McConnell or Kinesio taping on the BSO index, patellar tilt angle, Insall-Salvati ratio, or contact area across the 3 knee angles, whereas knee angle had a significant effect on the BSO index and contact area. A reduction in pain was observed after the application of the Kinesio taping technique. Conclusion In a weight-bearing condition, this preliminary study did not support the use of PFJ taping as a medial correction technique to alter the PFJ contact area or alignment of the patella. J Orthop Sports Phys Ther 2017;47(2):115-123. doi:10.2519/jospt.2017.6936.
CULTURAL INTELLIGENCE ASSISTANCE FOR CROSS-CULTURE UNDERSTANDING AND ACTION RECOMMENDATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Xiaohui; Potok, Thomas E; Xu, Songhua
2011-01-01
When traveling or working in a culturally diverse environment, it is demanding for a newcomer to quickly notice, understand, and adapt to different cultural norms to avoid cultural misunderstanding, and further to establish friendship with the local people. The main challenges include both correctly understanding the intent behind the behaviors of people with a different cultural background, and effectively adjusting one's own behaviors to a local cultural setting to express one's intentions without ambiguity. Quality cross-cultural assistance can help us accurately recognize the true purpose behind the behavior of a person from a different cultural background, and also advise us to act properly in a new cultural setting. In this project, we aim at providing an advanced cultural intelligence assistance tool, implemented as a mobile application, to facilitate individual users in understanding behaviors, norms, and conventions in a new culture, as well as in changing their behaviors appropriately in the new cultural environment.
Inflation in the closed FLRW model and the CMB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonga, Béatrice; Gupt, Brajesh; Yokomizo, Nelson, E-mail: bpb165@psu.edu, E-mail: bgupt@gravity.psu.edu, E-mail: yokomizo@gravity.psu.edu
2016-10-01
Recent cosmic microwave background (CMB) observations put strong constraints on the spatial curvature via estimation of the parameter Ω_k, assuming an almost scale invariant primordial power spectrum. We study the evolution of the background geometry and gauge-invariant scalar perturbations in an inflationary closed FLRW model and calculate the primordial power spectrum. We find that the inflationary dynamics is modified due to the presence of spatial curvature, leading to corrections to the nearly scale invariant power spectrum at the end of inflation. When evolved to the surface of last scattering, the resulting temperature anisotropy spectrum (C^TT_ℓ) shows a deficit of power at low multipoles (ℓ < 20). By comparing our results with the recent Planck data, we discuss the role of spatial curvature in accounting for CMB anomalies and in the estimation of the parameter Ω_k. Since the curvature effects are limited to low multipoles, the Planck estimation of cosmological parameters remains robust under inclusion of positive spatial curvature.
NASA Astrophysics Data System (ADS)
Wright, T.; Guerrero, C.; Billowes, J.; Cano-Ott, D.; Mendoza, E.; Altstadt, S.; Andrzejewski, J.; Audouin, L.; Bécares, V.; Barbagallo, M.; Bečvář, F.; Belloni, F.; Berthoumieux, E.; Bosnar, D.; Brugger, M.; Calviño, F.; Calviani, M.; Carrapiço, C.; Cerutti, F.; Chiaveri, E.; Chin, M.; Colonna, N.; Cortés, G.; Cortés-Giraldo, M. A.; Diakaki, M.; Dietz, M.; Domingo-Pardo, C.; Durán, I.; Dzysiuk, N.; Eleftheriadis, C.; Ferrari, A.; Fraval, K.; Furman, V.; Gómez-Hornillos, M. B.; Ganesan, S.; García, A. R.; Giubrone, G.; Gonçalves, I. F.; González-Romero, E.; Goverdovski, A.; Griesmayer, E.; Gunsing, F.; Gurusamy, P.; Heftrich, T.; Hernández-Prieto, A.; Jenkins, D. G.; Jericha, E.; Käppeler, F.; Kadi, Y.; Karadimos, D.; Katabuchi, T.; Ketlerov, V.; Khryachkov, V.; Koehler, P.; Kokkoris, M.; Kroll, J.; Krtička, M.; Lampoudis, C.; Langer, C.; Leal-Cidoncha, E.; Lederer, C.; Leeb, H.; Leong, L. S.; Lerendegui-Marco, J.; Losito, R.; Manousos, A.; Marganiec, J.; Martínez, T.; Massimi, C.; Mastinu, P.; Mengoni, A.; Milazzo, P. M.; Mingrone, F.; Mirea, M.; Paradela, C.; Pavlik, A.; Perkowski, J.; Praena, J.; Quesada, J. M.; Rauscher, T.; Reifarth, R.; Riego-Perez, A.; Robles, M.; Roman, F.; Rubbia, C.; Ryan, J. A.; Sabaté-Gilarte, M.; Sarmento, R.; Saxena, A.; Schillebeeckx, P.; Schmidt, S.; Schumann, D.; Sedyshev, P.; Tagliente, G.; Tain, J. L.; Tarifeño-Saldivia, A.; Tarrío, D.; Tassan-Got, L.; Tsinganis, A.; Valenta, S.; Vannini, G.; Variale, V.; Vaz, P.; Ventura, A.; Vermeulen, M. J.; Versaci, R.; Vlachoudis, V.; Vlastou, R.; Wallner, A.; Ware, T.; Weigand, M.; Weiss, C.; Žugec, P.; n TOF Collaboration
2017-12-01
The radiative capture cross section of a highly pure (99.999%) 238U sample, with a mass of 6.125(2) g and an areal density of 9.56(5)×10^-4 atoms/barn, has been measured with the Total Absorption Calorimeter (TAC) at the 185 m flight path of the CERN neutron time-of-flight facility n_TOF. This measurement responds to the NEA High Priority Request List, which demands an accuracy in this cross section of better than 3% below 25 keV. The data have undergone careful background subtraction, with special care given to the background originating from neutrons scattered by the 238U sample. Pileup and dead-time effects have been corrected for. The measured cross section covers the energy range between 0.2 eV and 80 keV, with an accuracy that varies with neutron energy, being better than 4% below 25 keV and reaching at most 6% at higher energies.
Rejection of randomly coinciding 2ν2β events in ZnMoO4 scintillating bolometers
NASA Astrophysics Data System (ADS)
Chernyak, D. M.; Danevich, F. A.; Giuliani, A.; Mancuso, M.; Nones, C.; Olivieri, E.; Tenconi, M.; Tretyak, V. I.
2014-01-01
Random coincidences of 2ν2β decay events could be one of the main sources of background for 0ν2β decay searches in cryogenic bolometers because of their poor time resolution. Pulse-shape discrimination using front-edge analysis, the mean-time method, and the χ2 method was applied to discriminate randomly coinciding 2ν2β events in ZnMoO4 cryogenic scintillating bolometers. The background can be rejected at the level of 99% by mean-time analysis of heat signals with a rise time of about 14 ms and a signal-to-noise ratio of 900, and at the level of 98% for light signals with a 3 ms rise time and a signal-to-noise ratio of 30 (under the requirement of detecting 95% of single events). The importance of the signal-to-noise ratio, of correctly identifying the signal start, and of choosing an appropriate sampling frequency is discussed.
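The mean-time pulse-shape parameter used above can be sketched as follows; the pulse model, rise/decay times, and sampling are placeholders chosen for illustration rather than the detectors' measured response.

```python
# Sketch of the mean-time pulse-shape parameter: the amplitude-weighted
# average time of the samples of a signal, used to separate single pulses
# from randomly coinciding (piled-up) pulses. Pulse shapes are toy models.
import numpy as np

t = np.linspace(0.0, 200.0, 2000)   # time axis, ms

def pulse(t0_ms: float) -> np.ndarray:
    """Toy bolometer pulse: ~14 ms rise and ~50 ms decay starting at t0."""
    dt = t - t0_ms
    return (dt > 0) * (1 - np.exp(-dt / 14.0)) * np.exp(-dt / 50.0)

def mean_time(time_ms: np.ndarray, amplitude: np.ndarray) -> float:
    """Amplitude-weighted mean time of a sampled pulse."""
    return float(np.sum(time_ms * amplitude) / np.sum(amplitude))

single = pulse(10.0)                   # one event
piled_up = pulse(10.0) + pulse(40.0)   # two randomly coinciding events

for name, p in [("single", single), ("piled-up", piled_up)]:
    print(f"{name:9s} mean-time = {mean_time(t, p):.1f} ms")
# A cut on the mean-time (rejecting events above a chosen threshold) then
# discriminates piled-up from single pulses.
```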
On the catalysis of the electroweak vacuum decay by black holes at high temperature
NASA Astrophysics Data System (ADS)
Canko, D.; Gialamas, I.; Jelic-Cizmek, G.; Riotto, A.; Tetradis, N.
2018-04-01
We study the effect of primordial black holes on the classical rate of nucleation of AdS regions within the standard electroweak vacuum at high temperature. We base our analysis on the assumption that, at temperatures much higher than the Hawking temperature, the main effect of the black hole is to distort the Higgs configuration dominating the transition to the new vacuum. We estimate the barrier for the transition by the ADM mass of this configuration, computed through the temperature-corrected Higgs potential. We find that the exponential suppression of the nucleation rate can be reduced significantly, or even eliminated completely, in the black-hole background if the Standard Model Higgs is coupled to gravity through the renormalizable term ξ R h^2.
Mussenbrock, T; Brinkmann, R P; Lieberman, M A; Lichtenberg, A J; Kawamura, E
2008-08-22
In low-pressure capacitive radio frequency discharges, two mechanisms of electron heating are dominant: (i) Ohmic heating due to collisions of electrons with neutrals of the background gas and (ii) stochastic heating due to momentum transfer from the oscillating boundary sheath. In this work we show, by means of a nonlinear global model, that the self-excitation of the plasma series resonance, which arises in asymmetric capacitive discharges due to the nonlinear interaction of plasma bulk and sheath, significantly affects both Ohmic and stochastic heating. We observe that the series resonance effect increases the dissipation by factors of 2-5. We conclude that the nonlinear plasma dynamics should be taken into account for a quantitatively correct description of electron heating in asymmetric capacitive radio frequency discharges.
An AK-LDMeans algorithm based on image clustering
NASA Astrophysics Data System (ADS)
Chen, Huimin; Li, Xingwei; Zhang, Yongbin; Chen, Nan
2018-03-01
Clustering is an effective analytical technique for mining value from unlabeled data. Its ultimate goal is to label unclassified data quickly and correctly. We use a road map from current image processing as the experimental background. In this paper, we propose an AK-LDMeans algorithm that automatically locks the value of K by designing the K-cost fold line and then selects the clustering centers with a long-distance high-density method, replacing the traditional initial-center selection and thereby improving the efficiency and accuracy of the traditional K-means algorithm. The experimental results are compared with those of current clustering algorithms. The algorithm can serve as an effective reference in the fields of image processing, machine vision, and data mining.
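Read as a sketch of the two ideas the abstract describes (choosing K automatically from a K-versus-cost curve, here via a simple knee heuristic, and picking initial centers that are both locally dense and far from denser points); this is not the authors' exact AK-LDMeans implementation, and the data, radius, and knee heuristic are assumptions for illustration.

```python
# Sketch: (1) lock K from the knee of a K-vs-cost curve, (2) choose initial
# centers combining high local density with large distance to denser points.
# Synthetic data; not the authors' exact AK-LDMeans algorithm.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# (1) inertia for candidate K; the knee is the point farthest below the
#     straight line joining the first and last points of the curve.
ks = np.arange(1, 9)
costs = np.array([KMeans(n_clusters=int(k), n_init=10, random_state=0).fit(X).inertia_
                  for k in ks])
line = costs[0] + (costs[-1] - costs[0]) * (ks - ks[0]) / (ks[-1] - ks[0])
k_best = int(ks[np.argmax(line - costs)])
print("locked K =", k_best)

# (2) density-distance center selection: local density = neighbours within
#     a small radius; distance = gap to the nearest denser point.
d = cdist(X, X)
radius = np.percentile(d, 5)
density = (d < radius).sum(axis=1)
dist_to_denser = np.array([d[i][density > density[i]].min()
                           if (density > density[i]).any() else d[i].max()
                           for i in range(len(X))])
centers = X[np.argsort(density * dist_to_denser)[-k_best:]]

labels = KMeans(n_clusters=k_best, init=centers, n_init=1).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```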
Interest of corrective makeup in the management of patients in dermatology
Seité, S; Deshayes, P; Dréno, B; Misery, L; Reygagne, P; Saiag, P; Stengel, F; Roguedas-Contios, AM; Rougier, A
2012-01-01
Background Disfiguring dermatoses may have a significant impact on patients’ quality of life, namely on their relationship with others, self image, and self esteem. Some previous studies have suggested that corrective foundation can improve the quality of life (QOL) of patients with facial dermatoses; in particular, in patients with acne vulgaris or pigmentary disorders. Objective The aim of this prospective study was to evaluate the impact of the skin conditions of patients with various skin diseases affecting their face (scars, acne, rosacea, melasma, vitiligo, hypo or hyperpigmentation, lentigines, etc) on their QOL and the improvement afforded by the use of corrective makeup for 1 month after being instructed on how to use it by a medical cosmetician during an initial medical consultation. Methods One hundred and twenty-nine patients with various skin diseases affecting the patients’ face were investigated. The patients were instructed by a cosmetician on how to use corrective makeup (complexion, eyes, and lips) and applied it for 1 month. The safety of the makeup application was evaluated and the QOL was assessed via a questionnaire (DLQI) and using a 10-cm visual analog scale (VAS) completed before the first application and at the final visit. The amelioration of their appearance was documented by standardized photography. Results No side effects occurred during the course of the study. A comparison of the standardized photographs taken at each visit showed the patients’ significant improvement in appearance due to the application of corrective makeup. The mean DLQI score dropped significantly from 9.90 ± 0.73 to 3.49 ± 0.40 (P < 0.0001). Conclusion Our results suggest that dermatologists should encourage patients with disfiguring dermatoses to utilize appropriate and safe makeup to improve their appearance and their QOL. Corrective makeup can also complement the treatment of face dermatological diseases in order to improve patient’s adherence. PMID:23055760
VizieR Online Data Catalog: NGC3115 & NGC1399 VEGAS-SSS globular clusters (Cantiello+, 2018)
NASA Astrophysics Data System (ADS)
Cantiello, M.; D'Abrusco, R.; Spavone, M.; Paolillo, M.; Capaccioli, M.; Limatola, L.; Grado, A.; Iodice, E.; Raimondo, G.; Napolitano, N.; Blakeslee, J. P.; Brocato, E.; Forbes, D. A.; Hilker, M.; Mieske, S.; Peletier, R.; van de Ven, G.; Schipani, P.
2017-11-01
Photometric catalogs for globular cluster (GC) candidates over the 1 sq. degree area around NGC3115 and NGC1399 (ngc3115.dat and ngc1399.dat). The catalogues are based on u-, g- and i-band images from the VST elliptical galaxies survey (VEGAS). Aperture magnitudes, with aperture corrections applied, are reported. We also provide the full catalogs of matched sources, which also include the matched background and foreground sources in the frames (ngc3115_full.dat and ngc1399_full.dat). (4 data files).
Model-independent analysis of semileptonic B decays to D** for arbitrary new physics
NASA Astrophysics Data System (ADS)
Bernlochner, Florian U.; Ligeti, Zoltan; Robinson, Dean J.
2018-04-01
We explore semileptonic B decays to the four lightest excited charm mesons, D** = {D0*, D1*, D1, D2*}, for nonzero charged lepton mass and for all b → c ℓ ν̄ four-Fermi interactions, including calculation of the O(Λ_QCD/m_{c,b}) and O(α_s) corrections to the heavy quark limit for all form factors. In the heavy quark limit, some form factors are suppressed at zero recoil; therefore, the O(Λ_QCD/m_{c,b}) corrections can be very important. The D** rates exhibit sensitivities to new physics in b → c τ ν̄ mediated decays complementary to the D and D* modes. Since they are also important backgrounds to B → D(*) τ ν̄, the correct interpretation of future semitauonic B → D(*) rate measurements requires consistent treatment of both the D** backgrounds and the signals. Our results allow more precise and more reliable calculations of these B → D** ℓ ν̄ decays and are systematically improvable by better data on the e and μ modes. As an example, we show that the D** rates are more sensitive to a new c̄ σ_{μν} b tensor interaction than the D(*) rates.
Correction of Spatial Bias in Oligonucleotide Array Data
Lemieux, Sébastien
2013-01-01
Background. Oligonucleotide microarrays allow for high-throughput gene expression profiling assays. The technology relies on the fundamental assumption that observed hybridization signal intensities (HSIs) for each intended target, on average, correlate with their target's true concentration in the sample. However, systematic, nonbiological variation from several sources undermines this hypothesis. Background hybridization signal has been previously identified as one such important source, one manifestation of which appears in the form of spatial autocorrelation. Results. We propose an algorithm, pyn, for the elimination of spatial autocorrelation in HSIs, exploiting the duality of desirable mutual information shared by probes in a common probe set and undesirable mutual information shared by spatially proximate probes. We show that this correction procedure reduces spatial autocorrelation in HSIs; increases HSI reproducibility across replicate arrays; increases differentially expressed gene detection power; and performs better than previously published methods. Conclusions. The proposed algorithm increases both precision and accuracy, while requiring virtually no changes to users' current analysis pipelines: the correction consists merely of a transformation of raw HSIs (e.g., CEL files for Affymetrix arrays). A free, open-source implementation is provided as an R package, compatible with standard Bioconductor tools. The approach may also be tailored to other platform types and other sources of bias. PMID:23573083
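As a minimal illustration of removing a spatially autocorrelated background component from array intensities (this is not the pyn algorithm itself, which instead exploits probe-set versus spatially proximate mutual information), one can estimate a smooth spatial trend over the chip layout and subtract it:

```python
# Minimal illustration of spatial-bias removal (NOT the pyn algorithm):
# estimate a smooth spatial trend of log-intensities over the chip layout
# with a median filter and subtract it from each probe. Synthetic data.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(1)
n = 128
true_signal = rng.normal(8.0, 1.0, size=(n, n))              # log2 HSIs
yy, xx = np.mgrid[0:n, 0:n]
spatial_bias = 0.8 * np.sin(xx / 20.0) * np.cos(yy / 25.0)   # smooth artifact
observed = true_signal + spatial_bias

trend = median_filter(observed, size=15)                     # slowly varying component
corrected = observed - (trend - trend.mean())                # keep the overall level

print("bias RMS before:", np.std(observed - true_signal).round(3))
print("bias RMS after :", np.std(corrected - true_signal).round(3))
```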
Disselhorst, Jonathan A; Bezrukov, Ilja; Kolb, Armin; Parl, Christoph; Pichler, Bernd J
2014-06-01
Hybrid PET/MR systems have rapidly progressed from the prototype stage to systems that are increasingly being used in the clinics. This review provides an overview of developments in hybrid PET/MR systems and summarizes the current state of the art in PET/MR instrumentation, correction techniques, and data analysis. The strong magnetic field requires considerable changes in the manner by which PET images are acquired and has led, among others, to the development of new PET detectors, such as silicon photomultipliers. During more than a decade of active PET/MR development, several system designs have been described. The technical background of combined PET/MR systems is explained and related challenges are discussed. The necessity for PET attenuation correction required new methods based on MR data. Therefore, an overview of recent developments in this field is provided. Furthermore, MR-based motion correction techniques for PET are discussed, as integrated PET/MR systems provide a platform for measuring motion with high temporal resolution without additional instrumentation. The MR component in PET/MR systems can provide functional information about disease processes or brain function alongside anatomic images. Against this background, we point out new opportunities for data analysis in this new field of multimodal molecular imaging. © 2014 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
A review of the perceptual effects of hearing loss for frequencies above 3 kHz.
Moore, Brian C J
2016-12-01
Hearing loss caused by exposure to intense sounds usually has its greatest effects on audiometric thresholds at 4 and 6 kHz. However, in several countries compensation for occupational noise-induced hearing loss is calculated using the average of audiometric thresholds for selected frequencies up to 3 kHz, based on the implicit assumption that hearing loss for frequencies above 3 kHz has no material adverse consequences. This paper assesses whether this assumption is correct. Studies are reviewed that evaluate the role of hearing for frequencies above 3 kHz. Several studies show that frequencies above 3 kHz are important for the perception of speech, especially when background sounds are present. Hearing at high frequencies is also important for sound localization, especially for resolving front-back confusions. Hearing for frequencies above 3 kHz is important for the ability to understand speech in background sounds and for the ability to localize sounds. The audiometric threshold at 4 kHz and perhaps 6 kHz should be taken into account when assessing hearing in a medico-legal context.
NASA Astrophysics Data System (ADS)
Gruneisen, Mark T.; Sickmiller, Brett A.; Flanagan, Michael B.; Black, James P.; Stoltenberg, Kurt E.; Duchane, Alexander W.
2016-02-01
Spatial filtering is an important technique for reducing sky background noise in a satellite quantum key distribution downlink receiver. Atmospheric turbulence limits the extent to which spatial filtering can reduce sky noise without introducing signal losses. Using atmospheric propagation and compensation simulations, the potential benefit of adaptive optics (AO) to secure key generation (SKG) is quantified. Simulations are performed assuming optical propagation from a low-Earth-orbit satellite to a terrestrial receiver that includes AO. Higher-order AO correction is modeled assuming a Shack-Hartmann wavefront sensor and a continuous-face-sheet deformable mirror. The effects of atmospheric turbulence, tracking, and higher-order AO on the photon capture efficiency are simulated using statistical representations of turbulence and a time-domain wave-optics hardware emulator. SKG rates are calculated for a decoy-state protocol as a function of the receiver field of view for various strengths of turbulence, sky radiances, and pointing angles. The results show that at fields of view smaller than those discussed by others, AO technologies can enhance SKG rates in daylight and enable SKG where it would otherwise be prohibited as a consequence of background optical noise and signal loss due to propagation and turbulence effects.
Güven, Özlem; Sazak, Hilal; Alagöz, Ali; Şavkılıoğlu, Eser; Demirbaş, Çilsem Sevgen; Yıldız, Ali; Karabulut, Erdem
2013-01-01
Background: Many studies focusing on the effects of local anaesthetics on QT intervals have been performed, but the articles evaluating the relationship between thoracic epidural anaesthesia combined with general anaesthesia and QT parameters are very limited. Aims: We aimed to compare the effects of bupivacaine and ropivacaine on QT interval, corrected QT, dispersion of QT, and corrected dispersion of QT in patients undergoing lung resection under thoracic epidural anaesthesia combined with general anaesthesia. Study Design: Prospective clinical study. Methods: Thirty ASA physical status 1–3 patients requiring thoracic epidural anaesthesia combined with general anaesthesia for thoracic surgery were included. Patients were randomly assigned to two groups, which were allocated to receive either bupivacaine (Group B) or ropivacaine (Group R) during thoracic epidural anaesthesia. Following haemodynamic monitoring, a thoracic epidural catheter was inserted. Local anaesthetic at an average dose of 1.5 mL/segment was given through the epidural catheter. The same general anaesthesia protocol was administered in both groups. Records and measurements were performed in 10 phases between thoracic epidural catheter insertion and the 5th min of endobronchial intubation. In all phases, systolic arterial pressure, diastolic arterial pressure, mean arterial pressure, heart rate, peripheral O2 saturation, and electrocardiogram monitoring were performed in patients. All QT parameters were recorded by 12-lead electrocardiogram and analysed manually by a cardiologist. Results: QT intervals were similar between the two groups. In Group R, corrected QT values at the 20th min of local anaesthetic injection and the 5th min of endobronchial intubation were shorter than those in Group B (p<0.05). The basal dispersion of QT and dispersion of QT values at the 1st min of propofol injection were shorter than those in Group R (p<0.05). The corrected dispersion of QT value at the 1st min of propofol injection was shorter in Group R (p<0.05). In Group R, the decrease in mean arterial pressure at the 1st min of fentanyl injection was significant compared with Group B (p<0.05). There was no significant difference between the groups with respect to heart rate and complications. Conclusion: The corrected QT, dispersion of QT, and corrected dispersion of QT intervals were slightly longer in the patients receiving bupivacaine compared with those receiving ropivacaine in various phases of the present study. PMID:25207150
Lee, Ji Won; Kim, Chang Won; Lee, Geewon; Lee, Han Cheol; Kim, Sang-Pil; Choi, Bum Sung; Jeong, Yeon Joo
2018-02-01
Background Using the hybrid electrocardiogram (ECG)-gated computed tomography (CT) technique, assessment of the entire aorta, coronary arteries, and aortic valve is possible with single-bolus contrast administration within a single acquisition. Purpose To compare the image quality of hybrid ECG-gated and non-gated CT angiography of the aorta and evaluate the effect of a motion correction algorithm (MCA) on coronary artery image quality in the hybrid ECG-gated aorta CT group. Material and Methods In total, 104 patients (76 men; mean age = 65.8 years) prospectively randomized into two groups (Group 1 = hybrid ECG-gated CT; Group 2 = non-gated CT) underwent wide-detector array aorta CT. Image quality, assessed using a four-point scale, was compared between the groups. Coronary artery image quality was compared between the conventional reconstruction and motion correction reconstruction subgroups in Group 1. Results Group 1 showed significant advantages over Group 2 in aortic wall, cardiac chamber, aortic valve, coronary ostia, and main coronary artery image quality (all P < 0.001). All Group 1 patients had diagnostic image quality of the aortic wall and left ostium. The MCA significantly improved the image quality of the three main coronary arteries (P < 0.05). Moreover, per-vessel interpretability improved from 92.3% to 97.1% with the MCA (P = 0.013). Conclusion Hybrid ECG-gated CT significantly improved heart and aortic wall image quality, and the MCA can further improve the image quality and interpretability of the coronary arteries.
Assessment of microclimate conditions under artificial shades in a ginseng field
Lee, Kyu Jong; Lee, Byun-Woo; Kang, Je Yong; Lee, Dong Yun; Jang, Soo Won; Kim, Kwang Soo
2015-01-01
Background Knowledge on microclimate conditions under artificial shades in a ginseng field would facilitate climate-aware management of ginseng production. Methods Weather data were measured under the shade and outside the shade at two fields located in Gochang-gun and Jeongeup-si, Korea, in the 2011 and 2012 seasons to assess temperature and humidity conditions under the shade. An empirical approach was developed and validated for the estimation of leaf wetness duration (LWD) using weather measurements outside the shade as inputs to the model. Results Air temperature and relative humidity were similar under the shade and outside the shade. For example, temperature conditions favorable for ginseng growth, e.g., between 8°C and 27°C, occurred slightly less frequently during night-time hours under the shade (91%) than outside (92%). Humidity conditions favorable for development of a foliar disease, e.g., relative humidity > 70%, occurred slightly more frequently under the shade (84%) than outside (82%). Effectiveness of correction schemes to an empirical LWD model differed by rainfall conditions for the estimation of LWD under the shade using weather measurements outside the shade as inputs to the model. During dew-eligible days, a correction scheme to an empirical LWD model was slightly effective (10%) in reducing estimation errors under the shade. However, another correction approach during rainfall-eligible days reduced errors of LWD estimation by 17%. Conclusion Weather measurements outside the shade and LWD estimates derived from these measurements would be useful as inputs for decision support systems to predict ginseng growth and disease development. PMID:26843827
Next-to-leading-order QCD and electroweak corrections to WWW production at proton-proton colliders
NASA Astrophysics Data System (ADS)
Dittmaier, Stefan; Huss, Alexander; Knippen, Gernot
2017-09-01
Triple-W-boson production in proton-proton collisions allows for a direct access to the triple and quartic gauge couplings and provides a window to the mechanism of electroweak symmetry breaking. It is an important process to test the Standard Model (SM) and might be background to physics beyond the SM. We present a calculation of the next-to-leading order (NLO) electroweak corrections to the production of WWW final states at proton-proton colliders with on-shell W bosons and combine the electroweak with the NLO QCD corrections. We study the impact of the corrections to the integrated cross sections and to kinematic distributions of the W bosons. The electroweak corrections are generically of the size of 5-10% for integrated cross sections and become more pronounced in specific phase-space regions. The real corrections induced by quark-photon scattering turn out to be as important as electroweak loops and photon bremsstrahlung corrections, but can be reduced by phase-space cuts. Considering that prior determinations of the photon parton distribution function (PDF) involve rather large uncertainties, we compare the results obtained with different photon PDFs and discuss the corresponding uncertainties in the NLO predictions. Moreover, we determine the scale and total PDF uncertainties at the LHC and a possible future 100 TeV pp collider.
The hidden dynamics of relativistic electrons (0.7-1.5 MeV) in the inner zone and slot region
DOE Office of Scientific and Technical Information (OSTI.GOV)
Claudepierre, Seth G.; O'Brien, T. P.; Fennell, J. F.
2017-03-15
We present measurements of relativistic electrons (0.7–1.5 MeV) in the inner zone and slot region obtained by the Magnetic Electron and Ion Spectrometer (MagEIS) instrument on Van Allen Probes. The data presented are corrected for background contamination, which is primarily due to inner-belt protons in these low-L regions. We find that ~1 MeV electrons were transported into the inner zone following the two largest geomagnetic storms of the Van Allen Probes era to date, the March and June 2015 events. As ~1 MeV electrons were not observed in Van Allen Probes data in the inner zone prior to these two events, the injections created a new inner belt that persisted for at least 1.5 years. In contrast, we find that electrons injected into the slot region decay on much faster timescales, approximately tens of days. Furthermore, we find no evidence of >1.5 MeV electrons in the inner zone during the entire time interval considered (April 2013 through September 2016). The energies we examine thus span a transition range in the steeply falling inner zone electron spectrum, where modest intensities are observed at 0.7 MeV, and no electrons are observed at 1.5 MeV. To validate the results obtained from the background corrected flux measurements, we also present detailed pulse-height spectra from individual MagEIS detectors. These measurements confirm our results and also reveal low-intensity inner zone and slot region electrons that are not captured in the standard background corrected data product. Lastly, we briefly discuss efforts to refine the upper limit of inner zone MeV electron flux obtained in earlier work.
Neudecker, Denise; Taddeucci, Terry Nicholas; Haight, Robert Cameron; ...
2016-01-06
The spectrum of neutrons emitted promptly after 239Pu(n,f)—a so-called prompt fission neutron spectrum (PFNS)—is a quantity of high interest, for instance, for reactor physics and global security. However, there are only a few experimental data sets available that are suitable for evaluations. In addition, some of those data sets differ by more than their 1-σ uncertainty boundaries. We present the results of MCNP studies indicating that these differences are partly caused by underestimated multiple scattering contributions, over-corrected background, and inconsistent deconvolution methods. A detailed uncertainty quantification for suitable experimental data was undertaken including these effects, and test-evaluations were performed with the improved uncertainty information. The test-evaluations illustrate that the inadequately estimated effects and detailed uncertainty quantification have an impact on the evaluated PFNS and associated uncertainties as well as the neutron multiplicity of selected critical assemblies. A summary of data and documentation needs to improve the quality of the experimental database is provided based on the results of simulations and test-evaluations. Furthermore, given the possibly substantial distortion of the PFNS by multiple scattering and background effects, special care should be taken to reduce these effects in future measurements, e.g., by measuring the 239Pu PFNS as a ratio to either the 235U or 252Cf PFNS.
Central Stars of Planetary Nebulae in the LMC
NASA Technical Reports Server (NTRS)
Bianchi, Luciana
2004-01-01
In FUSE cycle 2's program B001 we studied Central Stars of Planetary Nebulae (CSPN) in the Large Magellanic Cloud. All FUSE observations have been successfully completed and have been reduced, analyzed and published. The analysis and the results are summarized below. The FUSE data were reduced using the latest available version of the FUSE calibration pipeline (CALFUSE v2.2.2). The flux of these LMC post-AGB objects is at the threshold of FUSE's sensitivity, and thus special care in the background subtraction was needed during the reduction. Because of their faintness, the targets required many orbit-long exposures, each of which typically had low (target) count-rates. Each calibrated extracted sequence was checked for unacceptable count-rate variations (a sign of detector drift), misplaced extraction windows, and other anomalies. All the good calibrated exposures were combined using FUSE pipeline routines. The default FUSE pipeline attempts to model the background measured off-target and subtracts it from the target spectrum. We found that, for these faint objects, the background appeared to be over-estimated by this method, particularly at shorter wavelengths (i.e., < 1000 A). We therefore tried two other reductions. In the first method, subtraction of the measured background is turned off and the background is taken to be the model scattered-light scaled by the exposure time. In the second one, the first few steps of the pipeline were run on the individual exposures (correcting for effects unique to each exposure such as Doppler shift, grating motions, etc). Then the photon lists from the individual exposures were combined, and the remaining steps of the pipeline run on the combined file. Thus, more total counts for both the target and background allowed for a better extraction.
NASA Astrophysics Data System (ADS)
Gross, W.; Boehler, J.; Twizer, K.; Kedem, B.; Lenz, A.; Kneubuehler, M.; Wellig, P.; Oechslin, R.; Schilling, H.; Rotman, S.; Middelmann, W.
2016-10-01
Hyperspectral remote sensing data can be used for civil and military applications to robustly detect and classify target objects. High spectral resolution of hyperspectral data can compensate for the comparatively low spatial resolution, which allows for detection and classification of small targets, even below image resolution. Hyperspectral data sets are prone to considerable spectral redundancy, affecting and limiting data processing and algorithm performance. As a consequence, data reduction strategies become increasingly important, especially in view of near-real-time data analysis. The goal of this paper is to analyze different strategies for hyperspectral band selection algorithms and their effect on subpixel classification for different target and background materials. Airborne hyperspectral data is used in combination with linear target simulation procedures to create a representative amount of target-to-background ratios for evaluation of detection limits. Data from two different airborne hyperspectral sensors, AISA Eagle and Hawk, are used to evaluate transferability of band selection when using different sensors. The same target objects were recorded to compare the calculated detection limits. To determine subpixel classification results, pure pixels from the target materials are extracted and used to simulate mixed pixels with selected background materials. Target signatures are linearly combined with different background materials in varying ratios. The commonly used classification algorithm Adaptive Coherence Estimator (ACE) is used to compare the detection limit for the original data with several band selection and data reduction strategies. The evaluation of the classification results is done by assuming a fixed false alarm ratio and calculating the mean target-to-background ratio of correctly detected pixels. The results allow drawing conclusions about specific band combinations for certain target and background combinations. Additionally, generally useful wavelength ranges are determined and the optimal number of principal components is analyzed.
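For readers unfamiliar with the detector, the sketch below shows the standard Adaptive Coherence Estimator statistic together with the linear target/background mixing used to simulate subpixel targets. Variable names and the use of the background mean and inverse covariance are assumptions; this is not code from the study.

```python
import numpy as np

def ace_statistic(x, target, bg_mean, bg_cov_inv):
    """Adaptive Coherence Estimator score for one pixel spectrum x (1-D array of bands)."""
    xc = x - bg_mean
    s = target - bg_mean
    num = (s @ bg_cov_inv @ xc) ** 2
    den = (s @ bg_cov_inv @ s) * (xc @ bg_cov_inv @ xc)
    return num / den

def mix_pixel(target, background, fraction):
    """Linear subpixel mixture: a given fraction of target signature in a background pixel."""
    return fraction * target + (1.0 - fraction) * background

# Hypothetical usage, assuming bg is an (N, bands) matrix of background spectra:
# bg_mean = bg.mean(axis=0)
# bg_cov_inv = np.linalg.pinv(np.cov(bg, rowvar=False))
# score = ace_statistic(mix_pixel(t, b, 0.1), t, bg_mean, bg_cov_inv)
```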
Optimization of yttrium-90 PET for simultaneous PET/MR imaging: A phantom study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldib, Mootaz
2016-08-15
Purpose: Positron emission tomography (PET) imaging of yttrium-90 in the liver post radioembolization has been shown useful for personalized dosimetry calculations and evaluation of extrahepatic deposition. The purpose of this study was to quantify the benefits of several MR-based data correction approaches offered by using a combined PET/MR system to improve Y-90 PET imaging. In particular, the feasibility of motion and partial volume corrections were investigated in a controlled phantom study. Methods: The ACR phantom was filled with an initial concentration of 8 GBq of Y-90 solution resulting in a contrast of 10:1 between the hot cylinders and the background. Y-90 PET motion correction through motion estimates from MR navigators was evaluated by using a custom-built motion stage that simulated realistic amplitudes of respiration-induced liver motion. Finally, the feasibility of an MR-based partial volume correction method was evaluated using a wavelet decomposition approach. Results: Motion resulted in a large (∼40%) loss of contrast recovery for the 8 mm cylinder in the phantom, but was corrected for after MR-based motion correction was applied. Partial volume correction improved contrast recovery by 13% for the 8 mm cylinder. Conclusions: MR-based data correction improves Y-90 PET imaging on simultaneous PET/MR systems. Assessment of these methods must be studied further in the clinical setting.
Minamimoto, Ryogo; Mitsumoto, Takuya; Miyata, Yoko; Sunaoka, Fumio; Morooka, Miyako; Okasaki, Momoko; Iagaru, Andrei; Kubota, Kazuo
2016-02-01
This study evaluated the potential of Q.Freeze algorithm for reducing motion artifacts, in comparison with ungated imaging (UG) and respiratory-gated imaging (RG). Twenty-nine patients with 53 lesions who had undergone RG F-FDG PET/CT were included in this study. Using PET list mode data, five series of PET images [UG, RG, and QF images with an acquisition duration of 3 min (QF3), 5 min (QF5), and 10 min (QF10)] were reconstructed retrospectively. The image quality was evaluated first. Next, quantitative metrics [maximum standardized uptake value (SUVmax), mean standardized uptake value (SUVmean), SD, metabolic tumor volume, signal to noise ratio, or lesion to background ratio] were calculated for the liver, background, and each lesion, and the results were compared across the series. QF10 and QF5 showed better image quality compared with all other images. SUVmax in the liver, background, and lesions was lower with QF10 and QF5 than with the others, but there were no statistically significant differences in SUVmean and the lesion to background ratios. The SD with UG and RG was significantly higher than that with QF5 and QF10. The metabolic tumor volume in QF3 and QF5 was significantly lower than that in UG. The Q.Freeze algorithm can improve the quality of PET imaging compared with RG and UG.
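For context, a minimal sketch of the kind of per-lesion metrics compared across reconstructions is given below. The body-weight SUV normalisation, the function names, and the choice of background statistics are assumptions and do not reproduce the study's analysis software.

```python
import numpy as np

def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Standardized uptake value with body-weight normalisation (assumed convention)."""
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

def lesion_metrics(lesion_suv_voxels, background_suv_voxels):
    """Illustrative per-lesion metrics of the type compared between UG, RG and Q.Freeze series."""
    return {
        "SUVmax": float(np.max(lesion_suv_voxels)),
        "SUVmean": float(np.mean(lesion_suv_voxels)),
        "background_SD": float(np.std(background_suv_voxels)),
        "lesion_to_background": float(np.mean(lesion_suv_voxels)
                                      / np.mean(background_suv_voxels)),
    }
```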
Adult smokers' responses to "corrective statements" regarding tobacco industry deception.
Kollath-Cattano, Christy L; Abad-Vivero, Erika N; Thrasher, James F; Bansal-Travers, Maansi; O'Connor, Richard J; Krugman, Dean M; Berg, Carla J; Hardin, James W
2014-07-01
To inform consumers, U.S. Federal Courts have ordered the tobacco industry to disseminate "corrective statements" (CSs) about their deception regarding five topics: smoker health effects, nonsmoker health effects, cigarette addictiveness, design of cigarettes to increase addiction, and relative safety of light cigarettes. To determine how smokers from diverse backgrounds respond to the final, court-mandated wording of these CSs. Data were analyzed from an online consumer panel of 1,404 adult smokers who evaluated one of five CS topics (n=280-281) by reporting novelty, relevance, anger at the industry, and motivation to quit because of the CS. Logistic and linear regression models assessed main and interactive effects of race/ethnicity, gender, education, and CS topic on these responses. Data were collected in January 2013 and analyzed in March 2013. Thirty percent to 54% of participants reported that each CS provided novel information, and novelty was associated with greater relevance, anger at the industry, and motivation to quit because of the message. African Americans and Latinos were more likely than non-Hispanic whites to report that CSs were novel, and they had stronger responses to CSs across all indicators. Compared to men, women reported that CSs were more relevant and motivated them to quit. This study suggests that smokers would value and respond to CSs, particularly smokers from groups that suffer from tobacco-related health disparities. Copyright © 2014. Published by Elsevier Inc.
2010-01-01
Background Multilevel spinal fusion surgery has typically been associated with significant blood loss. To limit both the need for transfusions and co-morbidities associated with blood loss, the use of anti-fibrinolytic agents has been proposed. While there is some literature comparing the effectiveness of tranexamic acid (TXA) to epsilon aminocaproic acid (EACA) in cardiac procedures, there is currently no literature directly comparing TXA to EACA in orthopedic surgery. Methods/Design Here we propose a prospective, randomized, double-blinded control study evaluating the effects of TXA, EACA, and placebo for treatment of adolescent idiopathic scoliosis (AIS), neuromuscular scoliosis (NMS), and adult deformity (AD) via corrective spinal surgery. Efficacy will be determined by intraoperative and postoperative blood loss. Other clinical outcomes that will be compared include transfusion rates, preoperative and postoperative hemodynamic values, and length of hospital stay after the procedure. Discussion The primary goal of the study is to determine perioperative blood loss as a measure of the efficacy of TXA, EACA, and placebo. Based on current literature and the mechanism by which the medications act, we hypothesize that TXA will be more effective at reducing blood loss than EACA or placebo and result in improved patient outcomes. Trial Registration ClinicalTrials.gov ID: NCT00958581 PMID:20370916
The effect of colour congruency on shape discriminations of novel objects.
Nicholson, Karen G; Humphrey, G Keith
2004-01-01
Although visual object recognition is primarily shape driven, colour assists the recognition of some objects. It is unclear, however, just how colour information is coded with respect to shape in long-term memory and how the availability of colour in the visual image facilitates object recognition. We examined the role of colour in the recognition of novel, 3-D objects by manipulating the congruency of object colour across the study and test phases, using an old/new shape-identification task. In experiment 1, we found that participants were faster at correctly identifying old objects on the basis of shape information when these objects were presented in their original colour, rather than in a different colour. In experiments 2 and 3, we found that participants were faster at correctly identifying old objects on the basis of shape information when these objects were presented with their original part-colour conjunctions, rather than in different or in reversed part-colour conjunctions. In experiment 4, we found that participants were quite poor at the verbal recall of part-colour conjunctions for correctly identified old objects, presented as grey-scale images at test. In experiment 5, we found that participants were significantly slower at correctly identifying old objects when object colour was incongruent across study and test, than when background colour was incongruent across study and test. The results of these experiments suggest that both shape and colour information are stored as part of the long-term representation of these novel objects. Results are discussed in terms of how colour might be coded with respect to shape in stored object representations.
Geometric correction method for 3d in-line X-ray phase contrast image reconstruction
2014-01-01
Background Mechanical systems with imperfect or misaligned X-ray phase contrast imaging (XPCI) components cause the projection data to be misplaced, and thus the reconstructed computed tomography (CT) slice images are blurred or show edge artifacts. The features of the biological microstructures under investigation are thereby destroyed unexpectedly, and the spatial resolution of the XPCI image is decreased. This makes data correction an essential pre-processing step for CT reconstruction in XPCI. Methods To remove unexpected blurring and edge artifacts, a mathematical model for in-line XPCI is built in this paper that accounts for the primary geometric parameters, namely a rotation angle and a shift. Optimal geometric parameters are obtained by solving a maximization problem. An iterative approach is employed to solve the maximization problem using a two-step scheme that performs a composite geometric transformation and then a linear regression. After applying the geometric transformation with the optimal parameters to the projection data, the standard filtered back-projection algorithm is used to reconstruct the CT slice images. Results Numerical experiments were carried out on both synthetic and real in-line XPCI datasets. The experimental results demonstrate that the proposed method improves CT image quality by removing both blurring and edge artifacts at the same time, compared with existing correction methods. Conclusions The method proposed in this paper provides an effective projection data correction scheme and significantly improves image quality by removing both blurring and edge artifacts at the same time for in-line XPCI. It is easy to implement and can also be extended to other XPCI techniques. PMID:25069768
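As an illustration of how misalignment parameters can be estimated from the data themselves, the sketch below grid-searches a detector-centre shift by scoring the sharpness of filtered back-projection reconstructions. It replaces the paper's iterative two-step optimisation (composite geometric transformation followed by linear regression) with a brute-force search over a single parameter, and it assumes a parallel-beam sinogram of shape (n_angles, n_detector).

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.transform import iradon

def sharpness(img):
    """Gradient-energy sharpness score of a reconstructed slice."""
    gy, gx = np.gradient(img)
    return float(np.mean(gx ** 2 + gy ** 2))

def search_centre_shift(sinogram, theta_deg, shifts_px):
    """Try candidate detector-centre shifts (in pixels), reconstruct by filtered
    back-projection, and keep the shift that yields the sharpest slice.
    sinogram: (n_angles, n_detector) array; theta_deg: projection angles in degrees."""
    best_shift, best_score = 0.0, -np.inf
    for s in shifts_px:
        corrected = nd_shift(sinogram, (0.0, s), order=1, mode="nearest")
        rec = iradon(corrected.T, theta=theta_deg, circle=True)
        score = sharpness(rec)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# Hypothetical usage:
# theta_deg = np.linspace(0.0, 180.0, sinogram.shape[0], endpoint=False)
# shift = search_centre_shift(sinogram, theta_deg, np.arange(-10, 10.5, 0.5))
```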
Nunn, Amy; Zaller, Nickolas; Dickman, Samuel; Trimbur, Catherine; Nijhawan, Ank; Rich, Josiah D.
2009-01-01
Background More than 50% of incarcerated individuals have a history of substance use, and over 200,000 individuals with heroin addiction pass through American correctional facilities annually. Opiate replacement therapy (ORT) with methadone or buprenorphine is an effective treatment for opiate dependence and can reduce drug-related disease and recidivism for inmates. Provision of ORT is nevertheless a frequently neglected intervention in the correctional setting. Objective and Methods We surveyed the 50 state; Washington, District of Columbia (DC); and Federal Department of Corrections' medical directors or their equivalents about their facilities' ORT prescribing policies and referral programs for inmates leaving prison. Results We received responses from 51 of 52 prison systems nationwide. Twenty-eight prison systems (55%) offer methadone to inmates in some situations. Methadone use varies widely across states: over 50% of correctional facilities that offer methadone do so exclusively for pregnant women or for chronic pain management. Seven states' prison systems (14%) offer buprenorphine to some inmates. The most common reason cited for not offering ORT was that facilities “prefer drug-free detoxification over providing methadone or buprenorphine.” Twenty-three states' prison systems (45%) provide referrals for some inmates to methadone maintenance programs after release, which increased from 8% in 2003; 15 states' prison systems (29%) provide some referrals to community buprenorphine providers. Conclusion Despite demonstrated social, medical, and economic benefits of providing ORT to inmates during incarceration and linkage to ORT upon release, many prison systems nationwide still do not offer pharmacological treatment for opiate addiction or referrals for ORT upon release. PMID:19625142
Cosmological singularities and bounce in Cartan-Einstein theory
NASA Astrophysics Data System (ADS)
Lucat, Stefano; Prokopec, Tomislav
2017-10-01
We consider a generalized Einstein-Cartan theory, in which we add the unique covariant dimension four operators to general relativity that couples fermionic spin current to the torsion tensor (with an arbitrary strength). Since torsion is local and non-dynamical, when integrated out it yields an effective four-fermion interaction of the gravitational strength. We show how to renormalize the theory, in the one-loop perturbative expansion in generally curved space-times, obtaining the first order correction to the 2PI effective action in Schwinger-Keldysh (in-in) formalism. We then apply the renormalized theory to study the dynamics of a collapsing universe that begins in a thermal state and find that—instead of a big crunch singularity—the Universe with torsion undergoes a bounce. We solve the dynamical equations (a) classically (without particle production); (b) including the production of fermions in a fixed background in the Hartree-Fock approximation and (c) including the quantum backreaction of fermions onto the background space-time. In the first and last cases the Universe undergoes a bounce. The production of fermions due to the coupling to a contracting homogeneous background speeds up the bounce, implying that the quantum contributions from fermions is negative, presumably because fermion production contributes negatively to the energy-momentum tensor. When compared with former works on the subject, our treatment is fully microscopic (namely, we treat fermions by solving the corresponding Dirac equations) and quantum (in the sense that we include fermionic loop contributions).
Non-Gaussian bias: insights from discrete density peaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desjacques, Vincent; Riotto, Antonio; Gong, Jinn-Ouk, E-mail: Vincent.Desjacques@unige.ch, E-mail: jinn-ouk.gong@apctp.org, E-mail: Antonio.Riotto@unige.ch
2013-09-01
Corrections induced by primordial non-Gaussianity to the linear halo bias can be computed from a peak-background split or the widespread local bias model. However, numerical simulations clearly support the prediction of the former, in which the non-Gaussian amplitude is proportional to the linear halo bias. To understand better the reasons behind the failure of standard Lagrangian local bias, in which the halo overdensity is a function of the local mass overdensity only, we explore the effect of a primordial bispectrum on the 2-point correlation of discrete density peaks. We show that the effective local bias expansion to peak clustering vastly simplifies the calculation. We generalize this approach to excursion set peaks and demonstrate that the resulting non-Gaussian amplitude, which is a weighted sum of quadratic bias factors, precisely agrees with the peak-background split expectation, which is a logarithmic derivative of the halo mass function with respect to the normalisation amplitude. We point out that statistics of thresholded regions can be computed using the same formalism. Our results suggest that halo clustering statistics can be modelled consistently (in the sense that the Gaussian and non-Gaussian bias factors agree with peak-background split expectations) from a Lagrangian bias relation only if the latter is specified as a set of constraints imposed on the linear density field. This is clearly not the case of standard Lagrangian local bias. Therefore, one is led to consider additional variables beyond the local mass overdensity.
Patiraki, Elisabeth I; Papathanassoglou, Elizabeth D E; Tafas, Cheryl; Akarepi, Vasiliki; Katsaragakis, Stelios G; Kampitsi, Anjuleta; Lemonidou, Chrysoula
2006-12-01
The purpose of this randomized controlled study was to explore the effectiveness of an educational intervention on nurses' attitudes and knowledge regarding pain management and to explore associations with nurses' characteristics. A Solomon four-group experimental design was employed to assess the effect of the intervention and potential effects of pre-intervention testing. One hundred and twelve nurses were randomized to two intervention and two control groups. The intervention was based on viewing a series of educational videotapes and case scenarios. The validated Hellenic version of the Nurses Knowledge and Attitudes Survey Regarding Pain (GV-NKASRP) was used. Pre-intervention scores revealed various limitations in regard to pain assessment and management. At the pre-test, the average number of correct answers was 17.58+/-7.58 (45.1%+/-19.3% of total questions). Pre-intervention scores differed significantly among participants with different educational backgrounds (P < 0.0001). A significant effect of pain education on total knowledge scores as well as regarding specific questions was detected. Intervention group participants provided 6.11+/-5.55 additional correct answers (15.66%+/-14.23% improvement, P < 0.0001), and they exhibited significantly improved post-test scores compared to controls (26.49+/-5.24 vs. 18.75+/-4.48; P < 0.0001). A potential negative effect of pre-test on knowledge gain for specific items and for total scores was detected. These findings suggest low pre-test knowledge scores among Hellenic oncology nurses and a significant effect of the intervention.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe
2015-02-15
Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
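A minimal stand-alone sketch of the Lucy–Richardson update used for PVE correction is shown below. It assumes a 2-D image and a known, normalized PSF kernel, and it omits both the wavelet-based denoising and the embedding inside the list-mode OSEM iterations described in the study.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=10, eps=1e-12):
    """Plain Richardson-Lucy deconvolution of a 2-D image with a known PSF.
    psf is assumed to be non-negative and to sum to 1."""
    estimate = np.full_like(image, image.mean(), dtype=float)  # flat starting image
    psf_mirror = psf[::-1, ::-1]                               # spatially flipped PSF
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")      # forward blur of current estimate
        ratio = image / (blurred + eps)                        # measured / predicted
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")  # multiplicative update
    return estimate

# deconvolved = richardson_lucy(pet_slice, psf_kernel, n_iter=10)  # hypothetical usage
```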
Solar Energetic Particle Spectra
NASA Astrophysics Data System (ADS)
Ryan, J. M.; Boezio, M.; Bravar, U.; Bruno, A.; Christian, E. R.; de Nolfo, G. A.; Martucci, M.; Mergè, M.; Munini, R.; Ricci, M.; Sparvoli, R.; Stochaj, S.
2017-12-01
We report updated event-integrated spectra from several SEP events measured with PAMELA. The measurements were made from 2006 to 2014 in the energy range starting at 80 MeV and extending well above the neutron monitor threshold. The PAMELA instrument is in a high inclination, low Earth orbit and has access to SEPs when at high latitudes. Spectra have been assembled from these high-latitude measurements. The field of view of PAMELA is small and during the high-latitude passes it scans a wide range of asymptotic directions as the spacecraft orbits. Correcting for data gaps and solid angle effects, and applying improved background corrections, we have compiled event-integrated intensity spectra for twenty-eight SEP events. Where statistics permit, the spectra exhibit power law shapes in energy with a high-energy exponential roll-over. The events analyzed include two genuine ground level enhancements (GLE). In those cases the roll-over energy lies above the neutron monitor threshold (~1 GV) while the others are lower. We see no qualitative difference between the spectra of GLE vs. non-GLE events, i.e., all roll over in an exponential fashion with rapidly decreasing intensity at high energies.
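Event-integrated spectra of this kind are commonly parameterized as a power law with an exponential roll-over (an Ellison–Ramaty-type form). The sketch below shows such a fit as an illustration only; the functional form, starting values, and variable names are assumptions and not the exact fitting procedure used for the PAMELA data.

```python
import numpy as np
from scipy.optimize import curve_fit

def er_spectrum(E, A, gamma, E0):
    """Power law in energy with an exponential high-energy roll-over
    (Ellison-Ramaty-type form), with E and E0 in the same units (e.g. MeV)."""
    return A * E ** (-gamma) * np.exp(-E / E0)

# Hypothetical fit to measured event-integrated intensities j(E) with uncertainties j_err:
# popt, pcov = curve_fit(er_spectrum, E_mev, j, p0=(j[0] * E_mev[0] ** 2, 2.0, 500.0),
#                        sigma=j_err, absolute_sigma=True)
# A_fit, gamma_fit, E0_fit = popt   # E0_fit is the roll-over energy
```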
The Chandra Source Catalog: Algorithms
NASA Astrophysics Data System (ADS)
McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-09-01
Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.
NASA Astrophysics Data System (ADS)
Lehmann, I.; Scholz, R.-D.
1997-04-01
We present new tidal radii for seven Galactic globular clusters using the method of automated star counts on Schmidt plates of the Tautenburg, Palomar and UK telescopes. The plates were fully scanned with the APM system in Cambridge (UK). Special account was given to a reliable background subtraction and the correction of crowding effects in the central cluster region. For the latter we used a new kind of crowding correction based on a statistical approach to the distribution of stellar images and the luminosity function of the cluster stars in the uncrowded area. The star counts were correlated with surface brightness profiles of different authors to obtain complete projected density profiles of the globular clusters. Fitting an empirical density law (King 1962) we derived the following structural parameters: tidal radius r_t_, core radius r_c_ and concentration parameter c. In the cases of NGC 5466, M 5, M 12, M 13 and M 15 we found an indication for a tidal tail around these objects (cf. Grillmair et al. 1995).
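A minimal sketch of fitting the empirical King (1962) law to a background-subtracted, crowding-corrected radial density profile is given below; the parameter starting values and variable names are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def king1962(r, k, r_c, r_t):
    """Empirical King (1962) projected density profile with core radius r_c and
    tidal radius r_t; the profile is zero beyond the tidal radius."""
    inner = 1.0 / np.sqrt(1.0 + (r / r_c) ** 2)
    outer = 1.0 / np.sqrt(1.0 + (r_t / r_c) ** 2)
    return np.where(r < r_t, k * (inner - outer) ** 2, 0.0)

# Hypothetical fit to a radial surface-density profile density(r_arcmin):
# popt, _ = curve_fit(king1962, r_arcmin, density, p0=(density[0], 1.0, 20.0))
# k, r_c, r_t = popt
# c = np.log10(r_t / r_c)   # concentration parameter
```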
VizieR Online Data Catalog: Tidal radii of 7 globular clusters (Lehmann+ 1997)
NASA Astrophysics Data System (ADS)
Lehmann, I.; Scholz, R.-D.
1998-02-01
We present new tidal radii for seven Galactic globular clusters using the method of automated star counts on Schmidt plates of the Tautenburg, Palomar and UK telescopes. The plates were fully scanned with the APM system in Cambridge (UK). Special account was given to a reliable background subtraction and the correction of crowding effects in the central cluster region. For the latter we used a new kind of crowding correction based on a statistical approach to the distribution of stellar images and the luminosity function of the cluster stars in the uncrowded area. The star counts were correlated with surface brightness profiles of different authors to obtain complete projected density profiles of the globular clusters. Fitting an empirical density law (King 1962AJ.....67..471K) we derived the following structural parameters: tidal radius rt, core radius rc and concentration parameter c. In the cases of NGC 5466, M 5, M 12, M 13 and M 15 we found an indication for a tidal tail around these objects (cf. Grillmair et al., 1995AJ....109.2553G). (1 data file).
Development of digital interactive processing system for NOAA satellites AVHRR data
NASA Astrophysics Data System (ADS)
Gupta, R. K.; Murthy, N. N.
The paper discusses a digital interactive processing system for NOAA/AVHRR data, including land applications, configured around a VAX 11/750 host computer supported by an FPS 100 array processor, a Comtal graphic display, and HP plotting devices. The system software comprises a relational database with query and editing facilities; a man-machine interface using form, menu, and prompt inputs, with validation of user entries for data type and range; and preprocessing software for data calibration, Sun-angle correction, geometric corrections for the Earth-curvature effect and Earth-rotation offsets, and Earth location of the AVHRR image. The implemented image enhancement techniques, such as grey-level stretching, histogram equalization, and convolution, are discussed. Implementation details and outputs for the computation of the vegetation index and normalized vegetation index from NOAA/AVHRR channels 1 and 2 data are presented, together with the scientific background for these computations and the obtainability of similar indices from Landsat/MSS data. The paper concludes by outlining the further software developments planned and the progress envisaged in the field of vegetation index studies.
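For reference, the two indices computed from AVHRR channels 1 and 2 follow the standard definitions sketched below; the calibration, Sun-angle, and geometric corrections applied beforehand in the system are not reproduced here.

```python
import numpy as np

def vegetation_indices(ch1, ch2):
    """Vegetation index (VI) and normalized difference vegetation index (NDVI)
    from AVHRR channel 1 (visible red) and channel 2 (near-infrared) reflectances."""
    ch1 = np.asarray(ch1, dtype=float)
    ch2 = np.asarray(ch2, dtype=float)
    vi = ch2 - ch1
    ndvi = (ch2 - ch1) / (ch2 + ch1)
    return vi, ndvi

# vi_map, ndvi_map = vegetation_indices(channel1_reflectance, channel2_reflectance)
```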
Relationship between abstract thinking and eye gaze pattern in patients with schizophrenia
2014-01-01
Background Effective integration of visual information is necessary to utilize abstract thinking, but patients with schizophrenia have slow eye movement and usually explore limited visual information. This study examines the relationship between abstract thinking ability and the pattern of eye gaze in patients with schizophrenia using a novel theme identification task. Methods Twenty patients with schizophrenia and 22 healthy controls completed the theme identification task, in which subjects selected which word, out of a set of provided words, best described the theme of a picture. Eye gaze while performing the task was recorded by the eye tracker. Results Patients exhibited a significantly lower correct rate for theme identification and lesser fixation and saccade counts than controls. The correct rate was significantly correlated with the fixation count in patients, but not in controls. Conclusions Patients with schizophrenia showed impaired abstract thinking and decreased quality of gaze, which were positively associated with each other. Theme identification and eye gaze appear to be useful as tools for the objective measurement of abstract thinking in patients with schizophrenia. PMID:24739356
Does Planck really rule out monomial inflation?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enqvist, Kari; Karčiauskas, Mindaugas, E-mail: kari.enqvist@helsinki.fi, E-mail: mindaugas.karciauskas@helsinki.fi
2014-02-01
We consider the modifications of monomial chaotic inflation models due to radiative corrections induced by inflaton couplings to bosons and/or fermions necessary for reheating. To the lowest order, ignoring gravitational corrections and treating the inflaton as a classical background field, they are of the Coleman-Weinberg type and parametrized by the renormalization scale μ. In cosmology, there are not enough measurements to fix μ so that we end up with a family of models, each having a slightly different slope of the potential. We demonstrate by explicit calculation that within the family of chaotic φ² models, some may be ruled out by Planck whereas some remain perfectly viable. In contrast, radiative corrections do not seem to help chaotic φ⁴ models to meet the Planck constraints.
ISR corrections to associated HZ production at future Higgs factories
NASA Astrophysics Data System (ADS)
Greco, Mario; Montagna, Guido; Nicrosini, Oreste; Piccinini, Fulvio; Volpi, Gabriele
2018-02-01
We evaluate the QED corrections due to initial state radiation (ISR) to associated Higgs boson production in electron-positron (e+e-) annihilation at typical energies of interest for the measurement of the Higgs properties at future e+e- colliders, such as CEPC and FCC-ee. We apply the QED Structure Function approach to the four-fermion production process e+e- → μ+μ- b b̄, including both signal and background contributions. We emphasize the relevance of the ISR corrections particularly near threshold and show that finite third order collinear contributions are mandatory to meet the expected experimental accuracy. We analyze in turn the rôle played by a full four-fermion calculation and beam energy spread in precision calculations for Higgs physics at future e+e- colliders.
The new Landsat 8 potential for remote sensing of colored dissolved organic matter (CDOM)
Slonecker, Terry; Jones, Daniel K.; Pellerin, Brian A.
2016-01-01
Due to a combination of factors, such as a new coastal/aerosol band and improved radiometric sensitivity of the Operational Land Imager aboard Landsat 8, the atmospherically-corrected Surface Reflectance product for Landsat data, and the growing availability of corrected fDOM data from U.S. Geological Survey gaging stations, moderate-resolution remote sensing of fDOM may now be achievable. This paper explores the background of previous efforts and shows preliminary examples of the remote sensing and data relationships between corrected fDOM and Landsat 8 reflectance values. Although preliminary results before and after Hurricane Sandy are encouraging, more research is needed to explore the full potential of Landsat 8 to continuously map fDOM in a number of water profiles.
NASA Astrophysics Data System (ADS)
Mao, Zhiyi; Shan, Ruifeng; Wang, Jiajun; Cai, Wensheng; Shao, Xueguang
2014-07-01
Polyphenols in plant samples have been extensively studied because phenolic compounds are ubiquitous in plants and can be used as antioxidants in promoting human health. A method for rapid determination of three phenolic compounds (chlorogenic acid, scopoletin and rutin) in plant samples using near-infrared diffuse reflectance spectroscopy (NIRDRS) is studied in this work. Partial least squares (PLS) regression was used for building the calibration models, and the effects of spectral preprocessing and variable selection on the models are investigated for optimization of the models. The results show that individual spectral preprocessing and variable selection has no or slight influence on the models, but the combination of the techniques can significantly improve the models. The combination of continuous wavelet transform (CWT) for removing the variant background, multiplicative scatter correction (MSC) for correcting the scattering effect and randomization test (RT) for selecting the informative variables was found to be the best way for building the optimal models. For validation of the models, the polyphenol contents in an independent sample set were predicted. The correlation coefficients between the predicted values and the contents determined by high performance liquid chromatography (HPLC) analysis are as high as 0.964, 0.948 and 0.934 for chlorogenic acid, scopoletin and rutin, respectively.
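As a hedged illustration of the preprocessing-plus-calibration pipeline, the sketch below applies a simple multiplicative scatter correction followed by a PLS regression. The wavelet detrending (CWT) and randomization-test variable selection steps are omitted, and the number of latent variables is an assumption.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum on the mean
    (or a supplied reference) spectrum and remove the fitted offset and slope."""
    X = np.asarray(spectra, dtype=float)
    ref = X.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(X)
    for i, x in enumerate(X):
        slope, intercept = np.polyfit(ref, x, 1)
        corrected[i] = (x - intercept) / slope
    return corrected

# Hypothetical calibration: X are NIR diffuse reflectance spectra (samples x wavelengths),
# y the reference contents determined by HPLC.
# model = PLSRegression(n_components=8).fit(msc(X_train), y_train)
# y_pred = model.predict(msc(X_test))
```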
Comment on 'Can infrared gravitons screen Λ?'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsamis, N. C.; Woodard, R. P.; Department of Physics, University of Florida, Gainesville, Florida 32611
2008-07-15
We reply to the recent criticism by Garriga and Tanaka of our proposal that quantum gravitational loop corrections may lead to a secular screening of the effective cosmological constant. Their argument rests upon a renormalization scheme in which the composite operator (R√(-g) - 4Λ√(-g))_ren is defined to be the trace of the renormalized field equations. Although this is a peculiar prescription, we show that it does not preclude secular screening. Moreover, we show that a constant Ricci scalar does not even classically imply a constant expansion rate. Other important points are: (1) the quantity R_ren of Garriga and Tanaka is neither a properly defined composite operator, nor is it constant; (2) gauge dependence does not render a Green's function devoid of physical content; (3) scalar models on a nondynamical de Sitter background (for which there is no gauge issue) can induce arbitrarily large secular contributions to the stress tensor; (4) the same secular corrections appear in observable quantities in quantum gravity; and (5) the prospects seem good for deriving a simple stochastic formulation of quantum gravity in which the leading secular effects can be summed and for which the expectation values of even complicated, gauge invariant operators can be computed at leading order.
Testing and selection of cosmological models with (1+z)⁶ corrections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szydlowski, Marek; Marc Kac Complex Systems Research Centre, Jagiellonian University, ul. Reymonta 4, 30-059 Cracow; Godlowski, Wlodzimierz
2008-02-15
In the paper we check whether a contribution of the type −(1+z)⁶ in the Friedmann equation can be tested. We consider some astronomical tests to constrain the density parameters in such models. We describe different interpretations of such an additional term: geometric effects of loop quantum cosmology, effects of braneworld cosmological models, nonstandard cosmological models in metric-affine gravity, and models with spinning fluid. Kinematical (or geometrical) tests based on null geodesics are insufficient to separate individual matter components when they behave like perfect fluid and scale in the same way. Still, it is possible to measure their overall effect. We use recent measurements of the coordinate distances from the Fanaroff-Riley type IIb radio galaxy data, supernovae type Ia data, the baryon oscillation peak and cosmic microwave background radiation observations to obtain stronger bounds for the contribution of the type considered. We demonstrate that, while ρ² corrections are very small, they can be tested by astronomical observations, at least in principle. Bayesian criteria of model selection (the Bayes factor, AIC, and BIC) are used to check whether the additional parameters are detectable in the present epoch. As it turns out, the ΛCDM model is favored over the bouncing model driven by loop quantum effects. In other words, the bounds obtained from cosmography are very weak, and from the point of view of the present data this model is indistinguishable from the ΛCDM one.
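To make the quantity being constrained explicit, a generic form of such a modified Friedmann equation can be sketched as follows. The notation (density parameters Ω, characteristic density ρ_c) is illustrative, and the sign and origin of the extra term depend on the model (loop quantum cosmology bounce, braneworld, spinning fluid).

```latex
% Hedged sketch of the type of modified Friedmann equation being tested:
\[
  \frac{H^{2}(z)}{H_{0}^{2}} \;=\;
  \Omega_{m,0}\,(1+z)^{3} \;+\; \Omega_{\Lambda,0}
  \;-\; \Omega_{d,0}\,(1+z)^{6},
  \qquad
  \Omega_{d,0} \;\propto\; \frac{\rho_{m,0}^{2}}{\rho_{c}} ,
\]
% where the $(1+z)^{6}$ term encodes the $\rho^{2}$ correction discussed above
% (e.g. the $-\rho^{2}/\rho_{c}$ term of the loop-quantum-corrected Friedmann
% equation), and flatness would impose
% $\Omega_{m,0}+\Omega_{\Lambda,0}-\Omega_{d,0}=1$.
```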
NASA Technical Reports Server (NTRS)
Hasselfield, Matthew; Moodley, Kavilan; Bond, J. Richard; Das, Sudeep; Devlin, Mark J.; Dunkley, Joanna; Dunner, Rolando; Fowler, Joseph W.; Gallardo, Patricio; Gralla, Megan B.;
2013-01-01
We describe the measurement of the beam profiles and window functions for the Atacama Cosmology Telescope (ACT), which operated from 2007 to 2010 with kilopixel bolometer arrays centered at 148, 218, and 277 GHz. Maps of Saturn are used to measure the beam shape in each array and for each season of observations. Radial profiles are transformed to Fourier space in a way that preserves the spatial correlations in the beam uncertainty to derive window functions relevant for angular power spectrum analysis. Several corrections are applied to the resulting beam transforms, including an empirical correction measured from the final cosmic microwave background (CMB) survey maps to account for the effects of mild pointing variation and alignment errors. Observations of Uranus made regularly throughout each observing season are used to measure the effects of atmospheric opacity and to monitor deviations in telescope focus over the season. Using the WMAP-based calibration of the ACT maps to the CMB blackbody, we obtain precise measurements of the brightness temperatures of the Uranus and Saturn disks at effective frequencies of 149 and 219 GHz. For Uranus we obtain thermodynamic brightness temperatures T(149/U) = 106.7 +/- 2.2 K and T(219/U) = 100.1 +/- 3.1 K. For Saturn, we model the effects of the ring opacity and emission using a simple model and obtain resulting (unobscured) disk temperatures of T(149/S) = 137.3 +/- 3.2 K and T(219/S) = 137.3 +/- 4.7 K.
Artificial Intelligence in Mitral Valve Analysis
Jeganathan, Jelliffe; Knio, Ziyad; Amador, Yannis; Hai, Ting; Khamooshian, Arash; Matyal, Robina; Khabbaz, Kamal R; Mahmood, Feroze
2017-01-01
Background: Echocardiographic analysis of mitral valve (MV) has become essential for diagnosis and management of patients with MV disease. Currently, the various software used for MV analysis require manual input and are prone to interobserver variability in the measurements. Aim: The aim of this study is to determine the interobserver variability in an automated software that uses artificial intelligence for MV analysis. Settings and Design: Retrospective analysis of intraoperative three-dimensional transesophageal echocardiography data acquired from four patients with normal MV undergoing coronary artery bypass graft surgery in a tertiary hospital. Materials and Methods: Echocardiographic data were analyzed using the eSie Valve Software (Siemens Healthcare, Mountain View, CA, USA). Three examiners analyzed three end-systolic (ES) frames from each of the four patients. A total of 36 ES frames were analyzed and included in the study. Statistical Analysis: A multiple mixed-effects ANOVA model was constructed to determine if the examiner, the patient, and the loop had a significant effect on the average value of each parameter. A Bonferroni correction was used to correct for multiple comparisons, and P = 0.0083 was considered to be significant. Results: Examiners did not have an effect on any of the six parameters tested. Patient and loop had an effect on the average parameter value for each of the six parameters as expected (P < 0.0083 for both). Conclusion: We were able to conclude that using automated analysis, it is possible to obtain results with good reproducibility, which only requires minimal user intervention. PMID:28393769
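For reference, the significance threshold quoted above is simply the nominal level divided by the number of parameters compared (a standard Bonferroni adjustment):

```latex
\[
  \alpha_{\mathrm{adj}} \;=\; \frac{\alpha}{m} \;=\; \frac{0.05}{6} \;\approx\; 0.0083 .
\]
```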
Kim, Byeong Hak; Kim, Min Young; Chae, You Seong
2017-01-01
Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC. PMID:29280970
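A compact sketch of the RPCA building block used in this kind of background/noise separation is given below. It is not the BRANF implementation: it is the generic principal component pursuit decomposition (low-rank background plus sparse outliers) solved by a standard ADMM iteration, and the parameter defaults are common choices rather than values from the paper.

```python
# Hedged sketch of RPCA via principal component pursuit: M = L (low rank) + S (sparse).
import numpy as np

def shrink(X, tau):
    """Soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    norm_M = np.linalg.norm(M)
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)        # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)     # sparse update
        resid = M - L - S
        Y = Y + mu * resid                       # dual update
        if np.linalg.norm(resid) / norm_M < tol:
            break
    return L, S

# Toy demo: each column plays the role of one registered frame; the low-rank part
# is the static background, the sparse part collects dead pixels / impulsive noise.
rng = np.random.default_rng(1)
background = np.outer(rng.normal(size=400), np.ones(30))       # rank-1 "scene"
sparse_noise = (rng.random((400, 30)) < 0.02) * 5.0            # isolated outliers
L, S = rpca(background + sparse_noise)
print("relative background recovery error:",
      np.linalg.norm(L - background) / np.linalg.norm(background))
```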
Exospheric temperatures deduced from 7320- to 7330-Å O⁺(²P)-O⁺(²D) twilight observations
NASA Technical Reports Server (NTRS)
Yee, J. H.; Abreu, V. J.
1982-01-01
A technique developed to deduce exospheric temperatures from the 7320-7330 Å emission measured by the visible airglow experiment on board the AE-E satellite is considered. An excess emission in the measured 7320-7330 Å brightness is noticed as a result of the interaction between the spacecraft and the atmosphere. The observed brightnesses are corrected for this effect, and the galactic background emission is also carefully subtracted. The deduced temperatures exhibit a positive correlation with solar activity, varying from approximately 700 K in late 1976 to approximately 1700 K at the peak of this solar cycle. The presence of a nonthermal oxygen corona is considered inconclusive.
2014-01-01
Background Improper workstation, work procedures and tools are found to be the risk factors for the development of musculoskeletal disorders among the informal sector workers of the developing countries. Low cost ergonomic interventions can effectively improve such adverse conditions. Case presentation In the present article some studies related to design interventions in different informal and agricultural sectors were discussed and their efficacies were analyzed. It was observed that with the help of appropriate interventions musculoskeletal disorders were reduced, adverse physiological conditions were improved when awkward postures were corrected and ultimately the organisational productivity was increased. Conclusion Proper implementation of ergonomic interventions can ultimately improve the economy of the nation. PMID:25009740
In situ monitoring of tracer tests: how to distinguish tracer recovery from natural background
NASA Astrophysics Data System (ADS)
Bailly-Comte, V.; Durepaire, X.; Batiot-Guilhe, C.; Schnegg, P.-A.
2018-03-01
Hydrogeological tracer tests are primarily conducted with fluorescent tracers. Field fluorometers make it possible to monitor tracers at very low concentrations (<1 ppb) and at high frequency. However, changes in natural fluorescence at a site resulting from variations of dissolved and suspended inorganic and organic material may compromise the measurement of useful signals, thereby limiting the chances of identifying or quantifying the real tracer recovery. An elevated natural signal can mask small concentrations of the tracer while its variability can give the impression of a false recovery. This article shows how the use of a combination of several continuous measurements at different wavelengths allows a better extraction of the natural signal. Field multispectral fluorometers were installed at two Mediterranean karst outlets; both drain carbonate systems but have different environmental conditions. The fluorometers functioned over several hydrologic cycles, in periods affected or not by artificial tracers, making it possible to observe natural signal variations at these sites. The optical properties of this type of field fluorometer were used to calculate the spectral response of the different optics of the measuring probe. These responses, superimposed on three-dimensional excitation/emission matrices produced from laboratory fluorescence measurements, allowed an understanding of what the fluorometer sees under natural flow conditions. The result is an innovative method for correcting artificial tracer results. This type of correction makes it possible to fine-tune the effect of natural background variation on tracer recovery curves for a clear identification of the tracer presence and a more precise quantification of its recovery.
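The underlying idea, several optical channels responding differently to the artificial tracer and to natural fluorescence, can be illustrated with a simple linear unmixing sketch. The channel responses and the field reading below are invented for illustration; the fluorometer's actual correction is certainly more elaborate, since it relies on full excitation/emission spectral responses.

```python
# Hedged sketch of multi-channel separation of tracer signal from natural background.
import numpy as np
from scipy.optimize import nnls

# Columns: response of each optical channel to (tracer, natural background),
# in signal units per unit concentration (illustrative values only).
M = np.array([
    [1.00, 0.15],   # channel tuned to the tracer's emission band
    [0.20, 1.00],   # channel dominated by natural organic matter
    [0.05, 0.60],   # long-wavelength / turbidity-sensitive channel
])

measurement = np.array([0.82, 0.95, 0.50])      # one field reading (arbitrary units)
concentrations, residual = nnls(M, measurement)  # non-negative least squares
print("estimated tracer:", concentrations[0],
      "estimated natural background:", concentrations[1])
```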
NASA Astrophysics Data System (ADS)
Gärtner, Maria; Mütze, Jörg; Ohrt, Thomas; Schwille, Petra
2009-07-01
In vivo studies of single-molecule dynamics by means of fluorescence correlation spectroscopy can suffer from high background. Fluorescence lifetime correlation spectroscopy provides a tool to distinguish between signal and unwanted contributions via lifetime separation. By studying the motion of the RNA-induced silencing complex (RISC) within two compartments of a human cell, the nucleus and the cytoplasm, we observed clear differences in concentration as well as mobility of the protein complex between those two locations. Especially in the nucleus, where the fluorescence signal is very weak, a correction for background is crucial to obtain reliable estimates of the particle number. Utilizing the fluorescence lifetimes of the different contributions, we show that it is possible to distinguish between the fluorescence signal and the autofluorescent background in vivo in a single measurement.
2013-01-01
Background In statistical modeling, finding the most favorable coding for an exploratory quantitative variable involves many tests. This raises a multiple testing problem and requires correction of the significance level. Methods For each coding, a test of the nullity of the coefficient associated with the newly coded variable is computed. The selected coding is the one associated with the largest test statistic (or, equivalently, the smallest p-value). In the context of the generalized linear model, Liquet and Commenges (Stat Probab Lett 71:33-38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, was developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results The simulations we ran in this study showed good performance of the proposed methods. These methods are illustrated using data from a study of the relationship between cholesterol and dementia. Conclusion The algorithms were implemented in R, and the associated CPMCGLM R package is available on CRAN. PMID:23758852
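The resampling idea can be sketched as follows (in Python rather than R, and not the CPMCGLM implementation): the corrected p-value for the best coding is taken from the permutation distribution of the maximum test statistic across all candidate codings. The cut points, sample size and use of a one-way ANOVA F statistic are illustrative assumptions.

```python
# Hedged sketch of a permutation-based (max-statistic) correction over several codings.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 0.5 * (x > 0.5) + rng.normal(size=200)     # effect present for one coding only

# Candidate categorical codings of x: median split, tertiles, quartiles.
codings = [np.quantile(x, q) for q in ([0.5], [1/3, 2/3], [0.25, 0.5, 0.75])]

def max_stat(yy):
    stats = []
    for cuts in codings:
        groups = np.digitize(x, cuts)
        stats.append(f_oneway(*[yy[groups == g] for g in np.unique(groups)]).statistic)
    return max(stats)

observed = max_stat(y)
null = np.array([max_stat(rng.permutation(y)) for _ in range(999)])
p_corrected = (1 + np.sum(null >= observed)) / (1 + len(null))
print("corrected p-value for the best coding:", p_corrected)
```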
Knepp, T; Pippin, M; Crawford, J; Chen, G; Szykman, J; Long, R; Cowen, L; Cede, A; Abuhassan, N; Herman, J; Delgado, R; Compton, J; Berkoff, T; Fishman, J; Martins, D; Stauffer, R; Thompson, A M; Weinheimer, A; Knapp, D; Montzka, D; Lenschow, D; Neil, D
Total-column nitrogen dioxide (NO2) data collected by a ground-based sun-tracking spectrometer system (Pandora) and a photolytic-converter-based in situ instrument collocated at NASA's Langley Research Center in Hampton, Virginia, were analyzed to study the relationship between total-column and surface NO2 measurements. The measurements span more than a year and cover all seasons. Surface mixing ratios are estimated via application of a planetary boundary-layer (PBL) height correction factor. This PBL correction factor effectively corrects for boundary-layer variability throughout the day and accounts for up to approximately 75% of the variability between the NO2 data sets. Previous studies have made monthly and seasonal comparisons of column and surface data, which show generally good agreement over these long averaging times. In the current analysis, comparisons of column densities averaged over 90 s and 1 h are made. The applicability of this technique to sulfur dioxide (SO2) is briefly explored; the SO2 correlation is improved by excluding conditions where surface levels are considered background. The analysis is extended to data from the July 2011 DISCOVER-AQ mission over the greater Baltimore, MD area to examine the method's performance in more polluted urban conditions where NO2 concentrations are typically much higher.
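A hedged sketch of the boundary-layer scaling implied above: if most of the NO2 column is well mixed within the PBL, the surface mixing ratio scales as the column amount divided by the PBL height and the air number density. The symbols and the exact form of the paper's correction factor are assumptions.

```latex
\[
  \chi_{\mathrm{NO_2}}^{\mathrm{surf}} \;\approx\;
  \frac{\Omega_{\mathrm{NO_2}}}{h_{\mathrm{PBL}}\; n_{\mathrm{air}}},
\]
% where $\Omega_{\mathrm{NO_2}}$ is the column density [molecules cm$^{-2}$],
% $h_{\mathrm{PBL}}$ the boundary-layer height [cm], and $n_{\mathrm{air}}$ the
% air number density [molecules cm$^{-3}$]; the implicit assumption is that the
% column resides, well mixed, within the PBL.
```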
Changing body temperature affects the T2* signal in the rat brain and reveals hypothalamic activity.
Vanhoutte, G; Verhoye, M; Van der Linden, A
2006-05-01
This study was designed to determine brain activity in the hypothalamus, in particular the thermoregulatory function of the hypothalamic preoptic area (PO). We experimentally changed the body temperature of rats within the physiological range (37-39 degrees C) and monitored changes in the blood oxygenation level-dependent (BOLD) MR signal. To explore PO activity we had to deal with general signal changes caused by temperature-dependent alterations in the affinity of oxygen for hemoglobin, which contribute to BOLD contrast because the signal is partly sensitive to the amount of paramagnetic deoxyhemoglobin in the voxel. To reduce these overall temperature-induced effects, we corrected the BOLD data using brain-specific correction algorithms. The results showed activity of the PO during body warming from 38 degrees C to 39 degrees C, supported by an increased BOLD signal after correction. This is the first fMRI study of the autonomic nervous system in which hypothalamic activity elicited by changes in the internal environment (body temperature) was monitored. In this study we also demonstrate (1) that any fMRI study of anesthetized small animals should guard against background BOLD signal drift, since these animals are vulnerable to body temperature fluctuations, and (2) the existence of a link between PO activity and the sympathetically mediated opening of the arteriovenous anastomoses in a parallel study on the rat tail, a peripheral thermoregulatory organ.
Zhang, Yuzhong; Zhang, Yan
2016-07-01
In an optical measurement and analysis system based on a CCD, due to the existence of optical vignetting and natural vignetting, photometric distortion, in which the intensity falls off away from the image center, affects the subsequent processing and measuring precision severely. To deal with this problem, an easy and straightforward method used for photometric distortion correction is presented in this paper. This method introduces a simple polynomial fitting model of the photometric distortion function and employs a particle swarm optimization algorithm to get these model parameters by means of a minimizing eight-neighborhood gray gradient. Compared with conventional calibration methods, this method can obtain the profile information of photometric distortion from only a single common image captured by the optical CCD-based system, with no need for a uniform luminance area source used as a standard reference source and relevant optical and geometric parameters in advance. To illustrate the applicability of this method, numerical simulations and photometric distortions with different lens parameters are evaluated using this method in this paper. Moreover, the application example of temperature field correction for casting billets also demonstrates the effectiveness of this method. The experimental results show that the proposed method is able to achieve the maximum absolute error for vignetting estimation of 0.0765 and the relative error for vignetting estimation from different background images of 3.86%.
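A minimal sketch of single-image vignetting correction in this spirit is given below. It is not the paper's algorithm: a generic simplex optimizer stands in for particle swarm optimization, the radial polynomial is fixed at three even terms, and the synthetic test image is invented purely to exercise the code.

```python
# Hedged sketch: estimate a radial vignetting model by minimizing an
# eight-neighborhood gray-gradient criterion on the corrected image.
import numpy as np
from scipy.optimize import minimize

def vignette_field(shape, params):
    """V(r) = 1 + a2*r^2 + a4*r^4 + a6*r^6, r normalized to the half-diagonal."""
    a2, a4, a6 = params
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / ((w / 2) ** 2 + (h / 2) ** 2)
    return 1.0 + a2 * r2 + a4 * r2 ** 2 + a6 * r2 ** 3

def eight_neighborhood_gradient(img):
    """Mean absolute gray difference to the 8 neighbors (the smoothness criterion)."""
    total = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            total += np.abs(img - np.roll(np.roll(img, dy, 0), dx, 1)).mean()
    return total / 8.0

def correct(img, params):
    return img / np.clip(vignette_field(img.shape, params), 1e-3, None)

# Synthetic demo: a nearly flat scene attenuated by a known vignetting profile.
rng = np.random.default_rng(3)
scene = 100 + rng.normal(scale=0.5, size=(120, 160))
observed = scene * vignette_field(scene.shape, (-0.5, 0.1, -0.02))

# The criterion prefers corrections that flatten the corrected image.
res = minimize(lambda p: eight_neighborhood_gradient(correct(observed, p)),
               x0=np.zeros(3), method="Nelder-Mead")
print("estimated vignetting coefficients:", res.x)
```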
Anomalous Rayleigh scattering with dilute concentrations of elements of biological importance
NASA Astrophysics Data System (ADS)
Hugtenburg, Richard P.; Bradley, David A.
2004-01-01
The anomalous scattering factor (ASF) correction to the relativistic form-factor approximation for Rayleigh scattering is examined in support of its utilization in radiographic imaging. ASF-corrected total cross-section data have been generated on a low-resolution grid for the Monte Carlo code EGS4 for the biologically important elements K, Ca, Mn, Fe, Cu and Zn. Points in the fixed energy grid used by EGS4, as well as 8 other points in the vicinity of the K-edge, have been chosen to achieve an uncertainty in the ASF component of 20% according to the Thomas-Reiche-Kuhn sum rule and an energy resolution of 20 eV. Such data are useful for the analysis of imaging with a quasi-monoenergetic source. Corrections to the sampled distribution of outgoing photons due to ASF are given, and new total cross-section data, including those for the photoelectric effect, have been computed using the Slater exchange self-consistent potential with the Latter tail. A measurement of Rayleigh scattering in a dilute aqueous solution of manganese (II) was performed, this system enabling determination of the absolute cross-section, although background subtraction was necessary to remove Kβ fluorescence and resonant Raman scattering occurring within several hundred eV of the edge. The measurements confirm the presence of below-edge bound-bound structure and variations in the structure due to the ionic state that are not currently included in tabulations.
Lenderink, Annet F; van Dijk, Frank JH; Hulshof, Carel TJ
2012-01-01
Background Many workers have questions about occupational safety and health (OSH). It is unknown whether workers are able to find correct, evidence-based answers to OSH questions when they use common information sources, such as websites, or whether they would benefit from using an easily accessible, free-of-charge online network of OSH experts providing advice. Objective To assess the rate of correct, evidence-based answers to OSH questions in a group of workers who used an online network of OSH experts (intervention group) compared with a group of workers who used common information sources (control group). Methods In a quasi-experimental study, workers in the intervention and control groups were randomly offered 2 questions from a pool of 16 standardized OSH questions. Both questions were sent by mail to all participants, who had 3 weeks to answer them. The intervention group was instructed to use only the online network ArboAntwoord, a network of about 80 OSH experts, to solve the questions. The control group was instructed that they could use all information sources available to them. To assess answer correctness as the main study outcome, 16 standardized correct model answers were constructed with the help of reviewers who performed literature searches. Subsequently, the answers provided by all participants in the intervention (n = 94 answers) and control groups (n = 124 answers) were blinded and compared with the correct model answers on the degree of correctness. Results Of the 94 answers given by participants in the intervention group, 58 were correct (62%), compared with 24 of the 124 answers (19%) in the control group, who mainly used informational websites found via Google. The difference between the 2 groups was significant (rate difference = 43%, 95% confidence interval [CI] 30%–54%). Additional analysis showed that the rate of correct main conclusions of the answers was 85 of 94 answers (90%) in the intervention group and 75 of 124 answers (61%) in the control group (rate difference = 29%, 95% CI 19%–40%). Remarkably, we could not identify differences between workers who provided correct answers and workers who did not on how they experienced the credibility, completeness, and applicability of the information found (P > .05). Conclusions Workers are often unable to find correct answers to OSH questions when using common information sources, generally informational websites. Because workers frequently misjudge the quality of the information they find, other strategies are required to assist workers in finding correct answers. Expert advice provided through an online expert network can be effective for this purpose. As many people experience difficulties in finding correct answers to their health questions, expert networks may be an attractive new source of information for health fields in general. PMID:22356848
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-07
... [Docket No. OSHA-2006-0049] Respiratory Protection; Mechanical Power Presses; Scaffold Specifications... evaluation questionnaire in Appendix C of its Respiratory Protection standard by removing the term ``fits... INFORMATION: I. Background A. Appendix C (Mandatory) to Sec. 1910.134 (Respiratory Protection) In the...
Kulshreshtha, Bindu; Singh, Seerat; Arora, Arpita
2013-12-01
The phenotypic variability among PCOS patients could be due to differences in insulin patterns. Hyperinsulinemia commonly accompanies diabetes mellitus (DM), obesity, hypertension and CAD, though to a variable degree. We speculate that a family history of these diseases could differentially affect the phenotype of PCOS. The aim was to study the effect of a family background of DM/CAD/hypertension and obesity on the phenotype of PCOS. PCOS patients and age-matched controls were asked about a family background of DM, hypertension, CAD and obesity among parents and grandparents. Regression modelling was employed to examine predictors of obesity and of the first symptom in PCOS patients. There were 88 PCOS women and 77 age-matched controls (46 lean, 31 obese). A high prevalence of DM, CAD, obesity and hypertension was observed among parents and grandparents of women with PCOS compared to controls. Hypertension and CAD manifested more often on the father's side of the family. BMI of PCOS subjects was significantly related to parental DM and obesity after correcting for age. A first symptom of weight gain was significantly associated with the number of parents with DM (p = 0.02), and a first symptom of irregular periods was associated with the number of parents with hypertension (p = 0.06). A family background of DM, hypertension and obesity affects the phenotype of PCOS.
Graph theoretical analysis of EEG functional connectivity during music perception.
Wu, Junjie; Zhang, Junsong; Liu, Chu; Liu, Dongwei; Ding, Xiaojun; Zhou, Changle
2012-11-05
The present study evaluated the effect of music on the large-scale structure of functional brain networks using graph theoretical concepts. While most studies on music perception have used Western music as the acoustic stimulus, Guqin music, representative of Eastern music, was selected for this experiment to broaden our knowledge of music perception. Electroencephalography (EEG) was recorded from non-musician volunteers in three conditions: Guqin music, noise and silence backgrounds. Phase coherence was calculated in the alpha band between all pairs of EEG channels to construct correlation matrices. Each resulting matrix was converted into a weighted graph using a threshold, and two network measures, the clustering coefficient and the characteristic path length, were calculated. Music perception was found to display a higher mean level of phase coherence. Over the whole range of thresholds, the clustering coefficient was larger while listening to music, whereas the path length was smaller. Networks in the music background condition still had a shorter characteristic path length even after correction for differences in mean synchronization level among background conditions. This topological change indicates a more optimal structure under music perception. Thus, prominent small-world properties are confirmed in functional brain networks. Furthermore, music perception shows an increase of functional connectivity and an enhancement of small-world network organization.
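A short sketch of the graph measures involved is given below. It uses a binarized variant of the weighted analysis described above, and the random matrix stands in for real alpha-band phase-coherence values.

```python
# Hedged sketch: threshold a channel-by-channel coherence matrix, then compute the
# clustering coefficient and the characteristic path length.
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
n_channels = 32
C = rng.random((n_channels, n_channels))
C = (C + C.T) / 2.0                      # symmetric "coherence" matrix
np.fill_diagonal(C, 0.0)

threshold = 0.7                          # one point on the threshold sweep
A = (C >= threshold).astype(int)         # binarized adjacency matrix
G = nx.from_numpy_array(A)

clustering = nx.average_clustering(G)
# The characteristic path length is defined on the largest connected component.
giant = G.subgraph(max(nx.connected_components(G), key=len))
path_length = nx.average_shortest_path_length(giant)
print(f"clustering coefficient C = {clustering:.3f}, path length L = {path_length:.3f}")
```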
New measurements of W-values for protons and alpha particles.
Giesen, U; Beck, J
2014-10-01
The increasing importance of ion beams in cancer therapy and the lack of experimental data for W-values for protons and heavy ions in air require new measurements. A new experimental set-up was developed at PTB, and consistent measurements of W-values in argon, nitrogen and air for protons and alpha particles with energies from 0.7 to 3.5 MeV u⁻¹ at PTB, and for carbon ions between 3.6 and 7.0 MeV u⁻¹ at GSI, were carried out. This publication concentrates on the measurements with protons and alpha particles at PTB. The experimental methods and the determination of corrections for recombination effects, beam-induced background radiation and additional effects are presented.
NASA Astrophysics Data System (ADS)
Leblanc, Thierry; Sica, Robert J.; van Gijsel, Joanna A. E.; Haefele, Alexander; Payen, Guillaume; Liberti, Gianluigi
2016-08-01
A standardized approach for the definition, propagation, and reporting of uncertainty in the temperature lidar data products contributing to the Network for the Detection for Atmospheric Composition Change (NDACC) database is proposed. One important aspect of the proposed approach is the ability to propagate all independent uncertainty components in parallel through the data processing chain. The individual uncertainty components are then combined together at the very last stage of processing to form the temperature combined standard uncertainty. The identified uncertainty sources comprise major components such as signal detection, saturation correction, background noise extraction, temperature tie-on at the top of the profile, and absorption by ozone if working in the visible spectrum, as well as other components such as molecular extinction, the acceleration of gravity, and the molecular mass of air, whose magnitudes depend on the instrument, data processing algorithm, and altitude range of interest. The expression of the individual uncertainty components and their step-by-step propagation through the temperature data processing chain are thoroughly estimated, taking into account the effect of vertical filtering and the merging of multiple channels. All sources of uncertainty except detection noise imply correlated terms in the vertical dimension, which means that covariance terms must be taken into account when vertical filtering is applied and when temperature is integrated from the top of the profile. Quantitatively, the uncertainty budget is presented in a generic form (i.e., as a function of instrument performance and wavelength), so that any NDACC temperature lidar investigator can easily estimate the expected impact of individual uncertainty components in the case of their own instrument. Using this standardized approach, an example of uncertainty budget is provided for the Jet Propulsion Laboratory (JPL) lidar at Mauna Loa Observatory, Hawai'i, which is typical of the NDACC temperature lidars transmitting at 355 nm. The combined temperature uncertainty ranges between 0.1 and 1 K below 60 km, with detection noise, saturation correction, and molecular extinction correction being the three dominant sources of uncertainty. Above 60 km and up to 10 km below the top of the profile, the total uncertainty increases exponentially from 1 to 10 K due to the combined effect of random noise and temperature tie-on. In the top 10 km of the profile, the accuracy of the profile mainly depends on that of the tie-on temperature. All other uncertainty components remain below 0.1 K throughout the entire profile (15-90 km), except the background noise correction uncertainty, which peaks around 0.3-0.5 K. It should be kept in mind that these quantitative estimates may be very different for other lidar instruments, depending on their altitude range and the wavelengths used.
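The combination step described above can be written generically (a standard uncertainty-propagation sketch, not the paper's exact notation): independent components are propagated separately and combined in quadrature at the very end, while vertical filtering within a single component introduces covariance terms.

```latex
\[
  u_T(z) \;=\; \sqrt{\sum_i u_{T,i}^{2}(z)} ,
\]
% and, for a filtered profile $\tilde{T}(z)=\sum_k c_k\,T(z_k)$,
\[
  u_{\tilde T,i}^{2}(z)
  \;=\; \sum_k \sum_{k'} c_k\, c_{k'}\,
        \mathrm{cov}\!\left[T_i(z_k),\,T_i(z_{k'})\right]
  \;=\; \sum_k c_k^{2}\, u_{T,i}^{2}(z_k)
        \;+\; 2\sum_{k<k'} c_k c_{k'}\,
        \mathrm{cov}\!\left[T_i(z_k),\,T_i(z_{k'})\right],
\]
% which reduces to the uncorrelated (detection-noise) case when the covariances vanish.
```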
Contrast-detail curves in chest radiography
NASA Astrophysics Data System (ADS)
Ogden, Kent; Scalzetti, Ernest; Huda, Walter; Saluja, Jasjeet; Lavallee, Robert
2005-04-01
We investigated how size and lesion location affect detection of simulated mass lesions in chest radiography. Simulated lesions were added to the center of 10 cm x 10 cm regions of digital chest radiographs, and used in 4-Alternative Forced-Choice (4-AFC) experiments. We determined the lesion contrast required to achieve a 92% correct detection rate, I(92%). The mass size was manipulated to range from 1 to 10 mm, and we investigated lesion detection in the lung apex, hilar region, and in the sub-diaphragmatic region. In these experiments, I(92%) was determined from randomized repeats at each of seven lesion sizes, with the results plotted as I(92%) versus lesion size. In addition, we investigated the effect of using the same background in the four 4-AFC experiments (twinned) versus random backgrounds from the same anatomical region taken from 20 different radiographs. In all three anatomical regions investigated, the slopes of the contrast-detail curve for the random background experiments were negative for lesion sizes less than 2.5, 3.5, and 5.5 mm in the hilar (slope of -0.26), apex (slope of -0.54), and sub-diaphragmatic (slope of -0.53) regions, respectively. For lesion sizes greater than these, the slopes were 0.34, 0.23, and 0.40 in the hilar, apex, and sub-diaphragmatic regions, respectively. The positive slopes for portions of the contrast-detail curves in chest radiography are a result of the anatomical background, and show that larger lesions require more contrast for visualization.
Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl
2014-01-01
Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated both in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative difference [MARD] in hypoglycemia for the 2-point calibration versus 12.1% MARD for the 1-point calibration). Using the 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) over the full glycemic range and also enhanced hypoglycemia sensitivity. Excluding the CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach showed higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
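A hedged sketch of the two calibration forms being compared is given below; the symbols are generic and the algorithm's actual model, including how the corrective intercept enters, may differ.

```latex
% Two-point calibration from reference pairs $(i_1,g_1)$, $(i_2,g_2)$, with sensor
% current $i$, sensitivity $s$, background current $i_0$, and estimated glucose $\hat g$:
\[
  s=\frac{i_2-i_1}{g_2-g_1},\qquad i_0=i_1-s\,g_1,\qquad
  \hat g=\frac{i-i_0}{s} .
\]
% One-point calibration, assuming a negligible background current ($i_0\approx 0$),
% consistent with the estimates reported above; the corrective intercept (CI) studied
% above would enter as an additional additive term (its exact form is not reproduced here):
\[
  s=\frac{i_1}{g_1},\qquad \hat g=\frac{i}{s}+\mathrm{CI}.
\]
```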
NASA Technical Reports Server (NTRS)
Giroux, Mark L.; Shull, J. Michael
1997-01-01
Recent measurements of Si IV/C IV ratios in the high-redshift Lyα forest (Songaila & Cowie, AJ, 112, 335 (1996a); Savaglio et al., A&A (in press) (1997)) have opened a new window on chemical enrichment and the first generations of stars. However, the derivation of accurate Si/C abundances requires reliable ionization corrections, which are strongly dependent on the spectral shape of the metagalactic ionizing background and on the 'local effects' of hot stars in nearby galaxies. Recent models have assumed power-law quasar ionizing backgrounds plus a decrement at 4 Ryd to account for He II attenuation in intervening clouds. However, we show that realistic ionizing backgrounds based on cosmological radiative transfer models produce more complex ionizing spectra between 1 and 5 Ryd that are critical to interpreting ions of Si and C. We also make a preliminary investigation of the effects of He II ionization front nonoverlap. Because the attenuation and reemission by intervening clouds enhance Si IV relative to C IV, the observed high Si IV/C IV ratios do not require an unrealistic Si overproduction (Si/C greater than or equal to 3 times the solar Si/C ratio). If the ionizing spectrum is dominated by 'local effects' from massive stars, even larger Si IV/C IV ratios are possible. However, unless stellar radiation dominates quasars by more than a factor of 10, we confirm the evidence for some Si overproduction by massive stars; values of Si/C approximately twice the solar ratio fit the measurements better than solar abundances. Ultimately, an adequate interpretation of the ratios of C IV, Si IV, and C II may require hot, collisionally ionized gas in a multiphase medium.
A beam hardening and dispersion correction for x-ray dark-field radiography.
Pelzer, Georg; Anton, Gisela; Horn, Florian; Rieger, Jens; Ritter, André; Wandner, Johannes; Weber, Thomas; Michel, Thilo
2016-06-01
X-ray dark-field imaging promises information on the small-angle scattering properties even of large samples. However, the dark-field image is correlated with the object's attenuation and phase shift if a polychromatic x-ray spectrum is used. A method to remove part of these correlations is proposed. The experimental setup for image acquisition was modeled in a wave-field simulation to quantify the dark-field signals originating solely from a material's attenuation and phase shift. A calibration matrix was simulated for ICRU46 breast tissue. Using the simulated data, a dark-field image of a human mastectomy sample was corrected for the fingerprint of the attenuation and phase images. Comparing the simulated, attenuation-based dark-field values to a phantom measurement, good agreement was found. Applying the proposed method to mammographic dark-field data, a reduction of the dark-field background and anatomical noise was achieved, and the contrast between microcalcifications and their surrounding background was increased. The authors show that the influence of beam hardening and dispersion can be quantified by simulation and, thus, measured image data can be corrected. The simulation allows the corresponding dark-field artifacts to be determined for a wide range of setup parameters, such as tube voltage and filtration. The application of the proposed method to mammographic dark-field data shows an increase in contrast compared with the original image, which might simplify further image-based diagnosis.
Concentrating Solar Power Projects - Colorado Integrated Solar Project |
Technology: Parabolic trough. Status: Currently Non-Operational. Start Year: 2010. Country: United States. Associated plant: Cameo Station's Unit 2 (approximately 2 MWe equivalent).
Concentrating Solar Power Projects - Kimberlina Solar Thermal Power Plant |
Gross capacity: 5.0 MW. Technology: Linear Fresnel reflector. Status: Currently Non-Operational. Start Year: 2008. Manufacturer: Ausra. Receiver manufacturer: Ausra. Receiver type: Non-evacuated. Receiver length: 385 m.
Disconnected Lives: Women with Intellectual Disabilities in Conflict with the Law
ERIC Educational Resources Information Center
Levine, Kathryn Ann; Proulx, Jocelyn; Schwartz, Karen
2018-01-01
Background: Women with intellectual/developmental disabilities in conflict with the law experience childhood trauma, substance abuse and intimate partner violence but continue to have difficulty accessing appropriate therapeutic services, both within correctional settings and upon discharge. The aim of this study is to explore women's service…
76 FR 24855 - Initiation of Antidumping and Countervailing Duty Administrative Reviews; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-03
... Constitution Avenue, NW., Washington, DC 20230, telephone: (202) 482- 4697. Background In the Federal Register... notice in the Federal Register on March 31, 2011, concerning the initiation of administrative reviews of... than 30 calendar days after publication of this Federal Register notice. This was a typographical error...
One Output Function: A Misconception of Students Studying Digital Systems--A Case Study
ERIC Educational Resources Information Center
Trotskovsky, E.; Sabag, N.
2015-01-01
Background: Learning processes are usually characterized by students' misunderstandings and misconceptions. Engineering educators intend to help their students overcome their misconceptions and achieve correct understanding of the concept. This paper describes a misconception in digital systems held by many students who believe that combinational…
Correcting English Learner's Suprasegmental Errors
ERIC Educational Resources Information Center
Yurtbasi, Metin
2017-01-01
The main cause of pronunciation problems faced by EFL learners is their lack of a suprasegmental background. Most of those having oral comprehension and expression difficulties are unaware that their difficulty comes from their negligence of concepts of stress, pitch, juncture and linkers. While remedying stress problems, students should be taught…
Children with Autism Spectrum Disorder: Teaching Conversation Involving Feelings about Events
ERIC Educational Resources Information Center
Conallen, K.; Reed, P.
2017-01-01
Background: Two procedures were developed to teach individuals with Autism Spectrum Disorders labels (tacts) for various private events (emotions): Study 1 attempted to distinguish them from pure tacts and mands (requests); and Study 2 attempted to train initiating a conversation with grammatically correct subject-verb-comment construction.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bena, Iosif; Kraus, Per; Warner, Nicholas P.
We construct the most generic three-charge, three-dipole-charge, BPS black-ring solutions in a Taub-NUT background. These solutions depend on seven charges and six moduli, and interpolate between a four-dimensional black hole and a five-dimensional black ring. They are also instrumental in determining the correct microscopic description of the five-dimensional BPS black rings.
49 CFR 214.337 - On-track safety procedures for lone workers.
Code of Federal Regulations, 2010 CFR
2010-10-01
...-track equipment is not impaired by background noise, lights, precipitation, fog, passing trains, or any... performing routine inspection or minor correction may use individual train detection to establish on-track... worker retains an absolute right to use on-track safety procedures other than individual train detection...
Concentrating Solar Power Projects - Qinghai Gonghe 50 MW CSP Plant | Concentrating Solar Power | NREL
Status Date: September 26, 2016. Capacity: 50.0 MW. Technology: Power tower. Status: Under development. Country: China. City: Gonghe. Region: Qinghai Province.
Concentrating Solar Power Projects - Hami 50 MW CSP Project | Concentrating Solar Power | NREL
Status Date: April 6, 2018. Technology: Power tower. Status: Under construction. Country: China. City: Hami. Region: Xinjiang Autonomous Region.
Concentrating Solar Power Projects - Supcon Solar Project | Concentrating Solar Power | NREL
…the grid in July 2013. The second phase is currently under development. Status Date: September 26. Technology: Power tower. Status: Under construction. Country: China. City: Delingha. Region: Qinghai.
Handbook for Spoken Mathematics: (Larry's Speakeasy).
ERIC Educational Resources Information Center
Chang, Lawrence A.; And Others
This handbook is directed toward those who have to deal with spoken mathematics, yet have insufficient background to know the correct verbal expression for the written symbolic one. It compiles consistent and well-defined ways of uttering mathematical expressions so listeners will receive clear, unambiguous, and well-pronounced representations.…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-25
...-0100; Amdt. Nos. 61-130A] RIN 2120-AJ67 Pilot Certification and Qualification Requirements for Air... to create new certification and qualification requirements for pilots in air carrier operations. This... . SUPPLEMENTARY INFORMATION: Background On July 15, 2013, the FAA published a final rule entitled, ``Pilot...
Anand, S; Mayya, Y S
2015-03-01
The long-lived naturally occurring radon progeny species in the atmosphere, namely ²¹⁰Pb, ²¹⁰Bi and ²¹⁰Po, have been used as important tracers for understanding atmospheric mixing processes and estimating aerosol residence times. Several observations in the past have shown that the activity size distribution of these species peaks at larger particle sizes than that of the short-lived radon progeny species, an effect that has been attributed to coagulation of the background aerosols to which they are attached. To address this issue, a mathematical equation is derived for the activity-size distribution of tracer species by formulating a generalized distribution function for the number of tracer atoms present in coagulating background particles in the presence of radioactive decay and removal. A set of these equations is numerically solved for the progeny chain using the Fuchs coagulation kernel combined with a realistic steady-state aerosol size spectrum that includes nucleation, accumulation and coarse mode components. The important findings are: (i) larger shifts in the modal sizes of ²¹⁰Pb and ²¹⁰Po at higher aerosol concentrations such as those found in certain Asian urban regions, (ii) enrichment of tracer specific activity on particles as compared to that predicted by pure attachment laws, and (iii) a sharp decline of daughter-to-parent activity ratios for decreasing particle sizes. The implication of the results for size-fractionated residence-time estimation techniques is highlighted. A coagulation-corrected graphical approach is presented for estimating the residence times from the size-segregated activity ratios of ²¹⁰Bi and ²¹⁰Po with respect to ²¹⁰Pb. The discrepancy between the residence times predicted by the conventional formula and by the coagulation-corrected approach for specified activity ratios increases at higher atmospheric aerosol number concentrations (>10¹⁰ #/m³) for smaller sizes (<1 μm). The results are further discussed.
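For context, the conventional (coagulation-free) residence-time estimate that the coagulation-corrected approach amends can be sketched as follows, assuming steady state, a first-order removal rate λ_R, and a parent decay constant much smaller than λ_R:

```latex
\[
  R \;=\; \frac{A(^{210}\mathrm{Bi})}{A(^{210}\mathrm{Pb})}
     \;=\; \frac{\lambda_{\mathrm{Bi}}}{\lambda_{\mathrm{Bi}}+\lambda_R},
\qquad\text{so that}\qquad
  \tau_R \;=\; \frac{1}{\lambda_R}
        \;=\; \frac{R}{\lambda_{\mathrm{Bi}}\,(1-R)} ,
\]
% with the analogous expression for the $^{210}$Po/$^{210}$Pb pair; the coagulation-
% corrected approach described above modifies this relation for size-segregated samples.
```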