Sample records for background correction method

  1. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods. However, few of them have applied these methods in the field of LIBS technology, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. Background correction simulation experiments indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting and the model-free method after background correction. All of these background correction methods acquire larger SBR values than that obtained before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquires a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods yield better quantitative results for Cu than those acquired before background correction (the linear correlation coefficient before background correction is 0.9776, whereas the linear correlation coefficients after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu compared with polynomial fitting, Lorentz fitting and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
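
    The abstract outlines the recipe (choose smooth support points, fit a spline through them, subtract) without implementation details. A minimal sketch of that idea is given below; the use of local minima as support points, the smoothing parameter and the SBR definition are illustrative assumptions, not the authors' exact procedure.

    ```python
    # Spline-based estimate of a smooth continuous background under an emission
    # spectrum (illustrative sketch only).
    import numpy as np
    from scipy.interpolate import UnivariateSpline
    from scipy.signal import argrelmin

    def spline_background(wavelength, intensity, smooth=None):
        """Estimate the smooth background through local minima of the spectrum."""
        idx = argrelmin(intensity, order=5)[0]          # candidate background points
        idx = np.unique(np.concatenate(([0], idx, [intensity.size - 1])))
        return UnivariateSpline(wavelength[idx], intensity[idx], s=smooth)(wavelength)

    # Usage on a synthetic spectrum: subtract the background, then compute one
    # possible signal-to-background ratio (definition assumed here).
    rng = np.random.default_rng(0)
    wl = np.linspace(300.0, 600.0, 3000)
    spec = (50.0 + 0.1 * wl
            + 200.0 * np.exp(-0.5 * ((wl - 324.7) / 0.1) ** 2)
            + rng.normal(0.0, 2.0, wl.size))
    bkg = spline_background(wl, spec)
    sbr = (spec - bkg).max() / np.median(bkg)
    ```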

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.

    Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered as a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiations, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with breast tissue equivalent phantom and calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: image without scatter correction, image with scatter correction using pinhole-array interpolation method, and image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of background DE calcification signals using scatter-uncorrected data was reduced by 58% with scatter-corrected data by algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When applying the proposed algorithmic scatter correction to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than pinhole-array interpolation method in scatter correction for DEDM; moreover, this method is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it is validated by a 5-cm-thick phantom with calcifications and homogeneous background. The method should be tested on structured backgrounds to more accurately gauge effectiveness.

  3. Simple automatic strategy for background drift correction in chromatographic data analysis.

    PubMed

    Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin

    2016-06-03

    Chromatographic background drift correction, which influences peak detection and time shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers in this vector, which belong to the chromatographic peaks, and to update the outliers in the baseline until convergence. The optimized baseline vector was finally expanded into the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight data used in a metabolic study of Escherichia coli samples. The proposed method was compared with three classical techniques: morphological weighted penalized least squares, the moving window minimum value strategy and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use. Copyright © 2016 Elsevier B.V. All rights reserved.
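
    The baseline procedure in this abstract (local minima, iterative outlier replacement, linear interpolation back onto the chromatogram) can be sketched as follows; the smoothing window, outlier threshold and iteration cap are assumptions rather than the published settings.

    ```python
    # Background-drift estimate from local minima with iterative outlier
    # replacement (illustrative sketch of the strategy summarised above).
    import numpy as np
    from scipy.signal import argrelmin

    def estimate_drift(signal, order=3, k=2.0, n_iter=100):
        x = np.arange(signal.size, dtype=float)
        idx = argrelmin(signal, order=order)[0]
        idx = np.unique(np.concatenate(([0], idx, [signal.size - 1])))
        base = signal[idx].astype(float)
        for _ in range(n_iter):
            ref = np.convolve(base, np.ones(5) / 5.0, mode="same")  # local trend
            resid = base - ref
            outliers = resid > k * resid.std()     # minima that still sit on peaks
            if not outliers.any():
                break
            base[outliers] = ref[outliers]         # pull them down to the trend
        # Expand the optimised baseline vector to every retention-time point.
        return np.interp(x, x[idx], base)

    # Drift-corrected chromatogram: signal - estimate_drift(signal)
    ```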

  4. Enhanced identification and biological validation of differential gene expression via Illumina whole-genome expression arrays through the use of the model-based background correction methodology

    PubMed Central

    Ding, Liang-Hao; Xie, Yang; Park, Seongmi; Xiao, Guanghua; Story, Michael D.

    2008-01-01

    Despite the tremendous growth of microarray usage in scientific studies, there is a lack of standards for background correction methodologies, especially in single-color microarray platforms. Traditional background subtraction methods often generate negative signals and thus cause large amounts of data loss. Hence, some researchers prefer to avoid background corrections, which typically result in the underestimation of differential expression. Here, by utilizing nonspecific negative control features integrated into Illumina whole genome expression arrays, we have developed a method of model-based background correction for BeadArrays (MBCB). We compared the MBCB with a method adapted from the Affymetrix robust multi-array analysis algorithm and with no background subtraction, using a mouse acute myeloid leukemia (AML) dataset. We demonstrated that differential expression ratios obtained by using the MBCB had the best correlation with quantitative RT–PCR. MBCB also achieved better sensitivity in detecting differentially expressed genes with biological significance. For example, we demonstrated that the differential regulation of Tnfr2, Ikk and NF-kappaB, the death receptor pathway, in the AML samples, could only be detected by using data after MBCB implementation. We conclude that MBCB is a robust background correction method that will lead to more precise determination of gene expression and better biological interpretation of Illumina BeadArray data. PMID:18450815
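
    The MBCB idea rests on the normal-exponential convolution: the observed intensity is an exponentially distributed signal plus normally distributed noise, with the noise parameters taken from the negative-control beads. A simplified sketch of that correction is shown below; the moment-based estimate of the exponential parameter is an assumption, not the exact MBCB estimator.

    ```python
    # Normal-exponential background adjustment using negative-control intensities
    # (simplified sketch; not the exact MBCB estimator).
    import numpy as np
    from scipy.stats import norm

    def normexp_adjust(intensities, negative_controls):
        mu = np.mean(negative_controls)               # background mean
        sigma = np.std(negative_controls, ddof=1)     # background spread
        theta = max(np.mean(intensities) - mu, 1e-6)  # crude signal-mean estimate (assumed)
        a = intensities - mu - sigma**2 / theta
        # Conditional expectation E[signal | observed]; strictly positive, so no
        # intensities are lost to negative values.
        return a + sigma * norm.pdf(a / sigma) / norm.cdf(a / sigma)
    ```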

  5. The beam stop array method to measure object scatter in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook

    2014-03-01

    Scattered radiation is inevitably generated within the object. The distribution of the scattered radiation is influenced by object thickness, field size, object-to-detector distance, and primary energy. One way to measure scatter intensities is to measure the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured by a BSA includes not only the scattered radiation generated within the object (object scatter), but also scatter from external sources. These external sources include the X-ray tube, detector, collimator, x-ray filter, and the BSA itself. Excluding the background scattered radiation can be applied to different scanner geometries by simple parameter adjustments without prior knowledge of the scanned object. In this study, a BSA-based method was used to differentiate scatter generated in the phantom (object scatter) from the external background, and this method was applied to the BSA algorithm to correct the object scatter. In order to confirm the background scattered radiation, we obtained the scatter profiles and scatter fraction (SF) profiles in the directions perpendicular to the chest wall edge (CWE) with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall. This result indicated that the scatter measured by the BSA included background scatter. Moreover, the BSA algorithm with the proposed method could correct the object scatter, because the total radiation profiles after object scatter correction corresponded to the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method for measuring object scatter could be used to remove background scatter, and it can be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective for correcting object scatter.

  6. Comparison of fluorescence rejection methods of baseline correction and shifted excitation Raman difference spectroscopy

    NASA Astrophysics Data System (ADS)

    Cai, Zhijian; Zou, Wenlong; Wu, Jianhong

    2017-10-01

    Raman spectroscopy has been extensively used in biochemical testing, explosive detection, food additive detection and environmental pollutant analysis. However, fluorescence interference poses a serious problem for applications of portable Raman spectrometers. Currently, baseline correction and shifted-excitation Raman difference spectroscopy (SERDS) are the most prevalent fluorescence suppression methods. In this paper, we compared the performance of baseline correction and SERDS methods, both experimentally and by simulation. The comparison demonstrates that baseline correction can produce an acceptable fluorescence-removed Raman spectrum if the original Raman signal has a good signal-to-noise ratio, but it cannot recover small Raman signals from a large noise background. With the SERDS method, Raman signals that are very weak compared to the fluorescence intensity and noise level can still be clearly extracted, and the fluorescence background can be completely rejected; the Raman spectrum recovered by SERDS has a good signal-to-noise ratio. It is concluded that baseline correction is more suitable for large bench-top Raman systems with better signal quality or signal-to-noise ratio, while the SERDS method is more suitable for noisier devices, especially portable Raman spectrometers.
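
    As a toy illustration of the SERDS principle discussed here: the Raman bands follow the small excitation shift while the broad fluorescence does not, so the difference of the two acquisitions cancels the background. The cumulative-sum reconstruction below is only one simple option and is an assumption, not the authors' processing chain.

    ```python
    # SERDS toy example: the difference of two spectra taken with slightly shifted
    # excitation removes the (unshifted) fluorescence background.
    import numpy as np

    def serds(spec_shift_a, spec_shift_b):
        """Both spectra sampled on the same wavenumber axis."""
        diff = spec_shift_a - spec_shift_b       # fluorescence largely cancels
        recon = np.cumsum(diff)                  # crude reconstruction by integration
        return recon - np.median(recon)          # remove the integration offset
    ```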

  7. Application of point-to-point matching algorithms for background correction in on-line liquid chromatography-Fourier transform infrared spectrometry (LC-FTIR).

    PubMed

    Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M

    2010-03-15

    A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
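
    A sketch of the point-to-point matching step described above: for each sample spectrum, the reference spectrum from a previously recorded blank-gradient run that is closest point by point within a chosen spectral window is selected and subtracted. The sum-of-squared-differences metric and the window indices are assumptions.

    ```python
    # Reference-spectrum selection by point-to-point matching (illustrative sketch).
    import numpy as np

    def background_correct(samples, references, window):
        """samples, references: arrays of shape (n_spectra, n_wavenumbers)."""
        lo, hi = window                          # wavenumber index range for matching
        corrected = np.empty_like(samples, dtype=float)
        for i, s in enumerate(samples):
            dist = np.sum((references[:, lo:hi] - s[lo:hi]) ** 2, axis=1)
            corrected[i] = s - references[np.argmin(dist)]
        return corrected
    ```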

  8. A post-reconstruction method to correct cupping artifacts in cone beam breast computed tomography

    PubMed Central

    Altunbas, M. C.; Shaw, C. C.; Chen, L.; Lai, C.; Liu, X.; Han, T.; Wang, T.

    2007-01-01

    In cone beam breast computed tomography (CT), scattered radiation leads to nonuniform biasing of CT numbers known as a cupping artifact. Besides being visual distractions, cupping artifacts appear as background nonuniformities, which impair efficient gray scale windowing and pose a problem in threshold based volume visualization/segmentation. To overcome this problem, we have developed a background nonuniformity correction method specifically designed for cone beam breast CT. With this technique, the cupping artifact is modeled as an additive background signal profile in the reconstructed breast images. Due to the largely circularly symmetric shape of a typical breast, the additive background signal profile was also assumed to be circularly symmetric. The radial variation of the background signal was estimated by measuring the spatial variation of adipose tissue signals in front view breast images. To extract adipose tissue signals in an automated manner, a signal sampling scheme in polar coordinates and a background trend fitting algorithm were implemented. The background fits were compared with a target adipose tissue signal value (constant throughout the breast volume) to obtain an additive correction value for each tissue voxel. To test the accuracy, we applied the technique to cone beam CT images of mastectomy specimens. After correction, the images demonstrated significantly improved signal uniformity in both front and side view slices. The reduction of both intra-slice and inter-slice variations in adipose tissue CT numbers supported our observations. PMID:17822018
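
    The cupping correction is described as an additive, circularly symmetric background derived from adipose-tissue signals. A compact sketch of that idea follows; the polynomial fit in radius, the adipose mask and the target value are assumptions standing in for the paper's polar sampling and trend-fitting scheme.

    ```python
    # Circularly symmetric cupping correction for one reconstructed slice
    # (simplified sketch of the approach summarised above).
    import numpy as np

    def correct_cupping(slice_img, adipose_mask, adipose_target, fit_order=4):
        ny, nx = slice_img.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        r = np.hypot(yy - (ny - 1) / 2.0, xx - (nx - 1) / 2.0)
        # Radial trend of the adipose signal = additive background profile.
        coeff = np.polyfit(r[adipose_mask], slice_img[adipose_mask], fit_order)
        background = np.polyval(coeff, r)
        # Additive correction that flattens adipose voxels to the target value.
        return slice_img + (adipose_target - background)
    ```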

  9. GafChromic EBT film dosimetry with flatbed CCD scanner: a novel background correction method and full dose uncertainty analysis.

    PubMed

    Saur, Sigrun; Frengen, Jomar

    2008-07-01

    Film dosimetry using radiochromic EBT film in combination with a flatbed charge coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film is dependent on the film's orientation with respect to the scanner, correction matrices for both landscape oriented and portrait oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16 x 16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results. For the purpose of dosimetric verification, the calculated dose distribution can be compared with the film-measured dose distribution using a dose constraint of 4% (relative to the measured dose) for doses between 1 and 3 Gy. At lower doses, the dose constraint must be relaxed.

  10. Fourier-space combination of Planck and Herschel images

    NASA Astrophysics Data System (ADS)

    Abreu-Vicente, J.; Stutz, A.; Henning, Th.; Keto, E.; Ballesteros-Paredes, J.; Robitaille, T.

    2017-08-01

    Context. Herschel has revolutionized our ability to measure column densities (N_H) and temperatures (T) of molecular clouds thanks to its far infrared multiwavelength coverage. However, the lack of a well defined background intensity level in the Herschel data limits the accuracy of the N_H and T maps. Aims: We aim to provide a method that corrects the missing Herschel background intensity levels using the Planck model for foreground Galactic thermal dust emission. For the Herschel/PACS data, both the constant offset and the spatial dependence of the missing background must be addressed. For the Herschel/SPIRE data, the constant-offset correction has already been applied to the archival data, so we are primarily concerned with the spatial dependence, which is most important at 250 μm. Methods: We present a Fourier method that combines the publicly available Planck model on large angular scales with the Herschel images on smaller angular scales. Results: We have applied our method to two regions spanning a range of Galactic environments: Perseus and the Galactic plane region around l = 11° (HiGal-11). We post-processed the combined dust continuum emission images to generate column density and temperature maps. We compared these to previously adopted constant-offset corrections. We find significant differences (≳20%) over significant (∼15%) areas of the maps, at low column densities (N_H ≲ 10²² cm⁻²) and relatively high temperatures (T ≳ 20 K). We have also applied our method to synthetic observations of a simulated molecular cloud to validate our method. Conclusions: Our method successfully corrects the Herschel images, including both the constant-offset intensity level and the scale-dependent background variations measured by Planck. Our method improves on previous constant-offset corrections, which did not account for variations in the background emission levels. The image FITS files used in this paper are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/604/A65
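
    The combination itself amounts to feathering in Fourier space: the Planck-based model supplies the low spatial frequencies (including the absolute offset) and the Herschel map the high ones. The toy version below assumes both images are already on the same pixel grid and in the same units; the tanh crossover and its scale are illustrative choices, not the paper's exact filter.

    ```python
    # Toy Fourier-space combination of a low-resolution model (large scales) with
    # a high-resolution image (small scales), assuming a common grid and unit.
    import numpy as np

    def fourier_combine(high_res, low_res_model, crossover_pix=100.0, width_pix=300.0):
        fy = np.fft.fftfreq(high_res.shape[0])[:, None]
        fx = np.fft.fftfreq(high_res.shape[1])[None, :]
        f = np.hypot(fy, fx)                      # spatial frequency in cycles/pixel
        # Weight -> 1 (high-res data) above the crossover, -> 0 (model) below it.
        w = 0.5 * (1.0 + np.tanh((f - 1.0 / crossover_pix) * width_pix))
        combined = w * np.fft.fft2(high_res) + (1.0 - w) * np.fft.fft2(low_res_model)
        return np.fft.ifft2(combined).real
    ```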

  11. Surgical correction of pectus arcuatum

    PubMed Central

    Ershova, Ksenia; Adamyan, Ruben

    2016-01-01

    Background Pectus arcuatum is a rare congenital chest wall deformity, and methods of surgical correction are debated. Methods Surgical correction of pectus arcuatum always includes one or more horizontal sternal osteotomies, resection of deformed rib cartilages and, finally, anterior chest wall stabilization. The study was approved by the institutional ethics committee and informed consent was obtained from every patient. Results In this video we show our modification of pectus arcuatum correction with only a partial sternal osteotomy and subsequent stabilization by vertical parallel titanium plates. Conclusions The reported method is a feasible option for surgical correction of pectus arcuatum. PMID:29078483

  12. Continuous Glucose Monitoring in Subjects with Type 1 Diabetes: Improvement in Accuracy by Correcting for Background Current

    PubMed Central

    Youssef, Joseph El; Engle, Julia M.; Massoud, Ryan G.; Ward, W. Kenneth

    2010-01-01

    Abstract Background A cause of suboptimal accuracy in amperometric glucose sensors is the presence of a background current (current produced in the absence of glucose) that is not accounted for. We hypothesized that a mathematical correction for the estimated background current of a commercially available sensor would lead to greater accuracy compared to a situation in which we assumed the background current to be zero. We also tested whether increasing the frequency of sensor calibration would improve sensor accuracy. Methods This report includes analysis of 20 sensor datasets from seven human subjects with type 1 diabetes. Data were divided into a training set for algorithm development and a validation set on which the algorithm was tested. A range of potential background currents was tested. Results Use of the background current correction of 4 nA led to a substantial improvement in accuracy (improvement of absolute relative difference or absolute difference of 3.5–5.5 units). An increase in calibration frequency led to a modest accuracy improvement, with an optimum at every 4 h. Conclusions Compared to no correction, a correction for the estimated background current of a commercially available glucose sensor led to greater accuracy and better detection of hypoglycemia and hyperglycemia. The accuracy-optimizing scheme presented here can be implemented in real time. PMID:20879968
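
    The arithmetic behind the correction is simple: subtract the estimated background current before applying the calibration, instead of treating the whole current as glucose-driven. The one-point calibration form below is an assumption used for illustration; only the 4 nA figure comes from the abstract.

    ```python
    # Background-current correction for an amperometric glucose sensor (sketch).
    def glucose_estimate(i_sensor_nA, i_cal_nA, glucose_cal_mgdl, i_background_nA=4.0):
        """One-point calibration with the background current removed first."""
        sensitivity = (i_cal_nA - i_background_nA) / glucose_cal_mgdl   # nA per mg/dL
        return (i_sensor_nA - i_background_nA) / sensitivity

    # Calibrated at 100 mg/dL with 24 nA measured: a later reading of 14 nA maps
    # to 50 mg/dL, rather than about 58 mg/dL if the background were assumed zero.
    print(glucose_estimate(14.0, 24.0, 100.0))
    ```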

  13. Phase Error Correction in Time-Averaged 3D Phase Contrast Magnetic Resonance Imaging of the Cerebral Vasculature

    PubMed Central

    MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard

    2016-01-01

    Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600

  14. Doppler distortion correction based on microphone array and matching pursuit algorithm for a wayside train bearing monitoring system

    NASA Astrophysics Data System (ADS)

    Liu, Xingchen; Hu, Zhiyong; He, Qingbo; Zhang, Shangbin; Zhu, Jun

    2017-10-01

    Doppler distortion and background noise can reduce the effectiveness of wayside acoustic train bearing monitoring and fault diagnosis. This paper proposes a method combining a microphone array and the matching pursuit algorithm to overcome these difficulties. First, a dictionary is constructed based on the signal characteristics and mechanism under a far-field assumption. Then, the angle of arrival of the train bearing is acquired by applying matching pursuit to analyze the acoustic array signals. Finally, after the resampling time series is obtained, the Doppler distortion can be corrected, which is convenient for further diagnostic work. Compared with traditional single-microphone Doppler correction methods, the advantages of the presented array method are its robustness to background noise and its minimal need for pre-measured parameters. Simulation and experimental studies show that the proposed method is effective for wayside acoustic bearing fault diagnosis.

  15. A square-wave wavelength modulation system for automatic background correction in carbon furnace atomic emission spectrometry

    NASA Astrophysics Data System (ADS)

    Bezur, L.; Marshall, J.; Ottaway, J. M.

    A square-wave wavelength modulation system, based on a rotating quartz chopper with four quadrants of different thicknesses, has been developed and evaluated as a method for automatic background correction in carbon furnace atomic emission spectrometry. Accurate background correction is achieved for the residual black body radiation (Rayleigh scatter) from the tube wall and Mie scatter from particles generated by a sample matrix and formed by condensation of atoms in the optical path. Intensity modulation caused by overlap at the edges of the quartz plates and by the divergence of the optical beam at the position of the modulation chopper has been investigated and is likely to be small.

  16. Relativistic Corrections to the Sunyaev-Zeldovich Effect for Clusters of Galaxies. III. Polarization Effect

    NASA Astrophysics Data System (ADS)

    Itoh, Naoki; Nozawa, Satoshi; Kohyama, Yasuharu

    2000-04-01

    We extend the formalism of relativistic thermal and kinematic Sunyaev-Zeldovich effects and include the polarization of the cosmic microwave background photons. We consider the situation of a cluster of galaxies moving with a velocity β≡v/c with respect to the cosmic microwave background radiation. In the present formalism, polarization of the scattered cosmic microwave background radiation caused by the proper motion of a cluster of galaxies is naturally derived as a special case of the kinematic Sunyaev-Zeldovich effect. The relativistic corrections are also included in a natural way. Our results are in complete agreement with the recent results of relativistic corrections obtained by Challinor, Ford, & Lasenby with an entirely different method, as well as the nonrelativistic limit obtained by Sunyaev & Zeldovich. The relativistic correction becomes significant in the Wien region.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saur, Sigrun; Frengen, Jomar; Department of Oncology and Radiotherapy, St. Olavs University Hospital, N-7006 Trondheim

    Film dosimetry using radiochromic EBT film in combination with a flatbed charge coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film is dependent on the film's orientation with respect to the scanner, correction matrices for both landscape oriented and portrait oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16 x 16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results. For the purpose of dosimetric verification, the calculated dose distribution can be compared with the film-measured dose distribution using a dose constraint of 4% (relative to the measured dose) for doses between 1 and 3 Gy. At lower doses, the dose constraint must be relaxed.

  18. Background correction in forensic photography. II. Photography of blood under conditions of non-uniform illumination or variable substrate color--practical aspects and limitations.

    PubMed

    Wagner, John H; Miskelly, Gordon M

    2003-05-01

    The combination of photographs taken at wavelengths at and bracketing the peak of a narrow absorbance band can lead to enhanced visualization of the substance causing the narrow absorbance band. This concept can be used to detect putative bloodstains by division of a linear photographic image taken at or near 415 nm by an image obtained by averaging linear photographs taken at or near 395 and 435 nm. Nonlinear images can also be background corrected by substituting subtraction for the division. This paper details experimental applications and limitations of this technique, including wavelength selection of the illuminant and at the camera. Characterization of a digital camera to be used in such a study is also detailed. Detection limits for blood using the three-wavelength correction method under optimum conditions have been determined to be as low as a 1 in 900 dilution, although on strongly patterned substrates blood diluted more than twenty-fold is difficult to detect. Use of only the 435 nm photograph to estimate the background in the 415 nm image led to a twofold improvement in detection limit on unpatterned substrates compared with the three-wavelength method with the particular camera and lighting system used, but it gave poorer background correction on patterned substrates.
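
    A minimal sketch of the three-wavelength correction described here: the 415 nm (Soret band) image is divided by the average of the 395 nm and 435 nm images for linear data, or the average is subtracted for nonlinear data. Array names and the small epsilon are illustrative.

    ```python
    # Three-wavelength background correction for blood visualisation (sketch).
    import numpy as np

    def soret_ratio(im_415, im_395, im_435, eps=1e-6):
        """Linear (not gamma-encoded) grayscale images as float arrays."""
        background = 0.5 * (im_395 + im_435)
        return im_415 / (background + eps)    # < 1 where haemoglobin absorbs at 415 nm

    def soret_difference(im_415, im_395, im_435):
        """Subtraction variant for images that are not linear in exposure."""
        return im_415 - 0.5 * (im_395 + im_435)
    ```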

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Priel, Nadav; Landsman, Hagar; Manfredini, Alessandro

    We propose a safeguard procedure for statistical inference that provides universal protection against mismodeling of the background. The method quantifies the signal-like residuals of the background model and incorporates them into the likelihood function, using information available in a calibration dataset. This prevents possible false discovery claims that may arise through unknown mismodeling, and corrects the bias in limit setting created by an overestimated or underestimated background. We demonstrate how the method removes the bias created by an incomplete background model using three realistic case studies.

  20. Summing coincidence correction for γ-ray measurements using the HPGe detector with a low background shielding system

    NASA Astrophysics Data System (ADS)

    He, L.-C.; Diao, L.-J.; Sun, B.-H.; Zhu, L.-H.; Zhao, J.-W.; Wang, M.; Wang, K.

    2018-02-01

    A Monte Carlo method based on the GEANT4 toolkit has been developed to correct the full-energy peak (FEP) efficiencies of a high purity germanium (HPGe) detector equipped with a low background shielding system, and the correction has been evaluated numerically using summing peaks. It is found that the FEP efficiencies for 60Co, 133Ba and 152Eu can be improved by up to 18% by taking the calculated true summing coincidence factors (TSCFs) into account. Counts in the summing coincidence γ peaks of the 152Eu spectrum can be reproduced using the corrected efficiency curve to within an accuracy of 3%.

  1. The location and recognition of anti-counterfeiting code image with complex background

    NASA Astrophysics Data System (ADS)

    Ni, Jing; Liu, Quan; Lou, Ping; Han, Ping

    2017-07-01

    The order of the cigarette market is a key issue in the tobacco business system. The anti-counterfeiting code, as an effective anti-counterfeiting technology, can identify counterfeit goods and effectively maintain the normal order of the market and consumers' rights and interests. Anti-counterfeiting code images obtained by the tobacco recognizer suffer from complex backgrounds, light interference and other problems. To solve these problems, this paper proposes a locating method based on the Susan operator combined with a sliding window and line scanning. In order to reduce the interference of background and noise, we extract the red component of the image and convert the color image into a grayscale image. For easily confused characters, a correction of the recognition results based on template matching is adopted to improve the recognition rate. With this method, the anti-counterfeiting code can be located and recognized correctly in images with complex backgrounds. The experimental results show the effectiveness and feasibility of the approach.

  2. Minor Distortions with Major Consequences: Correcting Distortions in Imaging Spectrographs

    PubMed Central

    Esmonde-White, Francis W. L.; Esmonde-White, Karen A.; Morris, Michael D.

    2010-01-01

    Projective transformation is a mathematical correction (implemented in software) used in the remote imaging field to produce distortion-free images. We present the application of projective transformation to correct minor alignment and astigmatism distortions that are inherent in dispersive spectrographs. Patterned white-light images and neon emission spectra were used to produce registration points for the transformation. Raman transects collected on microscopy and fiber-optic systems were corrected using established methods and compared with the same transects corrected using the projective transformation. Even minor distortions have a significant effect on reproducibility and apparent fluorescence background complexity. Simulated Raman spectra were used to optimize the projective transformation algorithm. We demonstrate that the projective transformation reduced the apparent fluorescent background complexity and improved reproducibility of measured parameters of Raman spectra. Distortion correction using a projective transformation provides a major advantage in reducing the background fluorescence complexity even in instrumentation where slit-image distortions and camera rotation were minimized using manual or mechanical means. We expect these advantages should be readily applicable to other spectroscopic modalities using dispersive imaging spectrographs. PMID:21211158

  3. Parameter estimation for the exponential-normal convolution model for background correction of Affymetrix GeneChip data.

    PubMed

    McGee, Monnie; Chen, Zhongxue

    2006-01-01

    There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.

  4. A comparative study on preprocessing techniques in diabetic retinopathy retinal images: illumination correction and contrast enhancement.

    PubMed

    Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza

    2015-01-01

    To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method, using the median filter to estimate background illumination, showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, also presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy. The contrast limited adaptive histogram equalization technique has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation.
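
    The "dividing method" singled out here can be sketched in a few lines: estimate the slowly varying illumination with a large median filter and divide it out of the colour component. The kernel size and the rescaling by the mean are assumptions.

    ```python
    # Illumination correction by division with a median-filter background estimate
    # (sketch of the dividing method evaluated above).
    import numpy as np
    from scipy.ndimage import median_filter

    def divide_illumination(channel, kernel=101):
        """channel: one colour component of a fundus image as a float array."""
        background = median_filter(channel, size=kernel)   # slowly varying illumination
        corrected = channel / np.maximum(background, 1e-6)
        return corrected * channel.mean()                  # restore the intensity scale
    ```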

  5. Evaluation of Shifted Excitation Raman Difference Spectroscopy and Comparison to Computational Background Correction Methods Applied to Biochemical Raman Spectra.

    PubMed

    Cordero, Eliana; Korinth, Florian; Stiebing, Clara; Krafft, Christoph; Schie, Iwan W; Popp, Jürgen

    2017-07-27

    Raman spectroscopy provides label-free biochemical information from tissue samples without complicated sample preparation. The clinical capability of Raman spectroscopy has been demonstrated in a wide range of in vitro and in vivo applications. However, a challenge for in vivo applications is the simultaneous excitation of auto-fluorescence in the majority of tissues of interest, such as liver, bladder, brain, and others. Raman bands are then superimposed on a fluorescence background, which can be several orders of magnitude larger than the Raman signal. Several approaches are available to eliminate the disturbing fluorescence background. Among instrumental methods, shifted excitation Raman difference spectroscopy (SERDS) has been widely applied and studied. Similarly, computational techniques, for instance extended multiplicative scatter correction (EMSC), have also been employed to remove undesired background contributions. Here, we present a theoretical and experimental evaluation and comparison of fluorescence background removal approaches for Raman spectra based on SERDS and EMSC.

  6. Evaluation of Shifted Excitation Raman Difference Spectroscopy and Comparison to Computational Background Correction Methods Applied to Biochemical Raman Spectra

    PubMed Central

    Cordero, Eliana; Korinth, Florian; Stiebing, Clara; Krafft, Christoph; Schie, Iwan W.; Popp, Jürgen

    2017-01-01

    Raman spectroscopy provides label-free biochemical information from tissue samples without complicated sample preparation. The clinical capability of Raman spectroscopy has been demonstrated in a wide range of in vitro and in vivo applications. However, a challenge for in vivo applications is the simultaneous excitation of auto-fluorescence in the majority of tissues of interest, such as liver, bladder, brain, and others. Raman bands are then superimposed on a fluorescence background, which can be several orders of magnitude larger than the Raman signal. Several approaches are available to eliminate the disturbing fluorescence background. Among instrumental methods, shifted excitation Raman difference spectroscopy (SERDS) has been widely applied and studied. Similarly, computational techniques, for instance extended multiplicative scatter correction (EMSC), have also been employed to remove undesired background contributions. Here, we present a theoretical and experimental evaluation and comparison of fluorescence background removal approaches for Raman spectra based on SERDS and EMSC. PMID:28749450

  7. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    PubMed

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data and background corrections derived from this model may lead to incorrect estimates. We propose a more flexible modeling based on a gamma distributed signal and a normally distributed background noise and develop the associated background correction, implemented in the R-package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling paves the way for future investigations, in particular to examine the characteristics of pre-processing strategies.

  8. Determination of serum aluminum by electrothermal atomic absorption spectrometry: A comparison between Zeeman and continuum background correction systems

    NASA Astrophysics Data System (ADS)

    Kruger, Pamela C.; Parsons, Patrick J.

    2007-03-01

    Excessive exposure to aluminum (Al) can produce serious health consequences in people with impaired renal function, especially those undergoing hemodialysis. Al can accumulate in the brain and in bone, causing dialysis-related encephalopathy and renal osteodystrophy. Thus, dialysis patients are routinely monitored for Al overload, through measurement of their serum Al. Electrothermal atomic absorption spectrometry (ETAAS) is widely used for serum Al determination. Here, we assess the analytical performances of three ETAAS instruments, equipped with different background correction systems and heating arrangements, for the determination of serum Al. Specifically, we compare (1) a Perkin Elmer (PE) Model 3110 AAS, equipped with a longitudinally (end) heated graphite atomizer (HGA) and continuum-source (deuterium) background correction, with (2) a PE Model 4100ZL AAS equipped with a transversely heated graphite atomizer (THGA) and longitudinal Zeeman background correction, and (3) a PE Model Z5100 AAS equipped with a HGA and transverse Zeeman background correction. We were able to transfer the method for serum Al previously established for the Z5100 and 4100ZL instruments to the 3110, with only minor modifications. As with the Zeeman instruments, matrix-matched calibration was not required for the 3110 and, thus, aqueous calibration standards were used. However, the 309.3-nm line was chosen for analysis on the 3110 due to failure of the continuum background correction system at the 396.2-nm line. A small, seemingly insignificant overcorrection error was observed in the background channel on the 3110 instrument at the 309.3-nm line. On the 4100ZL, signal oscillation was observed in the atomization profile. The sensitivity, or characteristic mass (m₀), for Al at the 309.3-nm line on the 3110 AAS was found to be 12.1 ± 0.6 pg, compared to 16.1 ± 0.7 pg for the Z5100, and 23.3 ± 1.3 pg for the 4100ZL at the 396.2-nm line. However, the instrumental detection limits (3 SD) for Al were very similar: 3.0, 3.2, and 4.1 μg L⁻¹ for the Z5100, 4100ZL, and 3110, respectively. Serum Al method detection limits (3 SD) were 9.8, 6.9, and 7.3 μg L⁻¹, respectively. Accuracy was assessed using archived serum (and plasma) reference materials from various external quality assessment schemes (EQAS). Values found with all three instruments were within the acceptable EQAS ranges. The data indicate that relatively modest ETAAS instrumentation equipped with continuum background correction is adequate for routine serum Al monitoring.

  9. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs as well as (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast to noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since the factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method for reconstructing accurate quantitative iodine-131 SPECT images.
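
    For reference, the triple energy window (TEW) estimate mentioned in this comparison is usually written as a trapezoidal interpolation between two narrow side windows; a plain-Python version is shown below. Window widths are in keV, and the default values are examples rather than those of this study.

    ```python
    # Triple-energy-window (TEW) scatter estimate for one projection bin (sketch).
    def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
        """Estimated scatter counts inside the photopeak window."""
        return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

    def tew_primary(c_peak, c_lower, c_upper, w_lower=6.0, w_upper=6.0, w_peak=72.8):
        """Photopeak counts minus the TEW scatter estimate, floored at zero."""
        return max(c_peak - tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak), 0.0)
    ```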

  10. A Comparative Study on Preprocessing Techniques in Diabetic Retinopathy Retinal Images: Illumination Correction and Contrast Enhancement

    PubMed Central

    Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza

    2015-01-01

    To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method, using the median filter to estimate background illumination, showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, also presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy. The contrast limited adaptive histogram equalization technique has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation. PMID:25709940

  11. PET attenuation correction for rigid MR Tx/Rx coils from 176Lu background activity

    NASA Astrophysics Data System (ADS)

    Lerche, Christoph W.; Kaltsas, Theodoris; Caldeira, Liliana; Scheins, Jürgen; Rota Kops, Elena; Tellmann, Lutz; Pietrzyk, Uwe; Herzog, Hans; Shah, N. Jon

    2018-02-01

    One challenge for PET-MR hybrid imaging is the correction for attenuation of the 511 keV annihilation radiation by the required RF transmit and/or RF receive coils. Although there are strategies for building PET-transparent Tx/Rx coils, such optimised coils still cause significant attenuation of the annihilation radiation, leading to artefacts and biases in the reconstructed activity concentrations. We present a straightforward method to measure the attenuation of Tx/Rx coils in simultaneous MR-PET imaging based on the natural 176Lu background contained in the scintillator of the PET detector, without the requirement of an external CT scanner or a PET scanner with transmission sources. The method was evaluated on a prototype 3T MR-BrainPET produced by Siemens Healthcare GmbH, both with phantom studies and with true emission images from patient/volunteer examinations. Furthermore, the count rate stability of the PET scanner and the x-ray properties of the Tx/Rx head coil were investigated. Even without energy extrapolation from the two dominant γ energies of 176Lu to 511 keV, the presented method for attenuation correction, based on the measurement of 176Lu background attenuation, shows slightly better performance than the coil attenuation correction currently used. The coil attenuation correction currently used is based on an external transmission scan with rotating 68Ge sources acquired on a Siemens ECAT HR+ PET scanner. However, the main advantage of the presented approach is its straightforwardness and ready availability without the need for additional accessories.
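
    The underlying measurement is transmission-like: the 176Lu background acquired without the coil acts as a blank scan, the acquisition with the coil as a transmission scan, and their ratio gives attenuation correction factors per line of response. The sketch below assumes both sinograms are already normalised to the same acquisition time; smoothing and the extrapolation from the 176Lu γ energies to 511 keV are omitted.

    ```python
    # Coil attenuation correction factors from 176Lu background sinograms (sketch).
    import numpy as np

    def coil_attenuation_factors(blank_sinogram, coil_sinogram, min_counts=10.0):
        """Multiplicative correction factors per sinogram bin (>= 1 where the coil attenuates)."""
        blank = np.maximum(blank_sinogram, min_counts)   # guard against sparse bins
        coil = np.maximum(coil_sinogram, min_counts)
        return blank / coil
    ```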

  12. Environmental corrections of a dual-induction logging while drilling tool in vertical wells

    NASA Astrophysics Data System (ADS)

    Kang, Zhengming; Ke, Shizhen; Jiang, Ming; Yin, Chengfang; Li, Anzong; Li, Junjian

    2018-04-01

    With the development of logging-while-drilling (LWD) technology, dual-induction LWD logging is not only widely applied in deviated and horizontal wells but is also commonly used in vertical wells. Accordingly, it is necessary to simulate the response of LWD tools in vertical wells for logging interpretation. In this paper, the investigation characteristics, the effects of the tool structure, the skin effect and the drilling environment of a dual-induction LWD tool are simulated by the three-dimensional (3D) finite element method (FEM). In order to closely simulate the actual situation, the real structure of the tool is taken into account. The results demonstrate that the influence of the background value of the tool structure can be eliminated: after the tool-structure background is deducted, the values agree quantitatively with the analytical solution in homogeneous formations. The effect of measurement frequency can be effectively eliminated by a skin-effect correction chart. In addition, the measurement environment (borehole size, mud resistivity, shoulder beds, layer thickness and invasion) affects the measured resistivity. To eliminate these effects, borehole correction charts, shoulder bed correction charts and tornado charts are computed based on the real tool structure. Based on these correction charts, well logging data can be corrected automatically by a suitable interpolation method, which is convenient and fast. Verified with actual logging data from vertical wells, this method can obtain the true resistivity of the formation.

  13. Chemometric strategy for automatic chromatographic peak detection and background drift correction in chromatographic data.

    PubMed

    Yu, Yong-Jie; Xia, Qiao-Ling; Wang, Sheng; Wang, Bing; Xie, Fu-Wei; Zhang, Xiao-Bing; Ma, Yun-Ming; Wu, Hai-Long

    2014-09-12

    Peak detection and background drift correction (BDC) are the key stages in using chemometric methods to analyze chromatographic fingerprints of complex samples. This study developed a novel chemometric strategy for simultaneous automatic chromatographic peak detection and BDC. A robust statistical method was used to estimate the instrumental noise level and was coupled with the first-order derivative of the chromatographic signal to automatically extract chromatographic peaks from the data. A local curve-fitting strategy was then employed for BDC. Simulated and real liquid chromatographic data with various kinds of background drift and degrees of peak overlap were used to verify the performance of the proposed strategy. The underlying chromatographic peaks can be automatically detected and reasonably integrated by this strategy. Meanwhile, background-drift-corrected chromatograms can be obtained precisely. The proposed method was used to analyze a complex gas chromatography dataset that monitored quality changes in plant extracts during a storage procedure. Copyright © 2014 Elsevier B.V. All rights reserved.
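
    A minimal sketch of the same two ideas, derivative-based peak flagging with a robust (MAD) noise estimate followed by a local polynomial baseline fit, written here as a simplified stand-in for the published strategy; the threshold, padding and synthetic chromatogram are illustrative assumptions:

      import numpy as np

      def detect_peaks_and_baseline(y, k=5.0, poly_order=2, pad=10):
          """Toy version: flag peak regions where the first derivative exceeds a
          robust (MAD-based) noise threshold, then fit a polynomial baseline
          through the non-peak points and subtract it."""
          d = np.diff(y)
          noise = 1.4826 * np.median(np.abs(d - np.median(d)))  # robust sigma of derivative
          in_peak = np.zeros_like(y, dtype=bool)
          for i in np.where(np.abs(d) > k * noise)[0]:
              in_peak[max(0, i - pad): i + pad] = True          # dilate flagged points
          x = np.arange(y.size)
          coeffs = np.polyfit(x[~in_peak], y[~in_peak], poly_order)
          baseline = np.polyval(coeffs, x)
          return in_peak, y - baseline

      # Simulated chromatogram: two Gaussian peaks on a drifting background.
      x = np.arange(1000, dtype=float)
      y = (0.002 * x + 5 * np.exp(-0.5 * ((x - 300) / 8) ** 2)
           + 3 * np.exp(-0.5 * ((x - 650) / 12) ** 2)
           + np.random.default_rng(1).normal(0, 0.05, x.size))
      peaks, corrected = detect_peaks_and_baseline(y)
      print(peaks.sum(), corrected.max())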

  14. Revised Radiometric Calibration Technique for LANDSAT-4 Thematic Mapper Data by the Canada Centre for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.

    1984-01-01

    A technique for the radiometric correction of LANDSAT-4 Thematic Mapper data was proposed by the Canada Centre for Remote Sensing. Subsequent detailed observations of raw image data, raw radiometric calibration data and background measurements extracted from the raw data stream on High Density Tape highlighted major shortcomings in the proposed method which, if left uncorrected, can cause severe radiometric striping in the output product. Results are presented which correlate measurements of the DC background with variations in both image data background and calibration samples. The effect on both raw data and on data corrected using the earlier proposed technique is explained, and the correction required for these factors as a function of individual scan line number for each detector is described. It is shown how the revised technique can be incorporated into an operational environment.

  15. Background Registration-Based Adaptive Noise Filtering of LWIR/MWIR Imaging Sensors for UAV Applications

    PubMed Central

    Kim, Byeong Hak; Kim, Min Young; Chae, You Seong

    2017-01-01

    Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC. PMID:29280970

  16. Background Registration-Based Adaptive Noise Filtering of LWIR/MWIR Imaging Sensors for UAV Applications.

    PubMed

    Kim, Byeong Hak; Kim, Min Young; Chae, You Seong

    2017-12-27

    Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC.

  17. Implication of the first decision on visual information-sampling in the spatial frequency domain in pulmonary nodule recognition

    NASA Astrophysics Data System (ADS)

    Pietrzyk, Mariusz W.; Manning, David; Donovan, Tim; Dix, Alan

    2010-02-01

    Aim: To investigate how the image-based properties of background locations in dwelled regions where the first overt decision was made affect visual sampling strategy and pulmonary nodule recognition. Background: Recent studies in mammography show that the first overt decision (TP or FP) has an influence on further image reading, including the correctness of the following decisions. Furthermore, a correlation between the spatial frequency properties of the local background at the sites of subsequent decisions and the correctness of the first decision has been reported. Methods: Subjects with different levels of radiological experience were eye tracked during the detection of pulmonary nodules in PA chest radiographs. The number of outcomes and the overall quality of performance are analysed in terms of the cases where correct or incorrect decisions were made. JAFROC methodology is applied. The spatial frequency properties of selected local backgrounds related to certain decisions were studied. ANOVA was used to compare the logarithmic values of the energy carried by non-redundant stationary wavelet packet coefficients. Results: A strong correlation was found between the number of TP first decisions and the JAFROC score (r = 0.74). The number of FP first decisions was negatively correlated with JAFROC (r = -0.75). Moreover, the differential spatial frequency profiles depend on the correctness of the first decision.

  18. Practitioner Review: Use of Antiepileptic Drugs in Children

    ERIC Educational Resources Information Center

    Guerrini, Renzo; Parmeggiani, Lucio

    2006-01-01

    Background: The aim in treating epilepsy is to minimise or control seizures with full respect of quality-of-life issues, especially of cognitive functions. Optimal treatment first demands a correct recognition of the major type of seizures, followed by a correct diagnosis of the type of epilepsy or of the specific syndrome. Methods: Review of data…

  19. [Raman spectroscopy fluorescence background correction and its application in clustering analysis of medicines].

    PubMed

    Chen, Shan; Li, Xiao-ning; Liang, Yi-zeng; Zhang, Zhi-min; Liu, Zhao-xia; Zhang, Qi-ming; Ding, Li-xia; Ye, Fei

    2010-08-01

    During Raman spectroscopy analysis, organic molecules and contaminants can obscure or swamp the Raman signals. The present study starts from Raman spectra of prednisone acetate tablets and glibenclamide tablets, acquired with a BWTek i-Raman spectrometer. The background is corrected with the R package baselineWavelet. Then principal component analysis and random forests are used to perform clustering analysis. By analyzing the Raman spectra of the two medicines, the accuracy and validity of this background-correction algorithm are checked, and the influence of the fluorescence background on Raman spectral clustering is discussed. It is concluded that correcting the fluorescence background is important for further analysis, and an effective background correction solution is provided for clustering and other analyses.
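
    For illustration only, the sketch below substitutes a simple iterative polynomial baseline for the baselineWavelet correction used in the paper and then projects the corrected spectra with PCA (requires scikit-learn); the spectra, polynomial order and iteration count are hypothetical:

      import numpy as np
      from sklearn.decomposition import PCA

      def iterative_poly_baseline(y, order=5, n_iter=50):
          """Simple iterative polynomial baseline (a stand-in for baselineWavelet):
          repeatedly fit a polynomial and clip the spectrum to the fit so that
          peaks are progressively excluded from the baseline estimate."""
          x = np.arange(y.size)
          work = y.copy()
          for _ in range(n_iter):
              fit = np.polyval(np.polyfit(x, work, order), x)
              work = np.minimum(work, fit)
          return fit

      # Hypothetical spectra matrix: rows = samples, columns = Raman shifts,
      # with a broad fluorescence slope added as "background".
      rng = np.random.default_rng(2)
      spectra = rng.normal(0, 0.01, (20, 800)) + np.linspace(0, 1, 800)
      corrected = np.array([s - iterative_poly_baseline(s) for s in spectra])
      scores = PCA(n_components=2).fit_transform(corrected)
      print(scores.shape)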

  20. Vertebral derotation in adolescent idiopathic scoliosis causes hypokyphosis of the thoracic spine

    PubMed Central

    2012-01-01

    Background The purpose of this study was to test the hypothesis that direct vertebral derotation by pedicle screws (PS) causes hypokyphosis of the thoracic spine in adolescent idiopathic scoliosis (AIS) patients, using computer simulation. Methods Twenty AIS patients with Lenke type 1 or 2 who underwent posterior correction surgeries using PS were included in this study. Simulated corrections of each patient’s scoliosis, as determined by the preoperative CT scan data, were performed on segmented 3D models of the whole spine. Two types of simulated extreme correction were performed: 1) complete coronal correction only (C method) and 2) complete coronal correction with complete derotation of vertebral bodies (C + D method). The kyphosis angle (T5-T12) and vertebral rotation angle at the apex were measured before and after the simulated corrections. Results The mean kyphosis angle after the C + D method was significantly smaller than that after the C method (2.7 ± 10.0° vs. 15.0 ± 7.1°, p < 0.01). The mean preoperative apical rotation angle of 15.2 ± 5.5° was completely corrected after the C + D method (0°) and was unchanged after the C method (17.6 ± 4.2°). Conclusions In the 3D simulation study, kyphosis was reduced after complete correction of the coronal and rotational deformity, but it was maintained after the coronal-only correction. These results proved the hypothesis that the vertebral derotation obtained by PS causes hypokyphosis of the thoracic spine. PMID:22691717

  1. Novel Principles and Techniques to Create a Natural Design in Female Hairline Correction Surgery

    PubMed Central

    2015-01-01

    Abstract Background: Female hairline correction surgery is becoming increasingly popular. However, no guidelines or methods of female hairline design have been introduced to date. Methods: The purpose of this study was to create an initial framework based on the novel principles of female hairline design and then use artistic ability and experience to fine tune this framework. An understanding of the concept of 5 areas (frontal area, frontotemporal recess area, temporal peak, infratemple area, and sideburns) and 5 points (C, A, B, T, and S) is required for female hairline correction surgery (the 5A5P principle). The general concepts of female hairline correction surgery and natural design methods are, herein, explained with a focus on the correlations between these 5 areas and 5 points. Results: A natural and aesthetic female hairline can be created with application of the above-mentioned concepts. Conclusion: The 5A5P principle of forming the female hairline is very useful in female hairline correction surgery. PMID:26894014

  2. Kepler Planet Detection Metrics: Automatic Detection of Background Objects Using the Centroid Robovetter

    NASA Technical Reports Server (NTRS)

    Mullally, Fergal

    2017-01-01

    We present an automated method of identifying background eclipsing binaries masquerading as planet candidates in the Kepler planet candidate catalogs. We codify the manual vetting process for Kepler Objects of Interest (KOIs) described in Bryson et al. (2013) with a series of measurements and tests that can be performed algorithmically. We compare our automated results with a sample of manually vetted KOIs from the catalog of Burke et al. (2014) and find excellent agreement. We test the performance on a set of simulated transits and find our algorithm correctly identifies simulated false positives approximately 50% of the time, and correctly identifies 99% of simulated planet candidates.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Elsayed

    Purpose: To characterize and correct for radiation-induced background (RIB) observed in the signals from a class of scanning water tanks. Methods: A method was developed to isolate the RIB through detector measurements in the background-free linac console area. Variation of the RIB against a large number of parameters was characterized, and its impact on basic clinical data for photon and electron beams was quantified. Different methods to minimize and/or correct for the RIB were proposed and evaluated. Results: The RIB is due to the presence of the electrometer and connection box in a low background radiation field (by design). The absolute RIB current with a biased detector is up to 2 pA, independent of the detector size, which is 0.6% and 1.5% of the central axis reference signal for a standard and a mini scanning chamber, respectively. The RIB monotonically increases with field size, is three times smaller for detectors that do not require a bias (e.g., diodes), is up to 80% larger for positive (versus negative) polarity, decreases with increasing photon energy, exhibits a single curve versus dose rate at the electrometer location, and is negligible for electron beams. Data after the proposed field-size correction method agree with point measurements from an independent system to within a few tenths of a percent for output factor, head scatter, depth dose at depth, and out-of-field profile dose. Manufacturer recommendations for electrometer placement are insufficient and sometimes incorrect. Conclusions: RIB in scanning water tanks can have a non-negligible effect on dosimetric data.

  4. Improving Precision, Maintaining Accuracy, and Reducing Acquisition Time for Trace Elements in EPMA

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Singer, J.; Armstrong, J. T.

    2016-12-01

    Trace element precision in electron probe microanalysis (EPMA) is limited by intrinsic random variation in the x-ray continuum. Traditionally we characterize background intensity by measuring on either side of the emission line and interpolating the intensity underneath the peak to obtain the net intensity. Alternatively, we can measure the background intensity at the on-peak spectrometer position using a number of standard materials that do not contain the element of interest. This so-called mean atomic number (MAN) background calibration (Donovan et al., 2016) uses a set of standard measurements, covering an appropriate range of average atomic number, to iteratively estimate the continuum intensity for the unknown composition (and hence average atomic number). We will demonstrate that, at least for materials with a relatively simple matrix such as SiO2, TiO2, ZrSiO4, etc., where one may obtain a matrix-matched standard for use in the so-called "blank correction", we can obtain trace element accuracy comparable to traditional off-peak methods, and with improved precision, in about half the time.
    Reference: Donovan, Singer and Armstrong, "A New EPMA Method for Fast Trace Element Analysis in Simple Matrices", American Mineralogist, v101, p1839-1853, 2016.
    Figure 1 (caption): Uranium concentration line profiles from quantitative x-ray maps (20 keV, 100 nA, 5 µm beam size and 4000 ms per pixel) for both off-peak and MAN background methods without (a) and with (b) the blank correction applied; precision is significantly improved compared with traditional off-peak measurements, while the blank correction provides a small but discernible improvement in accuracy.
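
    A toy version of the MAN idea (not the authors' calibration), assuming a handful of analyte-free standards with known mean atomic numbers and measured on-peak continuum intensities; all numbers are placeholders:

      import numpy as np

      # Hypothetical calibration data: mean atomic number (Zbar) of analyte-free
      # standards and their measured on-peak continuum intensities (cps/nA).
      zbar_std = np.array([10.8, 12.1, 14.0, 16.6, 19.3, 21.5])
      continuum_std = np.array([1.9, 2.3, 2.9, 3.8, 4.8, 5.6])

      # Fit a low-order polynomial to model continuum intensity vs. Zbar.
      man_fit = np.polyfit(zbar_std, continuum_std, 2)

      def man_background(zbar_unknown):
          """Predicted continuum (background) intensity at the on-peak position."""
          return np.polyval(man_fit, zbar_unknown)

      # Net intensity for the unknown = measured on-peak minus MAN-predicted background.
      measured_peak = 3.1    # cps/nA, hypothetical
      zbar_unknown = 14.9    # in practice estimated iteratively from the composition
      print(measured_peak - man_background(zbar_unknown))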

  5. Accurate phase extraction algorithm based on Gram–Schmidt orthonormalization and least square ellipse fitting method

    NASA Astrophysics Data System (ADS)

    Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong

    2018-06-01

    An accurate algorithm combining Gram-Schmidt orthonormalization and least-squares ellipse fitting is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by subtraction for three interferograms or by high-pass filtering for two interferograms. By performing Gram-Schmidt orthonormalization on the pre-processed interferograms, the phase-shift error is corrected and a general ellipse form is derived. Then the background intensity error and the residual correction error can be compensated by the least-squares ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm can cope with two or three interferograms affected by environmental disturbance, low fringe number or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
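
    A minimal sketch of the Gram-Schmidt step for two background-suppressed interferograms (the ellipse-fitting refinement described in the abstract is omitted); the synthetic fringe patterns and the function name are assumptions:

      import numpy as np

      def gs_phase_two_frames(i1, i2):
          """Gram-Schmidt phase demodulation of two interferograms whose DC term has
          already been suppressed (e.g. by high-pass filtering). Returns wrapped phase."""
          u1 = i1 - i1.mean()
          u2 = i2 - i2.mean()
          u1n = u1 / np.linalg.norm(u1)                        # first orthonormal "basis" image
          u2o = u2 - np.vdot(u1n.ravel(), u2.ravel()) * u1n    # remove the component along u1
          u2n = u2o / np.linalg.norm(u2o)
          # (u1n, u2n) behave like (cos(phi), sin(phi)) up to sign/scale, so the
          # wrapped phase follows from the four-quadrant arctangent.
          return np.arctan2(u2n, u1n)

      # Synthetic test: two fringe patterns with an unknown phase shift (~1.2 rad).
      x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
      phi = 6 * np.pi * (x ** 2 + y ** 2)
      frame1 = 1.0 + 0.8 * np.cos(phi)
      frame2 = 1.0 + 0.8 * np.cos(phi + 1.2)
      print(gs_phase_two_frames(frame1, frame2).shape)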

  6. 40 CFR 1065.650 - Emission calculations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... following sequence of preliminary calculations on recorded concentrations: (i) Correct all THC and CH4.... (iii) Calculate all THC and NMHC concentrations, including dilution air background concentrations, as... NMHC to background corrected mass of THC. If the background corrected mass of NMHC is greater than 0.98...

  7. Lutetium oxyorthosilicate (LSO) intrinsic activity correction and minimal detectable target activity study for SPECT imaging with a LSO-based animal PET scanner

    NASA Astrophysics Data System (ADS)

    Yao, Rutao; Ma, Tianyu; Shao, Yiping

    2008-08-01

    This work is part of a feasibility study to develop SPECT imaging capability on a lutetium oxyorthosilicate (LSO) based animal PET system. The SPECT acquisition was enabled by inserting a collimator assembly inside the detector ring and acquiring data in singles mode. The same LSO detectors were used for both PET and SPECT imaging. The intrinsic radioactivity of 176Lu in the LSO crystals, however, contaminates the SPECT data, and can generate image artifacts and introduce quantification error. The objectives of this study were to evaluate the effectiveness of an LSO background subtraction method, and to estimate the minimal detectable target activity (MDTA) of the imaged object for SPECT imaging. For LSO background correction, the LSO contribution in an image study was estimated based on a pre-measured long LSO background scan and subtracted prior to the image reconstruction. The MDTA was estimated in two ways. The empirical MDTA (eMDTA) was estimated by screening the tomographic images at different activity levels. The calculated MDTA (cMDTA) was estimated using a formula based on applying a modified Currie equation to an average projection dataset. Two simulated and two experimental phantoms with different object activity distributions and levels were used in this study. The results showed that LSO background adds concentric ring artifacts to the reconstructed image, and the simple subtraction method can effectively remove these artifacts; the effect of the correction was more visible when the object activity level was near or above the eMDTA. For the four phantoms studied, the cMDTA was consistently about five times the corresponding eMDTA. In summary, we implemented a simple LSO background subtraction method and demonstrated its effectiveness. The projection-based calculation formula yielded MDTA results that closely correlate with those obtained empirically and may have predictive value for imaging applications.
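
    As a rough illustration of a projection-based detectability estimate (the paper's modified Currie formula is not reproduced here), the sketch below applies the standard Currie detection limit to background counts and converts it to activity with a hypothetical sensitivity and scan time:

      import numpy as np

      def currie_minimum_detectable_counts(background_counts):
          """Standard Currie detection limit for a paired measurement:
          L_D = 2.71 + 4.65 * sqrt(B), in counts."""
          return 2.71 + 4.65 * np.sqrt(background_counts)

      def minimal_detectable_activity(background_counts, sensitivity_cps_per_bq, t_acq_s):
          """Convert the count-domain limit into activity (Bq), assuming a known
          system sensitivity and acquisition time (both hypothetical here)."""
          ld_counts = currie_minimum_detectable_counts(background_counts)
          return ld_counts / (sensitivity_cps_per_bq * t_acq_s)

      # Example: 2.0e5 LSO-background counts in the averaged projection, 600 s scan,
      # 0.002 cps/Bq singles-mode sensitivity (all numbers illustrative).
      print(minimal_detectable_activity(2.0e5, 0.002, 600.0))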

  8. An empirical method to correct for temperature-dependent variations in the overlap function of CHM15k ceilometers

    NASA Astrophysics Data System (ADS)

    Hervo, Maxime; Poltera, Yann; Haefele, Alexander

    2016-07-01

    Imperfections in a lidar's overlap function lead to artefacts in the background, range and overlap-corrected lidar signals. These artefacts can erroneously be interpreted as an aerosol gradient or, in extreme cases, as a cloud base leading to false cloud detection. A correct specification of the overlap function is hence crucial in the use of automatic elastic lidars (ceilometers) for the detection of the planetary boundary layer or of low cloud. In this study, an algorithm is presented to correct such artefacts. It is based on the assumption of a homogeneous boundary layer and a correct specification of the overlap function down to a minimum range, which must be situated within the boundary layer. The strength of the algorithm lies in a sophisticated quality-check scheme which allows the reliable identification of favourable atmospheric conditions. The algorithm was applied to 2 years of data from a CHM15k ceilometer from the company Lufft. Backscatter signals corrected for background, range and overlap were compared using the overlap function provided by the manufacturer and the one corrected with the presented algorithm. Differences between corrected and uncorrected signals reached up to 45 % in the first 300 m above ground. The amplitude of the correction turned out to be temperature dependent and was larger for higher temperatures. A linear model of the correction as a function of the instrument's internal temperature was derived from the experimental data. Case studies and a statistical analysis of the strongest gradient derived from corrected signals reveal that the temperature model is capable of a high-quality correction of overlap artefacts, in particular those due to diurnal variations. The presented correction method has the potential to significantly improve the detection of the boundary layer with gradient-based methods because it removes false candidates and hence simplifies the attribution of the detected gradients to the planetary boundary layer. A particularly significant benefit can be expected for the detection of shallow stable layers typical of night-time situations. The algorithm is completely automatic and does not require any on-site intervention but requires the definition of an adequate instrument-specific configuration. It is therefore suited for use in large ceilometer networks.
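
    The temperature dependence described above reduces to a simple regression; a sketch under assumed calibration data (the internal temperatures and correction amplitudes below are invented) could look like:

      import numpy as np

      # Hypothetical calibration set: instrument internal temperature (deg C) and the
      # overlap-correction amplitude estimated on days with a homogeneous boundary layer.
      t_internal = np.array([12.0, 18.0, 24.0, 29.0, 33.0, 38.0])
      corr_amplitude = np.array([0.05, 0.11, 0.18, 0.23, 0.28, 0.34])

      # Linear model of the correction amplitude as a function of internal temperature.
      slope, intercept = np.polyfit(t_internal, corr_amplitude, 1)

      def modelled_correction(temperature_c):
          """Overlap-correction amplitude predicted from internal temperature alone."""
          return slope * temperature_c + intercept

      print(modelled_correction(26.5))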

  9. Accuracy of Rhenium-188 SPECT/CT activity quantification for applications in radionuclide therapy using clinical reconstruction methods.

    PubMed

    Esquinas, Pedro L; Uribe, Carlos F; Gonzalez, M; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna

    2017-07-20

    The main applications of 188Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone metastases. In order to optimize 188Re therapies, the accurate determination of the radiation dose delivered to tumors and organs at risk is required. Single photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in 188Re images. In this study, we performed a series of phantom experiments aiming to investigate the accuracy of activity quantification for 188Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in air, non-radioactive water (Cold-water) and water with activity (Hot-water). The ordered subset expectation maximization algorithm was used with clinically available corrections (CT-based attenuation, triple-energy window (TEW) scatter correction and resolution recovery). For high activities, dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to this object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background, were compared to their true values. Finally, Monte-Carlo simulations of a commercial γ-camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy (errors < 10%) was achieved for the entire phantom, the hot-background activity concentration and for objects in cold background segmented with a 1% threshold. However, the accuracy of activity quantification for objects segmented with the 40% threshold or CT-based methods decreased (errors > 15%), mostly due to partial-volume effects. The Monte-Carlo simulations confirmed that TEW scatter correction applied to 188Re, although practical, yields only approximate estimates of the true scatter.
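
    For reference, a minimal sketch of the triple-energy-window estimate used for scatter correction; the window widths and counts below are illustrative, not values from the study:

      import numpy as np

      def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak):
          """Triple-energy-window scatter estimate for the photopeak window: scatter
          under the peak is approximated by a trapezoid whose heights are the count
          densities in the two narrow flanking windows."""
          return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

      # Illustrative projection-pixel counts (not measured data), e.g. a 20% main
      # window around the 155 keV photopeak of 188Re and 6 keV sub-windows.
      c_peak = 1200.0
      scatter = tew_scatter_estimate(c_lower=300.0, c_upper=90.0,
                                     w_lower=6.0, w_upper=6.0, w_peak=31.0)
      print(max(c_peak - scatter, 0.0))   # primary (scatter-corrected) counts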

  10. Redrawing the US Obesity Landscape: Bias-Corrected Estimates of State-Specific Adult Obesity Prevalence

    PubMed Central

    Ward, Zachary J.; Long, Michael W.; Resch, Stephen C.; Gortmaker, Steven L.; Cradock, Angie L.; Giles, Catherine; Hsiao, Amber; Wang, Y. Claire

    2016-01-01

    Background State-level estimates from the Centers for Disease Control and Prevention (CDC) underestimate the obesity epidemic because they use self-reported height and weight. We describe a novel bias-correction method and produce corrected state-level estimates of obesity and severe obesity. Methods Using non-parametric statistical matching, we adjusted self-reported data from the Behavioral Risk Factor Surveillance System (BRFSS) 2013 (n = 386,795) using measured data from the National Health and Nutrition Examination Survey (NHANES) (n = 16,924). We validated our national estimates against NHANES and estimated bias-corrected state-specific prevalence of obesity (BMI≥30) and severe obesity (BMI≥35). We compared these results with previous adjustment methods. Results Compared to NHANES, self-reported BRFSS data underestimated national prevalence of obesity by 16% (28.67% vs 34.01%), and severe obesity by 23% (11.03% vs 14.26%). Our method was not significantly different from NHANES for obesity or severe obesity, while previous methods underestimated both. Only four states had a corrected obesity prevalence below 30%, with four exceeding 40%; in contrast, most states were below 30% in CDC maps. Conclusions Twelve million adults with obesity (including 6.7 million with severe obesity) were misclassified by CDC state-level estimates. Previous bias-correction methods also resulted in underestimates. Accurate state-level estimates are necessary to plan for resources to address the obesity epidemic. PMID:26954566

  11. Drug exposure in register-based research—An expert-opinion based evaluation of methods

    PubMed Central

    Taipale, Heidi; Koponen, Marjaana; Tolppanen, Anna-Maija; Hartikainen, Sirpa; Ahonen, Riitta; Tiihonen, Jari

    2017-01-01

    Background In register-based pharmacoepidemiological studies, construction of drug exposure periods from drug purchases is a major methodological challenge. Various methods have been applied, but their validity is rarely evaluated. Our objective was to conduct an expert-opinion-based evaluation of the correctness of drug use periods produced by different methods. Methods Drug use periods were calculated with three fixed methods: time windows, assumption of one Defined Daily Dose (DDD) per day, and assumption of one tablet per day, and with PRE2DUP, which is based on modelling of individual drug purchasing behavior. The expert-opinion-based evaluation was conducted with 200 randomly selected purchase histories of warfarin, bisoprolol, simvastatin, risperidone and mirtazapine in the MEDALZ-2005 cohort (28,093 persons with Alzheimer's disease). Two experts reviewed the purchase histories and judged which methods had joined the correct purchases and gave the correct duration for each of the 1000 drug exposure periods. Results The evaluated correctness of drug use periods was 70–94% for PRE2DUP and, depending on grace periods and time window lengths, 0–73% for tablet methods, 0–41% for DDD methods and 0–11% for time window methods. The highest rates of evaluated correct solutions within each method class were observed for 1 tablet per day with a 180-day grace period (TAB_1_180, 43–73%) and 1 DDD per day with a 180-day grace period (1–41%). Time window methods produced at most 11% correct solutions. The best-performing fixed method, TAB_1_180, reached its highest correctness for simvastatin at 73% (95% CI 65–81%), whereas 89% (95% CI 84–94%) of PRE2DUP periods were judged correct. Conclusions This study shows the inaccuracy of fixed methods and the urgent need for new data-driven methods. In the expert-opinion-based evaluation, the lowest error rates were observed with the data-driven method PRE2DUP. PMID:28886089

  12. Laser line illumination scheme allowing the reduction of background signal and the correction of absorption heterogeneities effects for fluorescence reflectance imaging.

    PubMed

    Fantoni, Frédéric; Hervé, Lionel; Poher, Vincent; Gioux, Sylvain; Mars, Jérôme I; Dinten, Jean-Marc

    2015-10-01

    Intraoperative fluorescence imaging in reflectance geometry is an attractive imaging modality as it allows noninvasive monitoring of fluorescence-targeted tumors located below the tissue surface. Two drawbacks of this technique are background fluorescence, which decreases contrast, and absorption heterogeneities, which can lead to misinterpretation of fluorescence concentrations. We propose a correction technique based on a laser line scanning illumination scheme. We scan the medium with the laser line and acquire, at each position of the line, both fluorescence and excitation images. We then use the relationship between the excitation intensity profile and the background fluorescence profile to predict the amount of signal to subtract from the fluorescence images and thereby improve contrast. As the light absorption information is contained in both the fluorescence and excitation images, this method also allows us to correct for the effects of absorption heterogeneities. This technique has been validated in simulations and experimentally. Fluorescent inclusions are observed in several configurations at depths ranging from 1 mm to 1 cm. Results obtained with this technique are compared with those obtained with a classical wide-field detection scheme for contrast enhancement and with a fluorescence-to-excitation ratio approach for absorption correction.

  13. Effect of background correction on peak detection and quantification in online comprehensive two-dimensional liquid chromatography using diode array detection.

    PubMed

    Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W

    2012-09-07

    A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one-dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique greatly reduced or eliminated the background artifacts as well and preserved the peak intensity better than AWLS. The loss in peak intensity by AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise, which led to the detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
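
    The following sketch illustrates one plausible reading of an SVD-based background correction, projecting out the leading spectral components of a blank run; it is not the published SVD-BC algorithm, and all data are synthetic:

      import numpy as np

      def svd_background_correct(sample, blank, rank=2):
          """Remove background structure from a 2D chromatogram (time x wavelength)
          by projecting out the leading right-singular vectors of a blank run."""
          _, _, vt = np.linalg.svd(blank, full_matrices=False)
          basis = vt[:rank]                       # (rank, n_wavelengths) background "spectra"
          background = sample @ basis.T @ basis   # projection onto the blank subspace
          return sample - background

      rng = np.random.default_rng(3)
      wl = np.linspace(0, 1, 60)
      drift = np.outer(np.linspace(1, 2, 500), 0.3 + 0.2 * wl)   # slow solvent drift
      peak = np.outer(np.exp(-0.5 * ((np.arange(500) - 250) / 10) ** 2),
                      np.exp(-30 * (wl - 0.4) ** 2))
      blank = drift + rng.normal(0, 0.005, drift.shape)
      sample = drift + peak + rng.normal(0, 0.005, drift.shape)
      print(svd_background_correct(sample, blank).max(), sample.max())

    Note that any analyte signal overlapping the blank subspace is also attenuated by such a projection, which is one way background correction can distort peak intensities.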

  14. Alternative method for determining the constant offset in lidar signal

    Treesearch

    Vladimir A. Kovalev; Cyle Wold; Alexander Petkov; Wei Min Hao

    2009-01-01

    We present an alternative method for determining the total offset in lidar signal created by a daytime background-illumination component and electrical or digital offset. Unlike existing techniques, here the signal square-range-correction procedure is initially performed using the total signal recorded by lidar, without subtraction of the offset component. While...

  15. Compensating for magnetic field inhomogeneity in multigradient-echo-based MR thermometry.

    PubMed

    Simonis, Frank F J; Petersen, Esben T; Bartels, Lambertus W; Lagendijk, Jan J W; van den Berg, Cornelis A T

    2015-03-01

    MR thermometry (MRT) is a noninvasive method for measuring temperature that can potentially be used for radio frequency (RF) safety monitoring. This application requires measuring absolute temperature. In this study, a multigradient-echo (mGE) MRT sequence was used for that purpose. A drawback of this sequence, however, is that its accuracy is affected by background gradients. In this article, we present a method to minimize this effect and to improve absolute temperature measurements using MRI. By determining background gradients using a B0 map or by combining data acquired with two opposing readout directions, the error can be removed in a homogenous phantom, thus improving temperature maps. All scans were performed on a 3T system using ethylene glycol-filled phantoms. Background gradients were varied, and one phantom was uniformly heated to validate both compensation approaches. Independent temperature recordings were made with optical probes. Errors correlated closely to the background gradients in all experiments. Temperature distributions showed a much smaller standard deviation when the corrections were applied (0.21°C vs. 0.45°C) and correlated well with thermo-optical probes. The corrections offer the possibility to measure RF heating in phantoms more precisely. This allows mGE MRT to become a valuable tool in RF safety assessment. © 2014 Wiley Periodicals, Inc.

  16. Automatic correction of dental artifacts in PET/MRI

    PubMed Central

    Ladefoged, Claes N.; Andersen, Flemming L.; Keller, Sune. H.; Beyer, Thomas; Law, Ian; Højgaard, Liselotte; Darkner, Sune; Lauze, Francois

    2015-01-01

    Abstract. A challenge when using current magnetic resonance (MR)-based attenuation correction in positron emission tomography/MR imaging (PET/MRI) is that the MR images can have a signal void around dental fillings that is segmented as artificial air regions in the attenuation map. For artifacts connected to the background, we propose an extension to an existing active contour algorithm to delineate the outer contour using the non-attenuation-corrected PET image and the original attenuation map. We propose a combination of two different methods for differentiating the artifacts within the body from the anatomical air regions by first using a template of artifact regions, and second, representing the artifact regions with a combination of active shape models and k-nearest-neighbors. The accuracy of the combined method has been evaluated using 25 18F-fluorodeoxyglucose PET/MR patients. Results showed that the approach was able to correct an average of 97±3% of the artifact areas. PMID:26158104

  17. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Dilution air background emission...

  18. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Dilution air background emission...

  19. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Dilution air background emission...

  20. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Dilution air background emission...

  1. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Dilution air background emission...

  2. Optimization of yttrium-90 PET for simultaneous PET/MR imaging: A phantom study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldib, Mootaz

    2016-08-15

    Purpose: Positron emission tomography (PET) imaging of yttrium-90 in the liver post radioembolization has been shown to be useful for personalized dosimetry calculations and evaluation of extrahepatic deposition. The purpose of this study was to quantify the benefits of several MR-based data correction approaches offered by a combined PET/MR system to improve Y-90 PET imaging. In particular, the feasibility of motion and partial volume corrections was investigated in a controlled phantom study. Methods: The ACR phantom was filled with an initial concentration of 8 GBq of Y-90 solution, resulting in a contrast of 10:1 between the hot cylinders and the background. Y-90 PET motion correction through motion estimates from MR navigators was evaluated by using a custom-built motion stage that simulated realistic amplitudes of respiration-induced liver motion. Finally, the feasibility of an MR-based partial volume correction method was evaluated using a wavelet decomposition approach. Results: Motion resulted in a large (∼40%) loss of contrast recovery for the 8 mm cylinder in the phantom, but this was recovered after MR-based motion correction was applied. Partial volume correction improved contrast recovery by 13% for the 8 mm cylinder. Conclusions: MR-based data correction improves Y-90 PET imaging on simultaneous PET/MR systems. These methods must be assessed further in the clinical setting.

  3. An improved method to detect correct protein folds using partial clustering

    PubMed Central

    2013-01-01

    Background Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit poor runtime performance on large decoy sets. We hypothesized that a more efficient "partial" clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. Results We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either Cα RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. Conclusions The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance. PMID:23323835

  4. Maximizing the quantitative accuracy and reproducibility of Förster resonance energy transfer measurement for screening by high throughput widefield microscopy

    PubMed Central

    Schaufele, Fred

    2013-01-01

    Förster resonance energy transfer (FRET) between fluorescent proteins (FPs) provides insights into the proximities and orientations of FPs as surrogates of the biochemical interactions and structures of the factors to which the FPs are genetically fused. As powerful as FRET methods are, technical issues have impeded their broad adoption in the biological sciences. One hurdle to accurate and reproducible FRET microscopy measurement stems from variable fluorescence backgrounds both within a field and between different fields. Those variations introduce errors into the precise quantification of fluorescence levels on which the quantitative accuracy of FRET measurement is highly dependent. This measurement error is particularly problematic for screening campaigns, since minimal well-to-well variation is necessary to faithfully identify wells with altered values. High-content screening also depends on maximizing the number of cells imaged, which is best achieved by low-magnification, high-throughput microscopy. However, low magnification introduces flat-field correction issues that degrade the accuracy of background correction and cause poor reproducibility in FRET measurement. For live cell imaging, fluorescence of cell culture media in the fluorescence collection channels for the FPs commonly used for FRET analysis is a major source of background error. These signal-to-noise problems are compounded by the desire to express proteins at biologically meaningful levels that may only be marginally above the strong fluorescence background. Here, techniques are presented that correct for background fluctuations. Accurate calculation of FRET is realized even from images in which a non-flat background is 10-fold higher than the signal. PMID:23927839

  5. Quantitation of tumor uptake with molecular breast imaging.

    PubMed

    Bache, Steven T; Kappadath, S Cheenu

    2017-09-01

    We developed scatter and attenuation-correction techniques for quantifying images obtained with Molecular Breast Imaging (MBI) systems. To investigate scatter correction, energy spectra of a 99mTc point source were acquired with 0-7-cm-thick acrylic to simulate scatter between the detector heads. The system-specific scatter correction factor, k, was calculated as a function of thickness using a dual-energy window technique. To investigate attenuation correction, a 7-cm-thick rectangular phantom containing 99mTc-water simulating breast tissue and fillable spheres simulating tumors was imaged. Six spheres 10-27 mm in diameter were imaged with sphere-to-background ratios (SBRs) of 3.5, 2.6, and 1.7 and located at depths of 0.5, 1.5, and 2.5 cm from the center of the water bath for 54 unique tumor scenarios (3 SBRs × 6 sphere sizes × 3 depths). Phantom images were also acquired in air under scatter- and attenuation-free conditions, which provided ground-truth counts. To estimate the true counts, T, from each tumor, the geometric mean (GM) of the counts within a prescribed region of interest (ROI) from the two projection images was calculated as T = √(C1·C2)·e^(μt/2)·F, where C1 and C2 are the counts within the square ROI circumscribing each sphere on detectors 1 and 2, μ is the linear attenuation coefficient of water, t is the detector separation, and the factor F accounts for background activity. Four F definitions were investigated: standard GM, background-subtraction GM, MIRD Primer 16 GM, and a novel "volumetric GM". The error in T was calculated as the percentage difference with respect to the in-air counts. Quantitative accuracy using the different GM definitions was calculated as a function of SBR, depth, and sphere size. The sensitivity of quantitative accuracy to ROI size was investigated. We developed an MBI simulation to investigate the robustness of our corrections for various ellipsoidal tumor shapes and detector separations. The scatter correction factor k varied slightly (0.80-0.95) over a compressed breast thickness range of 6-9 cm. Corrected energy spectra recovered the general characteristics of scatter-free spectra. Quantitatively, photopeak counts were recovered to <10% compared to in-air conditions after scatter correction. After GM attenuation correction, mean errors (95% confidence interval, CI) for all 54 imaging scenarios were 149% (-154% to +455%), -14.0% (-38.4% to +10.4%), 16.8% (-14.7% to +48.2%), and 2.0% (-14.3% to +18.3%) for the standard GM, background-subtraction GM, MIRD 16 GM, and volumetric GM, respectively. Volumetric GM was less sensitive to SBR and sphere size, while all GM methods were insensitive to sphere depth. Simulation results showed that the volumetric GM method produced a mean error within 5% over all compressed breast thicknesses (3-14 cm), and that the use of an estimated radius for nonspherical tumors increases the 95% CI to at most ±23%, compared with ±16% for spherical tumors. Using DEW scatter correction and our volumetric GM attenuation-correction methodology yielded accurate estimates of tumor counts in MBI over various tumor sizes, shapes, depths, background uptakes, and compressed breast thicknesses. Accurate tumor uptake can be converted to radiotracer uptake concentration, allowing three patient-specific metrics to be calculated for quantifying absolute uptake and relative uptake change for assessment of treatment response. © 2017 American Association of Physicists in Medicine.
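
    A sketch of the standard conjugate-view geometric-mean estimate described above (not the paper's volumetric GM, which additionally models the tumor volume); the counts, attenuation coefficient and background factor are illustrative:

      import numpy as np

      def geometric_mean_counts(c1, c2, mu_cm, separation_cm, f_background=1.0):
          """Standard conjugate-view estimate of source counts from two opposed
          projections: T = sqrt(C1*C2) * exp(mu*t/2) * F, where t is the detector
          separation and F corrects for background (and, in more complete models,
          source thickness)."""
          return np.sqrt(c1 * c2) * np.exp(mu_cm * separation_cm / 2.0) * f_background

      # Illustrative numbers only: ROI counts on the two MBI detectors, mu of water
      # at 140 keV of roughly 0.15 1/cm, and 7 cm detector separation.
      print(geometric_mean_counts(c1=5200.0, c2=4800.0, mu_cm=0.15,
                                  separation_cm=7.0, f_background=0.9))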

  6. A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei

    2016-03-01

    Two-photon fluorescence microscopy (TPFM) is well suited to monitoring the interaction between fast-moving viruses and their hosts. However, due to strong, unavoidable background noise from the culture, videos obtained with this technique are too noisy to resolve this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images, and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round-identification method, tree-structured nonlinear filters, Kalman filters, and a cell-tracking method. After these procedures, most of the noise was eliminated, and host images were recovered with their moving directions and speeds highlighted in the videos. From the analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.
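
    As a sketch of the Kalman-filter ingredient only (the full scheme combines several filters and a cell-tracking step), a constant-velocity filter for one centroid track might look like this; all parameters and data are synthetic:

      import numpy as np

      def kalman_track(measurements, dt=1.0, q=1e-2, r=1.0):
          """Constant-velocity Kalman filter for a 2D centroid track.
          State = [x, y, vx, vy]; measurements are noisy (x, y) detections."""
          F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
          H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
          Q, R = q * np.eye(4), r * np.eye(2)
          x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
          P = np.eye(4)
          track = []
          for z in measurements:
              x = F @ x                              # predict
              P = F @ P @ F.T + Q
              y = np.asarray(z, float) - H @ x       # innovation
              K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
              x = x + K @ y                          # update
              P = (np.eye(4) - K @ H) @ P
              track.append(x[:2].copy())
          return np.array(track)

      # Noisy detections of a cell drifting across the field of view (synthetic).
      rng = np.random.default_rng(4)
      truth = np.stack([np.linspace(0, 30, 31), np.linspace(0, 15, 31)], axis=1)
      detections = truth + rng.normal(0, 1.0, truth.shape)
      print(kalman_track(detections)[-1])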

  7. The minimizing of fluorescence background in Raman optical activity and Raman spectra of human blood plasma.

    PubMed

    Tatarkovič, Michal; Synytsya, Alla; Šťovíčková, Lucie; Bunganič, Bohuš; Miškovičová, Michaela; Petruželka, Luboš; Setnička, Vladimír

    2015-02-01

    Raman optical activity (ROA) is inherently sensitive to the secondary structure of biomolecules, which makes it a method of interest for finding new approaches to clinical applications based on blood plasma analysis, for instance, the diagnosis of several protein-misfolding diseases. Unfortunately, real blood plasma exhibits strong background fluorescence when excited at 532 nm; hence, measuring the ROA spectra directly appears to be impossible. Therefore, we established a suitable method using a combination of kinetic quenchers, filtering, photobleaching, and a mathematical correction of the residual fluorescence. Our method reduced the background fluorescence by approximately 90%, which sped up each measurement by an average of 50%. In addition, the signal-to-noise ratio was significantly increased, while the baseline distortion remained low. We assume that our method is suitable for the investigation of human blood plasma by ROA and may lead to the development of a new tool for clinical diagnostics.

  8. An automated baseline correction protocol for infrared spectra of atmospheric aerosols collected on polytetrafluoroethylene (Teflon) filters

    NASA Astrophysics Data System (ADS)

    Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi

    2016-06-01

    A growing body of research on statistical applications for characterization of atmospheric aerosol Fourier transform infrared (FT-IR) samples collected on polytetrafluoroethylene (PTFE) filters (e.g., Russell et al., 2011; Ruthenburg et al., 2014) and a rising interest in analyzing FT-IR samples collected by air quality monitoring networks call for an automated PTFE baseline correction solution. The existing polynomial technique (Takahama et al., 2013) is not scalable to a project with a large number of aerosol samples because it contains many parameters and requires expert intervention. Therefore, the question of how to develop an automated method for baseline correcting hundreds to thousands of ambient aerosol spectra given the variability in both environmental mixture composition and PTFE baselines remains. This study approaches the question by detailing the statistical protocol, which allows for the precise definition of analyte and background subregions, applies nonparametric smoothing splines to reproduce sample-specific PTFE variations, and integrates performance metrics from atmospheric aerosol and blank samples alike in the smoothing parameter selection. Referencing 794 atmospheric aerosol samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011, we start by identifying key FT-IR signal characteristics, such as non-negative absorbance or analyte segment transformation, to capture sample-specific transitions between background and analyte. While referring to qualitative properties of PTFE background, the goal of smoothing splines interpolation is to learn the baseline structure in the background region to predict the baseline structure in the analyte region. We then validate the model by comparing smoothing splines baseline-corrected spectra with uncorrected and polynomial baseline (PB)-corrected equivalents via three statistical applications: (1) clustering analysis, (2) functional group quantification, and (3) thermal optical reflectance (TOR) organic carbon (OC) and elemental carbon (EC) predictions. The discrepancy rate for a four-cluster solution is 10 %. For all functional groups but carboxylic COH the discrepancy is ≤ 10 %. Performance metrics obtained from TOR OC and EC predictions (R2 ≥ 0.94 %, bias ≤ 0.01 µg m-3, and error ≤ 0.04 µg m-3) are on a par with those obtained from uncorrected and PB-corrected spectra. The proposed protocol leads to visually and analytically similar estimates as those generated by the polynomial method. More importantly, the automated solution allows us and future users to evaluate its analytical reproducibility while minimizing reducible user bias. We anticipate the protocol will enable FT-IR researchers and data analysts to quickly and reliably analyze a large amount of data and connect them to a variety of available statistical learning methods to be applied to analyte absorbances isolated in atmospheric aerosol samples.
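
    A compact sketch of the spline idea, fitting a smoothing spline through assumed background windows of a spectrum and evaluating it across the analyte region; the wavenumber windows, smoothing parameter and synthetic spectrum are illustrative assumptions, not the protocol's tuned settings:

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      def spline_baseline(wavenumber, absorbance, background_windows, smoothing=None):
          """Fit a smoothing spline to points inside the background windows only and
          evaluate it over the full axis to estimate the (PTFE) baseline."""
          mask = np.zeros_like(wavenumber, dtype=bool)
          for lo, hi in background_windows:
              mask |= (wavenumber >= lo) & (wavenumber <= hi)
          order = np.argsort(wavenumber[mask])          # spline requires increasing x
          spl = UnivariateSpline(wavenumber[mask][order], absorbance[mask][order],
                                 s=smoothing)
          return spl(wavenumber)

      # Synthetic spectrum: curved PTFE-like baseline plus one analyte band near 1650 1/cm.
      wn = np.linspace(400, 4000, 1800)
      baseline_true = 0.2 + 1e-8 * (wn - 2200) ** 2
      spectrum = baseline_true + 0.15 * np.exp(-0.5 * ((wn - 1650) / 40) ** 2)
      spectrum += np.random.default_rng(5).normal(0, 0.002, wn.size)

      # Hypothetical background windows chosen to avoid the analyte bands.
      windows = [(400, 1400), (1900, 2600), (3100, 4000)]
      corrected = spectrum - spline_baseline(wn, spectrum, windows)
      print(corrected.max())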

  9. Segmentation-based retrospective shading correction in fluorescence microscopy E. coli images for quantitative analysis

    NASA Astrophysics Data System (ADS)

    Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.

    2009-10-01

    Due to the inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading, or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on segmentation results. Segmentation and shading correction are coupled together, so we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the shading-corrected image. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes, the background and the cells, where the intensity variation within each class is close to zero if there is no shading. Therefore, we make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm to minimize the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It performs well not only on visual inspection but also in numerical evaluation. Our proposed method should be useful for further quantitative analysis, especially for comparing protein expression values.

  10. Radiative improvement of the lattice nonrelativistic QCD action using the background field method and application to the hyperfine splitting of quarkonium states.

    PubMed

    Hammant, T C; Hart, A G; von Hippel, G M; Horgan, R R; Monahan, C J

    2011-09-09

    We present the first application of the background field method to nonrelativistic QCD (NRQCD) on the lattice in order to determine the one-loop radiative corrections to the coefficients of the NRQCD action in a manifestly gauge-covariant manner. The coefficients of the σ·B term in the NRQCD action and the four-fermion spin-spin interaction are computed at the one-loop level; the resulting shift of the hyperfine splitting of bottomonium is found to bring the lattice predictions in line with experiment.

  11. Neyman Pearson detection of K-distributed random variables

    NASA Astrophysics Data System (ADS)

    Tucker, J. Derek; Azimi-Sadjadi, Mahmood R.

    2010-04-01

    In this paper, a new detection method for sonar imagery is developed for K-distributed background clutter. The equation for the log-likelihood is derived and compared to the corresponding counterparts derived under the Gaussian and Rayleigh assumptions. Test results of the proposed method on a data set of synthetic underwater sonar images are also presented. This database contains images with targets of different shapes inserted into backgrounds generated using a correlated K-distributed model. Results illustrating the effectiveness of the K-distributed detector are presented in terms of probability of detection, false alarm, and correct classification rates for various bottom clutter scenarios.

  12. A generalised background correction algorithm for a Halo Doppler lidar and its application to data from Finland

    DOE PAGES

    Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville; ...

    2016-03-03

    Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise, and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. Furthermore, the reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.
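    The abstract does not spell out the correction algorithm, so the sketch below shows only a generic way to flatten residual range-dependent structure in a single profile's background before re-thresholding the signal-to-noise ratio; it is not the authors' method, and the signal-free gate mask and polynomial order are assumptions.

    ```python
    # Generic illustration (not the authors' algorithm): fit a low-order
    # polynomial to range gates classified as signal-free and divide it out,
    # so the background of the SNR+1 profile becomes flat (~1) before a
    # lower detection threshold is applied.
    import numpy as np

    def flatten_background(snr_plus_one, signal_free, order=2):
        """snr_plus_one: raw SNR+1 per range gate; signal_free: boolean mask."""
        gates = np.arange(snr_plus_one.size)
        coeffs = np.polyfit(gates[signal_free], snr_plus_one[signal_free], order)
        background = np.polyval(coeffs, gates)
        return snr_plus_one / background
    ```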

  13. A generalised background correction algorithm for a Halo Doppler lidar and its application to data from Finland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville

    Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise, and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. Furthermore, the reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.

  14. Recovery of Background Structures in Nanoscale Helium Ion Microscope Imaging.

    PubMed

    Carasso, Alfred S; Vladár, András E

    2014-01-01

    This paper discusses a two-step enhancement technique applicable to noisy Helium Ion Microscope images in which background structures are not easily discernible due to a weak signal. The method is based on a preliminary adaptive histogram equalization, followed by 'slow motion' low-exponent Lévy fractional diffusion smoothing. This combined approach is unexpectedly effective, resulting in a companion enhanced image in which background structures are rendered much more visible, and noise is significantly reduced, all with minimal loss of image sharpness. The method also provides useful enhancements of scanning charged-particle microscopy images obtained by composing multiple drift-corrected 'fast scan' frames. The paper includes software routines, written in Interactive Data Language (IDL), that can perform the above image processing tasks.
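    The two steps can be sketched in Python (the paper's own routines are in IDL). The fractional diffusion below is realized, as an assumption, through the Fourier multiplier exp(-t*|k|^(2α)) with a small exponent α, and scikit-image's CLAHE stands in for the adaptive histogram equalization.

    ```python
    # Sketch of the two-step enhancement: adaptive histogram equalization
    # followed by low-exponent fractional (Levy) diffusion, implemented here
    # as the Fourier-domain multiplier exp(-t * |k|^(2*alpha)).
    import numpy as np
    from skimage import exposure

    def fractional_diffusion(img, t=2.0, alpha=0.2):
        ky = np.fft.fftfreq(img.shape[0])[:, None]
        kx = np.fft.fftfreq(img.shape[1])[None, :]
        multiplier = np.exp(-t * (kx**2 + ky**2) ** alpha)
        return np.real(np.fft.ifft2(np.fft.fft2(img) * multiplier))

    def enhance(img):
        img = (img - img.min()) / (img.ptp() + 1e-12)   # rescale to [0, 1]
        eq = exposure.equalize_adapthist(img)           # step 1: adaptive hist. eq.
        return fractional_diffusion(eq)                 # step 2: 'slow motion' smoothing
    ```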

  15. Background oriented schlieren in a density stratified fluid.

    PubMed

    Verso, Lilly; Liberzon, Alex

    2015-10-01

    Non-intrusive quantitative fluid density measurement methods are essential in stratified flow experiments. Digital imaging leads to synthetic schlieren methods in which the variations of the index of refraction are reconstructed computationally. In this study, an extension to one of these methods, called background oriented schlieren, is proposed. The extension enables an accurate reconstruction of the density field in stratified liquid experiments. Typically, the experiments are performed with the light source, background pattern, and camera positioned on opposite sides of a transparent vessel. The multimedia imaging through air-glass-water-glass-air leads to an additional aberration that destroys the reconstruction. A two-step calibration and image remapping transform are the key components that correct the images through the stratified media and provide non-intrusive full-field density measurements of transparent liquids.

  16. Correcting geometric and photometric distortion of document images on a smartphone

    NASA Astrophysics Data System (ADS)

    Simon, Christian; Williem; Park, In Kyu

    2015-01-01

    A set of document image processing algorithms for improving the optical character recognition (OCR) capability of smartphone applications is presented. The scope of the problem covers the geometric and photometric distortion correction of document images. The proposed framework was developed to satisfy industrial requirements. It is implemented on an off-the-shelf smartphone with limited resources in terms of speed and memory. Geometric distortions, i.e., skew and perspective distortion, are corrected by sending horizontal and vertical vanishing points toward infinity in a downsampled image. Photometric distortion includes image degradation from moiré pattern noise and specular highlights. Moiré pattern noise is removed using low-pass filters with different sizes independently applied to the background and text region. The contrast of the text in a specular highlighted area is enhanced by locally enlarging the intensity difference between the background and text while the noise is suppressed. Intensive experiments indicate that the proposed methods show a consistent and robust performance on a smartphone with a runtime of less than 1 s.

  17. A beam hardening and dispersion correction for x-ray dark-field radiography.

    PubMed

    Pelzer, Georg; Anton, Gisela; Horn, Florian; Rieger, Jens; Ritter, André; Wandner, Johannes; Weber, Thomas; Michel, Thilo

    2016-06-01

    X-ray dark-field imaging promises information on the small-angle scattering properties even of large samples. However, the dark-field image is correlated with the object's attenuation and phase shift if a polychromatic x-ray spectrum is used. A method to remove part of these correlations is proposed. The experimental setup for image acquisition was modeled in a wave-field simulation to quantify the dark-field signals originating solely from a material's attenuation and phase shift. A calibration matrix was simulated for ICRU46 breast tissue. Using the simulated data, a dark-field image of a human mastectomy sample was corrected for the fingerprint of the attenuation and phase images. Comparing the simulated, attenuation-based dark-field values to a phantom measurement, a good agreement was found. Applying the proposed method to mammographic dark-field data, a reduction of the dark-field background and anatomical noise was achieved. The contrast between microcalcifications and their surrounding background was increased. The authors show that the influence of beam hardening and dispersion can be quantified by simulation and, thus, measured image data can be corrected. The simulation allows the corresponding dark-field artifacts to be determined for a wide range of setup parameters, such as tube voltage and filtration. The application of the proposed method to mammographic dark-field data shows an increase in contrast compared to the original image, which might simplify a further image-based diagnosis.

  18. Optimization of cDNA microarrays procedures using criteria that do not rely on external standards

    PubMed Central

    Bruland, Torunn; Anderssen, Endre; Doseth, Berit; Bergum, Hallgeir; Beisvag, Vidar; Lægreid, Astrid

    2007-01-01

    Background The measurement of gene expression using microarray technology is a complicated process in which a large number of factors can be varied. Due to the lack of standard calibration samples such as are used in traditional chemical analysis, it may be a problem to evaluate whether changes made to the microarray procedure actually improve the identification of truly differentially expressed genes. The purpose of the present work is to report the optimization of several steps in the microarray process, both in laboratory practice and in data processing, using criteria that do not rely on external standards. Results We performed a cDNA microarray experiment including RNA from samples with high expected differential gene expression, termed "high contrasts" (rat cell lines AR42J and NRK52E), compared to self-self hybridization, and optimized a pipeline to maximize the number of genes found to be differentially expressed in the "high contrasts" RNA samples by estimating the false discovery rate (FDR) using a null distribution obtained from the self-self experiment. The proposed high-contrast versus self-self method (HCSSM) requires only four microarrays per evaluation. The effects of blocking reagent dose, filtering, and background correction methodologies were investigated. In our experiments, a dose of 250 ng LNA (locked nucleic acid) dT blocker, no background correction and weight-based filtering gave the largest number of differentially expressed genes. The choice of background correction method had a stronger impact on the estimated number of differentially expressed genes than the choice of filtering method. Cross-platform microarray (Illumina) analysis was used to validate that the increase in the number of differentially expressed genes found by HCSSM was real. Conclusion The results show that HCSSM can be a useful and simple approach to optimize microarray procedures without including external standards. Our optimization method is highly applicable both to long oligo-probe microarrays, which have become commonly used for well characterized organisms such as man, mouse and rat, and to cDNA microarrays, which are still of importance for organisms with incomplete genome sequence information such as many bacteria, plants and fish. PMID:17949480
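    The FDR estimate that drives the optimization can be written down in a few lines. The sketch below is a generic empirical-null calculation under an illustrative test statistic; the paper's exact statistic and pipeline details are not reproduced.

    ```python
    # Sketch of the empirical FDR idea: for a given cutoff on the test
    # statistic, the number of self-self (null) genes exceeding it estimates
    # the number of false positives among the high-contrast discoveries.
    import numpy as np

    def empirical_fdr(high_contrast_stats, self_self_stats, cutoff):
        false_pos = np.sum(np.abs(self_self_stats) >= cutoff)
        discoveries = max(int(np.sum(np.abs(high_contrast_stats) >= cutoff)), 1)
        return min(false_pos / discoveries, 1.0)
    ```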

  19. Determination of 18 kinds of trace impurities in the vanadium battery grade vanadyl sulfate by ICP-OES

    NASA Astrophysics Data System (ADS)

    Yong, Cheng

    2018-03-01

    A method for the direct determination of 18 trace impurities in vanadium-battery-grade vanadyl sulfate by inductively coupled plasma optical emission spectrometry (ICP-OES) was established; the detection range covers 0.001%-0.100% for Fe, Cr, Ni, Cu, Mn, Mo, Pb, As, Co, P, Ti and Zn, and 0.005%-0.100% for K, Na, Ca, Mg, Si and Al. The influence of matrix effects, spectral interferences and continuum background superposition in the coexisting system of high concentrations of vanadium ions and sulfate was studied, with the following conclusions: sulfate at this concentration has no effect on the determination, but the matrix effects and continuum background superposition generated by the high concentration of vanadium ions cause negative interference in the determination of potassium and sodium and positive interference in the determination of iron and the other impurity elements; the impact of the high-vanadium matrix was therefore eliminated by matrix matching combined with synchronous background correction. Through spectral interference tests, the spectral interferences from the vanadium matrix and between the impurity elements were classified and summarized, and the analytical lines, background correction regions and working parameters of the spectrometer were optimized. The technical performance of the method is as follows: background equivalent concentrations range from -0.0003% (Na) to 0.0004% (Cu); elemental detection limits are 0.0001%-0.0003%; RSD < 10% for element contents in the range 0.001%-0.007%, and RSD < 20% even for contents in the range 0.0001%-0.001%, which is below the detection range of the method; recoveries are 91.0%-110.0%.

  20. The effect of a scanning flat fold mirror on a cosmic microwave background B-mode experiment.

    PubMed

    Grainger, William F; North, Chris E; Ade, Peter A R

    2011-06-01

    We investigate the possibility of using a flat-fold beam-steering mirror for a cosmic microwave background B-mode experiment. An aluminium flat-fold mirror is found to add ∼0.075% polarization, which varies in a scan-synchronous way. Time-domain simulations of a realistic scanning pattern are performed, the effect on the power spectrum is illustrated, and a possible method of correction is applied. © 2011 American Institute of Physics

  1. Techniques for the correction of topographical effects in scanning Auger electron microscopy

    NASA Technical Reports Server (NTRS)

    Prutton, M.; Larson, L. A.; Poppa, H.

    1983-01-01

    A number of ratioing methods for correcting Auger images and linescans for topographical contrast are tested using anisotropically etched silicon substrates covered with Au or Ag. Thirteen well-defined angles of incidence are present on each polyhedron produced on the Si by this etching. If N1 electrons are counted at the energy of an Auger peak and N2 are counted in the background above the peak, then N1, N1 - N2, (N1 - N2)/(N1 + N2) are measured and compared as methods of eliminating topographical contrast. The latter method gives the best compensation but can be further improved by using a measurement of the sample absorption current. Various other improvements are discussed.
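    A small worked example makes the comparison concrete; the counts below are illustrative only and are not taken from the paper.

    ```python
    # Illustrative comparison of the ratioing schemes applied to Auger peak
    # counts N1 and above-peak background counts N2 (numbers are made up).
    import numpy as np

    def topography_signals(N1, N2):
        N1, N2 = float(N1), float(N2)
        return {"N1": N1, "N1-N2": N1 - N2, "(N1-N2)/(N1+N2)": (N1 - N2) / (N1 + N2)}

    # The same composition seen on two facets whose collection efficiency
    # differs by a factor of two:
    flat   = topography_signals(N1=1000, N2=600)   # normalized ratio = 0.25
    tilted = topography_signals(N1=500,  N2=300)   # normalized ratio = 0.25
    # N1 and N1-N2 differ by 2x between facets, while the normalized ratio
    # is identical, i.e. the topographical contrast is removed.
    ```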

  2. Chromatographic background drift correction coupled with parallel factor analysis to resolve coelution problems in three-dimensional chromatographic data: quantification of eleven antibiotics in tap water samples by high-performance liquid chromatography coupled with a diode array detector.

    PubMed

    Yu, Yong-Jie; Wu, Hai-Long; Fu, Hai-Yan; Zhao, Juan; Li, Yuan-Na; Li, Shu-Fang; Kang, Chao; Yu, Ru-Qin

    2013-08-09

    Chromatographic background drift correction has been an important field of research in chromatographic analysis. In the present work, orthogonal spectral space projection for background drift correction of three-dimensional chromatographic data was described in detail and combined with parallel factor analysis (PARAFAC) to resolve overlapped chromatographic peaks and obtain the second-order advantage. This strategy was verified by simulated chromatographic data and afforded significant improvement in quantitative results. Finally, this strategy was successfully utilized to quantify eleven antibiotics in tap water samples. Compared with the traditional methodology of introducing excessive factors for the PARAFAC model to eliminate the effect of background drift, clear improvement in the quantitative performance of PARAFAC was observed after background drift correction by orthogonal spectral space projection. Copyright © 2013 Elsevier B.V. All rights reserved.
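    The projection step can be illustrated generically. The sketch below projects each spectrum onto the orthogonal complement of a background subspace spanned by spectra taken from analyte-free (drift-only) regions; the authors' exact construction of that subspace is not reproduced, and the subspace rank is an assumption.

    ```python
    # Generic sketch of orthogonal spectral space projection: spectra from
    # drift-only regions span a background subspace, and projecting all
    # spectra onto its orthogonal complement suppresses the drift before
    # PARAFAC modelling.
    import numpy as np

    def orthogonal_projection(X, background_spectra, rank=2):
        """X: (time x wavelength) data slice; background_spectra: (n x wavelength)."""
        _, _, vt = np.linalg.svd(background_spectra, full_matrices=False)
        V = vt[:rank].T                          # basis of the background subspace
        P = np.eye(V.shape[0]) - V @ V.T         # projector onto its complement
        return X @ P
    ```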

  3. Fluorescence background removal method for biological Raman spectroscopy based on empirical mode decomposition.

    PubMed

    Leon-Bejarano, Maritza; Dorantes-Mendez, Guadalupe; Ramirez-Elias, Miguel; Mendez, Martin O; Alba, Alfonso; Rodriguez-Leyva, Ildefonso; Jimenez, M

    2016-08-01

    Raman spectroscopy of biological tissue presents a fluorescence background, an undesirable effect that generates false Raman intensities. This paper proposes the application of the Empirical Mode Decomposition (EMD) method to baseline correction. EMD is a suitable approach since it is an adaptive signal processing method for nonlinear and non-stationary signal analysis that, unlike polynomial methods, does not require parameter selection. EMD performance was assessed through synthetic Raman spectra with different signal-to-noise ratios (SNR). The correlation coefficient between the synthetic Raman spectra and those recovered after EMD denoising was higher than 0.92. Additionally, twenty Raman spectra from skin were used to evaluate EMD performance, and the results were compared with the Vancouver Raman algorithm (VRA). The comparison resulted in a mean square error (MSE) of 0.001554. The high correlation coefficient using synthetic spectra and the low MSE in the comparison between EMD and VRA suggest that EMD could be an effective method to remove fluorescence background in biological Raman spectra.
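    A minimal sketch of the idea is given below, assuming the PyEMD package and assuming that the fluorescence baseline is captured by the slowest IMFs; how many IMFs to assign to the baseline is an illustrative choice, not the paper's.

    ```python
    # Sketch of EMD-based fluorescence background removal (assumptions: the
    # PyEMD package, and a baseline formed by the last, slowest IMFs).
    import numpy as np
    from PyEMD import EMD   # pip install EMD-signal

    def remove_fluorescence_background(raman_spectrum, n_baseline_imfs=2):
        spectrum = np.asarray(raman_spectrum, dtype=float)
        imfs = EMD().emd(spectrum)
        baseline = imfs[-n_baseline_imfs:].sum(axis=0)   # slowest components
        return spectrum - baseline, baseline
    ```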

  4. Effect of clothing weight on body weight

    USDA-ARS?s Scientific Manuscript database

    Background: In clinical settings, it is common to measure weight of clothed patients and estimate a correction for the weight of clothing, but we can find no papers in the medical literature regarding the variability in clothing weight with weather, season, and gender. Methods: Fifty adults (35 wom...

  5. "Hook"-calibration of GeneChip-microarrays: theory and algorithm.

    PubMed

    Binder, Hans; Preibisch, Stephan

    2008-08-29

    The improvement of microarray calibration methods is an essential prerequisite for quantitative expression analysis. This issue requires the formulation of an appropriate model describing the basic relationship between the probe intensity and the specific transcript concentration in a complex environment of competing interactions, the estimation of the magnitude of these effects and their correction using the intensity information of a given chip, and finally the development of practicable algorithms which judge the quality of a particular hybridization and estimate the expression degree from the intensity values. We present the so-called hook-calibration method, which co-processes the log-difference (delta) and log-sum (sigma) of the perfect match (PM) and mismatch (MM) probe intensities. The MM probes are utilized as an internal reference which is subjected to the same hybridization law as the PM, however with modified characteristics. After sequence-specific affinity correction, the method fits the Langmuir adsorption model to the smoothed delta-versus-sigma plot. The geometrical dimensions of this so-called hook curve characterize the particular hybridization in terms of simple geometric parameters which provide information about the mean non-specific background intensity, the saturation value, the mean PM/MM sensitivity gain and the fraction of absent probes. This graphical summary spans a metrics system for expression estimates in natural units such as the mean binding constants and the occupancy of the probe spots. The method is single-chip based, i.e. it separately uses the intensities of each selected chip. The hook method corrects the raw intensities for non-specific background hybridization in a sequence-specific manner, for the potential saturation of the probe spots with bound transcripts and for the sequence-specific binding of specific transcripts. The obtained chip characteristics, in combination with the sensitivity-corrected probe-intensity values, provide expression estimates scaled in natural units, which are given by the binding constants of the particular hybridization.

  6. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    Powerful nondestructive characteristics are attracting more and more research into the use of computed tomography (CT) for dimensional metrology, which offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty severely limit the further utilization of CT for dimensional metrology, due to many factors, among which the beam hardening (BH) effect plays a vital role. This paper mainly focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained by a simple global threshold method. The proposed method is efficient, and especially suited to cases where there is a large difference in gray value between material and background. Different spheres with known diameters are used to verify the accuracy of the dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is of general feasibility.

  7. Interactive QR code beautification with full background image embedding

    NASA Astrophysics Data System (ADS)

    Lin, Lijian; Wu, Song; Liu, Sijiang; Jiang, Bo

    2017-06-01

    QR (Quick Response) code is a kind of two-dimensional barcode that was first developed in the automotive industry. Nowadays, QR codes are widely used in commercial applications such as product promotion, mobile payment, and product information management. Traditional QR codes in accordance with the international standard are reliable and fast to decode, but lack the aesthetic appearance needed to present visual information to customers. In this work, we present a novel interactive method to generate aesthetic QR codes. Given the information to be encoded and an image to be used as the full QR code background, our method accepts interactive user strokes as hints to remove undesired parts of QR code modules, based on the support of the QR code error correction mechanism and background color thresholds. Compared to previous approaches, our method follows the intention of the QR code designer and thus can achieve a result that is more pleasing to the user, while keeping high machine readability.

  8. Recovery of Background Structures in Nanoscale Helium Ion Microscope Imaging

    PubMed Central

    Carasso, Alfred S; Vladár, András E

    2014-01-01

    This paper discusses a two step enhancement technique applicable to noisy Helium Ion Microscope images in which background structures are not easily discernible due to a weak signal. The method is based on a preliminary adaptive histogram equalization, followed by ‘slow motion’ low-exponent Lévy fractional diffusion smoothing. This combined approach is unexpectedly effective, resulting in a companion enhanced image in which background structures are rendered much more visible, and noise is significantly reduced, all with minimal loss of image sharpness. The method also provides useful enhancements of scanning charged-particle microscopy images obtained by composing multiple drift-corrected ‘fast scan’ frames. The paper includes software routines, written in Interactive Data Language (IDL),1 that can perform the above image processing tasks. PMID:26601050

  9. The importance of atmospheric correction for airborne hyperspectral remote sensing of shallow waters: application to depth estimation

    NASA Astrophysics Data System (ADS)

    Castillo-López, Elena; Dominguez, Jose Antonio; Pereda, Raúl; de Luis, Julio Manuel; Pérez, Ruben; Piña, Felipe

    2017-10-01

    Accurate determination of water depth is indispensable in multiple aspects of civil engineering (dock construction, dikes, submarine outfalls, trench control, etc.). Different depth-estimation accuracies require different types of atmospheric correction. Accuracy in bathymetric information is highly dependent on the atmospheric correction applied to the imagery. The reduction of effects such as glint and cross-track illumination in homogeneous shallow-water areas improves the results of the depth estimations. The aim of this work is to assess the best atmospheric correction method for the estimation of depth in shallow waters, considering that reflectance values cannot be greater than 1.5% because otherwise the background would not be seen. This paper addresses the use of hyperspectral imagery for quantitative bathymetric mapping and explores one of the most common problems when attempting to extract depth information under conditions of variable water types and bottom reflectances. The current work assesses the accuracy of some classical bathymetric algorithms (Polcyn-Lyzenga, Philpot, Benny-Dawson, Hamilton, principal component analysis) when four different atmospheric correction methods are applied and water depth is derived. No atmospheric correction is valid for all types of coastal waters, but in heterogeneous shallow water the 6S atmospheric correction model offers good results.
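    As an illustration of the algorithm family being assessed, the sketch below implements a Lyzenga-style log-linear depth retrieval on atmospherically corrected reflectances; the deep-water reflectance, calibration points, and regression form are generic assumptions rather than the paper's exact implementation.

    ```python
    # Sketch of a classical Lyzenga-style log-linear depth retrieval:
    #   z = a0 + sum_i a_i * ln(R_i - R_inf,i)
    # fitted against calibration depths (all inputs are assumed/illustrative).
    import numpy as np

    def lyzenga_depth(reflectance, deep_water_reflectance, calib_idx, calib_depth):
        """reflectance: (pixels x bands); returns a depth estimate per pixel."""
        X = np.log(np.maximum(reflectance - deep_water_reflectance, 1e-6))
        A = np.column_stack([np.ones(len(calib_idx)), X[calib_idx]])
        coeffs, *_ = np.linalg.lstsq(A, calib_depth, rcond=None)
        return np.column_stack([np.ones(X.shape[0]), X]) @ coeffs
    ```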

  10. [Evaluation of Sugar Content of Huanghua Pear on Trees by Visible/Near Infrared Spectroscopy].

    PubMed

    Liu, Hui-jun; Ying, Yi-bin

    2015-11-01

    A method of ambient light correction was proposed to evaluate the sugar content of Huanghua pears on the tree by visible/near-infrared diffuse reflectance spectroscopy (Vis/NIRS). Due to strong interference from ambient light, it is difficult to collect usable spectra of pears on the tree. In the field, covering the fruit with a bag that blocks ambient light gives better results, but the efficiency is fairly low; instrument corrections using dark and reference spectra may help to reduce model error, but they cannot effectively eliminate the interference of ambient light. In order to reduce the effect of ambient light, a shutter was attached to the front of the probe. With the shutter open, spot spectra were obtained, on which the instrument light and ambient light acted simultaneously. With the shutter closed, background spectra were obtained, on which only ambient light acted; the ambient light spectrum was then subtracted from the spot spectrum. Prediction models were built by partial least squares (PLS) using data collected on the tree (before and after ambient light correction) and after harvesting. The corresponding correlation coefficients (R) are 0.1, 0.69 and 0.924; the root mean square errors of prediction (SEP) are 0.89 °Brix, 0.42 °Brix and 0.27 °Brix; and the ratios of standard deviation (SD) to SEP (RPD) are 0.79, 1.69 and 2.58, respectively. The results indicate that the background correction method used in the experiment can efficiently reduce the effect of ambient lighting on the spectral acquisition of Huanghua pears in the field. This method can be used to collect visible/near-infrared spectra of fruit in the field, and may allow visible/near-infrared spectroscopy to be fully exploited in preharvest management and maturity testing of fruit in the field.
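    The correction itself is a simple subtraction of the closed-shutter (ambient-only) spectrum from the open-shutter spectrum, after which a PLS model is calibrated on the corrected spectra. The sketch below uses scikit-learn's PLS implementation and an illustrative number of latent variables; neither is specified by the paper.

    ```python
    # Sketch of the shutter-based ambient light correction followed by PLS
    # calibration (scikit-learn PLS and n_components=8 are assumptions).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def ambient_corrected(spot_spectrum, background_spectrum):
        # open-shutter spectrum minus closed-shutter (ambient-only) spectrum
        return np.asarray(spot_spectrum, float) - np.asarray(background_spectrum, float)

    # e.g. calibrating sugar content (Brix) on a matrix of corrected spectra:
    # pls = PLSRegression(n_components=8).fit(corrected_spectra, brix_values)
    ```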

  11. Correlates of Condom Use among Male High School Students in Nairobi, Kenya

    ERIC Educational Resources Information Center

    Kabiru, Caroline W.; Orpinas, Pamela

    2009-01-01

    Background: Correct and consistent condom use is an effective strategy to reduce the risk of sexually transmitted infections (STIs). This study examines sociodemographic, behavioral, and psychosocial characteristics of 3 groups of adolescent males: consistent, sporadic, and non-condom users. Methods: The sample consisted of 931 sexually…

  12. Over-fitting Time Series Models of Air Pollution Health Effects: Smoothing Tends to Bias Non-Null Associations Towards the Null.

    EPA Science Inventory

    Background: Simulation studies have previously demonstrated that time-series analyses using smoothing splines correctly model null health-air pollution associations. Methods: We repeatedly simulated season, meteorology and air quality for the metropolitan area of Atlanta from cyc...

  13. Image Processing of Porous Silicon Microarray in Refractive Index Change Detection.

    PubMed

    Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi

    2017-06-08

    A new method for extracting dots from the reflected-light image of a porous silicon (PSi) microarray is proposed in this paper. The method consists of three parts: pretreatment, tilt correction and spot segmentation. First, based on the characteristics of the different components in HSV (Hue, Saturation, Value) space, a special pretreatment is proposed for the reflected-light image to obtain the contour edges of the array cells in the image. Second, using the geometric relationship of the target object between the initial external rectangle and the minimum bounding rectangle (MBR), a new tilt correction algorithm based on the MBR is proposed to adjust the image. Third, based on the specific requirements of reflected-light image segmentation, the array cells in the corrected image are segmented into dots that are as large as possible and equally spaced. Experimental results show that the pretreatment part of this method can effectively avoid the influence of complex backgrounds and complete the binarization of the image. The tilt correction algorithm has a shorter computation time, which makes it highly suitable for tilt correction of reflected-light images. The segmentation algorithm arranges the dots in a regular pattern and excludes the edges and bright spots. This method could be used for fast, accurate and automatic dot extraction from PSi microarray reflected-light images.
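    The tilt-correction step can be sketched with OpenCV's minimum-area rectangle, as shown below; the HSV pretreatment and spot segmentation are omitted, and the angle handling is a simplified assumption (OpenCV's angle convention differs between versions).

    ```python
    # Sketch of MBR-based tilt correction: estimate the array rotation from
    # the minimum bounding rectangle of the binarized array-cell pixels and
    # rotate the image to undo it (simplified; angle handling is approximate).
    import cv2
    import numpy as np

    def tilt_correct(binary_mask, gray_image):
        """binary_mask: uint8 mask of array cells; gray_image: image to rotate."""
        points = cv2.findNonZero(binary_mask)
        (cx, cy), (w, h), angle = cv2.minAreaRect(points)
        if w < h:                       # make the rectangle's long side horizontal
            angle += 90.0
        rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
        h_img, w_img = gray_image.shape[:2]
        return cv2.warpAffine(gray_image, rot, (w_img, h_img))
    ```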

  14. Image Processing of Porous Silicon Microarray in Refractive Index Change Detection

    PubMed Central

    Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi

    2017-01-01

    A new method for extracting dots from the reflected-light image of a porous silicon (PSi) microarray is proposed in this paper. The method consists of three parts: pretreatment, tilt correction and spot segmentation. First, based on the characteristics of the different components in HSV (Hue, Saturation, Value) space, a special pretreatment is proposed for the reflected-light image to obtain the contour edges of the array cells in the image. Second, using the geometric relationship of the target object between the initial external rectangle and the minimum bounding rectangle (MBR), a new tilt correction algorithm based on the MBR is proposed to adjust the image. Third, based on the specific requirements of reflected-light image segmentation, the array cells in the corrected image are segmented into dots that are as large as possible and equally spaced. Experimental results show that the pretreatment part of this method can effectively avoid the influence of complex backgrounds and complete the binarization of the image. The tilt correction algorithm has a shorter computation time, which makes it highly suitable for tilt correction of reflected-light images. The segmentation algorithm arranges the dots in a regular pattern and excludes the edges and bright spots. This method could be used for fast, accurate and automatic dot extraction from PSi microarray reflected-light images. PMID:28594383

  15. High resolution melting analysis: rapid and precise characterisation of recombinant influenza A genomes

    PubMed Central

    2013-01-01

    Background High resolution melting analysis (HRM) is a rapid and cost-effective technique for the characterisation of PCR amplicons. Because the reverse genetics of segmented influenza A viruses allows the generation of numerous influenza A virus reassortants within a short time, methods for the rapid selection of the correct recombinants are very useful. Methods PCR primer pairs covering the single nucleotide polymorphism (SNP) positions of two different influenza A H5N1 strains were designed. Reassortants of the two different H5N1 isolates were used as a model to prove the suitability of HRM for the selection of the correct recombinants. Furthermore, two different cycler instruments were compared. Results Both cycler instruments generated comparable average melting peaks, which allowed the easy identification and selection of the correct cloned segments or reassorted viruses. Conclusions HRM is a highly suitable method for the rapid and precise characterisation of cloned influenza A genomes. PMID:24028349

  16. The Impacts of Heating Strategy on Soil Moisture Estimation Using Actively Heated Fiber Optics.

    PubMed

    Dong, Jianzhi; Agliata, Rosa; Steele-Dunne, Susan; Hoes, Olivier; Bogaard, Thom; Greco, Roberto; van de Giesen, Nick

    2017-09-13

    Several recent studies have highlighted the potential of Actively Heated Fiber Optics (AHFO) for high resolution soil moisture mapping. In AHFO, the soil moisture can be calculated from the cumulative temperature (Tcum), the maximum temperature (Tmax), or the soil thermal conductivity determined from the cooling phase after heating (λ). This study investigates the performance of the Tcum, Tmax and λ methods for different heating strategies, i.e., differences in the duration and input power of the applied heat pulse. The aim is to compare the three approaches and to determine which is best suited to field applications where the power supply is limited. Results show that increasing the input power of the heat pulses makes it easier to differentiate between dry and wet soil conditions, which leads to an improved accuracy. Results suggest that if the power supply is limited, the heating strength is insufficient for the λ method to yield accurate estimates. Generally, the Tcum and Tmax methods have similar accuracy. If the input power is limited, increasing the heat pulse duration can improve the accuracy of the AHFO method for both of these techniques. In particular, extending the heating duration can significantly increase the sensitivity of Tcum to soil moisture. Hence, the Tcum method is recommended when the input power is limited. Finally, results also show that up to 50% of the cable temperature change during the heat pulse can be attributed to soil background temperature, i.e., soil temperature changed by the net solar radiation. A method is proposed to correct this background temperature change. Without correction, soil moisture information can be completely masked by the background temperature error.
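    A minimal sketch of the Tcum calculation with a background-temperature correction is given below; subtracting the drift of an unheated reference (or the extrapolated pre-pulse trend) is one simple way to remove the solar-driven background change, and is an assumption rather than necessarily the authors' procedure.

    ```python
    # Sketch: cumulative temperature (Tcum) during the heat pulse, with the
    # background (solar-driven) temperature drift removed by subtracting an
    # unheated reference trace (an assumed, generic correction).
    import numpy as np

    def cumulative_temperature(t, T_heated, T_reference):
        """t: time stamps (s); T_heated, T_reference: cable temperatures (degC)."""
        rise = (T_heated - T_heated[0]) - (T_reference - T_reference[0])
        return np.trapz(rise, t)   # degC * s, to be related to soil moisture
    ```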

  17. The Impacts of Heating Strategy on Soil Moisture Estimation Using Actively Heated Fiber Optics

    PubMed Central

    Dong, Jianzhi; Agliata, Rosa; Steele-Dunne, Susan; Hoes, Olivier; Bogaard, Thom; Greco, Roberto; van de Giesen, Nick

    2017-01-01

    Several recent studies have highlighted the potential of Actively Heated Fiber Optics (AHFO) for high resolution soil moisture mapping. In AHFO, the soil moisture can be calculated from the cumulative temperature (Tcum), the maximum temperature (Tmax), or the soil thermal conductivity determined from the cooling phase after heating (λ). This study investigates the performance of the Tcum, Tmax and λ methods for different heating strategies, i.e., differences in the duration and input power of the applied heat pulse. The aim is to compare the three approaches and to determine which is best suited to field applications where the power supply is limited. Results show that increasing the input power of the heat pulses makes it easier to differentiate between dry and wet soil conditions, which leads to an improved accuracy. Results suggest that if the power supply is limited, the heating strength is insufficient for the λ method to yield accurate estimates. Generally, the Tcum and Tmax methods have similar accuracy. If the input power is limited, increasing the heat pulse duration can improve the accuracy of the AHFO method for both of these techniques. In particular, extending the heating duration can significantly increase the sensitivity of Tcum to soil moisture. Hence, the Tcum method is recommended when the input power is limited. Finally, results also show that up to 50% of the cable temperature change during the heat pulse can be attributed to soil background temperature, i.e., soil temperature changed by the net solar radiation. A method is proposed to correct this background temperature change. Without correction, soil moisture information can be completely masked by the background temperature error. PMID:28902141

  18. Ice Cores Dating With a New Inverse Method Taking Account of the Flow Modeling Errors

    NASA Astrophysics Data System (ADS)

    Lemieux-Dudon, B.; Parrenin, F.; Blayo, E.

    2007-12-01

    Deep ice cores extracted from Antarctica or Greenland recorded a wide range of past climatic events. In order to contribute to the understanding of the Quaternary climate system, the calculation of an accurate depth-age relationship is a crucial point. Up to now, ice chronologies for deep ice cores estimated with inverse approaches have been based on quite simplified ice-flow models that fail to reproduce flow irregularities and consequently to respect all available sets of age markers. We describe in this paper a new inverse method that takes the model uncertainty into account in order to circumvent the restrictions linked to the use of simplified flow models. This method uses first guesses of two physical flow quantities, the ice thinning function and the accumulation rate, and then identifies correction functions for both. We highlight two major benefits brought by this new method: first, the ability to respect a large set of observations and, as a consequence, the feasibility of estimating a synchronized common ice chronology for several cores at the same time. This inverse approach relies on a Bayesian framework. To respect the positivity constraint on the searched correction functions, we assume lognormal probability distributions for the background errors and also for one particular set of observation errors. We test this new inversion method on three cores simultaneously (the two EPICA cores, DC and DML, and the Vostok core) and we assimilate more than 150 observations (e.g., age markers, stratigraphic links). We analyze the sensitivity of the solution with respect to the background information, especially the prior error covariance matrix. Confidence intervals based on the posterior covariance matrix are estimated for the correction functions and, for the first time, for the overall output chronologies.

  19. A simple multi-scale Gaussian smoothing-based strategy for automatic chromatographic peak extraction.

    PubMed

    Fu, Hai-Yan; Guo, Jun-Wei; Yu, Yong-Jie; Li, He-Dong; Cui, Hua-Peng; Liu, Ping-Ping; Wang, Bing; Wang, Sheng; Lu, Peng

    2016-06-24

    Peak detection is a critical step in chromatographic data analysis. In the present work, we developed a multi-scale Gaussian smoothing-based strategy for accurate peak extraction. The strategy consisted of three stages: background drift correction, peak detection, and peak filtration. Background drift correction was implemented using a moving window strategy. The new peak detection method is a variant of the system used by the well-known MassSpecWavelet, i.e., chromatographic peaks are found at local maximum values under various smoothing window scales. Therefore, peaks can be detected through the ridge lines of maximum values under these window scales, and signals that are monotonously increased/decreased around the peak position could be treated as part of the peak. Instrumental noise was estimated after peak elimination, and a peak filtration strategy was performed to remove peaks with signal-to-noise ratios smaller than 3. The performance of our method was evaluated using two complex datasets. These datasets include essential oil samples for quality control obtained from gas chromatography and tobacco plant samples for metabolic profiling analysis obtained from gas chromatography coupled with mass spectrometry. Results confirmed the reasonability of the developed method. Copyright © 2016 Elsevier B.V. All rights reserved.
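    A strongly simplified sketch of the peak-detection stage is given below: peak apexes are taken from the finest smoothing scale and retained only if a nearby local maximum persists at every coarser scale, then filtered at a signal-to-noise ratio of 3. The moving-window drift correction and the full ridge-line bookkeeping are omitted, and the scale set and noise estimate are illustrative.

    ```python
    # Simplified sketch of multi-scale Gaussian smoothing peak detection:
    # keep apexes that persist as local maxima across smoothing scales and
    # whose height exceeds 3x an estimated noise level.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.signal import argrelmax

    def detect_peaks(signal, scales=(2, 4, 8, 16), snr_min=3.0):
        signal = np.asarray(signal, dtype=float)
        candidates = argrelmax(gaussian_filter1d(signal, scales[0]), order=scales[0])[0]
        for s in scales[1:]:
            coarse = argrelmax(gaussian_filter1d(signal, s), order=s)[0]
            candidates = [c for c in candidates if np.any(np.abs(coarse - c) <= s)]
        candidates = np.asarray(candidates, dtype=int)
        noise = np.std(signal - gaussian_filter1d(signal, max(scales)))
        return candidates[signal[candidates] >= snr_min * noise]
    ```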

  20. Further Improvement of the RITS Code for Pulsed Neutron Bragg-edge Transmission Imaging

    NASA Astrophysics Data System (ADS)

    Sato, H.; Watanabe, K.; Kiyokawa, K.; Kiyanagi, R.; Hara, K. Y.; Kamiyama, T.; Furusaka, M.; Shinohara, T.; Kiyanagi, Y.

    The RITS code is a unique and powerful tool for whole-pattern fitting analysis of Bragg-edge transmission spectra. However, it has had two major problems, and we have proposed methods to overcome them. The first issue is the difference in the crystallite size values between the diffraction and Bragg-edge analyses. We found the reason to be a different definition of the crystal structure factor. It affects the crystallite size because the crystallite size is deduced from the primary extinction effect, which depends on the crystal structure factor. As a result of the algorithm change, crystallite sizes obtained by RITS came much closer to those obtained by Rietveld analyses of diffraction data, from 155% to 110%. The second issue is correction for the effect of background neutrons scattered from a specimen. Through neutron transport simulation studies, we found that the background consists of forward Bragg scattering, double backward Bragg scattering, and thermal diffuse scattering. RITS with the background correction function developed through these simulation studies could reconstruct various simulated and experimental transmission spectra well, but the refined crystalline microstructural parameters were often distorted. Finally, it is recommended to reduce the background by improving the experimental conditions.

  1. A method for the in vivo measurement of americium-241 at long times post-exposure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neton, J.W.

    1988-01-01

    This study investigated an improved method for the quantitative measurement, calibration and calculation of (241)Am organ burdens in humans. The techniques developed correct for cross-talk, or count-rate contributions from surrounding and adjacent organ burdens, and assure the proper assignment of activity to the lungs, liver and skeleton. In order to predict the net count-rates for the measurement geometries of the skull, liver and lung, a background prediction method was developed. This method utilizes data obtained from the measurement of a group of control subjects. Based on these data, a linear prediction equation was developed for each measurement geometry. In order to correct for the cross-contributions among the various deposition loci, a series of surrogate human phantom structures were measured. The results of measurements of (241)Am depositions in six exposure cases have been evaluated using these new techniques and indicate that lung burden estimates could be in error by as much as 100 percent when corrections are not made for contributions to the count-rate from other organs.

  2. Three dimensional topography correction applied to magnetotelluric data from Sikkim Himalayas

    NASA Astrophysics Data System (ADS)

    Kumar, Sushil; Patro, Prasanta K.; Chaudhary, B. S.

    2018-06-01

    The magnetotelluric (MT) method is a powerful tool for imaging the deep crust of mountainous regions such as the Himalayas. Topographic variations due to irregular surface terrain distort the resistivity curves and hence may lead to inaccurate interpretation of magnetotelluric data. The two-dimensional (2-D) topographic effect in the transverse magnetic (TM) mode is only galvanic, whereas it is inductive in the transverse electric (TE) mode; thus TM-mode responses are much more important than TE-mode responses in 2-D. In three dimensions (3-D), the topography effect is both galvanic and inductive in each element of the impedance tensor, and hence the interpretation is complicated. In the present work, we investigate the effects of 3-D topography for a hill model. This paper presents an impedance tensor correction algorithm to reduce the topographic effects in MT data. The distortion caused by surface topography is effectively decreased by using a homogeneous background resistivity in the impedance correction method. In this study, we analyze the responses of a ramp, of the distance from topographic edges, and of conductive and resistive dykes. The new correction method is applied to real data from the Sikkim Himalayas, which brings out the true nature of the basement in this region.

  3. Perturbative study of the QCD phase diagram for heavy quarks at nonzero chemical potential: Two-loop corrections

    NASA Astrophysics Data System (ADS)

    Maelger, J.; Reinosa, U.; Serreau, J.

    2018-04-01

    We extend a previous investigation [U. Reinosa et al., Phys. Rev. D 92, 025021 (2015), 10.1103/PhysRevD.92.025021] of the QCD phase diagram with heavy quarks in the context of background field methods by including the two-loop corrections to the background field effective potential. The nonperturbative dynamics in the pure-gauge sector is modeled by a phenomenological gluon mass term in the Landau-DeWitt gauge-fixed action, which results in an improved perturbative expansion. We investigate the phase diagram at nonzero temperature and (real or imaginary) chemical potential. Two-loop corrections yield an improved agreement with lattice data as compared to the leading-order results. We also compare with the results of nonperturbative continuum approaches. We further study the equation of state as well as the thermodynamic stability of the system at two-loop order. Finally, using simple thermodynamic arguments, we show that the behavior of the Polyakov loops as functions of the chemical potential complies with their interpretation in terms of quark and antiquark free energies.

  4. Comparison of cast materials for the treatment of congenital idiopathic clubfoot using the Ponseti method: a prospective randomized controlled trial

    PubMed Central

    Hui, Catherine; Joughin, Elaine; Nettel-Aguirre, Alberto; Goldstein, Simon; Harder, James; Kiefer, Gerhard; Parsons, David; Brauer, Carmen; Howard, Jason

    2014-01-01

    Background The Ponseti method of congenital idiopathic clubfoot correction has traditionally specified plaster of Paris (POP) as the cast material of choice; however, there are negative aspects to using POP. We sought to determine the influence of cast material (POP v. semirigid fibreglass [SRF]) on clubfoot correction using the Ponseti method. Methods Patients were randomized to POP or SRF before undergoing the Ponseti method. The primary outcome measure was the number of casts required for clubfoot correction. Secondary outcome measures included the number of casts by severity, ease of cast removal, need for Achilles tenotomy, brace compliance, deformity relapse, need for repeat casting and need for ancillary surgical procedures. Results We enrolled 30 patients: 12 randomized to POP and 18 to SRF. There was no difference in the number of casts required for clubfoot correction between the groups (p = 0.13). According to parents, removal of POP was more difficult (p < 0.001), more time consuming (p < 0.001) and required more than 1 method (p < 0.001). At a final follow-up of 30.8 months, the mean times to deformity relapse requiring repeat casting, surgery or both were 18.7 and 16.4 months for the SRF and POP groups, respectively. Conclusion There was no significant difference in the number of casts required for correction of clubfoot between the 2 materials, but SRF resulted in a more favourable parental experience, which cannot be ignored as it may have a positive impact on psychological well-being despite the increased cost associated. PMID:25078929

  5. Evaluation of scatter limitation correction: a new method of correcting photopenic artifacts caused by patient motion during whole-body PET/CT imaging.

    PubMed

    Miwa, Kenta; Umeda, Takuro; Murata, Taisuke; Wagatsuma, Kei; Miyaji, Noriaki; Terauchi, Takashi; Koizumi, Mitsuru; Sasaki, Masayuki

    2016-02-01

    Overcorrection of scatter caused by patient motion during whole-body PET/computed tomography (CT) imaging can induce the appearance of photopenic artifacts in the PET images. The present study aimed to quantify the accuracy of scatter limitation correction (SLC) for eliminating photopenic artifacts. This study analyzed photopenic artifacts in (18)F-fluorodeoxyglucose ((18)F-FDG) PET/CT images acquired from 12 patients and from a National Electrical Manufacturers Association phantom with two peripheral plastic bottles that simulated the human body and arms, respectively. The phantom comprised a sphere (diameter, 10 or 37 mm) containing fluorine-18 solutions with target-to-background ratios of 2, 4, and 8. The plastic bottles were moved 10 cm posteriorly between CT and PET acquisitions. All PET data were reconstructed using model-based scatter correction (SC), no scatter correction (NSC), and SLC, and the presence or absence of artifacts on the PET images was visually evaluated. The SC and SLC images were also semiquantitatively evaluated using standardized uptake values (SUVs). Photopenic artifacts were not recognizable in any NSC and SLC image from all 12 patients in the clinical study. The SUVmax of mismatched SLC PET/CT images were almost equal to those of matched SC and SLC PET/CT images. Applying NSC and SLC substantially eliminated the photopenic artifacts on SC PET images in the phantom study. SLC improved the activity concentration of the sphere for all target-to-background ratios. The highest %errors of the 10 and 37-mm spheres were 93.3 and 58.3%, respectively, for mismatched SC, and 73.2 and 22.0%, respectively, for mismatched SLC. Photopenic artifacts caused by SC error induced by CT and PET image misalignment were corrected using SLC, indicating that this method is useful and practical for clinical qualitative and quantitative PET/CT assessment.

  6. Efficient Semi-Automatic 3D Segmentation for Neuron Tracing in Electron Microscopy Images

    PubMed Central

    Jones, Cory; Liu, Ting; Cohan, Nathaniel Wood; Ellisman, Mark; Tasdizen, Tolga

    2015-01-01

    Background In the area of connectomics, there is a significant gap between the time required for data acquisition and that required for dense reconstruction of the neural processes contained in the same dataset. Automatic methods are able to eliminate this timing gap, but the state-of-the-art accuracy so far is insufficient for use without user corrections. If completed naively, this process of correction can be tedious and time consuming. New Method We present a new semi-automatic method that can be used to perform 3D segmentation of neurites in EM image stacks. It utilizes an automatic method that creates a hierarchical structure of recommended merges of superpixels. The user is then guided through each predicted region to quickly identify errors and establish correct links. Results We tested our method on three datasets with both novice and expert users. Accuracy and timing were compared with published automatic, semi-automatic, and manual results. Comparison with Existing Methods Post-automatic correction methods have also been used in [1] and [2]. These methods do not provide navigation or suggestions in the manner we present. Other semi-automatic methods, such as [3] and [4], require user input prior to the automatic segmentation and are inherently different from our method. Conclusion Using this method on the three datasets, novice users achieved accuracy exceeding state-of-the-art automatic results, and expert users achieved accuracy on par with full manual labeling but with a 70% time improvement compared with other examples in publication. PMID:25769273

  7. Formal concept analysis with background knowledge: a case study in paleobiological taxonomy of belemnites

    NASA Astrophysics Data System (ADS)

    Belohlavek, Radim; Kostak, Martin; Osicka, Petr

    2013-05-01

    We present a case study in identification of taxa in paleobiological data. Our approach utilizes formal concept analysis and is based on conceiving a taxon as a group of individuals sharing a collection of attributes. In addition to the incidence relation between individuals and their attributes, the method uses expert background knowledge regarding the importance of attributes, which helps to filter out correctly formed but paleobiologically irrelevant taxa. We present results of experiments carried out with belemnites, a group of extinct cephalopods which seems particularly suitable for such a purpose. We demonstrate that the methods are capable of revealing taxa and relationships among them that are relevant from a paleobiological point of view.

  8. Normalization, bias correction, and peak calling for ChIP-seq

    PubMed Central

    Diaz, Aaron; Park, Kiyoub; Lim, Daniel A.; Song, Jun S.

    2012-01-01

    Next-generation sequencing is rapidly transforming our ability to profile the transcriptional, genetic, and epigenetic states of a cell. In particular, sequencing DNA from the immunoprecipitation of protein-DNA complexes (ChIP-seq) and methylated DNA (MeDIP-seq) can reveal the locations of protein binding sites and epigenetic modifications. These approaches contain numerous biases which may significantly influence the interpretation of the resulting data. Rigorous computational methods for detecting and removing such biases are still lacking. Also, multi-sample normalization still remains an important open problem. This theoretical paper systematically characterizes the biases and properties of ChIP-seq data by comparing 62 separate publicly available datasets, using rigorous statistical models and signal processing techniques. Statistical methods for separating ChIP-seq signal from background noise, as well as correcting enrichment test statistics for sequence-dependent and sonication biases, are presented. Our method effectively separates reads into signal and background components prior to normalization, improving the signal-to-noise ratio. Moreover, most peak callers currently use a generic null model which suffers from low specificity at the sensitivity level requisite for detecting subtle, but true, ChIP enrichment. The proposed method of determining a cell type-specific null model, which accounts for cell type-specific biases, is shown to be capable of achieving a lower false discovery rate at a given significance threshold than current methods. PMID:22499706

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Namikawa, Toshiya

    We present here a new method for delensing B modes of the cosmic microwave background (CMB) using a lensing potential reconstructed from the same realization of the CMB polarization (CMB internal delensing). B-mode delensing is required to improve sensitivity to primary B modes generated by, e.g., the inflationary gravitational waves, axionlike particles, modified gravity, primordial magnetic fields, and topological defects such as cosmic strings. However, the CMB internal delensing suffers from substantial biases due to correlations between the observed CMB maps to be delensed and those used for reconstructing a lensing potential. Since the bias depends on realizations, we construct a realization-dependent (RD) estimator for correcting these biases by deriving a general optimal estimator for higher-order correlations. The RD method is less sensitive to simulation uncertainties. Compared to the previous ℓ-splitting method, we find that the RD method corrects the biases without substantial degradation of the delensing efficiency.

  10. Radiated BPF sound measurement of centrifugal compressor

    NASA Astrophysics Data System (ADS)

    Ohuchida, S.; Tanaka, K.

    2013-12-01

    A technique to measure the radiated BPF (blade passing frequency) sound from an automotive turbocharger compressor impeller is proposed in this paper. Where there is high-level background noise in the measurement environment, it is difficult to discriminate the target component from the background. Since the BPF sound measurement in this study was made in a room with such conditions, no discrete BPF peak was initially found in the sound spectrum. Taking its directionality into consideration, a microphone covered with a parabolic cone was selected, and using this technique the discrete BPF peak was clearly observed. Since the level of the measured sound is amplified by the area-integration effect, a correction was needed to obtain the real level. To do so, sound measurements with and without the parabolic cone were conducted for a fixed source, and their level differences were used as correction factors. Consideration is given to the sound propagation mechanism using the measured BPF as well as the result of a simple model experiment. The present method is generally applicable to sound measurements conducted with a high level of background noise.

  11. Correction of Multiple Canine Impactions by Mixed Straightwire and Cantilever Mechanics: A Case Report

    PubMed Central

    Iodice, Giorgio; d'Antò, Vincenzo; Riccitiello, Francesco; Pellegrino, Gioacchino; Valletta, Rosa

    2014-01-01

    Background. This case report describes the orthodontic treatment of a woman, aged 17 years, with a permanent dentition, brachyfacial typology, Angle Class I, with full impaction of two canines (13,33), and a severe ectopy of the maxillary left canine. Her main complaint was the position of the ectopic teeth. Methods. Straightwire fixed appliances, together with cantilever mechanics, were used to correct the impaired occlusion and to obtain an ideal torque control. Results and Conclusion. The treatment objectives were achieved in 26 months of treatment. The impactions were fully corrected with an optimal torque. The cantilever mechanics succeeded in obtaining tooth repositioning in a short period of time. After treatment, the dental alignment was stable. PMID:25140261

  12. Correction of multiple canine impactions by mixed straightwire and cantilever mechanics: a case report.

    PubMed

    Paduano, Sergio; Cioffi, Iacopo; Iodice, Giorgio; d'Antò, Vincenzo; Riccitiello, Francesco; Pellegrino, Gioacchino; Valletta, Rosa

    2014-01-01

    Background. This case report describes the orthodontic treatment of a woman, aged 17 years, with a permanent dentition, brachyfacial typology, Angle Class I, with full impaction of two canines (13,33), and a severe ectopy of the maxillary left canine. Her main complaint was the position of the ectopic teeth. Methods. Straightwire fixed appliances, together with cantilever mechanics, were used to correct the impaired occlusion and to obtain an ideal torque control. Results and Conclusion. The treatment objectives were achieved in 26 months of treatment. The impactions were fully corrected with an optimal torque. The cantilever mechanics succeeded in obtaining tooth repositioning in a short period of time. After treatment, the dental alignment was stable.

  13. Observation-Corrected Precipitation Estimates in GEOS-5

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf H.; Liu, Qing

    2014-01-01

    Several GEOS-5 applications, including the GEOS-5 seasonal forecasting system and the MERRA-Land data product, rely on global precipitation data that have been corrected with satellite- and/or gauge-based precipitation observations. This document describes the methodology used to generate the corrected precipitation estimates and their use in GEOS-5 applications. The corrected precipitation estimates are derived by disaggregating publicly available, observationally based, global precipitation products from daily or pentad totals to hourly accumulations using background precipitation estimates from the GEOS-5 atmospheric data assimilation system. Depending on the specific combination of the observational precipitation product and the GEOS-5 background estimates, the observational product may also be downscaled in space. The resulting corrected precipitation data product is at the finer temporal and spatial resolution of the GEOS-5 background and matches the observed precipitation at the coarser scale of the observational product, separately for each day (or pentad) and each grid cell.
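
    The disaggregation step described above amounts to rescaling the model's hourly background precipitation so that it sums to the observed daily (or pentad) total in each grid cell. The following is a minimal sketch of that rescaling idea with illustrative variable names, not the operational GEOS-5 code:

        import numpy as np

        def disaggregate_daily(observed_total, background_hourly):
            """Rescale hourly background precipitation to match an observed daily total."""
            background_hourly = np.asarray(background_hourly, dtype=float)
            bg_sum = background_hourly.sum()
            if bg_sum > 0.0:
                # Scale every hour by the ratio of observed to background daily totals.
                return background_hourly * (observed_total / bg_sum)
            # If the background day is completely dry, spread the observation uniformly.
            return np.full_like(background_hourly, observed_total / background_hourly.size)

        # Example: 24 hourly background values (mm) rescaled to match a 12 mm daily observation.
        corrected = disaggregate_daily(12.0, np.random.gamma(1.0, 0.3, size=24))
        assert abs(corrected.sum() - 12.0) < 1e-9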

  14. Robust recognition of degraded machine-printed characters using complementary similarity measure and error-correction learning

    NASA Astrophysics Data System (ADS)

    Hagita, Norihiro; Sawaki, Minako

    1995-03-01

    Most conventional methods in character recognition extract geometrical features such as stroke direction, connectivity of strokes, etc., and compare them with reference patterns in a stored dictionary. Unfortunately, geometrical features are easily degraded by blurs, stains and the graphical background designs used in Japanese newspaper headlines. This noise must be removed before recognition commences, but no preprocessing method is completely accurate. This paper proposes a method for recognizing degraded characters and characters printed on graphical background designs. This method is based on the binary image feature method and uses binary images as features. A new similarity measure, called the complementary similarity measure, is used as a discriminant function. It compares the similarity and dissimilarity of binary patterns with reference dictionary patterns. Experiments are conducted using the standard character database ETL-2, which consists of machine-printed Kanji, Hiragana, Katakana, alphanumeric, and special characters. The results show that this method is much more robust against noise than the conventional geometrical feature method. It also achieves high recognition rates of over 92% for characters with textured foregrounds, over 98% for characters with textured backgrounds, over 98% for outline fonts, and over 99% for reverse contrast characters.
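
    For reference, one published form of the complementary similarity measure for binary patterns is (a*d - b*c) / sqrt((a + c) * (b + d)), where a, b, c, d count pixels that are foreground or background in the input and reference patterns. The sketch below assumes this form, which may differ from the paper's exact normalization:

        import numpy as np

        def complementary_similarity(pattern, reference):
            """Complementary similarity between two binary images (common published form)."""
            f = np.asarray(pattern, dtype=bool).ravel()
            t = np.asarray(reference, dtype=bool).ravel()
            a = np.sum(f & t)        # foreground in both
            b = np.sum(f & ~t)       # foreground only in the input
            c = np.sum(~f & t)       # foreground only in the reference
            d = np.sum(~f & ~t)      # background in both
            denom = np.sqrt(float((a + c) * (b + d)))
            return (a * d - b * c) / denom if denom > 0 else 0.0

        # Recognition assigns the dictionary pattern giving the largest similarity.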

  15. Relativistic electron plasma oscillations in an inhomogeneous ion background

    NASA Astrophysics Data System (ADS)

    Karmakar, Mithun; Maity, Chandan; Chakrabarti, Nikhil

    2018-06-01

    The combined effect of relativistic electron mass variation and background ion inhomogeneity on the phase mixing process of large amplitude electron oscillations in cold plasmas has been analyzed by using Lagrangian coordinates. An inhomogeneity in the ion density is assumed to be time-independent but spatially periodic, and a periodic perturbation in the electron density is considered as well. An approximate space-time dependent solution is obtained in the weakly-relativistic limit by employing the Bogolyubov and Krylov method of averaging. It is shown that the phase mixing process of relativistically corrected electron oscillations is strongly influenced by the presence of a pre-existing ion density ripple in the plasma background.

  16. High-energy electrons from the muon decay in orbit: Radiative corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szafron, Robert; Czarnecki, Andrzej

    2015-12-07

    We determine the O(α) correction to the energy spectrum of electrons produced in the decay of muons bound in atoms. We focus on the high-energy end of the spectrum that constitutes a background for the muon-electron conversion and will be precisely measured by the upcoming experiments Mu2e and COMET. As a result, the correction suppresses the background by about 20%.

  17. Increasing the Efficiency of Electron Microprobe Measurements of Minor and Trace Elements in Rutile

    NASA Astrophysics Data System (ADS)

    Neill, O. K.; Mattinson, C. G.; Donovan, J.; Hernández Uribe, D.; Sains, A.

    2016-12-01

    Minor and trace element contents of rutile, an accessory mineral found in numerous lithologic settings, have many applications for interpreting earth systems. While these applications vary widely, they share a need for precise and accurate elemental measurements. The electron microprobe can be used to measure rutile compositions, although long X-ray counting times are necessary to achieve acceptable precision. Continuum ("background") intensity can be estimated using the iterative Mean Atomic Number (MAN) method of Donovan and Tingle (1996), obviating the need for direct off-peak background measurements, and reducing counting times by half. For this study, several natural and synthetic rutiles were measured by electron microprobe. Data were collected once but reduced twice, using off-peak and MAN background corrections, allowing direct comparison of the two methods without influence of other variables (counting time, analyte homogeneity, beam current, calibration standards, etc.). These measurements show that, if a "blank" correction (Donovan et al., 2011, 2016) is used, minor and trace elements of interest can be measured in rutile using the MAN background method in half the time of traditional off-peak measurements, without sacrificing accuracy or precision (Figure 1). This method has already been applied to Zr-in-rutile thermometry of ultra-high pressure metamorphic rocks from the North Qaidam terrane in northwest China. Finally, secondary fluorescence of adjacent phases by continuum X-rays can lead to artificially elevated concentrations. For example, when measuring Zr, care should be taken to avoid analytical spots within 100 microns of zircon or baddeleyite crystals. References: 1) J.J. Donovan and T.N. Tingle (1996) J. Microscopy, 2(1), 1-7 2) J.J. Donovan, H.A. Lowers, and B.G. Rusk (2011) Am. Mineral., 96, 274-282 3) J.J. Donovan, J.W. Singer and J.T. Armstrong (2016) Am. Mineral., 101, 1839-1853 4) G.L. Lovizotto et al. (2009) Chem. Geol., 261, 346-369
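
    A hedged sketch of the MAN idea follows: calibrate continuum (background) intensity against mean atomic number on standards, then predict the background for an unknown from its own mean atomic number instead of measuring off-peak positions. The quadratic fit, the units, and the iteration needed when the composition is not known in advance are simplifications, not the published procedure:

        import numpy as np

        def fit_man_curve(standard_mean_z, standard_background_cps):
            """Quadratic calibration of continuum intensity versus mean atomic number."""
            return np.polyfit(standard_mean_z, standard_background_cps, deg=2)

        def predicted_background(coeffs, mean_z_unknown):
            """Background predicted from the unknown's mean atomic number."""
            return np.polyval(coeffs, mean_z_unknown)

        # net_peak_cps = measured_peak_cps - predicted_background(coeffs, mean_z_sample)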

  18. Efficient genomic correction methods in human iPS cells using CRISPR-Cas9 system.

    PubMed

    Li, Hongmei Lisa; Gee, Peter; Ishida, Kentaro; Hotta, Akitsu

    2016-05-15

    Precise gene correction using the CRISPR-Cas9 system in human iPS cells holds great promise for various applications, such as the study of gene functions, disease modeling, and gene therapy. In this review article, we summarize methods for effective editing of genomic sequences of iPS cells based on our experiences correcting dystrophin gene mutations with the CRISPR-Cas9 system. Designing specific sgRNAs as well as having efficient transfection methods and proper detection assays to assess genomic cleavage activities are critical for successful genome editing in iPS cells. In addition, because iPS cells are fragile by nature when dissociated into single cells, a step-by-step confirmation during the cell recovery process is recommended to obtain an adequate number of genome-edited iPS cell clones. We hope that the techniques described here will be useful for researchers from diverse backgrounds who would like to perform genome editing in iPS cells. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Clock genes explain large proportion of phenotypic variance in systolic blood pressure and this control is not modified by environmental temperature

    USDA-ARS?s Scientific Manuscript database

    BACKGROUND: Diurnal variation in blood pressure (BP) is regulated, in part, by an endogenous circadian clock; however, few human studies have identified associations between clock genes and BP. Accounting for environmental temperature may be necessary to correct for seasonal bias. METHODS: We examin...

  20. Optimization of cDNA microarrays procedures using criteria that do not rely on external standards.

    PubMed

    Bruland, Torunn; Anderssen, Endre; Doseth, Berit; Bergum, Hallgeir; Beisvag, Vidar; Laegreid, Astrid

    2007-10-18

    The measurement of gene expression using microarray technology is a complicated process in which a large number of factors can be varied. Due to the lack of standard calibration samples such as are used in traditional chemical analysis it may be a problem to evaluate whether changes done to the microarray procedure actually improve the identification of truly differentially expressed genes. The purpose of the present work is to report the optimization of several steps in the microarray process both in laboratory practices and in data processing using criteria that do not rely on external standards. We performed a cDNA microarray experiment including RNA from samples with high expected differential gene expression termed "high contrasts" (rat cell lines AR42J and NRK52E) compared to self-self hybridization, and optimized a pipeline to maximize the number of genes found to be differentially expressed in the "high contrasts" RNA samples by estimating the false discovery rate (FDR) using a null distribution obtained from the self-self experiment. The proposed high-contrast versus self-self method (HCSSM) requires only four microarrays per evaluation. The effects of blocking reagent dose, filtering, and background correction methodologies were investigated. In our experiments a dose of 250 ng LNA (locked nucleic acid) dT blocker, no background correction and weight-based filtering gave the largest number of differentially expressed genes. The choice of background correction method had a stronger impact on the estimated number of differentially expressed genes than the choice of filtering method. Cross-platform microarray (Illumina) analysis was used to validate that the increase in the number of differentially expressed genes found by HCSSM was real. The results show that HCSSM can be a useful and simple approach to optimize microarray procedures without including external standards. Our optimizing method is highly applicable to both long oligo-probe microarrays which have become commonly used for well-characterized organisms such as man, mouse and rat, as well as to cDNA microarrays which are still of importance for organisms with incomplete genome sequence information such as many bacteria, plants and fish.
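
    The high-contrast versus self-self idea can be illustrated with a small sketch: treat the self-self statistics as an empirical null distribution and count how many high-contrast genes can be declared differentially expressed at a chosen FDR. The thresholding details below are assumptions for illustration, not the authors' exact pipeline:

        import numpy as np

        def genes_passing_fdr(contrast_stats, selfself_stats, fdr_target=0.05):
            """Largest number of 'high contrast' genes callable at the target FDR."""
            contrast = np.sort(np.abs(np.asarray(contrast_stats)))[::-1]
            null = np.abs(np.asarray(selfself_stats))
            best = 0
            for k, threshold in enumerate(contrast, start=1):
                # Expected false positives: null exceedance fraction times the number of genes tested.
                expected_false = np.mean(null >= threshold) * len(contrast)
                if expected_false / k <= fdr_target:
                    best = k    # k genes exceed this threshold in the high-contrast data
            return best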

  1. An Advanced Method to Assess the Diet of Free-Ranging Large Carnivores Based on Scats

    PubMed Central

    Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P.; Jago, Mark; Hofer, Heribert

    2012-01-01

    Background The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Methodology/Principal Findings Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model where the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factor 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. Conclusion/Significance Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores. PMID:22715373
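
    Correction factor 1 can be illustrated with a short curve-fitting sketch. The saturating form y = a*(1 - exp(-b*x)) and the example numbers are assumptions chosen only to show the asymptotic behaviour described above; they are not the authors' fitted model or data:

        import numpy as np
        from scipy.optimize import curve_fit

        def saturating(x, a, b):
            """Consumed prey mass per scat rising to an asymptote a with prey body mass x."""
            return a * (1.0 - np.exp(-b * x))

        prey_body_mass = np.array([1.5, 4.0, 20.0, 45.0, 190.0])   # kg, illustrative only
        mass_per_scat = np.array([0.4, 0.9, 1.6, 1.9, 2.1])        # kg, illustrative only

        params, _ = curve_fit(saturating, prey_body_mass, mass_per_scat, p0=[2.0, 0.05])
        correction_factor_1 = saturating(60.0, *params)            # e.g. for a 60 kg prey species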

  2. Quantum Gravity Effects on Hawking Radiation of Schwarzschild-de Sitter Black Holes

    NASA Astrophysics Data System (ADS)

    Singh, T. Ibungochouba; Meitei, I. Ablu; Singh, K. Yugindro

    2017-08-01

    The correction of the Hawking temperature of the Schwarzschild-de Sitter (SdS) black hole is investigated using the generalized Klein-Gordon equation and the generalized Dirac equation, taking quantum gravity effects into account. We derive the corrected Hawking temperatures for scalar particles and fermions crossing the event horizon. The quantum gravity effects prevent the rise of temperature in the SdS black hole. Besides the correction of the Hawking temperature, the Hawking radiation of the SdS black hole is also investigated using the massive-particle tunneling method. By considering the self-gravitation effect of the emitted particles and taking the space-time background to be dynamical, it is also shown that the tunneling rate is related to the change in the Bekenstein-Hawking entropy and a small correction term (1 + 2βm²). If the energy and the angular momentum are taken to be conserved, the derived emission spectrum deviates from the pure thermal spectrum. This result gives a correction to the Hawking radiation and is also in agreement with the result of Parikh and Wilczek.

  3. The Classification of Tongue Colors with Standardized Acquisition and ICC Profile Correction in Traditional Chinese Medicine

    PubMed Central

    Tu, Li-ping; Chen, Jing-bo; Hu, Xiao-juan; Zhang, Zhi-feng

    2016-01-01

    Background and Goal. The application of digital image processing techniques and machine learning methods in tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied nowadays. However, it is difficult for the outcomes to generalize because of lack of color reproducibility and image standardization. Our study aims at the exploration of tongue colors classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures taken by the TDA-1 tongue imaging device in TIFF format through ICC profile correction. Then we compare the mean L*a*b* values of different tongue colors and evaluate the effect of the tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. Random forest method has a better performance than SVM in classification. SMOTE algorithm can increase classification accuracy by solving the imbalance of the varied color samples. Conclusions. At the premise of standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible. PMID:28050555

  4. The Classification of Tongue Colors with Standardized Acquisition and ICC Profile Correction in Traditional Chinese Medicine.

    PubMed

    Qi, Zhen; Tu, Li-Ping; Chen, Jing-Bo; Hu, Xiao-Juan; Xu, Jia-Tuo; Zhang, Zhi-Feng

    2016-01-01

    Background and Goal. The application of digital image processing techniques and machine learning methods in tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied nowadays. However, it is difficult for the outcomes to generalize because of lack of color reproducibility and image standardization. Our study aims at the exploration of tongue colors classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures taken by the TDA-1 tongue imaging device in TIFF format through ICC profile correction. Then we compare the mean L*a*b* values of different tongue colors and evaluate the effect of the tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. Random forest method has a better performance than SVM in classification. SMOTE algorithm can increase classification accuracy by solving the imbalance of the varied color samples. Conclusions. At the premise of standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible.
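
    A hedged sketch of the classification step described in both records: SMOTE oversampling of the minority tongue-color classes followed by a random forest on L*a*b* features. The library choices (scikit-learn, imbalanced-learn), the synthetic data, and all parameters are assumptions for illustration:

        import numpy as np
        from imblearn.over_sampling import SMOTE
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.random((300, 3)) * [100, 60, 60]                       # placeholder L*, a*, b* features
        y = rng.choice(5, size=300, p=[0.5, 0.2, 0.15, 0.1, 0.05])     # five imbalanced tongue-color classes

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
        X_bal, y_bal = SMOTE(k_neighbors=3, random_state=0).fit_resample(X_tr, y_tr)

        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
        print("held-out accuracy:", clf.score(X_te, y_te))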

  5. Survey of background scattering from materials found in small-angle neutron scattering.

    PubMed

    Barker, J G; Mildner, D F R

    2015-08-01

    Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300-700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed.

  6. Survey of background scattering from materials found in small-angle neutron scattering

    PubMed Central

    Barker, J. G.; Mildner, D. F. R.

    2015-01-01

    Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300–700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed. PMID:26306088

  7. Pavement crack detection combining non-negative feature with fast LoG in complex scene

    NASA Astrophysics Data System (ADS)

    Wang, Wanli; Zhang, Xiuhua; Hong, Hanyu

    2015-12-01

    Pavement crack detection is affected by many sources of interference in realistic situations, such as shadows, road signs, oil stains, and salt-and-pepper noise. Because of these unfavorable factors, existing crack detection methods have difficulty distinguishing cracks from the background. How to extract crack information effectively is the key problem for a road crack detection system. To solve this problem, a novel method for pavement crack detection that combines a non-negative feature with a fast LoG filter is proposed. The two key novelties of this approach are 1) using image pixel gray-value compensation to acquire a uniform image, and 2) combining the non-negative feature with the fast LoG to extract crack information. The image preprocessing results demonstrate that the method homogenizes the crack image more accurately than existing methods. A large number of experimental results demonstrate that the proposed approach detects crack regions more reliably than traditional methods.

  8. LWIR pupil imaging and longer-term calibration stability

    NASA Astrophysics Data System (ADS)

    LeVan, Paul D.; Sakoglu, Ünal

    2016-09-01

    A previous paper described LWIR pupil imaging and an improved understanding of the behavior of this type of sensor, in which the high-sensitivity focal plane array (FPA), when operated at higher flux levels, exhibits a reversal in signal integration polarity. We have since considered a candidate methodology for efficient, long-term calibration stability that exploits the following two properties of pupil imaging: (1) a fixed pupil position on the FPA, and (2) scene signal levels imposed on significant but fixed LWIR background levels. These two properties keep each pixel operating over a limited dynamic range that corresponds to its location in the pupil and to the signal levels generated at that location by the lower and upper calibration flux levels. Because each pixel of the pupil imager operates over this limited dynamic range, the signal polarity reversal between low- and high-flux pixels, which occurs for a circular region of pixels near the upper edges of the pupil illumination profile, can be rectified to unipolar integration with a two-level non-uniformity correction (NUC). Images corrected in real time with standard NUC techniques are still subject to longer-term drifts in pixel offsets between recalibrations. Long-term calibration stability might then be achieved using either a scene-based non-uniformity correction approach or periodic repointing for off-source background estimation and subtraction. Either approach requires dithering of the field of view, by sub-pixel amounts for the first method or by large off-source motions outside the 0.38 milliradian FOV for the latter. We report on the results of investigations along both these lines.
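
    A two-level NUC of the kind mentioned above can be sketched generically: per-pixel gain and offset are derived from frames taken at a low and a high calibration flux and then applied to scene frames. This is the textbook two-point correction, not the specific rectification scheme of the pupil imager:

        import numpy as np

        def two_point_nuc(low_frame, high_frame, low_level, high_level):
            """Per-pixel gain and offset from two uniform calibration flux levels."""
            gain = (high_level - low_level) / (high_frame - low_frame)
            offset = low_level - gain * low_frame
            return gain, offset

        def apply_nuc(frame, gain, offset):
            return gain * frame + offset      # map each pixel's response onto a common scale

        # Synthetic example: recover per-pixel gain/offset from two calibration frames.
        rng = np.random.default_rng(0)
        true_gain = rng.uniform(0.8, 1.2, (4, 4))
        true_offset = rng.uniform(-5.0, 5.0, (4, 4))
        low = (20.0 - true_offset) / true_gain
        high = (200.0 - true_offset) / true_gain
        gain, offset = two_point_nuc(low, high, 20.0, 200.0)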

  9. Natural Language Processing As an Alternative to Manual Reporting of Colonoscopy Quality Metrics

    PubMed Central

    RAJU, GOTTUMUKKALA S.; LUM, PHILLIP J.; SLACK, REBECCA; THIRUMURTHI, SELVI; LYNCH, PATRICK M.; MILLER, ETHAN; WESTON, BRIAN R.; DAVILA, MARTA L.; BHUTANI, MANOOP S.; SHAFI, MEHNAZ A.; BRESALIER, ROBERT S.; DEKOVICH, ALEXANDER A.; LEE, JEFFREY H.; GUHA, SUSHOVAN; PANDE, MALA; BLECHACZ, BORIS; RASHID, ASIF; ROUTBORT, MARK; SHUTTLESWORTH, GLADIS; MISHRA, LOPA; STROEHLEIN, JOHN R.; ROSS, WILLIAM A.

    2015-01-01

    BACKGROUND & AIMS The adenoma detection rate (ADR) is a quality metric tied to interval colon cancer occurrence. However, manual extraction of data to calculate and track the ADR in clinical practice is labor-intensive. To overcome this difficulty, we developed a natural language processing (NLP) method to identify patients who underwent their first screening colonoscopy and to identify adenomas and sessile serrated adenomas (SSAs). We compared the NLP-generated results with those of manual data extraction to test the accuracy of NLP, and report on colonoscopy quality metrics using NLP. METHODS Identification of screening colonoscopies using NLP was compared with that using the manual method for 12,748 patients who underwent colonoscopies from July 2010 to February 2013. Also, identification of adenomas and SSAs using NLP was compared with that using the manual method with 2259 matched patient records. Colonoscopy ADRs using these methods were generated for each physician. RESULTS NLP correctly identified 91.3% of the screening examinations, whereas the manual method identified 87.8% of them. Both the manual method and NLP correctly identified examinations of patients with adenomas and SSAs in the matched records almost perfectly. Both NLP and the manual method produced comparable ADR values for each endoscopist as well as for the group as a whole. CONCLUSIONS NLP can correctly identify screening colonoscopies, accurately identify adenomas and SSAs in a pathology database, and provide real-time quality metrics for colonoscopy. PMID:25910665

  10. Photometric correction for an optical CCD-based system based on the sparsity of an eight-neighborhood gray gradient.

    PubMed

    Zhang, Yuzhong; Zhang, Yan

    2016-07-01

    In an optical measurement and analysis system based on a CCD, photometric distortion due to optical and natural vignetting, in which the intensity falls off away from the image center, severely affects subsequent processing and measurement precision. To deal with this problem, a simple and straightforward method for photometric distortion correction is presented in this paper. The method introduces a simple polynomial model of the photometric distortion function and employs a particle swarm optimization algorithm to estimate the model parameters by minimizing the eight-neighborhood gray gradient. Compared with conventional calibration methods, this method can obtain the profile of the photometric distortion from a single ordinary image captured by the CCD-based system, with no need for a uniform-luminance area source as a standard reference or for prior knowledge of the relevant optical and geometric parameters. To illustrate its applicability, numerical simulations and photometric distortions obtained with different lens parameters are evaluated with the method. Moreover, an application example of temperature field correction for casting billets also demonstrates its effectiveness. The experimental results show that the proposed method achieves a maximum absolute error for vignetting estimation of 0.0765 and a relative error for vignetting estimation from different background images of 3.86%.
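
    A hedged sketch of the correction idea: model the vignetting as an even radial polynomial, divide it out of the image, and choose coefficients that minimize the mean eight-neighborhood gray-level difference of the corrected image. A generic Nelder-Mead optimizer stands in here for the particle swarm optimization used in the paper, and the three-term polynomial is an assumption:

        import numpy as np
        from scipy.optimize import minimize

        def vignetting(shape, coeffs):
            """Even radial polynomial model of the relative intensity falloff."""
            h, w = shape
            yy, xx = np.mgrid[0:h, 0:w]
            r2 = ((xx - w / 2.0) ** 2 + (yy - h / 2.0) ** 2) / ((w / 2.0) ** 2 + (h / 2.0) ** 2)
            a1, a2, a3 = coeffs
            v = 1.0 + a1 * r2 + a2 * r2 ** 2 + a3 * r2 ** 3
            return np.clip(v, 1e-3, None)     # keep the model strictly positive

        def eight_neighbourhood_gradient(img):
            """Mean absolute gray-level difference over the eight neighbours."""
            cost = 0.0
            for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]:
                cost += np.abs(img - np.roll(np.roll(img, dy, axis=0), dx, axis=1)).mean()
            return cost / 8.0

        def correct_vignetting(image):
            def cost(c):
                return eight_neighbourhood_gradient(image / vignetting(image.shape, c))
            res = minimize(cost, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
            return image / vignetting(image.shape, res.x)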

  11. A novel baseline correction method using convex optimization framework in laser-induced breakdown spectroscopy quantitative analysis

    NASA Astrophysics Data System (ADS)

    Yi, Cancan; Lv, Yong; Xiao, Han; Ke, Ke; Yu, Xun

    2017-12-01

    In quantitative analysis with laser-induced breakdown spectroscopy (LIBS), baseline correction is an essential part of data preprocessing. Baseline drift, which is widespread in practice, is generated by fluctuations in laser energy, inhomogeneity of sample surfaces, and background noise, and it has attracted the interest of many researchers. Most prevalent algorithms require key parameters to be preset, such as a suitable spline function and the fitting order, and thus lack adaptability. Based on characteristics of LIBS spectra, namely the sparsity of spectral peaks and the low-pass character of the baseline, a novel baseline correction and spectral denoising method is studied in this paper. The technique uses a convex optimization scheme to form a non-parametric baseline correction model. Meanwhile, an asymmetric penalty function is used to enhance the signal-to-noise ratio (SNR) of the LIBS signal and improve reconstruction precision. Furthermore, an efficient iterative algorithm is applied to the optimization process to ensure convergence. To validate the proposed method, the concentrations of chromium (Cr), manganese (Mn), and nickel (Ni) in 23 certified high-alloy steel samples are assessed using quantitative models based on Partial Least Squares (PLS) and Support Vector Machine (SVM). Because it requires no prior knowledge of sample composition and no restrictive mathematical hypotheses, the proposed method achieves better accuracy in quantitative analysis than the other methods compared and demonstrates its adaptability.
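
    As a simple stand-in for the convex-optimization model with an asymmetric penalty, the sketch below uses asymmetric least squares baseline estimation in the style of Eilers and Boelens; the smoothness parameter lam and asymmetry parameter p are assumed tuning values, not those of the paper:

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import spsolve

        def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
            """Asymmetric least squares baseline estimate of a 1-D spectrum."""
            n = len(y)
            D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(n, n - 2))
            w = np.ones(n)
            for _ in range(n_iter):
                W = sparse.spdiags(w, 0, n, n)
                z = spsolve((W + lam * D.dot(D.T)).tocsc(), w * y)   # smooth baseline estimate
                # Asymmetric weights: points above the baseline (peaks) count far less.
                w = p * (y > z) + (1.0 - p) * (y <= z)
            return z

        # corrected_spectrum = spectrum - asls_baseline(spectrum)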

  12. Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong

    2018-04-01

    The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.

  13. Quantitatively accurate activity measurements with a dedicated cardiac SPECT camera: Physical phantom experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pourmoghaddas, Amir, E-mail: apour@ottawaheart.ca; Wells, R. Glenn

    Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail—an increased number of unscattered photons detected with reduced energy. Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to 99mTc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 99mTc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical 99mTc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic “lesion” was placed inside of the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.). A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE Healthcare), followed by a CT scan for attenuation correction (AC). For each experiment, separate images were created including reconstruction with no corrections (NC), with AC, with attenuation and dual-energy window (DEW) scatter correction (ACSC), with attenuation and partial volume correction (PVC) applied (ACPVC), and with attenuation, scatter, and PVC applied (ACSCPVC). The DEW SC method used was modified to account for the presence of the low-energy tail. Results: T-tests showed that the mean error in absolute activity measurement was reduced significantly for AC and ACSC compared to NC for both (hot and cold) datasets (p < 0.001) and that ACSC, ACPVC, and ACSCPVC show significant reductions in mean differences compared to AC (p ≤ 0.001) without increasing the uncertainty (p > 0.4). The effect of SC and PVC was significant in reducing errors over AC in both datasets (p < 0.001 and p < 0.01, respectively), resulting in a mean error of 5% ± 4%. Conclusions: Quantitative measurements of cardiac 99mTc activity are achievable using attenuation and scatter corrections with the authors’ dedicated cardiac SPECT camera. Partial volume corrections offer improvements in measurement accuracy in AC images and ACSC images with elevated background activity; however, these improvements are not significant in ACSC images with low background activity.
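
    The DEW scatter estimate referred to above can be sketched generically: counts in a lower scatter window, scaled by a calibration factor and the window-width ratio, approximate the scatter contribution in the photopeak window. The factor k below is purely illustrative, and the authors' modification for the CZT low-energy tail is not reproduced here:

        import numpy as np

        def dew_scatter_correction(photopeak_counts, scatter_counts,
                                   photopeak_width_kev, scatter_width_kev, k=0.5):
            """Subtract a dual-energy-window scatter estimate from photopeak data."""
            scatter_estimate = k * scatter_counts * (photopeak_width_kev / scatter_width_kev)
            primary = photopeak_counts - scatter_estimate
            return np.clip(primary, 0.0, None)        # avoid negative counts

        # Projection images (2-D arrays) for the two energy windows can be passed in directly.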

  14. Corrective responses in human food intake identified from an analysis of 7-d food-intake records

    PubMed Central

    Bray, George A; Flatt, Jean-Pierre; Volaufova, Julia; DeLany, James P; Champagne, Catherine M

    2009-01-01

    Background We tested the hypothesis that ad libitum food intake shows corrective responses over periods of 1–5 d. Design This was a prospective study of food intake in women. Methods Two methods, a weighed food intake and a measured food intake, were used to determine daily nutrient intake during 2 wk in 20 women. Energy expenditure with the use of doubly labeled water was done contemporaneously with the weighed food-intake record. The daily deviations in macronutrient and energy intake from the average 7-d values were compared with the deviations observed 1, 2, 3, 4, and 5 d later to estimate the corrective responses. Results Both methods of recording food intake gave similar patterns of macronutrient and total energy intakes and for deviations from average intakes. The intraindividual CVs for energy intake ranged from ±12% to ±47% with an average of ±25%. Reported energy intake was 85.5–95.0% of total energy expenditure determined by doubly labeled water. Significant corrective responses were observed in food intakes with a 3- to 4-d lag that disappeared when data were randomized within each subject. Conclusions Human beings show corrective responses to deviations from average energy and macronutrient intakes with a lag time of 3–4 d, but not 1–2 d. This suggests that short-term studies may fail to recognize important signals of food-intake regulation that operate over several days. These corrective responses probably play a crucial role in bringing about weight stability. PMID:19064509

  15. Application of phasor plot and autofluorescence correction for study of heterogeneous cell population

    PubMed Central

    Szmacinski, Henryk; Toshchakov, Vladimir; Lakowicz, Joseph R.

    2014-01-01

    Abstract. Protein-protein interactions in cells are often studied using fluorescence resonance energy transfer (FRET) phenomenon by fluorescence lifetime imaging microscopy (FLIM). Here, we demonstrate approaches to the quantitative analysis of FRET in cell population in a case complicated by a highly heterogeneous donor expression, multiexponential donor lifetime, large contribution of cell autofluorescence, and significant presence of unquenched donor molecules that do not interact with the acceptor due to low affinity of donor-acceptor binding. We applied a multifrequency phasor plot to visualize FRET FLIM data, developed a method for lifetime background correction, and performed a detailed time-resolved analysis using a biexponential model. These approaches were applied to study the interaction between the Toll Interleukin-1 receptor (TIR) domain of Toll-like receptor 4 (TLR4) and the decoy peptide 4BB. TLR4 was fused to Cerulean fluorescent protein (Cer) and 4BB peptide was labeled with Bodipy TMRX (BTX). Phasor displays for multifrequency FLIM data are presented. The analytical procedure for lifetime background correction is described and the effect of correction on FLIM data is demonstrated. The absolute FRET efficiency was determined based on the phasor plot display and multifrequency FLIM data analysis. The binding affinity between TLR4-Cer (donor) and decoy peptide 4BB-BTX (acceptor) was estimated in a heterogeneous HeLa cell population. PMID:24770662
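
    For readers unfamiliar with phasor plots, the mapping is compact enough to sketch: each pixel's modulation m and phase phi at angular frequency omega give phasor coordinates g = m*cos(phi) and s = m*sin(phi), and a single-exponential lifetime tau falls on the universal semicircle. This is the standard frequency-domain definition, not code from the paper:

        import numpy as np

        def phasor_from_phase_modulation(phi, m):
            """Phasor coordinates (g, s) from measured phase and modulation."""
            return m * np.cos(phi), m * np.sin(phi)

        def phasor_from_lifetime(tau_ns, freq_mhz):
            """Position of a single-exponential lifetime on the universal semicircle."""
            omega_tau = 2.0 * np.pi * freq_mhz * 1e6 * tau_ns * 1e-9
            g = 1.0 / (1.0 + omega_tau ** 2)
            return g, omega_tau * g

        # Example: a 3 ns donor observed at an 80 MHz modulation frequency.
        g, s = phasor_from_lifetime(3.0, 80.0)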

  16. Review of approaches to the recording of background lesions in toxicologic pathology studies in rats.

    PubMed

    McInnes, E F; Scudamore, C L

    2014-08-17

    Pathological evaluation of lesions caused directly by xenobiotic treatment must always take into account the recognition of background (incidental) findings. Background lesions can be congenital or hereditary, histological variations, changes related to trauma or normal aging and physiologic or hormonal changes. This review focuses on the importance and correct approach to recording of background changes and includes discussion on sources of variability in background changes, the correct use of terminology, the concept of thresholds, historical control data, diagnostic drift, blind reading of slides, scoring and artifacts. The review is illustrated with background lesions in Sprague Dawley and Wistar rats. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. How Effective Is the Cognitive Interview When Used with Adults with Intellectual Disabilities Specifically with Conversation Recall?

    ERIC Educational Resources Information Center

    Clarke, Jason; Prescott, Katherine; Milne, Rebecca

    2013-01-01

    Background: The cognitive interview (CI) has been shown to increase correct memory recall of a diverse range of participant types, without an increase in the number of incorrect or confabulated details. However, it has rarely been examined for use with adults with intellectual disability. Measures and Method: This study compared the memory recall…

  18. Effect of Maxillary Osteotomy on Speech in Cleft Lip and Palate: Perceptual Outcomes of Velopharyngeal Function

    ERIC Educational Resources Information Center

    Pereira, Valerie J.; Sell, Debbie; Tuomainen, Jyrki

    2013-01-01

    Background: Abnormal facial growth is a well-known sequelae of cleft lip and palate (CLP) resulting in maxillary retrusion and a class III malocclusion. In 10-50% of cases, surgical correction involving advancement of the maxilla typically by osteotomy methods is required and normally undertaken in adolescence when facial growth is complete.…

  19. Comparative Analysis of Combined (First Anterior, Then Posterior) Versus Only Posterior Approach for Treating Severe Scoliosis

    PubMed Central

    Hero, Nikša; Vengust, Rok; Topolovec, Matevž

    2017-01-01

    Study Design. A retrospective, single-center, institutional review board approved study. Objective. Two methods of operative treatment were compared to evaluate whether a two-stage approach is justified for correction of larger idiopathic scoliosis curves. Two-stage surgery combined an anterior approach in the first operation with posterior instrumentation and correction in the second operation; one-stage surgery included only posterior instrumentation and correction. Summary of Background Data. Studies comparing the two-stage and posterior-only approaches are rather scarce, with shorter follow-up and a lack of clinical data. Methods. Three hundred forty-eight patients with idiopathic scoliosis were operated on using Cotrel–Dubousset (CD) hybrid instrumentation with pedicle screws and hooks. Only patients with curvatures of 61° or more were analyzed and divided into two groups: two-stage surgery (N = 30) and one-stage surgery (N = 46). Radiographic parameters, as well as duration of operation, hospitalization time, number of segments included in fusion, and clinical outcome, were analyzed. Results. No statistically significant difference in correction was observed between the two-stage group (average correction 69%) and the posterior-only group (average correction 66%). However, there were statistically significant differences regarding hospitalization time, duration of surgery, and number of instrumented segments. Conclusion. Two-stage surgery has only a limited advantage in terms of postoperative correction angle compared with the posterior approach. Posterior instrumentation and correction alone is satisfactory, especially taking into account that the patient undergoes only one surgery. Level of Evidence: 3 PMID:28125525

  20. What is the lifetime risk of developing cancer?: the effect of adjusting for multiple primaries

    PubMed Central

    Sasieni, P D; Shelton, J; Ormiston-Smith, N; Thomson, C S; Silcocks, P B

    2011-01-01

    Background: The ‘lifetime risk' of cancer is generally estimated by combining current incidence rates with current all-cause mortality (‘current probability' method) rather than by describing the experience of a birth cohort. As individuals may get more than one type of cancer, what is generally estimated is the average (mean) number of cancers over a lifetime. This is not the same as the probability of getting cancer. Methods: We describe a method for estimating lifetime risk that corrects for the inclusion of multiple primary cancers in the incidence rates routinely published by cancer registries. The new method applies cancer incidence rates to the estimated probability of being alive without a previous cancer. The new method is illustrated using data from the Scottish Cancer Registry and is compared with ‘gold-standard' estimates that use (unpublished) data on first primaries. Results: The effect of this correction is to make the estimated ‘lifetime risk' smaller. The new estimates are extremely similar to those obtained using incidence based on first primaries. The usual ‘current probability' method considerably overestimates the lifetime risk of all cancers combined, although the correction for any single cancer site is minimal. Conclusion: Estimation of the lifetime risk of cancer should either be based on first primaries or should use the new method. PMID:21772332
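
    The corrected calculation can be illustrated with a small sketch that applies age-band incidence rates to the probability of being alive and still cancer-free, so that multiple primaries in routinely published rates do not inflate the estimate. Five-year bands and per-person-year rate inputs are assumptions for illustration, not registry data:

        def lifetime_risk(incidence_rates, mortality_rates, band_width_years=5.0):
            """Lifetime risk from age-band incidence applied to the alive, cancer-free fraction."""
            alive_cancer_free = 1.0
            risk = 0.0
            for inc, mort in zip(incidence_rates, mortality_rates):
                new_cases = alive_cancer_free * inc * band_width_years
                risk += new_cases
                # Leave the cancer-free pool through either a first cancer or death.
                alive_cancer_free -= new_cases + alive_cancer_free * mort * band_width_years
                alive_cancer_free = max(alive_cancer_free, 0.0)
            return risk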

  1. Effect of sample stratification on dairy GWAS results

    PubMed Central

    2012-01-01

    Background Artificial insemination and genetic selection are major factors contributing to population stratification in dairy cattle. In this study, we analyzed the effect of sample stratification and the effect of stratification correction on results of a dairy genome-wide association study (GWAS). Three methods for stratification correction were used: the efficient mixed-model association expedited (EMMAX) method accounting for correlation among all individuals, a generalized least squares (GLS) method based on half-sib intraclass correlation, and a principal component analysis (PCA) approach. Results Historical pedigree data revealed that the 1,654 contemporary cows in the GWAS were all related when traced through approximately 10–15 generations of ancestors. Genome and phenotype stratifications had a striking overlap with the half-sib structure. A large elite half-sib family of cows contributed to the detection of favorable alleles that had low frequencies in the general population and high frequencies in the elite cows and contributed to the detection of X chromosome effects. All three methods for stratification correction reduced the number of significant effects. EMMAX method had the most severe reduction in the number of significant effects, and the PCA method using 20 principal components and GLS had similar significance levels. Removal of the elite cows from the analysis without using stratification correction removed many effects that were also removed by the three methods for stratification correction, indicating that stratification correction could have removed some true effects due to the elite cows. SNP effects with good consensus between different methods and effect size distributions from USDA’s Holstein genomic evaluation included the DGAT1-NIBP region of BTA14 for production traits, a SNP 45kb upstream from PIGY on BTA6 and two SNPs in NIBP on BTA14 for protein percentage. However, most of these consensus effects had similar frequencies in the elite and average cows. Conclusions Genetic selection and extensive use of artificial insemination contributed to overlapped genome, pedigree and phenotype stratifications. The presence of an elite cluster of cows was related to the detection of rare favorable alleles that had high frequencies in the elite cluster and low frequencies in the remaining cows. Methods for stratification correction could have removed some true effects associated with genetic selection. PMID:23039970

  2. Bayesian penalized-likelihood reconstruction algorithm suppresses edge artifacts in PET reconstruction based on point-spread-function.

    PubMed

    Yamaguchi, Shotaro; Wagatsuma, Kei; Miwa, Kenta; Ishii, Kenji; Inoue, Kazumasa; Fukushi, Masahiro

    2018-03-01

    The Bayesian penalized-likelihood reconstruction algorithm (BPL), Q.Clear, uses a relative difference penalty as a regularization function to control image noise and the degree of edge preservation in PET images. The present study aimed to determine the extent to which edge artifacts due to point-spread-function (PSF) correction are suppressed when using Q.Clear. A cylindrical phantom contained a background of 5.3 kBq/mL of [18F]FDG and spheres at sphere-to-background ratios (SBR) of 16, 8, 4 and 2. A water background with spheres containing 21.2 kBq/mL of [18F]FDG was also used as the non-background condition. All data were acquired using a Discovery PET/CT 710 and were reconstructed using three-dimensional ordered-subset expectation maximization with time-of-flight (TOF) and PSF correction (3D-OSEM), and Q.Clear with TOF (BPL). We investigated β-values of 200-800 using BPL. The PET images were analyzed using visual assessment and profile curves, and edge variability and contrast recovery coefficients were measured. The 38- and 27-mm spheres were surrounded by higher radioactivity concentration when reconstructed with 3D-OSEM as opposed to BPL, which suppressed edge artifacts. Images of 10-mm spheres had sharper overshoot at high SBR and in the non-background condition when reconstructed with BPL. Although contrast recovery coefficients of 10-mm spheres in BPL decreased as a function of increasing β, a higher penalty parameter decreased the overshoot. BPL is a feasible method for the suppression of edge artifacts of PSF correction, although this depends on SBR and sphere size. Overshoot associated with BPL caused overestimation in small spheres at high SBR. A higher penalty parameter in BPL can suppress overshoot more effectively. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
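
    For context, the relative difference penalty used by BPL is commonly written in the PET literature in the following form (a standard form, not quoted from this abstract), where beta sets the overall regularization strength and gamma the degree of edge preservation over the neighbourhood N_j of voxel j:

        \[
        R(\mathbf{x}) \;=\; \beta \sum_{j}\sum_{k \in \mathcal{N}_j} w_{jk}\,
        \frac{(x_j - x_k)^2}{\,x_j + x_k + \gamma \lvert x_j - x_k \rvert\,}
        \]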

  3. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and Harold Baranger; 26. Critique of fault-tolerant quantum information processing Robert Alicki; References; Index.

  4. CMB internal delensing with general optimal estimator for higher-order correlations

    DOE PAGES

    Namikawa, Toshiya

    2017-05-24

    We present here a new method for delensing B modes of the cosmic microwave background (CMB) using a lensing potential reconstructed from the same realization of the CMB polarization (CMB internal delensing). The B-mode delensing is required to improve sensitivity to primary B modes generated by, e.g., the inflationary gravitational waves, axionlike particles, modified gravity, primordial magnetic fields, and topological defects such as cosmic strings. However, the CMB internal delensing suffers from substantial biases due to correlations between observed CMB maps to be delensed and that used for reconstructing a lensing potential. Since the bias depends on realizations, we construct a realization-dependent (RD) estimator for correcting these biases by deriving a general optimal estimator for higher-order correlations. The RD method is less sensitive to simulation uncertainties. Compared to the previous ℓ-splitting method, we find that the RD method corrects the biases without substantial degradation of the delensing efficiency.

  5. Demonstration of electronic design automation flow for massively parallel e-beam lithography

    NASA Astrophysics Data System (ADS)

    Brandt, Pieter; Belledent, Jérôme; Tranquillin, Céline; Figueiro, Thiago; Meunier, Stéfanie; Bayle, Sébastien; Fay, Aurélien; Milléquant, Matthieu; Icard, Beatrice; Wieland, Marco

    2014-07-01

    For proximity effect correction in 5 keV e-beam lithography, three elementary building blocks exist: dose modulation, geometry (size) modulation, and background dose addition. Combinations of these three methods are quantitatively compared in terms of throughput impact and process window (PW). In addition, overexposure in combination with negative bias results in PW enhancement at the cost of throughput. In proximity effect correction by over exposure (PEC-OE), the entire layout is set to fixed dose and geometry sizes are adjusted. In PEC-dose to size (DTS) both dose and geometry sizes are locally optimized. In PEC-background (BG), a background is added to correct the long-range part of the point spread function. In single e-beam tools (Gaussian or Shaped-beam), throughput heavily depends on the number of shots. In raster scan tools such as MAPPER Lithography's FLX 1200 (MATRIX platform) this is not the case and instead of pattern density, the maximum local dose on the wafer is limiting throughput. The smallest considered half-pitch is 28 nm, which may be considered the 14-nm node for Metal-1 and the 10-nm node for the Via-1 layer, achieved in a single exposure with e-beam lithography. For typical 28-nm-hp Metal-1 layouts, it was shown that dose latitudes (size of process window) of around 10% are realizable with available PEC methods. For 28-nm-hp Via-1 layouts this is even higher at 14% and up. When the layouts do not reach the highest densities (up to 10∶1 in this study), PEC-BG and PEC-OE provide the capability to trade throughput for dose latitude. At the highest densities, PEC-DTS is required for proximity correction, as this method adjusts both geometry edges and doses and will reduce the dose at the densest areas. For 28-nm-hp lines critical dimension (CD), hole&dot (CD) and line ends (edge placement error), the data path errors are typically 0.9, 1.0 and 0.7 nm (3σ) and below, respectively. There is not a clear data path performance difference between the investigated PEC methods. After the simulations, the methods were successfully validated in exposures on a MAPPER pre-alpha tool. A 28-nm half pitch Metal-1 and Via-1 layouts show good performance in resist that coincide with the simulation result. Exposures of soft-edge stitched layouts show that beam-to-beam position errors up to ±7 nm specified for FLX 1200 show no noticeable impact on CD. The research leading to these results has been performed in the frame of the industrial collaborative consortium IMAGINE.

  6. Advanced Demonstration of Motion Correction for Ship-to-Ship Passive Inspections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziock, Klaus-Peter; Boehnen, Chris Bensing; Ernst, Joseph

    2013-09-30

    Passive radiation detection is a key tool for detecting illicit nuclear materials. In maritime applications it is most effective against small vessels where attenuation is of less concern. Passive imaging provides: discrimination between localized (threat) and distributed (non-threat) sources, removal of background fluctuations due to nearby shorelines and structures, source localization to an individual craft in crowded waters, and background subtracted spectra. Unfortunately, imaging methods cannot be easily applied in ship-to-ship inspections because relative motion of the vessels blurs the results over many pixels, significantly reducing sensitivity. This is particularly true for the smaller water craft where passive inspections are most valuable. In this project we performed tests and improved the performance of an instrument (developed earlier under “Motion Correction for Ship-to-Ship Passive Inspections”) that uses automated tracking of a target vessel in visible-light images to generate a 3D radiation map of the target vessel from data obtained using a gamma-ray imager.

  7. Background correction in separation techniques hyphenated to high-resolution mass spectrometry - Thorough correction with mass spectrometry scans recorded as profile spectra.

    PubMed

    Erny, Guillaume L; Acunha, Tanize; Simó, Carolina; Cifuentes, Alejandro; Alves, Arminda

    2017-04-07

    Separation techniques hyphenated with high-resolution mass spectrometry have been a true revolution in analytical separation techniques. Such instruments not only provide unmatched resolution, but they also allow measuring peaks' accurate masses, which permits identifying monoisotopic formulae. However, data files can be large, with a major contribution from background noise and background ions. Such unnecessary contributions to the overall signal can hide important features as well as decrease the accuracy of the centroid determination, especially for minor features. Thus, noise and baseline correction can be a valuable pre-processing step. The methodology that is described here, unlike any other approach, is used to correct the original dataset with the MS scans recorded as profile spectra. Using urine metabolic studies as examples, we demonstrate that this thorough correction reduces the data complexity by more than 90%. Such correction not only permits an improved visualisation of secondary peaks in the chromatographic domain, but it also facilitates the complete assignment of each MS scan, which is invaluable to detect possible comigrating/coeluting species. Copyright © 2017 Elsevier B.V. All rights reserved.
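
    The record does not give the algorithm's details. As a purely generic illustration of removing a slowly varying background from profile-mode LC-MS data (array shapes and the running-percentile window below are assumptions, not the published method), one can subtract, for every m/z channel, a low-percentile baseline estimated along the chromatographic (scan) axis:

      import numpy as np
      from scipy.ndimage import percentile_filter

      rng = np.random.default_rng(0)
      n_scans, n_mz = 600, 400
      data = rng.normal(50.0, 5.0, (n_scans, n_mz))            # noise plus constant background
      data[:, 120] += 40.0                                      # a persistent background ion
      data[290:310, 200] += 300.0 * np.hanning(20)              # a chromatographic peak

      # For every m/z channel, estimate a slowly varying background along the scan axis
      # with a running low percentile, then subtract it and clip at zero.
      baseline = percentile_filter(data, percentile=10, size=(101, 1))
      corrected = np.clip(data - baseline, 0.0, None)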

  8. Assessing Feedback in a Mobile Videogame

    PubMed Central

    Brand, Leah; Beltran, Alicia; Hughes, Sheryl; O'Connor, Teresia; Baranowski, Janice; Nicklas, Theresa; Chen, Tzu-An; Dadabhoy, Hafza R.; Diep, Cassandra S.; Buday, Richard

    2016-01-01

    Abstract Background: Player feedback is an important part of serious games, although there is no consensus regarding its delivery or optimal content. “Mommio” is a serious game designed to help mothers motivate their preschoolers to eat vegetables. The purpose of this study was to assess optimal format and content of player feedback for use in “Mommio.” Materials and Methods: The current study posed 36 potential “Mommio” gameplay feedback statements to 20 mothers using a Web survey and interview. Mothers were asked about the meaning and helpfulness of each feedback statement. Results: Several themes emerged upon thematic analysis, including identifying an effective alternative in the case of corrective feedback, avoiding vague wording, using succinct and correct grammar, avoiding provocation of guilt, and clearly identifying why players' game choice was correct or incorrect. Conclusions: Guidelines are proposed for future feedback statements. PMID:27058403

  9. Number-counts slope estimation in the presence of Poisson noise

    NASA Technical Reports Server (NTRS)

    Schmitt, Juergen H. M. M.; Maccacaro, Tommaso

    1986-01-01

    The slope determination of a power-law number-flux relationship is considered for the case of photon-limited sampling. This case is important for high-sensitivity X-ray surveys with imaging telescopes, where the error in an individual source measurement depends on integrated flux and is Poisson, rather than Gaussian, distributed. A bias-free method of slope estimation is developed that takes into account the exact error distribution, the influence of background noise, and the effects of varying limiting sensitivities. It is shown that the resulting bias corrections are quite insensitive to the bias correction procedures applied, as long as only sources with a signal-to-noise ratio of five or greater are considered. However, if sources with a signal-to-noise ratio of five or less are included, the derived bias corrections depend sensitively on the shape of the error distribution.
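
    As a toy illustration of the principle (not the paper's exact estimator), the sketch below draws source count rates from a power law, adds Poisson counting noise and a known background, and recovers the slope by maximizing the marginal Poisson likelihood on a flux grid. All numerical values are invented.

      import numpy as np
      from scipy.integrate import trapezoid
      from scipy.optimize import minimize_scalar
      from scipy.stats import poisson

      rng = np.random.default_rng(1)
      gamma_true, s_min, bkg, n_src = 1.5, 10.0, 3.0, 500       # slope, flux limit, background counts

      # Differential flux distribution p(s) ∝ s^-(gamma+1) on [s_min, inf), i.e. N(>S) ∝ S^-gamma.
      s_true = s_min * (1.0 - rng.random(n_src)) ** (-1.0 / gamma_true)
      counts = rng.poisson(s_true + bkg)                        # observed counts are Poisson distributed

      s_grid = np.geomspace(s_min, 1000.0 * s_min, 4000)        # grid for marginalizing over true flux

      def neg_log_like(gamma):
          pdf = gamma * s_min**gamma * s_grid ** (-gamma - 1.0)
          like = trapezoid(poisson.pmf(counts[:, None], s_grid + bkg) * pdf, s_grid, axis=1)
          return -np.log(like + 1e-300).sum()

      fit = minimize_scalar(neg_log_like, bounds=(0.5, 3.0), method="bounded")
      print("true slope:", gamma_true, " maximum-likelihood estimate:", round(fit.x, 3))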

  10. Holographic corrections to the Veneziano amplitude

    NASA Astrophysics Data System (ADS)

    Armoni, Adi; Ireson, Edwin

    2017-08-01

    We propose a holographic computation of the 2 → 2 meson scattering in a curved string background, dual to a QCD-like theory. We recover the Veneziano amplitude and compute a perturbative correction due to the background curvature. The result implies a small deviation from a linear trajectory, which is a requirement of the UV regime of QCD.

  11. Revised radiometric calibration technique for LANDSAT-4 Thematic Mapper data by the Canada Centre for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.

    1984-01-01

    Observations of raw image data, raw radiometric calibration data, and background measurements extracted from the raw data streams on high density tape reveal major shortcomings in a technique proposed by the Canada Centre for Remote Sensing in 1982 for the radiometric correction of TM data. Results are presented which correlate measurements of the DC background with variations in both image data background and calibration samples. The effect on both raw data and data corrected using the earlier proposed technique is explained, and the correction required for these factors as a function of individual scan line number for each detector is described. How the revised technique can be incorporated into an operational environment is demonstrated.

  12. Elementary review of electron microprobe techniques and correction requirements

    NASA Technical Reports Server (NTRS)

    Hart, R. K.

    1968-01-01

    Report contains requirements for correction of instrumented data on the chemical composition of a specimen, obtained by electron microprobe analysis. A condensed review of electron microprobe techniques is presented, including background material for obtaining X-ray intensity data corrections and absorption, atomic number, and fluorescence corrections.

  13. Geometric correction method for 3d in-line X-ray phase contrast image reconstruction

    PubMed Central

    2014-01-01

    Background Mechanical systems with imperfect or misaligned X-ray phase contrast imaging (XPCI) components cause the projection data to be misplaced, which results in blurred computed tomography (CT) slice reconstructions or edge artifacts. As a consequence, the biological microstructures under investigation are corrupted and the spatial resolution of the XPCI image is decreased. This makes data correction an essential pre-processing step for CT reconstruction in XPCI. Methods To remove unexpected blurs and edge artifacts, a mathematical model for in-line XPCI is built in this paper by considering primary geometric parameters, namely a rotation angle and a shift. Optimal geometric parameters are obtained by solving a maximization problem. An iterative approach is employed to solve the maximization problem using a two-step scheme that performs a composite geometric transformation followed by a linear regression step. After applying the geometric transformation with optimal parameters to the projection data, a standard filtered back-projection algorithm is used to reconstruct the CT slice images. Results Numerical experiments were carried out on both synthetic and real in-line XPCI datasets. Experimental results demonstrate that the proposed method improves CT image quality by removing both blurring and edge artifacts at the same time, compared to existing correction methods. Conclusions The method proposed in this paper provides an effective projection data correction scheme and significantly improves image quality by removing both blurring and edge artifacts for in-line XPCI. It is easy to implement and can also be extended to other XPCI techniques. PMID:25069768
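
    The sketch below is a reduced illustration of the correction step, assuming a single constant detector-axis shift (the paper's model also includes a rotation and estimates the parameters through an iterative maximization, both omitted here): the misplacement is applied to a simulated sinogram and then undone before standard filtered back-projection.

      import numpy as np
      from scipy.ndimage import shift
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon, rescale

      image = rescale(shepp_logan_phantom(), 0.25)
      angles = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)

      sino = radon(image, theta=angles)                        # rows = detector bins, cols = angles
      offset = 3.4                                             # pixels of detector misalignment (assumed known)
      sino_misaligned = shift(sino, (offset, 0.0), order=1)    # misplaced projection data

      blurred   = iradon(sino_misaligned, theta=angles, filter_name="ramp")
      corrected = iradon(shift(sino_misaligned, (-offset, 0.0), order=1),
                         theta=angles, filter_name="ramp")

      err = lambda rec: float(np.abs(rec - image).mean())
      print("mean abs error, uncorrected:", err(blurred), " corrected:", err(corrected))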

  14. WDR-PK-AK-018

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollister, R

    2009-08-26

    Method - CES SOP-HW-P556 'Field and Bulk Gamma Analysis'. Detector - High-purity germanium, 40% relative efficiency. Calibration - The detector was calibrated on February 8, 2006 using a NIST-traceable sealed source, and the calibration was verified using an independent sealed source. Count Time and Geometry - The sample was counted for 20 minutes at 72 inches from the detector. A lead collimator was used to limit the field-of-view to the region of the sample. The drum was rotated 180 degrees halfway through the count time. Date and Location of Scans - June 1, 2006 in Building 235 Room 1136. Spectral Analysis - Spectra were analyzed with ORTEC GammaVision software. Matrix and geometry corrections were calculated using ORTEC Isotopic software. A background spectrum was measured at the counting location. No man-made radioactivity was observed in the background. Results were determined from the sample spectra without background subtraction. Minimum detectable activities were calculated by the NUREG 4.16 method. Results - Detected Pu-238, Pu-239, Am-241 and Am-243.
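
    For context, a Currie-style a-priori minimum detectable activity can be sketched as below; this is a common stand-in for the NUREG-type calculation cited in the record, and the efficiency, branching ratio, and background counts are invented, not taken from this measurement.

      import math

      def currie_mda(bkg_counts, count_time_s, efficiency, branching, quantity=1.0):
          """Currie-style a-priori minimum detectable activity (Bq) for one gamma line.
          bkg_counts : background counts expected in the peak region during count_time_s
          efficiency : absolute full-energy peak efficiency at the line energy
          branching  : gamma emission probability per decay
          """
          ld = 2.71 + 4.65 * math.sqrt(bkg_counts)              # detection limit in counts
          return ld / (efficiency * branching * count_time_s * quantity)

      # 20 minute count, 60 continuum counts under the peak, 0.1% efficiency at 72 inches
      print(currie_mda(bkg_counts=60, count_time_s=20 * 60, efficiency=1e-3, branching=0.36))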

  15. Improving anatomical mapping of complexly deformed anatomy for external beam radiotherapy and brachytherapy dose accumulation in cervical cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vásquez Osorio, Eliana M., E-mail: e.vasquezosorio@erasmusmc.nl; Kolkman-Deurloo, Inger-Karine K.; Schuring-Pereira, Monica

    Purpose: In the treatment of cervical cancer, large anatomical deformations, caused by, e.g., tumor shrinkage, bladder and rectum filling changes, organ sliding, and the presence of the brachytherapy (BT) applicator, prohibit the accumulation of external beam radiotherapy (EBRT) and BT dose distributions. This work proposes a structure-wise registration with vector field integration (SW+VF) to map the largely deformed anatomies between EBRT and BT, paving the way for 3D dose accumulation between EBRT and BT. Methods: T2w-MRIs acquired before EBRT and as a part of the MRI-guided BT procedure for 12 cervical cancer patients, along with the manual delineations of the bladder, cervix-uterus, and rectum-sigmoid, were used for this study. A rigid transformation was used to align the bony anatomy in the MRIs. The proposed SW+VF method starts by automatically segmenting features in the area surrounding the delineated organs. Then, each organ and feature pair is registered independently using a feature-based nonrigid registration algorithm developed in-house. Additionally, a background transformation is calculated to account for areas far from all organs and features. In order to obtain one transformation that can be used for dose accumulation, the organ-based, feature-based, and the background transformations are combined into one vector field using a weighted sum, where the contribution of each transformation can be directly controlled by its extent of influence (scope size). The optimal scope sizes for organ-based and feature-based transformations were found by an exhaustive analysis. The anatomical correctness of the mapping was independently validated by measuring the residual distances after transformation for delineated structures inside the cervix-uterus (inner anatomical correctness), and for anatomical landmarks outside the organs in the surrounding region (outer anatomical correctness). The results of the proposed method were compared with the results of the rigid transformation and nonrigid registration of all structures together (AST). Results: The rigid transformation achieved a good global alignment (mean outer anatomical correctness of 4.3 mm) but failed to align the deformed organs (mean inner anatomical correctness of 22.4 mm). Conversely, the AST registration produced a reasonable alignment for the organs (6.3 mm) but not for the surrounding region (16.9 mm). SW+VF registration achieved the best results for both regions (3.5 and 3.4 mm for the inner and outer anatomical correctness, respectively). All differences were significant (p < 0.02, Wilcoxon rank sum test). Additionally, optimization of the scope sizes determined that the method was robust for a large range of scope size values. Conclusions: The novel SW+VF method improved the mapping of large and complex deformations observed between EBRT and BT for cervical cancer patients. Future studies that quantify the mapping error in terms of dose errors are required to test the clinical applicability of dose accumulation by the SW+VF method.
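
    A minimal sketch of the weighted-sum combination step is given below, assuming that per-structure registrations have already produced displacement fields on a common grid. The function and parameter names (combine_fields, scopes, bg_weight) are invented, and the Gaussian fall-off is just one simple choice for the scope-size weighting, not necessarily the authors' implementation.

      import numpy as np
      from scipy.ndimage import distance_transform_edt, gaussian_filter

      def combine_fields(masks, fields, background_field, scopes, bg_weight=0.05):
          """masks: boolean structure masks; fields: per-structure displacement fields (2, H, W);
          background_field: transformation for regions far from all structures;
          scopes: one weighting width (voxels) per structure, playing the role of its scope size."""
          weights, total = [], np.full(masks[0].shape, bg_weight)
          for mask, scope in zip(masks, scopes):
              d = distance_transform_edt(~mask)                 # distance to the structure
              w = np.exp(-(d / scope) ** 2)                     # influence falls off with distance
              weights.append(w)
              total = total + w
          combined = bg_weight / total * background_field
          for w, f in zip(weights, fields):
              combined = combined + (w / total) * f
          return gaussian_filter(combined, sigma=(0, 2, 2))     # mild smoothing of the merged field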

  16. Time-of-day Corrections to Aircraft Noise Metrics

    NASA Technical Reports Server (NTRS)

    Clevenson, S. (Editor); Shepherd, W. T. (Editor)

    1980-01-01

    The historical and background aspects of time-of-day corrections, as well as the evidence supporting these corrections, are discussed. Health, welfare, and economic impacts; needs and criteria; and government policy and regulation are also reported.

  17. Analytical-Based Partial Volume Recovery in Mouse Heart Imaging

    NASA Astrophysics Data System (ADS)

    Dumouchel, Tyler; deKemp, Robert A.

    2011-02-01

    Positron emission tomography (PET) is a powerful imaging modality that has the ability to yield quantitative images of tracer activity. Physical phenomena such as photon scatter, photon attenuation, random coincidences and spatial resolution limit quantification potential and must be corrected to preserve the accuracy of reconstructed images. This study focuses on correcting the partial volume effects that arise in mouse heart imaging when resolution is insufficient to resolve the true tracer distribution in the myocardium. The correction algorithm is based on fitting 1D profiles through the myocardium in gated PET images to derive myocardial contours along with blood, background and myocardial activity. This information is interpolated onto a 2D grid and convolved with the tomograph's point spread function to derive regional recovery coefficients enabling partial volume correction. The point spread function was measured by placing a line source inside a small animal PET scanner. PET simulations were created based on noise properties measured from a reconstructed PET image and on the digital MOBY phantom. The algorithm can estimate the myocardial activity to within 5% of the truth when different wall thicknesses, backgrounds and noise properties are encountered that are typical of healthy FDG mouse scans. The method also significantly improves partial volume recovery in simulated infarcted tissue. The algorithm offers a practical solution to the partial volume problem without the need for co-registered anatomic images and offers a basis for improved quantitative 3D heart imaging.
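
    A one-dimensional toy version of the idea is sketched below, assuming a Gaussian point spread function and known blood and background levels (the wall thickness, FWHM, and activity values are invented): the region indicators are blurred with the PSF to obtain recovery and spill-in weights at the wall, from which the myocardial activity is solved.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      # 1D profile across the left-ventricular wall: blood pool | myocardium | background.
      x = np.arange(200) * 0.1                                  # position in mm on a 0.1 mm grid
      in_blood, in_wall = x <= 8.0, (x > 8.0) & (x < 9.2)       # 1.2 mm thick wall
      in_bkg = ~(in_blood | in_wall)
      blood, myo, bkg = 0.3, 1.0, 0.1
      true = blood * in_blood + myo * in_wall + bkg * in_bkg

      fwhm_mm = 1.8                                             # assumed scanner resolution
      sigma_px = fwhm_mm / 2.355 / 0.1
      measured = gaussian_filter1d(true, sigma_px)

      # Blur the region indicators with the PSF; their values at the wall centre give the
      # recovery and spill-in weights of each compartment.
      centre = np.argmax(in_wall * measured)
      w_blood, w_wall, w_bkg = (gaussian_filter1d(ind.astype(float), sigma_px)[centre]
                                for ind in (in_blood, in_wall, in_bkg))
      myo_corrected = (measured[centre] - w_blood * blood - w_bkg * bkg) / w_wall
      print("measured at wall:", round(float(measured[centre]), 3),
            "corrected:", round(float(myo_corrected), 3), "true:", myo)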

  18. Correction of Spatial Bias in Oligonucleotide Array Data

    PubMed Central

    Lemieux, Sébastien

    2013-01-01

    Background. Oligonucleotide microarrays allow for high-throughput gene expression profiling assays. The technology relies on the fundamental assumption that observed hybridization signal intensities (HSIs) for each intended target, on average, correlate with their target's true concentration in the sample. However, systematic, nonbiological variation from several sources undermines this hypothesis. Background hybridization signal has been previously identified as one such important source, one manifestation of which appears in the form of spatial autocorrelation. Results. We propose an algorithm, pyn, for the elimination of spatial autocorrelation in HSIs, exploiting the duality of desirable mutual information shared by probes in a common probe set and undesirable mutual information shared by spatially proximate probes. We show that this correction procedure reduces spatial autocorrelation in HSIs; increases HSI reproducibility across replicate arrays; increases differentially expressed gene detection power; and performs better than previously published methods. Conclusions. The proposed algorithm increases both precision and accuracy, while requiring virtually no changes to users' current analysis pipelines: the correction consists merely of a transformation of raw HSIs (e.g., CEL files for Affymetrix arrays). A free, open-source implementation is provided as an R package, compatible with standard Bioconductor tools. The approach may also be tailored to other platform types and other sources of bias. PMID:23573083
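
    The pyn algorithm exploits the probe-set structure described above; as a much simpler stand-in that only illustrates the goal, removing a smooth spatial component from the hybridization signal intensities, one could subtract a median-filtered spatial trend from the log intensities. The window size is arbitrary and this is not the published method.

      import numpy as np
      from scipy.ndimage import median_filter

      def remove_spatial_trend(hsi_grid, window=25):
          """Remove a slowly varying spatial component from a chip's intensity grid,
          preserving the chip-wide mean (a simplistic stand-in, not the pyn algorithm)."""
          log_i = np.log2(hsi_grid + 1.0)
          trend = median_filter(log_i, size=window)             # smooth spatial component
          corrected = log_i - trend + trend.mean()
          return 2.0 ** corrected - 1.0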

  19. Principles of PET/MR Imaging.

    PubMed

    Disselhorst, Jonathan A; Bezrukov, Ilja; Kolb, Armin; Parl, Christoph; Pichler, Bernd J

    2014-06-01

    Hybrid PET/MR systems have rapidly progressed from the prototype stage to systems that are increasingly being used in the clinics. This review provides an overview of developments in hybrid PET/MR systems and summarizes the current state of the art in PET/MR instrumentation, correction techniques, and data analysis. The strong magnetic field requires considerable changes in the manner by which PET images are acquired and has led, among others, to the development of new PET detectors, such as silicon photomultipliers. During more than a decade of active PET/MR development, several system designs have been described. The technical background of combined PET/MR systems is explained and related challenges are discussed. The necessity for PET attenuation correction required new methods based on MR data. Therefore, an overview of recent developments in this field is provided. Furthermore, MR-based motion correction techniques for PET are discussed, as integrated PET/MR systems provide a platform for measuring motion with high temporal resolution without additional instrumentation. The MR component in PET/MR systems can provide functional information about disease processes or brain function alongside anatomic images. Against this background, we point out new opportunities for data analysis in this new field of multimodal molecular imaging. © 2014 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  20. Recursive least squares background prediction of univariate syndromic surveillance data

    PubMed Central

    2009-01-01

    Background Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the Day of the Week effect. Methods Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which a prediction and detection component of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. Results We present detection results in the form of Receiver Operating Characteristic curve values for four different injected signal-to-noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. Conclusion The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold detection algorithm from the residuals of the RLS forecasts. We compare the detection performance of this algorithm to the W2 method recently implemented at CDC. The modified RLS method gives consistently better sensitivity at multiple background alert rates, and we recommend that it should be considered for routine application in biosurveillance systems. PMID:19149886
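
    A bare-bones version of the forecasting step is sketched below, using a textbook recursive-least-squares filter with a forgetting factor and a simple residual threshold; the day-of-week transformation and the seven-day-ahead horizon of the paper are omitted, and all parameter values are arbitrary.

      import numpy as np

      def rls_background(y, order=7, lam=0.98, delta=100.0):
          """One-step-ahead RLS background forecasts for a daily count series."""
          w, P = np.zeros(order), np.eye(order) * delta
          pred = np.full(len(y), np.nan)
          for t in range(order, len(y)):
              x = y[t - order:t][::-1].astype(float)       # most recent counts as regressors
              pred[t] = w @ x
              k = P @ x / (lam + x @ P @ x)                # gain
              w = w + k * (y[t] - pred[t])                 # adapt weights with the new residual
              P = (P - np.outer(k, x @ P)) / lam
          return pred

      rng = np.random.default_rng(3)
      counts = rng.poisson(40, 365).astype(float)
      counts[300:305] += 30.0                              # injected synthetic outbreak
      forecast = rls_background(counts)
      resid = counts - forecast
      sigma = np.nanstd(resid[60:290])                     # residual scale after a warm-up period
      print("alarm days:", np.where(resid[60:] > 3.0 * sigma)[0] + 60)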

  1. Robust finger vein ROI localization based on flexible segmentation.

    PubMed

    Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2013-10-24

    Finger veins have proven to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition and so degrade the performance of a finger vein identification system. To address this problem, in this paper we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method shows a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.

  2. Robust Finger Vein ROI Localization Based on Flexible Segmentation

    PubMed Central

    Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2013-01-01

    Finger veins have proven to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition and so degrade the performance of a finger vein identification system. To address this problem, in this paper we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method shows a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system. PMID:24284769

  3. Improvements in Technique of NMR Imaging and NMR Diffusion Measurements in the Presence of Background Gradients.

    NASA Astrophysics Data System (ADS)

    Lian, Jianyu

    In this work, a modification of the cosine current distribution rf coil (PCOS) has been introduced and tested. The coil produces a very homogeneous rf magnetic field, and it is inexpensive to build and easy to tune for multiple resonance frequencies. The geometrical parameters of the coil are optimized to produce the most homogeneous rf field over a large volume. To avoid rf field distortion when the coil length is comparable to a quarter wavelength, a parallel PCOS coil is proposed and discussed. For testing rf coils and correcting B_1 in NMR experiments, a simple, rugged and accurate NMR rf field mapping technique has been developed. The method has been tested and used in 1D, 2D, 3D and in vivo rf mapping experiments, and it has proven to be very useful in the design of rf coils. To preserve the linear relation between the rf output applied to an rf coil and the modulating input of the rf modulating-amplifying system of an NMR imaging spectrometer, a quadrature feedback loop is employed in an rf modulator with two orthogonal rf channels to correct the amplitude and phase non-linearities caused by the rf components in the rf system. The modulator is very linear over a large range and can generate an arbitrary rf shape. A diffusion imaging sequence has been developed for measuring and imaging diffusion in the presence of background gradients. Cross terms between the diffusion-sensitizing gradients and background gradients or imaging gradients can complicate diffusion measurement and make the interpretation of NMR diffusion data ambiguous, but these have been eliminated in this method. Further, the background gradients have been measured and imaged. A dipole random distribution model has been established to study the background magnetic fields ΔB and background magnetic gradients G_0 produced by small particles in a sample when it is in a B_0 field. From this model, the minimum distance that a spin can approach a particle can be determined by measuring <ΔB^2> and <G_0^2>. From this model, the particle concentration in a sample can also be determined by measuring the lineshape of a free induction decay (fid).

  4. Comparison of ring artifact removal methods using flat panel detector based CT images

    PubMed Central

    2011-01-01

    Background Ring artifacts are the concentric rings superimposed on the tomographic images often caused by the defective and insufficient calibrated detector elements as well as by the damaged scintillator crystals of the flat panel detector. It may be also generated by objects attenuating X-rays very differently in different projection direction. Ring artifact reduction techniques so far reported in the literature can be broadly classified into two groups. One category of the approaches is based on the sinogram processing also known as the pre-processing techniques and the other category of techniques perform processing on the 2-D reconstructed images, recognized as the post-processing techniques in the literature. The strength and weakness of these categories of approaches are yet to be explored from a common platform. Method In this paper, a comparative study of the two categories of ring artifact reduction techniques basically designed for the multi-slice CT instruments is presented from a common platform. For comparison, two representative algorithms from each of the two categories are selected from the published literature. A very recently reported state-of-the-art sinogram domain ring artifact correction method that classifies the ring artifacts according to their strength and then corrects the artifacts using class adaptive correction schemes is also included in this comparative study. The first sinogram domain correction method uses a wavelet based technique to detect the corrupted pixels and then using a simple linear interpolation technique estimates the responses of the bad pixels. The second sinogram based correction method performs all the filtering operations in the transform domain, i.e., in the wavelet and Fourier domain. On the other hand, the two post-processing based correction techniques actually operate on the polar transform domain of the reconstructed CT images. The first method extracts the ring artifact template vector using a homogeneity test and then corrects the CT images by subtracting the artifact template vector from the uncorrected images. The second post-processing based correction technique performs median and mean filtering on the reconstructed images to produce the corrected images. Results The performances of the comparing algorithms have been tested by using both quantitative and perceptual measures. For quantitative analysis, two different numerical performance indices are chosen. On the other hand, different types of artifact patterns, e.g., single/band ring, artifacts from defective and mis-calibrated detector elements, rings in highly structural object and also in hard object, rings from different flat-panel detectors are analyzed to perceptually investigate the strength and weakness of the five methods. An investigation has been also carried out to compare the efficacy of these algorithms in correcting the volume images from a cone beam CT with the parameters determined from one particular slice. Finally, the capability of each correction technique in retaining the image information (e.g., small object at the iso-center) accurately in the corrected CT image has been also tested. Conclusions The results show that the performances of the algorithms are limited and none is fully suitable for correcting different types of ring artifacts without introducing processing distortion to the image structure. To achieve the diagnostic quality of the corrected slices a combination of the two approaches (sinogram- and post-processing) can be used. 
The compared methods are also not suitable for correcting the volume images from a cone beam flat-panel detector based CT. PMID:21846411
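
    In the spirit of the post-processing template-subtraction approach described above (but not a reimplementation of any of the compared methods), a compact sketch is to estimate a per-radius artifact template from the high-pass residual of the reconstructed slice and subtract it; the window size and the use of a median are arbitrary choices.

      import numpy as np
      from scipy.ndimage import median_filter

      def remove_rings(ct_slice, center=None, smooth=9):
          """Estimate a per-radius ring template from the high-pass residual and subtract it."""
          h, w = ct_slice.shape
          cy, cx = center if center is not None else ((h - 1) / 2.0, (w - 1) / 2.0)
          yy, xx = np.mgrid[0:h, 0:w]
          r = np.hypot(yy - cy, xx - cx).astype(int)            # integer radius of every pixel

          residual = ct_slice - median_filter(ct_slice, size=smooth)   # keeps rings and noise
          template = np.zeros(r.max() + 1)
          for radius in range(r.max() + 1):
              ring = residual[r == radius]
              if ring.size:
                  template[radius] = np.median(ring)            # robust per-radius ring strength
          return ct_slice - template[r]                         # subtract the ring template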

  5. Appropriateness of the probability approach with a nutrient status biomarker to assess population inadequacy: a study using vitamin D

    PubMed Central

    Carriquiry, Alicia L; Bailey, Regan L; Sempos, Christopher T; Yetley, Elizabeth A

    2013-01-01

    Background: There are questions about the appropriate method for the accurate estimation of the population prevalence of nutrient inadequacy on the basis of a biomarker of nutrient status (BNS). Objective: We determined the applicability of a statistical probability method to a BNS, specifically serum 25-hydroxyvitamin D [25(OH)D]. The ability to meet required statistical assumptions was the central focus. Design: Data on serum 25(OH)D concentrations in adults aged 19–70 y from the 2005–2006 NHANES were used (n = 3871). An Institute of Medicine report provided reference values. We analyzed key assumptions of symmetry, differences in variance, and the independence of distributions. We also corrected observed distributions for within-person variability (WPV). Estimates of vitamin D inadequacy were determined. Results: We showed that the BNS [serum 25(OH)D] met the criteria to use the method for the estimation of the prevalence of inadequacy. The difference between observations corrected compared with uncorrected for WPV was small for serum 25(OH)D but, nonetheless, showed enhanced accuracy because of correction. The method estimated a 19% prevalence of inadequacy in this sample, whereas misclassification inherent in the use of the more traditional 97.5th percentile high-end cutoff inflated the prevalence of inadequacy (36%). Conclusions: When the prevalence of nutrient inadequacy for a population is estimated by using serum 25(OH)D as an example of a BNS, a statistical probability method is appropriate and more accurate in comparison with a high-end cutoff. Contrary to a common misunderstanding, the method does not overlook segments of the population. The accuracy of population estimates of inadequacy is enhanced by the correction of observed measures for WPV. PMID:23097269
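
    A compact illustration of the two ingredients is given below: an adjustment of the observed distribution for within-person variability, followed by the fraction of the adjusted distribution falling below a reference value. The shrinkage form, the synthetic data, and the 40 nmol/L cutoff are simplifications for illustration; the full probability approach integrates a risk curve rather than using a single cut-point, and this is not the NHANES/IOM procedure itself.

      import numpy as np

      def prevalence_inadequacy(observed, cutoff, within_person_var):
          """observed: one serum 25(OH)D value per person; within_person_var: assumed
          day-to-day variance of the biomarker.  Shrinks the observed spread to remove
          WPV, then returns the fraction below the cutoff."""
          total_var = observed.var(ddof=1)
          between_var = max(total_var - within_person_var, 0.0)
          shrink = np.sqrt(between_var / total_var)
          adjusted = observed.mean() + shrink * (observed - observed.mean())
          return float((adjusted < cutoff).mean())

      rng = np.random.default_rng(7)
      sample = rng.normal(60.0, 18.0, 4000)                     # synthetic 25(OH)D values, nmol/L
      print(prevalence_inadequacy(sample, cutoff=40.0, within_person_var=60.0))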

  6. Tracking of Ball and Players in Beach Volleyball Videos

    PubMed Central

    Gomez, Gabriel; Herrera López, Patricia; Link, Daniel; Eskofier, Bjoern

    2014-01-01

    This paper presents methods for the determination of players' positions and contact time points by tracking the players and the ball in beach volleyball videos. Two player tracking methods are compared, a classical particle filter and a rigid grid integral histogram tracker. Due to mutual occlusion of the players and the camera perspective, results are best for the front players, with 74.6% and 82.6% of correctly tracked frames for the particle method and the integral histogram method, respectively. Results suggest an improved robustness against player confusion between different particle sets when tracking with a rigid grid approach. Faster processing and fewer player confusions make this method superior to the classical particle filter. Two different ball tracking methods are used that detect ball candidates from movement difference images using a background subtraction algorithm. Ball trajectories are estimated and interpolated from parabolic flight equations. The tracking accuracy of the ball is 54.2% for the trajectory growth method and 42.1% for the Hough line detection method. Tracking results of over 90% from the literature could not be confirmed. Ball contact frames were estimated from parabolic trajectory intersection, resulting in 48.9% of correctly estimated ball contact points. PMID:25426936
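
    The parabolic-flight interpolation step can be sketched in a few lines; the frame rate and pixel coordinates below are invented for illustration.

      import numpy as np

      # Fit y(t) = a t^2 + b t + c to ball candidates detected in consecutive frames and
      # interpolate the frames where the detector missed the ball.
      t = np.array([0, 1, 2, 4, 5, 7], dtype=float) / 50.0       # seconds (some frames missed)
      y = np.array([400, 372, 350, 330, 333, 370], dtype=float)  # ball height in image rows

      a, b, c = np.polyfit(t, y, deg=2)                          # least-squares parabola
      t_all = np.arange(0, 8) / 50.0
      y_interp = np.polyval([a, b, c], t_all)                    # trajectory for every frame
      print(np.round(y_interp, 1))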

  7. Three site Higgsless model at one loop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chivukula, R. Sekhar; Simmons, Elizabeth H.; Matsuzaki, Shinya

    2007-04-01

    In this paper we compute the one loop chiral-logarithmic corrections to all O(p^4) counterterms in the three site Higgsless model. The calculation is performed using the background field method for both the chiral and gauge fields, and using Landau gauge for the quantum fluctuations of the gauge fields. The results agree with our previous calculations of the chiral-logarithmic corrections to the S and T parameters in 't Hooft-Feynman gauge. The work reported here includes a complete evaluation of all one loop divergences in an SU(2)xU(1) nonlinear sigma model, corresponding to an electroweak effective Lagrangian in the absence of custodial symmetry.

  8. Inflight characterization and correction of Planck/HFI analog to digital converter nonlinearity

    NASA Astrophysics Data System (ADS)

    Sauvé, A.; Couchot, F.; Patanchon, G.; Montier, L.

    2016-07-01

    The Planck satellite, launched in 2009, was targeted to observe the anisotropies of the Cosmic Microwave Background (CMB) to an unprecedented sensitivity. The Analog to Digital Converter of the HFI (High Frequency Instrument) readout electronics had not been properly characterized on the ground and has been shown to add a systematic nonlinearity effect of up to 2% of the cosmological signal. This was a limiting factor for CMB science at large angular scales. We will present the in-flight analysis and method used to characterize and correct this effect down to the 0.05% level. We also discuss how to avoid this kind of complex issue for future missions.

  9. A novel baseline-correction method for standard addition based derivative spectra and its application to quantitative analysis of benzo(a)pyrene in vegetable oil samples.

    PubMed

    Li, Na; Li, Xiu-Ying; Zou, Zhe-Xiang; Lin, Li-Rong; Li, Yao-Qun

    2011-07-07

    In the present work, a baseline-correction method based on peak-to-derivative baseline measurement was proposed for the elimination of complex matrix interference, mainly caused by unknown components and/or background, in the analysis of derivative spectra. This novel method is particularly applicable when the interfering matrix components show a broad spectral band, which is common in practical analysis. The derivative baseline was established by connecting two crossing points of the spectral curves obtained with a standard addition method (SAM). The applicability and reliability of the proposed method were demonstrated through both theoretical simulation and practical application. Firstly, Gaussian bands were used to simulate 'interfering' and 'analyte' bands to investigate the effect of different parameters of the interfering band on the derivative baseline. This simulation analysis verified that the accuracy of the proposed method was remarkably better than that of other conventional approaches such as peak-to-zero, tangent, and peak-to-peak measurements. The proposed baseline-correction method was then applied to the determination of benzo(a)pyrene (BaP) in vegetable oil samples by second-derivative synchronous fluorescence spectroscopy. Satisfactory results were obtained by using this new method to analyze a certified reference material (coconut oil, BCR(®)-458), with a relative error of -3.2% from the certified BaP concentration. Potentially, the proposed method can be applied to various types of derivative spectra in different fields such as UV-visible absorption spectroscopy, fluorescence spectroscopy and infrared spectroscopy.
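
    A synthetic sketch of the crossing-point construction is shown below; the band positions, widths, and Savitzky-Golay settings are arbitrary, and the real method works on second-derivative synchronous fluorescence spectra of the sample and its standard additions rather than on these simulated Gaussian bands.

      import numpy as np
      from scipy.signal import savgol_filter

      # Analyte band on top of a broad interfering background; sample and one standard addition.
      wl = np.linspace(350, 450, 1001)
      band = lambda c, mu, s: c * np.exp(-0.5 * ((wl - mu) / s) ** 2)
      background = band(5.0, 400.0, 40.0)
      spec = lambda c_analyte: background + band(c_analyte, 405.0, 4.0)

      deriv = lambda s: savgol_filter(s, window_length=31, polyorder=3, deriv=2)
      d0, d1 = deriv(spec(1.0)), deriv(spec(2.0))               # sample and sample + addition

      # The two derivative curves cross where the analyte contribution vanishes; connect the
      # crossings flanking the peak with a straight line and use it as the derivative baseline.
      diff = d0 - d1
      crossings = np.where(np.diff(np.sign(diff)) != 0)[0]
      peak = np.argmin(d0)                                       # negative lobe of the 2nd derivative
      left = crossings[crossings < peak].max()
      right = crossings[crossings > peak].min()
      baseline = np.interp(wl, [wl[left], wl[right]], [d0[left], d0[right]])
      signal = baseline[peak] - d0[peak]                         # peak-to-derivative-baseline reading
      print("peak-to-baseline signal:", round(float(signal), 4))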

  10. A Bayesian Method for Identifying Contaminated Detectors in Low-Level Alpha Spectrometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maclellan, Jay A.; Strom, Daniel J.; Joyce, Kevin E.

    2011-11-02

    Analyses used for radiobioassay and other radiochemical tests are normally designed to meet specified quality objectives, such as relative bias, precision, and minimum detectable activity (MDA). In the case of radiobioassay analyses for alpha-emitting radionuclides, a major determiner of the process MDA is the instrument background. Alpha spectrometry detectors are often restricted to only a few counts over multi-day periods in order to meet required MDAs for nuclides such as plutonium-239 and americium-241. A detector background criterion is often set empirically based on experience, or frequentist or classical statistics are applied to the calculated background count necessary to meet a required MDA. An acceptance criterion for the detector background is set at the multiple of the estimated background standard deviation above the assumed mean that provides an acceptably small probability of observation if the mean and standard deviation estimate are correct. The major problem with this method is that the observed background counts used to estimate the mean, and thereby the standard deviation when a Poisson distribution is assumed, are often in the range of zero to three counts. At those expected count levels it is impossible to obtain a good estimate of the true mean from a single measurement. As an alternative, Bayesian statistical methods allow calculation of the expected detector background count distribution based on historical counts from new, uncontaminated detectors. This distribution can then be used to identify detectors showing an increased probability of contamination. The effect of varying the assumed range of background counts (i.e., the prior probability distribution) from new, uncontaminated detectors is discussed.
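
    A small sketch of the underlying idea is given below: a gamma-Poisson screen whose posterior predictive (a negative binomial) is built from historical counts of known-clean detectors, and which flags a detector whose observed count is improbably high. The prior parameters, historical counts, and 1% flag level are all invented.

      import numpy as np
      from scipy.stats import nbinom

      historical = np.array([0, 1, 0, 2, 0, 1, 1, 0, 3, 0])      # multi-day counts, clean detectors
      a0, b0 = 0.5, 0.0001                                       # weak gamma prior on the mean count
      a, b = a0 + historical.sum(), b0 + historical.size         # gamma posterior parameters

      # Posterior predictive count for one future counting period is negative binomial.
      def tail_prob(observed):
          return float(nbinom.sf(observed - 1, a, b / (b + 1.0)))  # P(X >= observed)

      for counts in (2, 5, 9):
          flag = "possible contamination" if tail_prob(counts) < 0.01 else "consistent with background"
          print(counts, "counts ->", round(tail_prob(counts), 4), flag)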

  11. Peculiar velocity measurement in a clumpy universe

    NASA Astrophysics Data System (ADS)

    Habibi, Farhang; Baghram, Shant; Tavasoli, Saeed

    Aims: In this work, we address the issue of peculiar velocity measurement in a perturbed Friedmann universe using the deviations of measured luminosity distances of standard candles from the background FRW universe. We want to show and quantify the statement that at intermediate redshifts (0.5 < z < 2), deviations from the background FRW model are not uniquely governed by peculiar velocities. Luminosity distances are modified by gravitational lensing. We also want to indicate the importance of relativistic calculations for peculiar velocity measurement at all redshifts. Methods: For this task, we discuss the relativistic corrections on luminosity distance and redshift measurement and show the contribution of each of the corrections as the lensing term, the peculiar velocity of the source, and the Sachs-Wolfe effect. Then, we use the Union 2 SNe Ia sample to investigate the relativistic effects we consider. Results: We show that using the conventional peculiar velocity method, which ignores the lensing effect, results in an overestimate of the measured peculiar velocities at intermediate redshifts. Here, we quantify this effect. We show that at low redshifts the lensing effect is negligible compared to the effect of peculiar velocity. From the observational point of view, we show that the uncertainties on the luminosity of the present SNe Ia data prevent us from precisely measuring the peculiar velocities even at low redshifts (z < 0.2).

  12. Corrections Officer Candidate Information Booklet and User's Manual. Standards and Training for Corrections Program.

    ERIC Educational Resources Information Center

    California State Board of Corrections, Sacramento.

    This package consists of an information booklet for job candidates preparing to take California's Corrections Officer Examination and a user's manual intended for those who will administer the examination. The candidate information booklet provides background information about the development of the Corrections Officer Examination, describes its…

  13. Exposed and Embedded Corrections in Aphasia Therapy: Issues of Voice and Identity

    ERIC Educational Resources Information Center

    Simmons-Mackie, Nina; Damico, Jack S.

    2008-01-01

    Background: Because communication after the onset of aphasia can be fraught with errors, therapist corrections are pervasive in therapy for aphasia. Although corrections are designed to improve the accuracy of communication, some corrections can have social and emotional consequences during interactions. That is, exposure of errors can potentially…

  14. An efficient empirical Bayes method for genomewide association studies.

    PubMed

    Wang, Q; Wei, J; Pan, Y; Xu, S

    2016-08-01

    Linear mixed model (LMM) is one of the most popular methods for genomewide association studies (GWAS). Numerous forms of LMM have been developed; however, there are two major issues in GWAS that have not been fully addressed before. The two issues are (i) the genomic background noise and (ii) low statistical power after Bonferroni correction. We proposed an empirical Bayes (EB) method by assigning each marker effect a normal prior distribution, resulting in shrinkage estimates of marker effects. We found that such a shrinkage approach can selectively shrink marker effects and reduce the noise level to zero for majority of non-associated markers. In the meantime, the EB method allows us to use an 'effective number of tests' to perform Bonferroni correction for multiple tests. Simulation studies for both human and pig data showed that EB method can significantly increase statistical power compared with the widely used exact GWAS methods, such as GEMMA and FaST-LMM-Select. Real data analyses in human breast cancer identified improved detection signals for markers previously known to be associated with breast cancer. We therefore believe that EB method is a valuable tool for identifying the genetic basis of complex traits. © 2015 Blackwell Verlag GmbH.
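
    The shrinkage idea can be sketched as below with a marker-by-marker normal prior; this is a simplified illustration, not the published EB algorithm, and both the method-of-moments prior variance and the use of the summed shrinkage factors as an "effective number of tests" are assumptions made for the example.

      import numpy as np

      def eb_marker_effects(X, y, tau2=None):
          """X: (n individuals x m markers) genotypes, y: phenotypes.
          Shrinks per-marker least-squares effects with a common normal prior N(0, tau2)."""
          Xc, yc = X - X.mean(0), y - y.mean()
          sxx = (Xc ** 2).sum(0)
          beta_hat = Xc.T @ yc / sxx                             # per-marker least-squares effects
          resid = yc[:, None] - Xc * beta_hat
          se2 = resid.var(axis=0, ddof=2) / sxx                  # sampling variance of each estimate
          if tau2 is None:                                       # crude method-of-moments prior variance
              tau2 = max(float(np.mean(beta_hat ** 2 - se2)), 1e-8)
          shrink = tau2 / (tau2 + se2)                           # in (0, 1); near 0 for noise-only markers
          m_eff = float(shrink.sum())                            # one possible "effective number of tests"
          return shrink * beta_hat, m_eff

      rng = np.random.default_rng(5)
      X = rng.integers(0, 3, size=(500, 2000)).astype(float)     # synthetic 0/1/2 genotypes
      beta = np.zeros(2000); beta[:5] = 0.4                      # five causal markers
      y = X @ beta + rng.normal(0.0, 1.0, 500)
      effects, m_eff = eb_marker_effects(X, y)
      print("largest shrunken effects:", np.argsort(-np.abs(effects))[:5], " m_eff:", round(m_eff, 1))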

  15. The Shock Pulse Index and Its Application in the Fault Diagnosis of Rolling Element Bearings

    PubMed Central

    Sun, Peng; Liao, Yuhe; Lin, Jin

    2017-01-01

    The properties of the time domain parameters of vibration signals have been extensively studied for the fault diagnosis of rolling element bearings (REBs). Parameters like kurtosis and Envelope Harmonic-to-Noise Ratio are the most widely applied in this field and some important progress has been made. However, since only one-sided information is contained in these parameters, problems still exist in practice when the signals collected are of complicated structure and/or contaminated by strong background noise. A new parameter, named Shock Pulse Index (SPI), is proposed in this paper. It integrates the mutual advantages of both the parameters mentioned above and can help effectively identify fault-related impulse components under interference from strong background noise, unrelated harmonic components and random impulses. The SPI optimizes the parameters of Maximum Correlated Kurtosis Deconvolution (MCKD), which is used to filter the signals under consideration. Finally, the transient information of interest contained in the filtered signal can be highlighted through demodulation with the Teager Energy Operator (TEO). Fault-related impulse components can therefore be extracted accurately. Simulations show the SPI can correctly indicate the fault impulses under the influence of strong background noise, other harmonic components and aperiodic impulses, and experimental analyses verify the effectiveness and correctness of the proposed method. PMID:28282883
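
    The final demodulation step uses the Teager energy operator, which has a simple discrete form, sketched below; the synthetic bearing-like signal (sampling rate, resonance frequency, decay constant) is invented, and the SPI-guided MCKD pre-filtering described above is omitted.

      import numpy as np

      def teager_energy(x):
          """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1].
          It emphasizes transient impulses in the filtered signal."""
          x = np.asarray(x, dtype=float)
          psi = np.empty_like(x)
          psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
          psi[0], psi[-1] = psi[1], psi[-2]
          return psi

      # Toy bearing-like signal: periodic decaying impulses buried in noise.
      fs = 20_000
      t = np.arange(0, 0.5, 1 / fs)
      impulses = sum(np.exp(-800 * np.clip(t - k * 0.01, 0, None)) *
                     np.sin(2 * np.pi * 3000 * t) * (t >= k * 0.01) for k in range(50))
      signal = impulses + 0.5 * np.random.default_rng(0).normal(size=t.size)
      envelope = teager_energy(signal)          # the impulse instants stand out in the TEO output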

  16. Bi-dimensional empirical mode decomposition based fringe-like pattern suppression in polarization interference imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Ren, Wenyi; Cao, Qizhi; Wu, Dan; Jiang, Jiangang; Yang, Guoan; Xie, Yingge; Wang, Guodong; Zhang, Sheqi

    2018-01-01

    Many observers using interference imaging spectrometers are plagued by the fringe-like pattern (FP) that occurs for optical wavelengths in the red and near-infrared region. It complicates data processing steps such as spectrum calibration and information retrieval. An adaptive method based on bi-dimensional empirical mode decomposition was developed to suppress the nonlinear FP in a polarization interference imaging spectrometer. The FP and the corrected interferogram were separated effectively. Meanwhile, the stripes introduced by the CCD mosaic were suppressed. Nonlinear interferogram background removal and spectrum distortion correction were implemented as well. The method provides an alternative way to adaptively suppress the nonlinear FP without prior experimental data or knowledge. This approach is potentially a powerful tool in the fields of Fourier transform spectroscopy, holographic imaging, optical measurement based on moiré fringes, etc.

  17. [The validation of the effect of correcting spectral background changes based on floating reference method by simulation].

    PubMed

    Wang, Zhu-lou; Zhang, Wan-jie; Li, Chen-xi; Chen, Wen-liang; Xu, Ke-xin

    2015-02-01

    There are several challenges in near-infrared non-invasive blood glucose measurement, such as the low signal-to-noise ratio of the instrument, unstable measurement conditions, and the unpredictable and irregular changes of the measured object. Therefore, it is difficult to accurately extract blood glucose concentration information from the complicated signals. A reference measurement method is usually considered to eliminate the effect of background changes, but there is no reference substance that changes synchronously with the analyte. After many years of research, our research group has proposed the floating reference method, which succeeded in eliminating the spectral effects induced by instrument drifts and the measured object's background variations. However, our studies indicate that the reference point changes with measurement location and wavelength. Therefore, the effects of the floating reference method should be verified comprehensively. In this paper, for simplicity, a Monte Carlo simulation employing Intralipid solutions with concentrations of 5% and 10% is performed to verify the effectiveness of the floating reference method in eliminating the consequences of light source drift. The light source drift is introduced by varying the incident photon number. The effectiveness of the floating reference method, with corresponding reference points at different wavelengths, in eliminating the variations of the light source drift is estimated. The comparison of the prediction abilities of calibration models with and without this method shows that the RMSEPs are decreased by about 98.57% (5% Intralipid) and 99.36% (10% Intralipid). The results indicate that the floating reference method is clearly effective in eliminating background changes.

  18. MR-based attenuation correction methods for improved PET quantification in lesions within bone and susceptibility artifact regions.

    PubMed

    Bezrukov, Ilja; Schmidt, Holger; Mantlik, Frédéric; Schwenzer, Nina; Brendle, Cornelia; Schölkopf, Bernhard; Pichler, Bernd J

    2013-10-01

    Hybrid PET/MR systems have recently entered clinical practice. Thus, the accuracy of MR-based attenuation correction in simultaneously acquired data can now be investigated. We assessed the accuracy of 4 methods of MR-based attenuation correction in lesions within soft tissue, bone, and MR susceptibility artifacts: 2 segmentation-based methods (SEG1, provided by the manufacturer, and SEG2, a method with atlas-based susceptibility artifact correction); an atlas- and pattern recognition-based method (AT&PR), which also used artifact correction; and a new method combining AT&PR and SEG2 (SEG2wBONE). Attenuation maps were calculated for the PET/MR datasets of 10 patients acquired on a whole-body PET/MR system, allowing for simultaneous acquisition of PET and MR data. Eighty percent iso-contour volumes of interest were placed on lesions in soft tissue (n = 21), in bone (n = 20), near bone (n = 19), and within or near MR susceptibility artifacts (n = 9). Relative mean volume-of-interest differences were calculated with CT-based attenuation correction as a reference. For soft-tissue lesions, none of the methods revealed a significant difference in PET standardized uptake value relative to CT-based attenuation correction (SEG1, -2.6% ± 5.8%; SEG2, -1.6% ± 4.9%; AT&PR, -4.7% ± 6.5%; SEG2wBONE, 0.2% ± 5.3%). For bone lesions, underestimation of PET standardized uptake values was found for all methods, with minimized error for the atlas-based approaches (SEG1, -16.1% ± 9.7%; SEG2, -11.0% ± 6.7%; AT&PR, -6.6% ± 5.0%; SEG2wBONE, -4.7% ± 4.4%). For lesions near bone, underestimations of lower magnitude were observed (SEG1, -12.0% ± 7.4%; SEG2, -9.2% ± 6.5%; AT&PR, -4.6% ± 7.8%; SEG2wBONE, -4.2% ± 6.2%). For lesions affected by MR susceptibility artifacts, quantification errors could be reduced using the atlas-based artifact correction (SEG1, -54.0% ± 38.4%; SEG2, -15.0% ± 12.2%; AT&PR, -4.1% ± 11.2%; SEG2wBONE, 0.6% ± 11.1%). For soft-tissue lesions, none of the evaluated methods showed statistically significant errors. For bone lesions, significant underestimations of -16% and -11% occurred for methods in which bone tissue was ignored (SEG1 and SEG2). In the present attenuation correction schemes, uncorrected MR susceptibility artifacts typically result in reduced attenuation values, potentially leading to highly reduced PET standardized uptake values, rendering lesions indistinguishable from background. While AT&PR and SEG2wBONE show accurate results in both soft tissue and bone, SEG2wBONE uses a two-step approach for tissue classification, which increases the robustness of prediction and can be applied retrospectively if more precision in bone areas is needed.

  19. Seasonal changes in background levels of deuterium and oxygen-18 prove water drinking by harp seals, which affects the use of the doubly labelled water method.

    PubMed

    Nordøy, Erling S; Lager, Anne R; Schots, Pauke C

    2017-12-01

    The aim of this study was to monitor seasonal changes in stable isotopes of pool freshwater and harp seal ( Phoca groenlandica ) body water, and to study whether these potential seasonal changes might bias results obtained using the doubly labelled water (DLW) method when measuring energy expenditure in animals with access to freshwater. Seasonal changes in the background levels of deuterium and oxygen-18 in the body water of four captive harp seals and in the freshwater pool in which they were kept were measured over a time period of 1 year. The seals were offered daily amounts of capelin and kept under a seasonal photoperiod of 69°N. Large seasonal variations of deuterium and oxygen-18 in the pool water were measured, and the isotope abundance in the body water showed similar seasonal changes to the pool water. This shows that the seals were continuously equilibrating with the surrounding water as a result of significant daily water drinking. Variations in background levels of deuterium and oxygen-18 in freshwater sources may be due to seasonal changes in physical processes such as precipitation and evaporation that cause fractionation of isotopes. Rapid and abrupt changes in the background levels of deuterium and oxygen-18 may complicate calculation of energy expenditure by use of the DLW method. It is therefore strongly recommended that analysis of seasonal changes in background levels of isotopes is performed before the DLW method is applied on (free-ranging) animals, and to use a control group in order to correct for changes in background levels. © 2017. Published by The Company of Biologists Ltd.

  20. The perception of isoluminant coloured stimuli of amblyopic eye and defocused eye

    NASA Astrophysics Data System (ADS)

    Krumina, Gunta; Ozolinsh, Maris; Ikaunieks, Gatis

    2008-09-01

    In routine eye examinations, visual acuity is usually determined using standard charts with black letters on a white background; however, contrast and colour are important characteristics of visual perception. The purpose of this research was to study the perception of isoluminant coloured stimuli in the cases of true and simulated amblyopia. We estimated the difference in visual acuity with isoluminant coloured stimuli compared with that for high-contrast black-white stimuli for true and simulated amblyopia. Tests were generated on a computer screen. Visual acuity was measured using different charts in two ways: standard achromatic stimuli (black symbols on a white background) and isoluminant coloured stimuli (white symbols on a yellow background, grey symbols on a blue, green or red background). Thus the isoluminant tests had colour contrast only and no luminance contrast. Visual acuity evaluated with the standard method and with the colour tests was studied for subjects with good visual acuity, using the best vision correction if necessary. The same was performed for subjects with a defocused eye and with true amblyopia. Defocus was realized with optical lenses placed in front of the normal eye. The results obtained with the isoluminant colour charts revealed a worsening of visual acuity compared with the visual acuity estimated with the standard high-contrast method.

  1. High throughput film dosimetry in homogeneous and heterogeneous media for a small animal irradiator

    PubMed Central

    Wack, L.; Ngwa, W.; Tryggestad, E.; Tsiamas, P.; Berbeco, R.; Ng, S.K.; Hesser, J.

    2013-01-01

    Purpose We have established a high-throughput Gafchromic film dosimetry protocol for narrow kilo-voltage beams in homogeneous and heterogeneous media for small-animal radiotherapy applications. The kV beam characterization is based on extensive Gafchromic film dosimetry data acquired in homogeneous and heterogeneous media. An empirical model is used for parameterization of depth and off-axis dependence of measured data. Methods We have modified previously published methods of film dosimetry to suit the specific tasks of the study. Unlike film protocols used in previous studies, our protocol employs simultaneous multichannel scanning and analysis of up to nine Gafchromic films per scan. A scanner and background correction were implemented to improve accuracy of the measurements. Measurements were taken in homogeneous and inhomogeneous phantoms at 220 kVp and a field size of 5 × 5 mm2. The results were compared against Monte Carlo simulations. Results Dose differences caused by variations in background signal were effectively removed by the corrections applied. Measurements in homogeneous phantoms were used to empirically characterize beam data in homogeneous and heterogeneous media. Film measurements in inhomogeneous phantoms and their empirical parameterization differed by about 2%–3%. The model differed from MC by about 1% (water, lung) to 7% (bone). Good agreement was found for measured and modelled off-axis ratios. Conclusions EBT2 films are a valuable tool for characterization of narrow kV beams, though care must be taken to eliminate disturbances caused by varying background signals. The usefulness of the empirical beam model in interpretation and parameterization of film data was demonstrated. PMID:23510532

  2. Parallel Low-Loss Measurement of Multiple Atomic Qubits

    NASA Astrophysics Data System (ADS)

    Kwon, Minho; Ebert, Matthew F.; Walker, Thad G.; Saffman, M.

    2017-11-01

    We demonstrate low-loss measurement of the hyperfine ground state of rubidium atoms by state dependent fluorescence detection in a dipole trap array of five sites. The presence of atoms and their internal states are minimally altered by utilizing circularly polarized probe light and a strictly controlled quantization axis. We achieve mean state detection fidelity of 97% without correcting for imperfect state preparation or background losses, and 98.7% when corrected. After state detection and correction for background losses, the probability of atom loss due to the state measurement is <2 % and the initial hyperfine state is preserved with >98 % probability.

  3. Semi-supervised anomaly detection - towards model-independent searches of new physics

    NASA Astrophysics Data System (ADS)

    Kuusela, Mikael; Vatanen, Tommi; Malmi, Eric; Raiko, Tapani; Aaltonen, Timo; Nagai, Yoshikazu

    2012-06-01

    Most classification algorithms used in high energy physics fall under the category of supervised machine learning. Such methods require a training set containing both signal and background events and are prone to classification errors should this training data be systematically inaccurate, for example due to the assumed MC model. To complement such model-dependent searches, we propose an algorithm based on semi-supervised anomaly detection techniques, which does not require an MC training sample for the signal data. We first model the background using a multivariate Gaussian mixture model. We then search for deviations from this model by fitting to the observations a mixture of the background model and a number of additional Gaussians. This allows us to perform pattern recognition of any anomalous excess over the background. We show by a comparison to neural network classifiers that such an approach is considerably more robust against misspecification of the signal MC than supervised classification. In cases where there is an unexpected signal, a neural network might fail to correctly identify it, while anomaly detection does not suffer from such a limitation. On the other hand, when there are no systematic errors in the training data, both methods perform comparably.
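
    As an editorial illustration only: the paper's actual fit (a fixed background mixture plus additional signal Gaussians) is not reproduced here. The minimal Python sketch below, on entirely synthetic data, shows just the background-modelling half of the idea, flagging observations that a background-only Gaussian mixture finds unlikely. The component count, quantile cut, and scikit-learn usage are assumptions of the sketch, not values from the study.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # background-only training sample (stands in for a well-understood background)
        background = rng.normal(0.0, 1.0, size=(5000, 2))
        # "observed" events: mostly background plus a small localized excess
        observed = np.vstack([rng.normal(0.0, 1.0, size=(4900, 2)),
                              rng.normal(3.0, 0.3, size=(100, 2))])

        bkg_model = GaussianMixture(n_components=3, random_state=0).fit(background)
        log_like = bkg_model.score_samples(observed)      # log-likelihood under the background model
        cut = np.quantile(bkg_model.score_samples(background), 0.01)
        candidates = observed[log_like < cut]             # events the background model finds unlikely
        print(f"flagged {len(candidates)} of {len(observed)} events as anomalous")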

  4. B1- non-uniformity correction of phased-array coils without measuring coil sensitivity.

    PubMed

    Damen, Frederick C; Cai, Kejia

    2018-04-18

    Parallel imaging can be used to increase SNR and shorten acquisition times, albeit at the cost of image non-uniformity. B1− non-uniformity correction techniques are confounded by signal that varies not only due to coil-induced B1− sensitivity variation, but also the object's own intrinsic signal. Herein, we propose a method that makes minimal assumptions and uses only the coil images themselves to produce a single combined B1− non-uniformity-corrected complex image with the highest available SNR. A novel background noise classifier is used to select voxels of sufficient quality to avoid the need for regularization. Unique properties of the magnitude and phase were used to reduce the B1− sensitivity to two joint additive models for estimation of the B1− inhomogeneity. The complementary corruption of the imaged object across the coil images is used to abate individual coil correction imperfections. Results are presented from two anatomical cases: (a) an abdominal image that is challenging in both extreme B1− sensitivity and intrinsic tissue signal variation, and (b) a brain image with moderate B1− sensitivity and intrinsic tissue signal variation. A new relative Signal-to-Noise Ratio (rSNR) quality metric is proposed to evaluate the performance of the proposed method and the RF receiving coil array. The proposed method has been shown to be robust to imaged objects with widely inhomogeneous intrinsic signal, and resilient to poorly performing coil elements. Copyright © 2018. Published by Elsevier Inc.

  5. --No Title--

    Science.gov Websites

    Background information: bias reduction = ( | domain-averaged ensemble mean bias | − | domain-averaged bias-corrected ensemble mean bias | ) / | domain-averaged bias-corrected ensemble mean bias |

  6. Ensemble stacking mitigates biases in inference of synaptic connectivity.

    PubMed

    Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N

    2018-01-01

    A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: a signing procedure improves the power of unsigned mutual-information-based approaches, and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
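
    A hedged, minimal sketch of the stacking step alone: synthetic "connectivity scores" and ground truth stand in for the paper's spike-timing inference methods, and a plain logistic combiner stands in for the weighted ensemble. None of the parameters below come from the study.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        n_pairs = 4000
        truth = (rng.random(n_pairs) < 0.1).astype(int)   # ground-truth connections, ~10% density

        # three synthetic inference scores, each a differently noisy view of the same ground truth
        scores = np.column_stack([truth + rng.normal(0, s, n_pairs) for s in (0.8, 1.0, 1.2)])

        X_tr, X_te, y_tr, y_te = train_test_split(scores, truth, test_size=0.5, random_state=0)
        stacker = LogisticRegression().fit(X_tr, y_tr)    # linear combination of the measures
        ensemble = stacker.predict_proba(X_te)[:, 1]

        best_single = max(roc_auc_score(y_te, X_te[:, j]) for j in range(X_te.shape[1]))
        print("best single-measure AUC:", round(best_single, 3),
              "ensemble AUC:", round(roc_auc_score(y_te, ensemble), 3))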

  7. ON THE PROPER USE OF THE REDUCED SPEED OF LIGHT APPROXIMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y., E-mail: gnedin@fnal.gov

    I show that the reduced speed of light (RSL) approximation, when used properly (i.e., as originally designed—only for local sources but not for the cosmic background), remains a highly accurate numerical method for modeling cosmic reionization. Simulated ionization and star formation histories from the “Cosmic Reionization on Computers” project are insensitive to the adopted value of the RSL for as long as that value does not fall below about 10% of the true speed of light. A recent claim of the failure of the RSL approximation in the Illustris reionization model appears to be due to the effective speed of light being reduced in the equation for the cosmic background too and hence illustrates the importance of maintaining the correct speed of light in modeling the cosmic background.

  8. Inverse probability weighting in STI/HIV prevention research: methods for evaluating social and community interventions

    PubMed Central

    Lippman, Sheri A.; Shade, Starley B.; Hubbard, Alan E.

    2011-01-01

    Background Intervention effects estimated from non-randomized intervention studies are plagued by biases, yet social or structural intervention studies are rarely randomized. There are underutilized statistical methods available to mitigate biases due to self-selection, missing data, and confounding in longitudinal, observational data permitting estimation of causal effects. We demonstrate the use of Inverse Probability Weighting (IPW) to evaluate the effect of participating in a combined clinical and social STI/HIV prevention intervention on reduction of incident chlamydia and gonorrhea infections among sex workers in Brazil. Methods We demonstrate the step-by-step use of IPW, including presentation of the theoretical background, data set up, model selection for weighting, application of weights, estimation of effects using varied modeling procedures, and discussion of assumptions for use of IPW. Results 420 sex workers contributed data on 840 incident chlamydia and gonorrhea infections. Participators were compared to non-participators following application of inverse probability weights to correct for differences in covariate patterns between exposed and unexposed participants and between those who remained in the intervention and those who were lost-to-follow-up. Estimators using four model selection procedures provided estimates of intervention effect between odds ratio (OR) 0.43 (95% CI: 0.22–0.85) and 0.53 (95% CI: 0.26–1.1). Conclusions After correcting for selection bias, loss-to-follow-up, and confounding, our analysis suggests a protective effect of participating in the Encontros intervention. Evaluations of behavioral, social, and multi-level interventions to prevent STI can benefit by introduction of weighting methods such as IPW. PMID:20375927
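
    A minimal IPW sketch in Python, assuming a simple logistic propensity model, stabilized weights, and a weighted logistic outcome model on synthetic data. The variable names, data-generating process, and effect sizes are illustrative assumptions, not the study's models or estimates.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n = 420
        X = rng.normal(size=(n, 3))                                     # baseline covariates
        a = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)    # participation (exposure)
        y = (rng.random(n) < 0.35 - 0.15 * a).astype(int)               # incident infection (outcome)

        # 1) model the probability of participation given covariates (propensity score)
        ps = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]

        # 2) stabilized inverse probability weights
        p_a = a.mean()
        w = np.where(a == 1, p_a / ps, (1 - p_a) / (1 - ps))

        # 3) weighted outcome model; the exposure coefficient approximates a marginal log-odds ratio
        msm = LogisticRegression().fit(a.reshape(-1, 1), y, sample_weight=w)
        print("weighted odds ratio estimate:", round(float(np.exp(msm.coef_[0, 0])), 2))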

  9. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    NASA Astrophysics Data System (ADS)

    Di Mauro, M.; Manconi, S.; Zechlin, H.-S.; Ajello, M.; Charles, E.; Donato, F.

    2018-04-01

    The Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ∼10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  10. Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models

    PubMed Central

    2013-01-01

    Background In statistical modeling, finding the most favorable coding for an explanatory quantitative variable involves many tests. This process involves multiple testing problems and requires the correction of the significance level. Methods For each coding, a test on the nullity of the coefficient associated with the new coded variable is computed. The selected coding corresponds to that associated with the largest statistical test (or equivalently the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probability Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, has been developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results The simulations we ran in this study showed good performances of the proposed methods. These methods were illustrated using the data from a study of the relationship between cholesterol and dementia. Conclusion The algorithms were implemented using R, and the associated CPMCGLM R package is available on the CRAN. PMID:23758852
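
    The paper's score-test correction is not reproduced here. As a rough illustration of a resampling-based correction over multiple candidate codings, the sketch below uses a permutation null of the maximum statistic on synthetic data; the cut points, sample size, and choice of test statistic are assumptions of the sketch only.

        import numpy as np

        def max_abs_corr(y, codings):
            # largest absolute correlation between the outcome and any candidate coding
            return max(abs(np.corrcoef(y, c)[0, 1]) for c in codings)

        rng = np.random.default_rng(3)
        n = 200
        x = rng.normal(size=n)                                  # explanatory variable (e.g., cholesterol)
        y = 0.3 * (x > 0.5) + rng.normal(size=n)                # outcome with a threshold-shaped effect

        # candidate codings: dichotomizations of x at several cut points
        codings = [(x > q).astype(float) for q in np.quantile(x, [0.25, 0.5, 0.75])]
        observed = max_abs_corr(y, codings)

        # resampling-based correction: permute y to approximate the null distribution
        # of the maximum statistic over all codings tried
        null = np.array([max_abs_corr(rng.permutation(y), codings) for _ in range(2000)])
        p_corrected = (np.sum(null >= observed) + 1) / (null.size + 1)
        print("multiplicity-corrected p-value:", round(p_corrected, 4))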

  11. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    PubMed

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-06

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
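
    The following is not the authors' SFT implementation; it is a minimal sketch of the general segment-then-threshold idea on a synthetic image, with the tile size, the background-selection percentile, and the mean-plus-k·std threshold all being assumed parameters.

        import numpy as np

        def segment_threshold(img, tile=16, k=3.0):
            # Split the image into square segments and collect per-segment statistics.
            h, w = img.shape
            stats = []
            for i in range(0, h - tile + 1, tile):
                for j in range(0, w - tile + 1, tile):
                    seg = img[i:i + tile, j:j + tile]
                    stats.append((seg.mean(), seg.std()))
            stats = np.array(stats)
            # Treat the least variable segments as background and derive a threshold from them.
            bg = stats[stats[:, 1] <= np.percentile(stats[:, 1], 25)]
            thresh = bg[:, 0].mean() + k * bg[:, 1].mean()
            return img > thresh

        rng = np.random.default_rng(4)
        img = rng.normal(100.0, 5.0, size=(128, 128))          # dim, noisy background
        img[30:38, 40:48] += 60.0                              # one bright "spot"
        mask = segment_threshold(img)
        print("signal pixels found:", int(mask.sum()))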

  12. Segment and Fit Thresholding: A New Method for Image Analysis Applied to Microarray and Immunofluorescence Data

    PubMed Central

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M.; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E.; Allen, Peter J.; Sempere, Lorenzo F.; Haab, Brian B.

    2016-01-01

    Certain experiments involve the high-throughput quantification of image data, thus requiring algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multi-color, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu’s method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978

  13. Reducing respiratory motion artifacts in positron emission tomography through retrospective stacking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorndyke, Brian; Schreibmann, Eduard; Koong, Albert

    Respiratory motion artifacts in positron emission tomography (PET) imaging can alter lesion intensity profiles, and result in substantially reduced activity and contrast-to-noise ratios (CNRs). We propose a corrective algorithm, coined 'retrospective stacking' (RS), to restore image quality without requiring additional scan time. Retrospective stacking uses b-spline deformable image registration to combine amplitude-binned PET data along the entire respiratory cycle into a single respiratory end point. We applied the method to a phantom model consisting of a small, hot vial oscillating within a warm background, as well as to ¹⁸FDG-PET images of a pancreatic and a liver patient. Comparisons were made using cross-section visualizations, activity profiles, and CNRs within the region of interest. Retrospective stacking was found to properly restore the lesion location and intensity profile in all cases. In addition, RS provided CNR improvements up to three-fold over gated images, and up to five-fold over ungated data. These phantom and patient studies demonstrate that RS can correct for lesion motion and deformation, while substantially improving tumor visibility and background noise.

  14. Background of SAM atom-fraction profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ernst, Frank

    Atom-fraction profiles acquired by SAM (scanning Auger microprobe) have important applications, e.g. in the context of alloy surface engineering by infusion of carbon or nitrogen through the alloy surface. However, such profiles often exhibit an artifact in form of a background with a level that anti-correlates with the local atom fraction. This article presents a theory explaining this phenomenon as a consequence of the way in which random noise in the spectrum propagates into the discretized differentiated spectrum that is used for quantification. The resulting model of “energy channel statistics” leads to a useful semi-quantitative background reduction procedure, which is validated by applying it to simulated data. Subsequently, the procedure is applied to an example of experimental SAM data. The analysis leads to conclusions regarding optimum experimental acquisition conditions. The proposed method of background reduction is based on general principles and should be useful for a broad variety of applications. - Highlights: • Atom-fraction–depth profiles of carbon measured by scanning Auger microprobe • Strong background, varies with local carbon concentration. • Needs correction e.g. for quantitative comparison with simulations • Quantitative theory explains background. • Provides background removal strategy and practical advice for acquisition.

  15. A background correction algorithm for Van Allen Probes MagEIS electron flux measurements

    DOE PAGES

    Claudepierre, S. G.; O'Brien, T. P.; Blake, J. B.; ...

    2015-07-14

    We describe an automated computer algorithm designed to remove background contamination from the Van Allen Probes Magnetic Electron Ion Spectrometer (MagEIS) electron flux measurements. We provide a detailed description of the algorithm with illustrative examples from on-orbit data. We find two primary sources of background contamination in the MagEIS electron data: inner zone protons and bremsstrahlung X-rays generated by energetic electrons interacting with the spacecraft material. Bremsstrahlung X-rays primarily produce contamination in the lower energy MagEIS electron channels (~30–500 keV) and in regions of geospace where multi-MeV electrons are present. Inner zone protons produce contamination in all MagEIS energy channels at roughly L < 2.5. The background-corrected MagEIS electron data produce a more accurate measurement of the electron radiation belts, as most earlier measurements suffer from unquantifiable and uncorrectable contamination in this harsh region of the near-Earth space environment. These background-corrected data will also be useful for spacecraft engineering purposes, providing ground truth for the near-Earth electron environment and informing the next generation of spacecraft design models (e.g., AE9).

  16. Publisher Correction: Cluster richness-mass calibration with cosmic microwave background lensing

    NASA Astrophysics Data System (ADS)

    Geach, James E.; Peacock, John A.

    2018-03-01

    Owing to a technical error, the `Additional information' section of the originally published PDF version of this Letter incorrectly gave J.A.P. as the corresponding author; it should have read J.E.G. This has now been corrected. The HTML version is correct.

  17. A comprehensive numerical analysis of background phase correction with V-SHARP.

    PubMed

    Özbay, Pinar Senay; Deistung, Andreas; Feng, Xiang; Nanz, Daniel; Reichenbach, Jürgen Rainer; Schweser, Ferdinand

    2017-04-01

    Sophisticated harmonic artifact reduction for phase data (SHARP) is a method to remove background field contributions in MRI phase images, which is an essential processing step for quantitative susceptibility mapping (QSM). To perform SHARP, a spherical kernel radius and a regularization parameter need to be defined. In this study, we carried out an extensive analysis of the effect of these two parameters on the corrected phase images and on the reconstructed susceptibility maps. As a result of the dependence of the parameters on acquisition and processing characteristics, we propose a new SHARP scheme with generalized parameters. The new SHARP scheme uses a high-pass filtering approach to define the regularization parameter. We employed the variable-kernel SHARP (V-SHARP) approach, using different maximum radii (Rm) between 1 and 15 mm and varying regularization parameters (f) in a numerical brain model. The local root-mean-square error (RMSE) between the ground-truth, background-corrected field map and the results from SHARP decreased towards the center of the brain. RMSE of susceptibility maps calculated with a spatial domain algorithm was smallest for Rm between 6 and 10 mm and f between 0 and 0.01 mm⁻¹, and for maps calculated with a Fourier domain algorithm for Rm between 10 and 15 mm and f between 0 and 0.0091 mm⁻¹. We demonstrated and confirmed the new parameter scheme in vivo. The novel regularization scheme allows the use of the same regularization parameter irrespective of other imaging parameters, such as image resolution. Copyright © 2016 John Wiley & Sons, Ltd.
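
    As an illustration of the spherical-mean-value step that SHARP-type methods build on (not V-SHARP itself, which additionally varies the kernel radius, deconvolves, and regularizes), here is a minimal Python sketch on a synthetic 3-D field; the kernel radius and field model are assumptions of the sketch.

        import numpy as np
        from scipy import ndimage

        def spherical_mean_residual(field, radius_vox=4):
            # Crude background-field suppression: subtract the local spherical mean of the
            # field (the SMV step of SHARP) without the deconvolution/regularization SHARP adds.
            r = radius_vox
            zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
            kernel = ((xx**2 + yy**2 + zz**2) <= r**2).astype(float)
            kernel /= kernel.sum()
            smv = ndimage.convolve(field, kernel, mode="nearest")
            return field - smv   # rough local-field estimate, up to the missing deconvolution

        # toy example: slowly varying background plus a small local perturbation
        z, y, x = np.mgrid[0:40, 0:40, 0:40]
        background = 1e-2 * z.astype(float)
        local = np.exp(-(((x - 20)**2 + (y - 20)**2 + (z - 20)**2) / 18.0))
        corrected = spherical_mean_residual(background + local)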

  18. On the Limitations of Variational Bias Correction

    NASA Technical Reports Server (NTRS)

    Moradi, Isaac; Mccarty, Will; Gelaro, Ronald

    2018-01-01

    Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, forward operator error, and observation error, so that all these errors are summed up together and counted as observation error. We identify some sources of observation errors (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.

  19. Spinorial Geometry and Supergravity

    NASA Astrophysics Data System (ADS)

    Gillard, Joe

    2006-08-01

    In the main part of this thesis, we present the foundations and initial results of the Spinorial Geometry formalism for solving Killing spinor equations. This method can be used for any supergravity theory, although we largely focus on D=11 supergravity. The D=5 case is investigated in an appendix. The exposition provides a comprehensive introduction to the formalism, and contains background material on the complex spin representations which, it is hoped, will provide a useful bridge between the mathematical literature and our methods. Many solutions to the D=11 Killing spinor equations are presented, and the consequences for the spacetime geometry are explored in each case. Also in this thesis, we consider another class of supergravity solutions, namely heterotic string backgrounds with (2,0) world-sheet supersymmetry. We investigate the consequences of taking alpha-prime corrections into account in the field equations, in order to remain consistent with anomaly cancellation, while requiring that spacetime supersymmetry is preserved.

  20. Separation of organic cations using novel background electrolytes by capillary electrophoresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steiner, S.; Fritz, J.

    2008-02-12

    A background electrolyte (BGE) for capillary electrophoresis containing tris(hydroxymethyl)aminomethane (THAM) and ethanesulfonic acid (ESA) gives excellent efficiency for separation of drug cations with actual theoretical plate numbers as high as 300,000. However, the analyte cations often elute too quickly and consequently offer only a narrow window for separation. The best way to correct this is to induce a reverse electroosmotic flow (EOF) that will spread out the peaks by slowing their migration rates, but this has always been difficult to accomplish in a controlled manner. A new method for producing a variable EOF is described in which a low, variable concentration of tributylammonium- or triethylammonium ESA is added to the BGE. The additive equilibrates with the capillary wall to give it a positive charge and thereby produce a controlled opposing EOF. Excellent separations of complex drug mixtures were obtained by this method.

  1. SHAPE Selection (SHAPES) enrich for RNA structure signal in SHAPE sequencing-based probing data

    PubMed Central

    Poulsen, Line Dahl; Kielpinski, Lukasz Jan; Salama, Sofie R.; Krogh, Anders; Vinther, Jeppe

    2015-01-01

    Selective 2′ Hydroxyl Acylation analyzed by Primer Extension (SHAPE) is an accurate method for probing of RNA secondary structure. In existing SHAPE methods, the SHAPE probing signal is normalized to a no-reagent control to correct for the background caused by premature termination of the reverse transcriptase. Here, we introduce a SHAPE Selection (SHAPES) reagent, N-propanone isatoic anhydride (NPIA), which retains the ability of SHAPE reagents to accurately probe RNA structure, but also allows covalent coupling between the SHAPES reagent and a biotin molecule. We demonstrate that SHAPES-based selection of cDNA–RNA hybrids on streptavidin beads effectively removes the large majority of background signal present in SHAPE probing data and that sequencing-based SHAPES data contain the same amount of RNA structure data as regular sequencing-based SHAPE data obtained through normalization to a no-reagent control. Moreover, the selection efficiently enriches for probed RNAs, suggesting that the SHAPES strategy will be useful for applications with high-background and low-probing signal such as in vivo RNA structure probing. PMID:25805860

  2. Recursive least squares background prediction of univariate syndromic surveillance data.

    PubMed

    Najmi, Amir-Homayoon; Burkom, Howard

    2009-01-16

    Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the Day of the Week effect. Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which a prediction and detection component of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. We present detection results in the form of Receiver Operating Characteristic curve values for four different injected signal to noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold detection algorithm from the residuals of the RLS forecasts. We compare the detection performance of this algorithm to the W2 method recently implemented at CDC. The modified RLS method gives consistently better sensitivity at multiple background alert rates, and we recommend that it should be considered for routine application in biosurveillance systems.
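
    A minimal recursive-least-squares forecasting sketch on synthetic daily counts. It is one-step-ahead and omits the paper's day-of-week transformation and seven-day-ahead horizon; the filter order, forgetting factor, and initialization are assumed values, not those of the published method.

        import numpy as np

        def rls_forecast(counts, order=7, lam=0.98, delta=100.0):
            # One-step-ahead recursive least squares forecast of a daily count series.
            # order: number of past days used as predictors; lam: forgetting factor.
            w = np.zeros(order)
            P = np.eye(order) * delta
            preds = np.full(len(counts), np.nan)
            for t in range(order, len(counts)):
                x = counts[t - order:t][::-1].astype(float)   # most recent day first
                preds[t] = w @ x
                err = counts[t] - preds[t]
                k = P @ x / (lam + x @ P @ x)                 # gain vector
                w = w + k * err
                P = (P - np.outer(k, x) @ P) / lam
            return preds

        rng = np.random.default_rng(5)
        weekly = np.tile([30, 32, 31, 29, 33, 15, 12], 52)    # weekday/weekend pattern
        counts = rng.poisson(weekly)
        preds = rls_forecast(counts)
        residuals = counts - preds                            # basis for a threshold detection step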

  3. 76 FR 56949 - Biomass Crop Assistance Program; Corrections

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-15

    .... ACTION: Interim rule; correction. SUMMARY: The Commodity Credit Corporation (CCC) is amending the Biomass... funds in favor of the ``project area'' portion of BCAP. CCC is also correcting errors in the regulation... INFORMATION: Background CCC published a final rule on October 27, 2010 (75 FR 66202-66243) implementing BCAP...

  4. Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach

    PubMed Central

    Tian, Yuan; Guan, Tao; Wang, Cheng

    2010-01-01

    To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278

  5. Goldindec: A Novel Algorithm for Raman Spectrum Baseline Correction

    PubMed Central

    Liu, Juntao; Sun, Jianyang; Huang, Xiuzhen; Li, Guojun; Liu, Binqiang

    2016-01-01

    Raman spectra have been widely used in biology, physics, and chemistry and have become an essential tool for the studies of macromolecules. Nevertheless, the raw Raman signal is often obscured by a broad background curve (or baseline) due to the intrinsic fluorescence of the organic molecules, which leads to unpredictable negative effects in quantitative analysis of Raman spectra. Therefore, it is essential to correct this baseline before analyzing raw Raman spectra. Polynomial fitting has proven to be the most convenient and simplest method and has high accuracy. In polynomial fitting, the cost function used and its parameters are crucial. This article proposes a novel iterative algorithm named Goldindec, freely available for noncommercial use as noted in text, with a new cost function that not only conquers the influence of great peaks but also solves the problem of low correction accuracy when there is a high peak number. Goldindec automatically generates parameters from the raw data rather than by empirical choice, as in previous methods. Comparisons with other algorithms on the benchmark data show that Goldindec has a higher accuracy and computational efficiency, and is hardly affected by great peaks, peak number, and wavenumber. PMID:26037638
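
    Goldindec's specific cost function is not reproduced here; the sketch below only shows the generic iterative polynomial baseline idea it improves upon (fit a polynomial, clip the spectrum to the fit so peaks stop pulling the baseline up, refit). The degree, iteration count, and synthetic spectrum are assumptions for illustration.

        import numpy as np

        def iterative_poly_baseline(y, degree=4, n_iter=50):
            # Generic iterative polynomial baseline estimate.
            x = np.arange(len(y))
            work = y.astype(float).copy()
            for _ in range(n_iter):
                coeffs = np.polyfit(x, work, degree)
                baseline = np.polyval(coeffs, x)
                work = np.minimum(work, baseline)   # suppress points above the current fit (peaks)
            return baseline

        # toy Raman-like spectrum: broad fluorescence background plus two peaks
        x = np.linspace(0, 1, 1000)
        background = 200 * np.exp(-2 * x)
        peaks = 80 * np.exp(-0.5 * ((x - 0.3) / 0.01) ** 2) + 50 * np.exp(-0.5 * ((x - 0.7) / 0.01) ** 2)
        spectrum = background + peaks + np.random.default_rng(6).normal(0, 2, x.size)
        corrected = spectrum - iterative_poly_baseline(spectrum)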

  6. Cause-specific mortality time series analysis: a general method to detect and correct for abrupt data production changes

    PubMed Central

    2011-01-01

    Background Monitoring the time course of mortality by cause is a key public health issue. However, several mortality data production changes may affect cause-specific time trends, thus altering the interpretation. This paper proposes a statistical method that detects abrupt changes ("jumps") and estimates correction factors that may be used for further analysis. Methods The method was applied to a subset of the AMIEHS (Avoidable Mortality in the European Union, toward better Indicators for the Effectiveness of Health Systems) project mortality database and considered for six European countries and 13 selected causes of deaths. For each country and cause of death, an automated jump detection method called Polydect was applied to the log mortality rate time series. The plausibility of a data production change associated with each detected jump was evaluated through literature search or feedback obtained from the national data producers. For each plausible jump position, the statistical significance of the between-age and between-gender jump amplitude heterogeneity was evaluated by means of a generalized additive regression model, and correction factors were deduced from the results. Results Forty-nine jumps were detected by the Polydect method from 1970 to 2005. Most of the detected jumps were found to be plausible. The age- and gender-specific amplitudes of the jumps were estimated when they were statistically heterogeneous, and they showed greater by-age heterogeneity than by-gender heterogeneity. Conclusion The method presented in this paper was successfully applied to a large set of causes of death and countries. The method appears to be an alternative to bridge coding methods when the latter are not systematically implemented because they are time- and resource-consuming. PMID:21929756

  7. A simple and low cost dual-wavelength β-correction spectrophotometric determination and speciation of mercury(II) in water using chromogenic reagent 4-(2-thiazolylazo) resorcinol

    NASA Astrophysics Data System (ADS)

    Al-Bagawi, A. H.; Ahmad, W.; Saigl, Z. M.; Alwael, H.; Al-Harbi, E. A.; El-Shahawi, M. S.

    2017-12-01

    The most common problems in spectrophotometric determination of various complex species originate from the background spectral interference. Thus, the present study aimed to overcome the spectral matrix interference for the precise analysis and speciation of mercury(II) in water by dual-wavelength β-correction spectrophotometry using 4-(2-thiazolylazo) resorcinol (TAR) as chromogenic reagent. The principle was based on measuring the correct absorbance for the formed complex of mercury(II) ions with TAR reagent at 547 nm (λmax). Under optimized conditions, a linear dynamic range of 0.1–2.0 μg mL⁻¹ with a correlation coefficient (R²) of 0.997 was obtained, with a lower limit of detection (LOD) of 0.024 μg mL⁻¹ and limit of quantification (LOQ) of 0.081 μg mL⁻¹. The values of RSD and relative error (RE) obtained for the β-correction method and single-wavelength spectrophotometry were 1.3, 1.32% and 4.7, 5.9%, respectively. The method was validated in tap and sea water in terms of the data obtained from inductively coupled plasma-optical emission spectrometry (ICP-OES) using Student's t and F tests. The developed methodology satisfactorily overcomes the spectral interference in trace determination and speciation of mercury(II) ions in water.

  8. MRI image plane nonuniformity in evaluation of ferrous sulphate dosimeter gel (FeGel) by means of T1-relaxation time.

    PubMed

    Magnusson, P; Bäck, S A; Olsson, L E

    1999-11-01

    MR image nonuniformity can vary significantly with the spin-echo pulse sequence repetition time. When MR images with different nonuniformity shapes are used in a T1 calculation, the resulting T1 image becomes nonuniform. As shown in this work, the uniformity TR-dependence of the spin-echo pulse sequence is a critical property for T1 measurements in general and for ferrous sulfate dosimeter gel (FeGel) applications in particular. The purpose was to study the characteristics of the MR image plane nonuniformity in FeGel evaluation. This included studies of the possibility of decreasing nonuniformities by selecting uniformity-optimized repetition times, studies of the transmitted and received RF fields, and studies of the effectiveness of two correction methods, background subtraction and quotient correction. A pronounced MR image nonuniformity variation with repetition and T1 relaxation time was observed, and was found to originate from nonuniform RF transmission in combination with the inherent differences in T1 relaxation for different repetition times. Neither the T1 calculation itself, the uniformity-optimized repetition times, nor any of the correction methods studied could sufficiently correct the nonuniformities observed in the T1 images. The nonuniformities were found to vary considerably less with inversion time for the inversion-recovery pulse sequence than with repetition time for the spin-echo pulse sequence, resulting in considerably lower T1 image nonuniformity levels.

  9. Meta-analysis of alcohol price and income elasticities – with corrections for publication bias

    PubMed Central

    2013-01-01

    Background This paper contributes to the evidence-base on prices and alcohol use by presenting meta-analytic summaries of price and income elasticities for alcohol beverages. The analysis improves on previous meta-analyses by correcting for outliers and publication bias. Methods Adjusting for outliers is important to avoid assigning too much weight to studies with very small standard errors or large effect sizes. Trimmed samples are used for this purpose. Correcting for publication bias is important to avoid giving too much weight to studies that reflect selection by investigators or others involved with publication processes. Cumulative meta-analysis is proposed as a method to avoid or reduce publication bias, resulting in more robust estimates. The literature search obtained 182 primary studies for aggregate alcohol consumption, which exceeds the database used in previous reviews and meta-analyses. Results For individual beverages, corrected price elasticities are smaller (less elastic) by 28-29 percent compared with consensus averages frequently used for alcohol beverages. The average price and income elasticities are: beer, -0.30 and 0.50; wine, -0.45 and 1.00; and spirits, -0.55 and 1.00. For total alcohol, the price elasticity is -0.50 and the income elasticity is 0.60. Conclusions These new results imply that attempts to reduce alcohol consumption through price or tax increases will be less effective or more costly than previously claimed. PMID:23883547

  10. Does Human Milk Modulate Body Composition in Late Preterm Infants at Term-Corrected Age?

    PubMed

    Giannì, Maria Lorella; Consonni, Dario; Liotto, Nadia; Roggero, Paola; Morlacchi, Laura; Piemontese, Pasqua; Menis, Camilla; Mosca, Fabio

    2016-10-23

    (1) Background: Late preterm infants account for the majority of preterm births and are at risk of altered body composition. Because body composition modulates later health outcomes and human milk is recommended as the normal method for infant feeding, we sought to investigate whether human milk feeding in early life can modulate body composition development in late preterm infants; (2) Methods: Neonatal, anthropometric and feeding data of 284 late preterm infants were collected. Body composition was evaluated at term-corrected age by air displacement plethysmography. The effect of human milk feeding on fat-free mass and fat mass content was evaluated using multiple linear regression analysis; (3) Results: Human milk was fed to 68% of the infants. According to multiple regression analysis, being fed any human milk at discharge and at term-corrected age and being fed exclusively human milk at term-corrected age were positively associated with fat-free mass content (β = -47.9, 95% confidence interval (CI) = -95.7; -0.18; p = 0.049; β = -89.6, 95% CI = -131.5; -47.7; p < 0.0001; β = -104.1, 95% CI = -151.4; -56.7, p < 0.0001); (4) Conclusion: Human milk feeding appears to be associated with fat-free mass deposition in late preterm infants. Healthcare professionals should direct efforts toward promoting and supporting breastfeeding in these vulnerable infants.

  11. Context-Sensitive Spelling Correction of Consumer-Generated Content on Health Care

    PubMed Central

    Chen, Rudan; Zhao, Xianyang; Xu, Wei; Cheng, Wenqing; Lin, Simon

    2015-01-01

    Background Consumer-generated content, such as postings on social media websites, can serve as an ideal source of information for studying health care from a consumer’s perspective. However, consumer-generated content on health care topics often contains spelling errors, which, if not corrected, will be obstacles for downstream computer-based text analysis. Objective In this study, we proposed a framework with a spelling correction system designed for consumer-generated content and a novel ontology-based evaluation system which was used to efficiently assess the correction quality. Additionally, we emphasized the importance of context sensitivity in the correction process, and demonstrated why correction methods designed for electronic medical records (EMRs) failed to perform well with consumer-generated content. Methods First, we developed our spelling correction system based on Google Spell Checker. The system processed postings acquired from MedHelp, a biomedical bulletin board system (BBS), and saved misspelled words (eg, sertaline) and corresponding corrected words (eg, sertraline) into two separate sets. Second, to reduce the number of words needing manual examination in the evaluation process, we respectively matched the words in the two sets with terms in two biomedical ontologies: RxNorm and Systematized Nomenclature of Medicine -- Clinical Terms (SNOMED CT). The ratio of words which could be matched and appropriately corrected was used to evaluate the correction system’s overall performance. Third, we categorized the misspelled words according to the types of spelling errors. Finally, we calculated the ratio of abbreviations in the postings, which remarkably differed between EMRs and consumer-generated content and could largely influence the overall performance of spelling checkers. Results An uncorrected word and the corresponding corrected word was called a spelling pair, and the two words in the spelling pair were its members. In our study, there were 271 spelling pairs detected, among which 58 (21.4%) pairs had one or two members matched in the selected ontologies. The ratio of appropriate correction in the 271 overall spelling errors was 85.2% (231/271). The ratio of that in the 58 spelling pairs was 86% (50/58), close to the overall ratio. We also found that linguistic errors took up 31.4% (85/271) of all errors detected, and only 0.98% (210/21,358) of words in the postings were abbreviations, which was much lower than the ratio in the EMRs (33.6%). Conclusions We conclude that our system can accurately correct spelling errors in consumer-generated content. Context sensitivity is indispensable in the correction process. Additionally, it can be confirmed that consumer-generated content differs from EMRs in that consumers seldom use abbreviations. Also, the evaluation method, taking advantage of biomedical ontology, can effectively estimate the accuracy of the correction system and reduce manual examination time. PMID:26232246

  12. On the proper use of the reduced speed of light approximation

    DOE PAGES

    Gnedin, Nickolay Y.

    2016-12-07

    I show that the Reduced Speed of Light (RSL) approximation, when used properly (i.e. as originally designed - only for the local sources but not for the cosmic background), remains a highly accurate numerical method for modeling cosmic reionization. Simulated ionization and star formation histories from the "Cosmic Reionization On Computers" (CROC) project are insensitive to the adopted value of the reduced speed of light for as long as that value does not fall below about 10% of the true speed of light. Here, a recent claim of the failure of the RSL approximation in the Illustris reionization model appears to be due to the effective speed of light being reduced in the equation for the cosmic background too, and, hence, illustrates the importance of maintaining the correct speed of light in modeling the cosmic background.

  13. On the proper use of the reduced speed of light approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y.

    I show that the Reduced Speed of Light (RSL) approximation, when used properly (i.e. as originally designed - only for the local sources but not for the cosmic background), remains a highly accurate numerical method for modeling cosmic reionization. Simulated ionization and star formation histories from the "Cosmic Reionization On Computers" (CROC) project are insensitive to the adopted value of the reduced speed of light for as long as that value does not fall below about 10% of the true speed of light. Here, a recent claim of the failure of the RSL approximation in the Illustris reionization model appears to be due to the effective speed of light being reduced in the equation for the cosmic background too, and, hence, illustrates the importance of maintaining the correct speed of light in modeling the cosmic background.

  14. Atmospheric turbulence compensation with laser phase shifting interferometry

    NASA Astrophysics Data System (ADS)

    Rabien, S.; Eisenhauer, F.; Genzel, R.; Davies, R. I.; Ott, T.

    2006-04-01

    Laser guide stars with adaptive optics allow astronomical image correction in the absence of a natural guide star. Single guide star systems with a star created in the earth's sodium layer can be used to correct the wavefront in the near infrared spectral regime for 8-m class telescopes. For possible future telescopes of larger sizes, or for correction at shorter wavelengths, the use of a single guide star is ultimately limited by focal anisoplanatism that arises from the finite height of the guide star. To overcome this limitation we propose to overlap coherently pulsed laser beams that are expanded over the full aperture of the telescope, traveling upwards along the same path which light from the astronomical object travels downwards. Imaging the scattered light from the resultant interference pattern with a camera gated to a certain height above the telescope, and using phase shifting interferometry we have found a method to retrieve the local wavefront gradients. By sensing the backscattered light from two different heights, one can fully remove the cone effect, which can otherwise be a serious handicap to the use of laser guide stars at shorter wavelengths or on larger telescopes. Using two laser beams multiconjugate correction is possible, resulting in larger corrected fields. With a proper choice of laser, wavefront correction could be expanded to the visible regime and, due to the lack of a cone effect, the method is applicable to any size of telescope. Finally the position of the laser spot could be imaged from the side of the main telescope against a bright background star to retrieve tip-tilt information, which would greatly improve the sky coverage of the system.

  15. Higher Flexibility and Better Immediate Spontaneous Correction May Not Gain Better Results for Nonstructural Thoracic Curve in Lenke 5C AIS Patients

    PubMed Central

    Zhang, Yanbin; Lin, Guanfeng; Wang, Shengru; Zhang, Jianguo; Shen, Jianxiong; Wang, Yipeng; Guo, Jianwei; Yang, Xinyu; Zhao, Lijuan

    2016-01-01

    Study Design. Retrospective study. Objective. To study the behavior of the unfused thoracic curve in Lenke type 5C during the follow-up and to identify risk factors for its correction loss. Summary of Background Data. Few studies have focused on the spontaneous behaviors of the unfused thoracic curve after selective thoracolumbar or lumbar fusion during the follow-up and the risk factors for spontaneous correction loss. Methods. We retrospectively reviewed 45 patients (41 females and 4 males) with AIS who underwent selective TL/L fusion from 2006 to 2012 in a single institution. The follow-up averaged 36 months (range, 24–105 months). Patients were divided into two groups. Thoracic curves in group A improved or maintained their curve magnitude after spontaneous correction, with a negative or no correction loss during the follow-up. Thoracic curves in group B deteriorated after spontaneous correction with a positive correction loss. Univariate and multivariate analyses were performed to identify the risk factors for correction loss of the unfused thoracic curves. Results. The minor thoracic curve was 26° preoperatively. It was corrected to 13° immediately with a spontaneous correction of 48.5%. At final follow-up it was 14° with a correction loss of 1°. Thoracic curves did not deteriorate after spontaneous correction in 23 cases in group A, while 22 cases were identified with thoracic curve progressing in group B. In multivariate analysis, two risk factors were independently associated with thoracic correction loss: higher flexibility and better immediate spontaneous correction rate of thoracic curve. Conclusion. Posterior selective TL/L fusion with pedicle screw constructs is an effective treatment for Lenke 5C AIS patients. Nonstructural thoracic curves with higher flexibility or better immediate correction are more likely to progress during the follow-up and close attention must be paid to these patients in case of decompensation. Level of Evidence: 4 PMID:27831989

  16. The lightest supersymmetric particle and the extragalactic gamma-ray background

    NASA Technical Reports Server (NTRS)

    Gao, Yi-Tian; Stecker, Floyd W.; Cline, David B.

    1991-01-01

    The possibility that the extragalactic gamma-ray background (EGB) is produced by cosmological photino annihilation is examined, with particular attention given to the lightest supersymmetric particle (LSP). The LSP is considered a general type of the best-motivated candidates for cosmic dark matter (CDM). The theoretical analysis employs a corrected assumption for the annihilation cross section, and cosmological integrations are performed through the early phases of the universe. Romberg's method is used for numerical integration, and the total optical depth is developed for the gamma-ray region. The computed LSP-type annihilation fluxes are found to be negligible when compared to the total EGB observed, suggesting that the LSP candidates for CDM are not significant contributors to the EGB.

  17. Process Evaluation of Two Participatory Approaches: Implementing Total Worker Health® Interventions in a Correctional Workforce

    PubMed Central

    Dugan, Alicia G.; Farr, Dana A.; Namazi, Sara; Henning, Robert A.; Wallace, Kelly N.; El Ghaziri, Mazen; Punnett, Laura; Dussetschleger, Jeffrey L.; Cherniack, Martin G.

    2018-01-01

    Background Correctional Officers (COs) have among the highest injury rates and poorest health of all the public safety occupations. The HITEC-2 (Health Improvement Through Employee Control-2) study uses Participatory Action Research (PAR) to design and implement interventions to improve health and safety of COs. Method HITEC-2 compared two different types of participatory program, a CO-only “Design Team” (DT) and “Kaizen Event Teams” (KET) of COs and supervisors, to determine differences in implementation process and outcomes. The Program Evaluation Rating Sheet (PERS) was developed to document and evaluate program implementation. Results Both programs yielded successful and unsuccessful interventions, dependent upon team-, facility-, organizational, state-, facilitator-, and intervention-level factors. Conclusions PAR in corrections, and possibly other sectors, depends upon factors including participation, leadership, continuity and timing, resilience, and financial circumstances. The new PERS instrument may be useful in other sectors to assist in assessing intervention success. PMID:27378470

  18. Pregnancy and Parenting Support for Incarcerated Women: Lessons Learned

    PubMed Central

    Shlafer, Rebecca J.; Gerrity, Erica; Duwe, Grant

    2017-01-01

    Background There are more than 200,000 incarcerated women in U.S. prisons and jails, and it is estimated that 6% to 10% are pregnant. Pregnant incarcerated women experience complex risks that can compromise their health and the health of their offspring. Objectives Identify lessons learned from a community–university pilot study of a prison-based pregnancy and parenting support program. Methods A community–university–corrections partnership was formed to provide education and support to pregnant incarcerated women through a prison-based pilot program. Evaluation data assessed women’s physical and mental health concerns and satisfaction with the program. Between October 2011 and December 2012, 48 women participated. Lessons Learned We learned that providing services for pregnant incarcerated women requires an effective partnership with the Department of Corrections, adaptations to traditional community-based participatory research (CBPR) approaches, and resources that support both direct service and ongoing evaluation. Conclusions Effective services for pregnant incarcerated women can be provided through a successful community– university–corrections partnership. PMID:26548788

  19. Research on pre-processing of QR Code

    NASA Astrophysics Data System (ADS)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR code encodes many kinds of information because of its advantages: large storage capacity, high reliability, full-angle ultra-high-speed reading, small printing size, high-efficiency representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR code, this paper studies pre-processing methods for QR code (Quick Response Code) and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive thresholding method. Additionally, we introduce a QR code extraction step that adapts to different image sizes and a flexible image correction approach, improving the efficiency and accuracy of QR code image processing.

  20. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE PAGES

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.; ...

    2018-03-29

    Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  1. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.

    Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  2. Born iterative reconstruction using perturbed-phase field estimates.

    PubMed

    Astheimer, Jeffrey P; Waag, Robert C

    2008-10-01

    A method of image reconstruction from scattering measurements for use in ultrasonic imaging is presented. The method employs distorted-wave Born iteration but does not require using a forward-problem solver or solving large systems of equations. These calculations are avoided by limiting intermediate estimates of medium variations to smooth functions in which the propagated fields can be approximated by phase perturbations derived from variations in a geometric path along rays. The reconstruction itself is formed by a modification of the filtered-backpropagation formula that includes correction terms to account for propagation through an estimated background. Numerical studies that validate the method for parameter ranges of interest in medical applications are presented. The efficiency of this method offers the possibility of real-time imaging from scattering measurements.

  3. A critique of recent economic evaluations of community water fluoridation

    PubMed Central

    Ko, Lee; Thiessen, Kathleen M

    2015-01-01

    Background: Although community water fluoridation (CWF) results in a range of potential contaminant exposures, little attention has been given to many of the possible impacts. A central argument for CWF is its cost-effectiveness. The U.S. Government states that $1 spent on CWF saves $38 in dental treatment costs. Objective: To examine the reported cost-effectiveness of CWF. Methods: Methods and underlying data from the primary U.S. economic evaluation of CWF are analyzed and corrected calculations are described. Other recent economic evaluations are also examined. Results: Recent economic evaluations of CWF contain defective estimations of both costs and benefits. Incorrect handling of dental treatment costs and flawed estimates of effectiveness lead to overestimated benefits. The real-world costs to water treatment plants and communities are not reflected. Conclusions: Minimal correction reduced the savings to $3 per person per year (PPPY) for a best-case scenario, but this savings is eliminated by the estimated cost of treating dental fluorosis. PMID:25471729

  4. Real-time text extraction based on the page layout analysis system

    NASA Astrophysics Data System (ADS)

    Soua, M.; Benchekroun, A.; Kachouri, R.; Akil, M.

    2017-05-01

    Several approaches have been proposed to extract text from scanned documents. However, text extraction in heterogeneous documents remains a real challenge. Indeed, text extraction in this context is a difficult task because of the variation of the text due to differences in size, style and orientation, as well as the complexity of the document region background. Recently, we proposed the improved hybrid binarization based on K-means method (I-HBK) to extract text suitably from heterogeneous documents. In this method, the Page Layout Analysis (PLA), part of the Tesseract OCR engine, is used to identify text and image regions. Afterwards, our hybrid binarization is applied separately to each kind of region. On the one hand, gamma correction is employed before processing image regions; on the other hand, binarization is performed directly on text regions. Then, a foreground and background color study is performed to correct inverted region colors. Finally, characters are located in the binarized regions based on the PLA algorithm. In this work, we extend the integration of the PLA algorithm within the I-HBK method. In addition, to speed up the text and image separation step, we employ an efficient GPU acceleration. Through the performed experiments, we demonstrate the high F-measure accuracy of the PLA algorithm, reaching 95% on the LRDE dataset. In addition, we compare the sequential and parallel PLA versions. The obtained results give a speedup of 3.7x when comparing the parallel PLA implementation on a GPU GTX 660 to the CPU version.

  5. Holographic corrections to meson scattering amplitudes

    NASA Astrophysics Data System (ADS)

    Armoni, Adi; Ireson, Edwin

    2017-06-01

    We compute meson scattering amplitudes using the holographic duality between confining gauge theories and string theory, in order to consider holographic corrections to the Veneziano amplitude and associated higher-point functions. The generic nature of such computations is explained, thanks to the well-understood nature of confining string backgrounds, and two different examples of the calculation in given backgrounds are used to illustrate the details. The effect we discover, whilst only qualitative, is re-obtainable in many such examples, in four-point but also higher point amplitudes.

  6. Improved tomographic reconstructions using adaptive time-dependent intensity normalization.

    PubMed

    Titarenko, Valeriy; Titarenko, Sofya; Withers, Philip J; De Carlo, Francesco; Xiao, Xianghui

    2010-09-01

    The first processing step in synchrotron-based micro-tomography is the normalization of the projection images against the background, also referred to as a white field. Owing to time-dependent variations in illumination and defects in detection sensitivity, the white field is different from the projection background. In this case standard normalization methods introduce ring and wave artefacts into the resulting three-dimensional reconstruction. In this paper the authors propose a new adaptive technique accounting for these variations and allowing one to obtain cleaner normalized data and to suppress ring and wave artefacts. The background is modelled by the product of two time-dependent terms representing the illumination and detection stages. These terms are written as unknown functions, one scaled and shifted along a fixed direction (describing the illumination term) and one translated by an unknown two-dimensional vector (describing the detection term). The proposed method is applied to two sets (a stem Salix variegata and a zebrafish Danio rerio) acquired at the parallel beam of the micro-tomography station 2-BM at the Advanced Photon Source showing significant reductions in both ring and wave artefacts. In principle the method could be used to correct for time-dependent phenomena that affect other tomographic imaging geometries such as cone beam laboratory X-ray computed tomography.
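
    For context, the standard (non-adaptive) normalization that this work improves upon simply divides each projection by the white field after dark-field subtraction. A minimal sketch, assuming NumPy arrays of equal shape, is shown below; the adaptive modelling of the white field as a product of time-dependent illumination and detection terms described in the abstract is not reproduced here.

```python
import numpy as np

def normalize_projection(projection, white_field, dark_field=None):
    """Standard white-field (flat-field) normalization of one projection.

    The adaptive, time-dependent correction proposed in the paper would
    replace `white_field` with a per-projection background model; this
    sketch only shows the conventional first processing step.
    """
    if dark_field is None:
        dark_field = np.zeros_like(projection)
    denominator = np.clip(white_field - dark_field, 1e-6, None)
    return (projection - dark_field) / denominator
```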

  7. Using Visual Displays to Communicate Risk of Cancer to Women From Diverse Race/Ethnic Backgrounds

    PubMed Central

    Wong, Sabrina T.; Pérez-Stable, Eliseo J.; Kim, Sue E.; Gregorich, Steven E.; Sawaya, George F.; Walsh, Judith M. E.; Washington, A. Eugene; Kaplan, Celia P.

    2012-01-01

    Objective This study evaluated how well women from diverse race/ethnic groups were able to take a quantitative cancer risk statistic verbally provided to them and report it in a visual format. Methods Cross-sectional survey was administered in English, Spanish or Chinese, to women aged 50 to 80 (n=1,160), recruited from primary care practices. The survey contained breast, colorectal or cervical cancer questions regarding screening and prevention. Women were told cancer-specific lifetime risk then shown a visual display of risk and asked to indicate the specific lifetime risk. Correct indication of risk was the main outcome. Results Correct responses on icon arrays were 46% for breast, 55% for colon, and 44% for cervical; only 25% correctly responded to a magnifying glass graphic. Compared to Whites, African American and Latina women were significantly less likely to use the icon arrays correctly. Higher education and higher numeracy were associated with correct responses. Lower education was associated with lower numeracy. Conclusions Race/Ethnic differences were associated with women’s ability to take a quantitative cancer risk statistic verbally provided to them and report it in a visual format. Practice Implications Systematically considering the complexity of intersecting factors such as race/ethnicity, educational level, poverty, and numeracy in most health communications is needed. PMID:22244322

  8. A Voice Enabled Procedure Browser for the International Space Station

    NASA Technical Reports Server (NTRS)

    Rayner, Manny; Chatzichrisafis, Nikos; Hockey, Beth Ann; Farrell, Kim; Renders, Jean-Michel

    2005-01-01

    Clarissa, an experimental voice enabled procedure browser that has recently been deployed on the International Space Station (ISS), is to the best of our knowledge the first spoken dialog system in space. This paper gives background on the system and the ISS procedures, then discusses the research developed to address three key problems: grammar-based speech recognition using the Regulus toolkit; SVM based methods for open microphone speech recognition; and robust side-effect free dialogue management for handling undos, corrections and confirmations.

  9. ERRATUM: High-resolution electron spectroscopy of the 1s23lnl' Be-like series in oxygen and neon. Test of theoretical data: I. Experimental method and theoretical background

    NASA Astrophysics Data System (ADS)

    Bordenave-Montesquieu, A.; Moretto-Capelle, P.; Bordenave-Montesquieu, D.

    2003-02-01

    The J. Phys. B publishing team would like to apologize to the authors of the above paper. In this paper, references [42] and [43] were printed incorrectly. The correct references are: [42] Bordenave-Montesquieu A, Gleizes A and Benoit-Cattin P 1982 Phys. Rev. A 25 245-67 [43] Bordenave-Montesquieu A et al 1987 J. Phys. B: At. Mol. Phys. 20 L695-703.

  10. Processing method of images obtained during the TESIS/CORONAS-PHOTON experiment

    NASA Astrophysics Data System (ADS)

    Kuzin, S. V.; Shestov, S. V.; Bogachev, S. A.; Pertsov, A. A.; Ulyanov, A. S.; Reva, A. A.

    2011-04-01

    In January 2009, the CORONAS-PHOTON spacecraft was successfully launched. It includes a set of telescopes and spectroheliometers—TESIS—designed to image the solar corona in soft X-ray and EUV spectral ranges. Due to features of the reading system, to obtain physical information from these images, it is necessary to preprocess them, i.e., to remove the background, correct the white field, level, and clean. The paper discusses the algorithms and software developed and used for the preprocessing of images.

  11. Phase-ambiguity resolution for QPSK modulation systems. Part 1: A review

    NASA Technical Reports Server (NTRS)

    Nguyen, Tien Manh

    1989-01-01

    Part 1 reviews the current phase-ambiguity resolution techniques for QPSK coherent modulation systems. Here, those known and published methods of resolving phase ambiguity for QPSK with and without Forward-Error-Correcting (FEC) are discussed. The necessary background is provided for a complete understanding of the second part where a new technique will be discussed. An appropriate technique to the Consultative Committee for Space Data Systems (CCSDS) is recommended for consideration in future standards on phase-ambiguity resolution for QPSK coherent modulation systems.

  12. [An automatic peak detection method for LIBS spectrum based on continuous wavelet transform].

    PubMed

    Chen, Peng-Fei; Tian, Di; Qiao, Shu-Jun; Yang, Guang

    2014-07-01

    Spectrum peak detection in laser-induced breakdown spectroscopy (LIBS) is an essential step, but the presence of background and noise seriously disturbs the accuracy of the peak positions. The present paper proposes an automatic peak detection method for LIBS spectra that improves the ability to resolve overlapping peaks and the adaptivity of the search. We introduce the ridge peak detection method based on the continuous wavelet transform to LIBS, discuss the choice of the mother wavelet, and optimize the scale factor and the shift factor. The method also improves ridge peak detection with a ridge-correction step. The experimental results show that, compared with other peak detection methods (the direct comparison method, the derivative method and the ridge peak search method), our method has a significant advantage in the ability to distinguish overlapping peaks and in the precision of peak detection, and it can be applied to data processing in LIBS.
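
    SciPy ships a generic ridge-based peak detector over a continuous wavelet transform. A minimal sketch of the same idea (without the ridge-correction step the paper adds, and with an assumed data file and illustrative width range) is:

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# Hypothetical LIBS spectrum stored as one intensity value per wavelength bin
spectrum = np.loadtxt("libs_spectrum.txt")

# Widths should span the expected peak widths (in samples); SciPy builds a
# ridge map across these scales and keeps ridges that satisfy SNR criteria.
peak_indices = find_peaks_cwt(spectrum, widths=np.arange(1, 15), min_snr=2)
```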

  13. [Study on trace elements of lake sediments by ICP-AES and XRF core scanning].

    PubMed

    Cheng, Ai-Ying; Yu, Jun-Qing; Gao, Chun-Liang; Zhang, Li-Sha; He, Xian-Hu

    2013-07-01

    This is the first study of the sediment of Toson Lake in the Qaidam Basin. Trace elements including Cd, Cr, Cu, Zn and Pb in the lake sediment were measured by ICP-AES. Different digestion methods were studied and optimized, and an optimum pretreatment system for the Toson Lake sediment was finally determined, namely an HCl-HNO3-HF-HClO4-H2O2 system in the proportions of 5 : 5 : 5 : 1 : 1. At the same time, the data were compared with measurements by XRF core scanning, the use of a moisture content correction method was analyzed, and the influence of the moisture content on the scanning method was discussed. The results showed that, compared with the background values, the contents of Cd and Zn were slightly higher, while the contents of Cr, Cu and Pb were within the background limits. XRF core scanning was controlled by the sediment elements as well as, to some extent, by the water content of the sediment. The results of the two methods showed a significant positive correlation, with correlation coefficients of 0.673-0.925, and the two methods are highly comparable.

  14. --No Title--

    Science.gov Websites

    Background information: bias reduction = ( | domain-averaged ensemble mean bias | − | domain-averaged bias-corrected ensemble mean bias | ) / | domain-averaged bias-corrected ensemble mean bias | (NAEFS/EMC ensemble products, NCEP/National Weather Service).

  15. Attentiveness of pediatricians to primary immunodeficiency disorders

    PubMed Central

    2012-01-01

    Background Primary immunodeficiency (PID) is a cluster of serious disorders that requires special alertness on the part of the medical staff for prompt diagnosis and management of the patient. This study explored PID knowledge and experience among pediatricians of wide educational backgrounds, practicing in the United Arab Emirates (UAE). Method A self-administered questionnaire was used to determine the competency of pediatricians in their knowledge of PID disorders. This study questionnaire included questions on PID signs and symptoms, syndromes associated with immunodeficiency, screening tests, interpreting laboratory tests and case management. The participants were 263 pediatricians of diverse education working in the 27 governmental hospitals in all regions of UAE. Results The overall performance of the pediatricians did not differ based on their age, gender, origin of certification, rank, or years of experience. Of the 50 questions, 20% of pediatricians answered correctly <60% of the questions, 76% answered correctly 60 to 79% of the questions, and 4% answered correctly ≥80% of the questions. Seventeen of the 19 PID signs and symptoms were identified by 55 to 97%. Four of 5 syndromes associated with immunodeficiency were identified by 50 to 90%. Appropriate screening tests were chosen by 64 to 96%. Attention to the laboratory reference range values as function of patient age was notably limited. Conclusions There was a noteworthy deficiency in PID work-up. Therefore, implementing effective educational strategies is needed to improve the competency of pediatricians to diagnose and manage PID disorders. PMID:22846098

  16. Evaluation of noise limits to improve image processing in soft X-ray projection microscopy.

    PubMed

    Jamsranjav, Erdenetogtokh; Kuge, Kenichi; Ito, Atsushi; Kinjo, Yasuhito; Shiina, Tatsuo

    2017-03-03

    Soft X-ray microscopy has been developed for high-resolution imaging of hydrated biological specimens owing to the availability of the water window region. In particular, projection-type microscopy has advantages in its wide viewing area, easy zooming function and easy extensibility to computed tomography (CT). The blur of the projection image due to Fresnel diffraction of X-rays, which eventually reduces spatial resolution, can be corrected by an iteration procedure, i.e., repetition of Fresnel and inverse Fresnel transformations. However, it was found that the correction is not effective for all images, especially for images with low contrast. In order to improve the effectiveness of image correction by computer processing, in this study we evaluated the influence of background noise on the iteration procedure through a simulation study. In the study, images of a model specimen with known morphology were used as a substitute for the chromosome images, one of the targets of our microscope. Under the condition that artificial noise was distributed randomly on the images, we introduced two different parameters to evaluate noise effects according to each situation in which the iteration procedure was unsuccessful, and proposed an upper limit of the noise within which an effective iteration procedure for the chromosome images is possible. The study indicated that the new simulation and noise evaluation method is useful for image processing where background noise cannot be ignored relative to the specimen images.

  17. An alternative estimation of the RF-enhanced plasma temperature during SPEAR artificial heating experiments: Early results

    NASA Astrophysics Data System (ADS)

    Vickers, H.; Baddeley, L.

    2011-11-01

    RF heating of the F region plasma at high latitudes has long been known to produce electron temperature increases that can vary from tens to hundreds of percent above the background, unperturbed level. In contrast, artificial ionospheric modification experiments conducted using the Space Plasma Exploration by Active Radar (SPEAR) heating facility on Svalbard have often failed to produce obvious enhancements in the electron temperatures when measured using the European Incoherent Scatter Svalbard radar (ESR), colocated with the heater. Contamination of the ESR ion line spectra by the zero-frequency purely growing mode (PGM) feature is known to persist at varying amplitudes throughout SPEAR heating, and such spectral features can lead to significant temperature underestimations when the incoherent scatter spectra are analyzed using conventional methods. In this study, we present the first results of applying a recently developed technique to correct the PGM-contaminated spectra to SPEAR-enhanced ESR spectra and derive an alternative estimate of the SPEAR-heated electron temperature. We discuss how the effectiveness of the spectrum corrections can be affected by the data variance, estimated over the integration period. The subsequent electron temperatures, inferred from corrected spectra, range from a few tens to a few hundred Kelvin above the average background temperature. These temperatures are found to be in reasonable agreement with the theoretical “enhanced” temperature, calculated for the peak of the stationary temperature perturbation profile, when realistic absorption effects are accounted for.

  18. FACET – a “Flexible Artifact Correction and Evaluation Toolbox” for concurrently recorded EEG/fMRI data

    PubMed Central

    2013-01-01

    Background In concurrent EEG/fMRI recordings, EEG data are impaired by the fMRI gradient artifacts, which exceed the EEG signal by several orders of magnitude. While several algorithms exist to correct the EEG data, they lack the flexibility to leave out or add processing steps. The open-source MATLAB toolbox FACET presented here is a modular toolbox for the fast and flexible correction and evaluation of imaging artifacts from concurrently recorded EEG datasets. It consists of an Analysis, a Correction and an Evaluation framework, allowing the user to choose from different artifact correction methods with various pre- and post-processing steps to form flexible combinations. The quality of the chosen correction approach can then be evaluated and compared across different settings. Results FACET was evaluated on a dataset provided with the FMRIB plugin for EEGLAB using two different correction approaches: Averaged Artifact Subtraction (AAS, Allen et al., NeuroImage 12(2):230–239, 2000) and FMRI Artifact Slice Template Removal (FASTR, Niazy et al., NeuroImage 28(3):720–737, 2005). The obtained results were compared to the FASTR algorithm implemented in the EEGLAB plugin FMRIB. No differences were found between the FACET implementation of FASTR and the original algorithm across all gradient-artifact-relevant performance indices. Conclusion The FACET toolbox not only provides facilities for all three modalities (data analysis, artifact correction, and evaluation and documentation of the results), but also offers an easily extendable framework for the development and evaluation of new approaches. PMID:24206927

  19. Complete NLO corrections to W+W+ scattering and its irreducible background at the LHC

    NASA Astrophysics Data System (ADS)

    Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu

    2017-10-01

    The process pp → μ +ν μ e+νejj receives several contributions of different orders in the strong and electroweak coupling constants. Using appropriate event selections, this process is dominated by vector-boson scattering (VBS) and has recently been measured at the LHC. It is thus of prime importance to estimate precisely each contribution. In this article we compute for the first time the full NLO QCD and electroweak corrections to VBS and its irreducible background processes with realistic experimental cuts. We do not rely on approximations but use complete amplitudes involving two different orders at tree level and three different orders at one-loop level. Since we take into account all interferences, at NLO level the corrections to the VBS process and to the QCD-induced irreducible background process contribute at the same orders. Hence the two processes cannot be unambiguously distinguished, and all contributions to the μ +ν μ e+νejj final state should be preferably measured together.

  20. The effect of on-line position correction on the dose distribution in focal radiotherapy for bladder cancer

    PubMed Central

    van Rooijen, Dominique C; van de Kamer, Jeroen B; Pool, René; Hulshof, Maarten CCM; Koning, Caro CE; Bel, Arjan

    2009-01-01

    Background The purpose of this study was to determine the dosimetric effect of on-line position correction for bladder tumor irradiation and to find methods to predict and handle this effect. Methods For 25 patients with unifocal bladder cancer intensity modulated radiotherapy (IMRT) with 5 beams was planned. The requirement for each plan was that 99% of the target volume received 95% of the prescribed dose. Tumor displacements from -2.0 cm to 2.0 cm in each dimension were simulated, using 0.5 cm increments, resulting in 729 simulations per patient. We assumed that on-line correction for the tumor was applied perfectly. We determined the correlation between the change in D99% and the change in path length, which is defined here as the distance from the skin to the isocenter for each beam. In addition the margin needed to avoid underdosage was determined and the probability that an underdosage occurs in a real treatment was calculated. Results Adjustments for tumor displacement with perfect on-line position correction resulted in an altered dose distribution. The altered fraction dose to the target varied from 91.9% to 100.4% of the prescribed dose. The mean D99% (± SD) was 95.8% ± 1.0%. There was a modest linear correlation between the difference in D99% and the change in path length of the beams after correction (R2 = 0.590). The median probability that a systematic underdosage occurs in a real treatment was 0.23% (range: 0 - 24.5%). A margin of 2 mm reduced that probability to < 0.001% in all patients. Conclusion On-line position correction does result in an altered target coverage, due to changes in average path length after position correction. An extra margin can be added to prevent underdosage. PMID:19775479

  1. 4D numerical observer for lesion detection in respiratory-gated PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lorsakul, Auranuch; Li, Quanzheng; Ouyang, Jinsong

    2014-10-15

    Purpose: Respiratory-gated positron emission tomography (PET)/computed tomography protocols reduce lesion smearing and improve lesion detection through a synchronized acquisition of emission data. However, objective assessment of the image-quality improvement gained from respiratory-gated PET has mainly been limited to a three-dimensional (3D) approach. This work proposes a 4D numerical observer that incorporates both spatial and temporal information for detection tasks in pulmonary oncology. Methods: The authors propose a 4D numerical observer constructed with a 3D channelized Hotelling observer for the spatial domain followed by a Hotelling observer for the temporal domain. Realistic ¹⁸F-fluorodeoxyglucose activity distributions were simulated using a 4D extended cardiac torso anthropomorphic phantom including 12 spherical lesions at different anatomical locations (lower, upper, anterior, and posterior) within the lungs. Simulated data based on Monte Carlo simulation were obtained using the GEANT4 application for tomographic emission (GATE). Fifty noise realizations of six respiratory-gated PET frames were simulated by GATE using a model of the Siemens Biograph mMR scanner geometry. PET sinograms of the thorax background and pulmonary lesions that were simulated separately were merged to generate different conditions of the lesions relative to the background (e.g., lesion contrast and motion). A conventional ordered subset expectation maximization (OSEM) reconstruction (5 iterations and 6 subsets) was used to obtain: (1) gated, (2) nongated, and (3) motion-corrected image volumes (a total of 3200 subimage volumes: 2400 gated, 400 nongated, and 400 motion-corrected). Lesion-detection signal-to-noise ratios (SNRs) were measured at different lesion-to-background contrast levels (3.5, 8.0, 9.0, and 20.0), lesion diameters (10.0, 13.0, and 16.0 mm), and respiratory motion displacements (17.6–31.3 mm). The proposed 4D numerical observer applied on multiple-gated images was compared to the conventional 3D approach applied on the nongated and motion-corrected images. Results: On average, the proposed 4D numerical observer improved the detection SNR by 48.6% (p < 0.005), whereas the 3D method on motion-corrected images improved it by 31.0% (p < 0.005), compared to the nongated method. For all the different lesion conditions, the relative SNR measurement (Gain = SNR_Observed/SNR_Nongated) of the 4D method was significantly higher than that of the motion-corrected 3D method, by 13.8% (p < 0.02), where Gain_4D was 1.49 ± 0.21 and Gain_3D was 1.31 ± 0.15. For the lesion with the highest amplitude of motion, the 4D numerical observer yielded the highest observer-performance improvement (176%). For the lesion undergoing the smallest motion amplitude, the 4D method provided superior lesion detectability compared with the 3D method, which provided a detection SNR close to the nongated method. The investigation of the structure of the 4D numerical observer showed that a Laguerre–Gaussian channel matrix with a volumetric 3D function yielded higher lesion-detection performance than one with a 2D-stack-channelized function, whereas a different kind of channels able to mimic the human visual system, i.e., difference-of-Gaussian, showed similar performance in detecting uniform and spherical lesions. The investigation of the detection performance with increasing noise levels yielded decreasing detection SNR, by 27.6% and 41.5% for the nongated and gated methods, respectively. The investigation of lesion contrast and diameter showed that the proposed 4D observer preserved the linearity property of an optimal linear observer while the motion was present. Furthermore, the investigation of the iteration and subset numbers of the OSEM algorithm demonstrated that these parameters had an impact on the lesion detectability and that selection of the optimal parameters could provide the maximum lesion-detection performance. The proposed 4D numerical observer outperformed the other observers for the lesion-detection task across the various lesion conditions and motions. Conclusions: The 4D numerical observer shows substantial improvement in lesion detectability over the 3D observer method. The proposed 4D approach could potentially provide a more reliable objective assessment of the impact of respiratory-gated PET improvement for lesion-detection tasks. On the other hand, the 4D approach may be used as an upper bound to investigate the performance of the motion correction method. In future work, the authors will validate the proposed 4D approach on clinical data for detection tasks in pulmonary oncology.

  2. [Target volume segmentation of PET images by an iterative method based on threshold value].

    PubMed

    Castro, P; Huerga, C; Glaría, L A; Plaza, R; Rodado, S; Marín, M D; Mañas, A; Serrada, A; Núñez, L

    2014-01-01

    An automatic segmentation method is presented for PET images based on an iterative threshold approach that includes the influence of both lesion size and the background present during the acquisition. Optimal threshold values that yield a correct segmentation of volumes were determined from a PET phantom study containing spheres of different sizes in different known radiation backgrounds. These optimal values were normalized to the background and adjusted by regression techniques to a function of two variables: lesion volume and signal-to-background ratio (SBR). This adjustment function was used to build an iterative segmentation method, and on this basis a procedure for automatic delineation was proposed. The procedure was validated on phantom images and its viability was confirmed by retrospectively applying it to two oncology patients. The resulting adjustment function had a linear dependence on the SBR and was inversely proportional and negative with respect to the volume. During the validation of the proposed method, the volume deviations with respect to the real value and the CT volume were below 10% and 9%, respectively, except for lesions with a volume below 0.6 ml. The proposed automatic segmentation method can be applied in clinical practice to tumor radiotherapy treatment planning in a simple and reliable way, with a precision close to the resolution of PET images. Copyright © 2013 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
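
    The iterative scheme described above can be sketched as follows. Here `threshold_model(volume_ml, sbr)` is a hypothetical stand-in for the regression function fitted on the phantom data; its form, the starting threshold, and the stopping tolerance are assumptions, not the published coefficients.

```python
import numpy as np

def iterative_threshold_segmentation(pet_volume, background, voxel_volume_ml,
                                     threshold_model, tol=0.01, max_iter=50):
    """Iteratively refine a segmentation threshold that depends on lesion
    volume and signal-to-background ratio (SBR), in the spirit of the
    scheme outlined above.

    threshold_model(volume_ml, sbr) returns the optimal threshold
    normalized to the background level (hypothetical fitted function).
    """
    threshold = 0.5 * pet_volume.max()  # illustrative starting point
    for _ in range(max_iter):
        mask = pet_volume >= threshold
        if not mask.any():
            break
        volume_ml = mask.sum() * voxel_volume_ml
        sbr = pet_volume[mask].mean() / background
        new_threshold = threshold_model(volume_ml, sbr) * background
        if abs(new_threshold - threshold) < tol * threshold:
            threshold = new_threshold
            break
        threshold = new_threshold
    return pet_volume >= threshold
```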

  3. WE-AB-204-10: Evaluation of a Novel Dedicated Breast PET System (Mammi-PET)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Z; Swanson, T; O’Connor, M

    2015-06-15

    Purpose: To evaluate the performance characteristics of a novel dedicated breast PET system (Mammi-PET, Oncovision). The system has 2 detector rings giving an axial/transaxial field of view of 8/17 cm. Each ring consists of 12 monolithic LYSO modules coupled to PSPMTs. Methods: Uniformity, sensitivity, energy and spatial resolution were measured according to NEMA standards. Count rate performance was investigated using a source of F-18 (1384 uCi) decayed over 5 half-lives. A prototype PET phantom was imaged for 20 min to evaluate image quality, recovery coefficients and partial volume effects. Under an IRB-approved protocol, 11 patients who just underwent whole body PET/CT exams were imaged prone with the breast pendulant at 5–10 minutes/breast. Image quality was assessed with and without scatter/attenuation correction and using different reconstruction algorithms. Results: Integral/differential uniformity were 9.8%/6.0% respectively. System sensitivity was 2.3% on axis, 2.2% and 2.8% at 3.8 cm and 7.8 cm off-axis. Mean energy resolution of all modules was 23.3%. Spatial resolution (FWHM) was 1.82 mm and 2.90 mm on axis and 5.8 cm off axis. Three cylinders (14 mm diameter) in the PET phantom were filled with activity concentration ratios of 4:1, 3:1, and 2:1 relative to the background. Measured cylinder to background ratios were 2.6, 1.8 and 1.5 (without corrections) and 3.6, 2.3 and 1.5 (with attenuation/scatter correction). Five cylinders (14, 10, 6, 4 and 2 mm diameter) each with an activity ratio of 4:1 were measured and showed recovery coefficients of 1, 0.66, 0.45, 0.18 and 0.18 (without corrections), and 1, 0.53, 0.30, 0.13 and 0 (with attenuation/scatter correction). Optimal phantom image quality was obtained with a 3D MLEM algorithm, >20 iterations and without attenuation/scatter correction. Conclusion: The MAMMI system demonstrated good performance characteristics. Further work is needed to determine the optimal reconstruction parameters for qualitative and quantitative applications.

  4. Feedback on students' clinical reasoning skills during fieldwork education

    PubMed Central

    de Beer, Marianne; Mårtensson, Lena

    2015-01-01

    Background/aim Feedback on clinical reasoning skills during fieldwork education is regarded as vital in occupational therapy students' professional development. The nature of supervisors' feedback however, could be confirmative and/or corrective and corrective feedback could be with or without suggestions on how to improve. The aim of the study was to evaluate the impact of supervisors' feedback on final-year occupational therapy students' clinical reasoning skills through comparing the nature of feedback with the students' subsequent clinical reasoning ability. Method A mixed-method approach with a convergent parallel design was used combining the collection and analysis of qualitative and quantitative data. From focus groups and interviews with students, data were collected and analysed qualitatively to determine how the students experienced the feedback they received from their supervisors. By quantitatively comparing the final practical exam grades with the nature of the feedback, their fieldwork End-of-Term grades and average academic performance it became possible to merge the results for comparison and interpretation. Results Students' clinical reasoning skills seem to be improved through corrective feedback if accompanied by suggestions on how to improve, irrespective of their average academic performance. Supervisors were inclined to underrate high performing students and overrate lower performing students. Conclusions Students who obtained higher grades in the final practical examinations received more corrective feedback with suggestions on how to improve from their supervisors. Confirmative feedback alone may not be sufficient for improving the clinical reasoning skills of students. PMID:26256854

  5. Transconjunctival Incision with Lateral Paracanthal Extension for Corrective Osteotomy of Malunioned Zygoma

    PubMed Central

    Chung, Jae-Ho; You, Hi-Jin; Hwang, Na-Hyun; Yoon, Eul-Sik

    2016-01-01

    Background Conventional correction of malunioned zygoma requires complete regional exposure through a bicoronal flap combined with a lower eyelid incision and an upper buccal sulcus incision. However, there are many potential complications following bicoronal incisions, such as infection, hematoma, alopecia, scarring and nerve injury. We have adopted a zygomaticofrontal suture osteotomy technique using transconjunctival incision with lateral paracanthal extension. We performed a retrospective review of clinical cases underwent correction of malunioned zygoma with the approach to evaluate outcomes following this method. Methods Between June 2009 and September 2015, corrective osteotomies were performed in 14 patients with malunioned zygoma by a single surgeon. All 14 patients received both upper gingivobuccal and transconjunctival incisions with lateral paracanthal extension. The mean interval from injury to operation was 16 months (range, 12 months to 4 years), and the mean follow-up was 1 year (range, 4 months to 3 years). Results Our surgical approach technique allowed excellent access to the infraorbital rim, orbital floor, zygomaticofrontal suture and anterior surface of the maxilla. Of the 14 patients, only 1 patient suffered a complication—oral wound dehiscence. Among the 6 patients who received infraorbital nerve decompression, numbness was gradually relieved in 4 patients. Two patients continued to experience persistent numbness. Conclusion Transconjunctival incision with lateral paracanthal extension combined with upper gingivobuccal sulcus incision offers excellent exposure of the zygoma-orbit complex, and could be a valid alternative to the bicoronal approach for osteotomy of malunioned zygoma. PMID:28913268

  6. Intensity-corrected Herschel Observations of Nearby Isolated Low-mass Clouds

    NASA Astrophysics Data System (ADS)

    Sadavoy, Sarah I.; Keto, Eric; Bourke, Tyler L.; Dunham, Michael M.; Myers, Philip C.; Stephens, Ian W.; Di Francesco, James; Webb, Kristi; Stutz, Amelia M.; Launhardt, Ralf; Tobin, John J.

    2018-01-01

    We present intensity-corrected Herschel maps at 100, 160, 250, 350, and 500 μm for 56 isolated low-mass clouds. We determine the zero-point corrections for Herschel Photodetector Array Camera and Spectrometer (PACS) and Spectral Photometric Imaging Receiver (SPIRE) maps from the Herschel Science Archive (HSA) using Planck data. Since these HSA maps are small, we cannot correct them using typical methods. Here we introduce a technique to measure the zero-point corrections for small Herschel maps. We use radial profiles to identify offsets between the observed HSA intensities and the expected intensities from Planck. Most clouds have reliable offset measurements with this technique. In addition, we find that roughly half of the clouds have underestimated HSA-SPIRE intensities in their outer envelopes relative to Planck, even though the HSA-SPIRE maps were previously zero-point corrected. Using our technique, we produce corrected Herschel intensity maps for all 56 clouds and determine their line-of-sight average dust temperatures and optical depths from modified blackbody fits. The clouds have typical temperatures of ∼14–20 K and optical depths of ∼10‑5–10‑3. Across the whole sample, we find an anticorrelation between temperature and optical depth. We also find lower temperatures than what was measured in previous Herschel studies, which subtracted out a background level from their intensity maps to circumvent the zero-point correction. Accurate Herschel observations of clouds are key to obtaining accurate density and temperature profiles. To make such future analyses possible, intensity-corrected maps for all 56 clouds are publicly available in the electronic version. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
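
    A highly simplified version of the offset measurement, assuming the observed HSA radial profile and the Planck-predicted profile have already been computed on the same radial bins, could be reduced to a robust estimate of their additive difference. The paper's actual technique compares the profiles more carefully; this is only an illustrative sketch with hypothetical variable names.

```python
import numpy as np

def zero_point_offset(hsa_radial_profile, planck_radial_profile):
    """Additive zero-point offset between an observed HSA radial intensity
    profile and the expected Planck-based profile (illustrative sketch)."""
    differences = np.asarray(planck_radial_profile) - np.asarray(hsa_radial_profile)
    return np.median(differences)

# corrected_map = hsa_map + zero_point_offset(hsa_profile, planck_profile)
```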

  7. Long range Debye-Hückel correction for computation of grid-based electrostatic forces between biomacromolecules

    PubMed Central

    2014-01-01

    Background Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme. Results We found that the inclusion of the long-range electrostatic correction increased the accuracy of both the protein-protein interaction profiles and the protein diffusion coefficients at low ionic strength. Conclusions An advantage of this method is the low additional computational cost required to treat long-range electrostatic interactions in large biomacromolecular systems. Moreover, the implementation described here for BD simulations of protein solutions can also be applied in implicit solvent molecular dynamics simulations that make use of gridded interaction potentials. PMID:25045516
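
    The idea is that beyond the edge of the precomputed grid, the electrostatic potential of a macromolecule can be approximated analytically by a screened (Debye-Hückel) Coulomb term. A minimal sketch with approximate constants follows; it is not the SDA implementation, and the units and prefactors are illustrative assumptions.

```python
import numpy as np

def debye_huckel_potential(net_charge_e, r_nm, ionic_strength_M,
                           eps_r=78.5, temperature_K=298.15):
    """Screened Coulomb potential (in units of kT/e) at distance r_nm (nm)
    from a point charge of net_charge_e elementary charges.

    Approximate constants for water: Debye length ~0.304 nm / sqrt(I [M])
    for a 1:1 electrolyte at 25 degC, Bjerrum length ~0.7 nm.
    """
    kappa = np.sqrt(ionic_strength_M) / 0.304          # inverse Debye length, 1/nm
    bjerrum_nm = 0.7 * (78.5 / eps_r) * (298.15 / temperature_K)
    return net_charge_e * bjerrum_nm * np.exp(-kappa * r_nm) / r_nm
```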

  8. On the weight of indels in genomic distances

    PubMed Central

    2011-01-01

    Background Classical approaches to compute the genomic distance are usually limited to genomes with the same content, without duplicated markers. However, differences in the gene content are frequently observed and can reflect important evolutionary aspects. A few polynomial time algorithms that include genome rearrangements, insertions and deletions (or substitutions) were already proposed. These methods often allow a block of contiguous markers to be inserted, deleted or substituted at once but result in distance functions that do not respect the triangular inequality and hence do not constitute metrics. Results In the present study we discuss the disruption of the triangular inequality in some of the available methods and give a framework to establish an efficient correction for two models recently proposed, one that includes insertions, deletions and double cut and join (DCJ) operations, and one that includes substitutions and DCJ operations. Conclusions We show that the proposed framework establishes the triangular inequality in both distances, by summing a surcharge on indel operations and on substitutions that depends only on the number of markers affected by these operations. This correction can be applied a posteriori, without interfering with the already available formulas to compute these distances. We claim that this correction leads to distances that are biologically more plausible. PMID:22151784

  9. Chemometrics-assisted spectrophotometric green method for correcting interferences in biowaiver studies: Application to assay and dissolution profiling study of donepezil hydrochloride tablets

    NASA Astrophysics Data System (ADS)

    Korany, Mohamed A.; Mahgoub, Hoda; Haggag, Rim S.; Ragab, Marwa A. A.; Elmallah, Osama A.

    2018-06-01

    A green, simple and cost-effective chemometric UV-Vis spectrophotometric method has been developed and validated for correcting interferences that arise when conducting biowaiver studies. Chemometric manipulation was used to enhance the direct absorbance results obtained from the very low concentrations (with a high incidence of background noise interference) at the earlier time points of the dissolution profile, using first and second derivative (D1 & D2) methods and their corresponding Fourier-function-convoluted methods (D1/FF & D2/FF). The method was applied to a biowaiver study of Donepezil Hydrochloride (DH) as a representative model by comparing two different dosage forms containing 5 mg DH per tablet, as an application of the developed chemometric method for correcting interferences as well as for the assay and dissolution testing of its tablet dosage form. The results showed that the first derivative technique can be used to enhance the data in the low concentration range of DH (1-8 μg mL-1) in the three different pH dissolution media, which were used to estimate the low drug concentrations dissolved at the early points in the biowaiver study. Furthermore, the results showed similarity in phosphate buffer pH 6.8 and dissimilarity in the other two pH media. The method was validated according to ICH guidelines and the USP monograph for both the assay (HCl of pH 1.2) and the dissolution study in 3 pH media (HCl of pH 1.2, acetate buffer of pH 4.5 and phosphate buffer of pH 6.8). Finally, the greenness of the method was assessed using two different assessment techniques: the National Environmental Method Index label and Eco scale methods. Both techniques ascertained the greenness of the proposed method.

  10. Chemometrics-assisted spectrophotometric green method for correcting interferences in biowaiver studies: Application to assay and dissolution profiling study of donepezil hydrochloride tablets.

    PubMed

    Korany, Mohamed A; Mahgoub, Hoda; Haggag, Rim S; Ragab, Marwa A A; Elmallah, Osama A

    2018-06-15

    A green, simple and cost-effective chemometric UV-Vis spectrophotometric method has been developed and validated for correcting interferences that arise when conducting biowaiver studies. Chemometric manipulation was used to enhance the direct absorbance results obtained from the very low concentrations (with a high incidence of background noise interference) at the earlier time points of the dissolution profile, using first and second derivative (D1 & D2) methods and their corresponding Fourier-function-convoluted methods (D1/FF & D2/FF). The method was applied to a biowaiver study of Donepezil Hydrochloride (DH) as a representative model by comparing two different dosage forms containing 5 mg DH per tablet, as an application of the developed chemometric method for correcting interferences as well as for the assay and dissolution testing of its tablet dosage form. The results showed that the first derivative technique can be used to enhance the data in the low concentration range of DH (1-8 μg mL-1) in the three different pH dissolution media, which were used to estimate the low drug concentrations dissolved at the early points in the biowaiver study. Furthermore, the results showed similarity in phosphate buffer pH 6.8 and dissimilarity in the other two pH media. The method was validated according to ICH guidelines and the USP monograph for both the assay (HCl of pH 1.2) and the dissolution study in 3 pH media (HCl of pH 1.2, acetate buffer of pH 4.5 and phosphate buffer of pH 6.8). Finally, the greenness of the method was assessed using two different assessment techniques: the National Environmental Method Index label and Eco scale methods. Both techniques ascertained the greenness of the proposed method. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Selection of neural network structure for system error correction of electro-optical tracker system with horizontal gimbal

    NASA Astrophysics Data System (ADS)

    Liu, Xing-fa; Cen, Ming

    2007-12-01

    The neural network system error correction method is more precise than the least-squares and spherical harmonic function correction methods. The accuracy of the neural network correction depends mainly on the network architecture. Analysis and simulation show that both the BP neural network and the RBF neural network correction methods achieve high correction accuracy; considering training speed and network size, the RBF network method is preferable to the BP network method when the training sample set is small.

  12. A fast combination calibration of foreground and background for pipelined ADCs

    NASA Astrophysics Data System (ADS)

    Kexu, Sun; Lenian, He

    2012-06-01

    This paper describes a fast digital calibration scheme for pipelined analog-to-digital converters (ADCs). The proposed method corrects the nonlinearity caused by finite opamp gain and capacitor mismatch in multiplying digital-to-analog converters (MDACs). The calibration technique combines the advantages of foreground and background calibration schemes. In this combination calibration algorithm, a novel parallel background calibration with signal-shifted correlation is proposed, and its calibration cycle is very short. The details of this technique are described using the example of a 14-bit 100 Msample/s pipelined ADC. The high convergence speed of the background calibration is achieved by three means. First, a modified 1.5-bit stage is proposed in order to allow the injection of a large pseudo-random dither without missing codes. Second, before correlation, the signal is shifted according to the input signal so that the correlation error converges quickly. Finally, the front pipeline stages are calibrated simultaneously rather than stage by stage to reduce the calibration tracking constants. Simulation results confirm that the combination calibration has a fast startup process and a short background calibration cycle of 2 × 2²¹ conversions.

  13. Iterative Correction Scheme Based on Discrete Cosine Transform and L1 Regularization for Fluorescence Molecular Tomography With Background Fluorescence.

    PubMed

    Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen

    2016-06-01

    High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT), because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, 3-D discrete cosine transform is adopted to filter the intermediate results. And then, a sparsity constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous background are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results, the detail preservation, and noise suppression of L1 regularization.
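
    A minimal sketch of one such correction step is given below, assuming the intermediate reconstruction is a 3-D NumPy array. Which DCT coefficients are suppressed, and the threshold value, are illustrative assumptions; the paper's actual filter design and regularization parameters are not reproduced.

```python
import numpy as np
from scipy.fft import dctn, idctn

def correction_step(x, dct_mask, l1_threshold):
    """One post-iteration correction: filter the intermediate result in the
    3-D DCT domain, then apply soft-thresholding (the proximal operator of
    the L1 norm) to promote a sparse fluorescent-target distribution.

    x            : 3-D array, intermediate Tikhonov reconstruction
    dct_mask     : 3-D array of 0/1 weights choosing which DCT coefficients
                   to keep (the filter design is an assumption of this sketch)
    l1_threshold : soft-threshold level (illustrative)
    """
    filtered = idctn(dctn(x, norm="ortho") * dct_mask, norm="ortho")
    return np.sign(filtered) * np.maximum(np.abs(filtered) - l1_threshold, 0.0)
```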

  14. An Approach to Extract Moving Objects from Mls Data Using a Volumetric Background Representation

    NASA Astrophysics Data System (ADS)

    Gehrung, J.; Hebel, M.; Arens, M.; Stilla, U.

    2017-05-01

    Data recorded by mobile LiDAR systems (MLS) can be used for the generation and refinement of city models or for the automatic detection of long-term changes in the public road space. Since for this task only static structures are of interest, all mobile objects need to be removed. This work presents a straightforward but powerful approach to remove the subclass of moving objects. A probabilistic volumetric representation is utilized to separate MLS measurements recorded by a Velodyne HDL-64E into mobile objects and static background. The method was subjected to a quantitative and a qualitative examination using multiple datasets recorded by a mobile mapping platform. The results show that depending on the chosen octree resolution 87-95% of the measurements are labeled correctly.
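
    A toy version of the volumetric background test, assuming each scan has already been registered into a common world frame, can be written with a plain voxel-count occupancy model; the octree-based probabilistic representation and the free-space updates used in the paper are not reproduced here.

```python
import numpy as np
from collections import defaultdict

def label_static_points(scans, voxel_size=0.5, static_fraction=0.6):
    """Label each point as static background if its voxel is occupied in a
    large fraction of the registered scans (toy stand-in for a
    probabilistic octree representation).

    scans: list of (N_i, 3) arrays of points in a common world frame.
    Returns a list of boolean arrays, True = static background.
    """
    occupancy = defaultdict(int)
    for scan in scans:
        for voxel in {tuple(v) for v in np.floor(scan / voxel_size).astype(int)}:
            occupancy[voxel] += 1
    n_scans = len(scans)
    labels = []
    for scan in scans:
        voxels = np.floor(scan / voxel_size).astype(int)
        frac = np.array([occupancy[tuple(v)] / n_scans for v in voxels])
        labels.append(frac >= static_fraction)
    return labels
```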

  15. Born iterative reconstruction using perturbed-phase field estimates

    PubMed Central

    Astheimer, Jeffrey P.; Waag, Robert C.

    2008-01-01

    A method of image reconstruction from scattering measurements for use in ultrasonic imaging is presented. The method employs distorted-wave Born iteration but does not require using a forward-problem solver or solving large systems of equations. These calculations are avoided by limiting intermediate estimates of medium variations to smooth functions in which the propagated fields can be approximated by phase perturbations derived from variations in a geometric path along rays. The reconstruction itself is formed by a modification of the filtered-backpropagation formula that includes correction terms to account for propagation through an estimated background. Numerical studies that validate the method for parameter ranges of interest in medical applications are presented. The efficiency of this method offers the possibility of real-time imaging from scattering measurements. PMID:19062873

  16. Relationship between body fat and BMI in a US Hispanic population-based cohort study: Results from HCHS/SOL

    PubMed Central

    Wong, William W.; Strizich, Garrett; Heo, Moonseong; Heymsfield, Steven B.; Himes, John H.; Rock, Cheryl L.; Gellman, Marc D.; Siega-Riz, Anna Maria; Sotres-Alvarez, Daniela; Davis, Sonia M.; Arredondo, Elva M.; Van Horn, Linda; Wylie-Rosett, Judith; Sanchez-Johnsen, Lisa; Kaplan, Robert; Mossavar-Rahmani, Yasmin

    2016-01-01

    Objective To evaluate the percentage of body fat (%BF)-BMI relationship, identify %BF levels corresponding to adult BMI cut-points, and examine %BF-BMI agreement in a diverse Hispanic/Latino population. Methods %BF by bioelectrical impedance analysis (BIA) was corrected against %BF by 18O dilution in 476 participants of the ancillary Hispanic Community Health Study/Study of Latinos. Corrected %BF was regressed against 1/BMI in the parent study (n=15,261), fitting models for each age group, by sex and Hispanic/Latino background; predicted %BF was then computed for each BMI cut-point. Results BIA underestimated %BF by 8.7 ± 0.3% in women and 4.6 ± 0.3% in men (P < 0.0001). The %BF-BMI relationship was non-linear but was linear in 1/BMI. Sex- and age-specific regression parameters between %BF and 1/BMI were consistent across Hispanic/Latino backgrounds (P > 0.05). The precision of the %BF-1/BMI association weakened with increasing age in men but not women. The proportion of participants classified as non-obese by BMI but obese by %BF was generally higher among women and older adults (16.4% in women vs. 12.0% in men aged 50-74 y). Conclusions %BF was linearly related to 1/BMI with a consistent relationship across Hispanic/Latino backgrounds. BMI cut-points consistently underestimated the proportion of Hispanics/Latinos with excess adiposity. PMID:27184359
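
    The reported linearity in 1/BMI means that, for a given sex and age group, the predicted %BF at a BMI cut-point can be read off a simple regression. A sketch with made-up illustrative numbers (not the study's data or coefficients) is:

```python
import numpy as np

# Hypothetical (made-up) %BF and BMI values for one sex/age group
bmi = np.array([22.0, 25.0, 28.0, 31.0, 35.0, 40.0])
percent_body_fat = np.array([28.0, 32.5, 36.0, 38.5, 41.5, 44.5])

# Model %BF as linear in 1/BMI:  %BF = intercept + slope * (1/BMI)
slope, intercept = np.polyfit(1.0 / bmi, percent_body_fat, 1)

# Predicted %BF at the conventional obesity cut-point BMI = 30
predicted_pbf_at_30 = intercept + slope / 30.0
```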

  17. Simulation and visualization of face seal motion stability by means of computer generated movies

    NASA Technical Reports Server (NTRS)

    Etsion, I.; Auer, B. M.

    1980-01-01

    A computer aided design method for mechanical face seals is described. Based on computer simulation, the actual motion of the flexibly mounted element of the seal can be visualized. This is achieved by solving the equations of motion of this element, calculating the displacements in its various degrees of freedom vs. time, and displaying the transient behavior in the form of a motion picture. Incorporating such a method in the design phase allows one to detect instabilities and to correct undesirable behavior of the seal. A theoretical background is presented. Details of the motion display technique are described, and the usefulness of the method is demonstrated by an example of a noncontacting conical face seal.

  18. Simulation and visualization of face seal motion stability by means of computer generated movies

    NASA Technical Reports Server (NTRS)

    Etsion, I.; Auer, B. M.

    1981-01-01

    A computer aided design method for mechanical face seals is described. Based on computer simulation, the actual motion of the flexibly mounted element of the seal can be visualized. This is achieved by solving the equations of motion of this element, calculating the displacements in its various degrees of freedom vs. time, and displaying the transient behavior in the form of a motion picture. Incorporating such a method in the design phase allows one to detect instabilities and to correct undesirable behavior of the seal. A theoretical background is presented. Details of the motion display technique are described, and the usefulness of the method is demonstrated by an example of a noncontacting conical face seal.

  19. Improving IUE High Dispersion Extraction

    NASA Technical Reports Server (NTRS)

    Lawton, Patricia J.; VanSteenberg, M. E.; Massa, D.

    2007-01-01

    We present a different method to extract high dispersion International Ultraviolet Explorer (IUE) spectra from the New Spectral Image Processing System (NEWSIPS) geometrically and photometrically corrected (SIHI) images of the echellogram. The new algorithm corrects many of the deficiencies that exist in the NEWSIPS high dispersion (SIHI) spectra. Specifically, it does a much better job of accounting for the overlap of the higher echelle orders, it eliminates a significant time dependency in the extracted spectra (which can be traced to the background model used in the NEWSIPS extractions), and it can extract spectra from echellogram images that are more highly distorted than the NEWSIPS extraction routines can handle. Together, these improvements yield a set of IUE high dispersion spectra whose scientific integrity is significantly better than the NEWSIPS products. This work has been supported by NASA ADP grants.

  20. Calculation of background effects on the VESUVIO eV neutron spectrometer

    NASA Astrophysics Data System (ADS)

    Mayers, J.

    2011-01-01

    The VESUVIO spectrometer at the ISIS pulsed neutron source measures the momentum distribution n(p) of atoms by 'neutron Compton scattering' (NCS). Measurements of n(p) provide a unique window into the quantum behaviour of atomic nuclei in condensed matter systems. The VESUVIO 6Li-doped neutron detectors at forward scattering angles were replaced in February 2008 by yttrium aluminium perovskite (YAP) γ-ray detectors. This paper compares the performance of the two detection systems. It is shown that the YAP detectors provide much superior resolution and general performance, but suffer from a sample-dependent gamma background. This report details how this background can be calculated and the data corrected. The calculation is compared with data for two different instrument geometries. Corrected and uncorrected data are also compared for the current instrument geometry. Some indications of how the gamma background can be reduced are also given.

  1. Correcting for intra-experiment variation in Illumina BeadChip data is necessary to generate robust gene-expression profiles

    PubMed Central

    2010-01-01

    Background Microarray technology is a popular means of producing whole genome transcriptional profiles, however high cost and scarcity of mRNA has led many studies to be conducted based on the analysis of single samples. We exploit the design of the Illumina platform, specifically multiple arrays on each chip, to evaluate intra-experiment technical variation using repeated hybridisations of universal human reference RNA (UHRR) and duplicate hybridisations of primary breast tumour samples from a clinical study. Results A clear batch-specific bias was detected in the measured expressions of both the UHRR and clinical samples. This bias was found to persist following standard microarray normalisation techniques. However, when mean-centering or empirical Bayes batch-correction methods (ComBat) were applied to the data, inter-batch variation in the UHRR and clinical samples was greatly reduced. Correlation between replicate UHRR samples improved by two orders of magnitude following batch-correction using ComBat (from 0.9833-0.9991 to 0.9997-0.9999), and batch-correction increased the consistency of the gene-lists from the duplicate clinical samples from 11.6% in quantile-normalised data to 66.4% in batch-corrected data. The use of UHRR as an inter-batch calibrator provided a small additional benefit when used in conjunction with ComBat, further increasing the agreement between the two gene-lists, up to 74.1%. Conclusion In the interests of practicalities and cost, these results suggest that single samples can generate reliable data, but only after careful compensation for technical bias in the experiment. We recommend that investigators appreciate the propensity for such variation in the design stages of a microarray experiment and that the use of suitable correction methods become routine during the statistical analysis of the data. PMID:20181233
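
    The mean-centring correction mentioned above is simple to express: within each batch, the probe-wise batch mean is removed and the overall probe-wise mean restored. A hedged sketch (hypothetical expression matrix and batch labels, not the study data) is:

```python
import numpy as np
import pandas as pd

# Hypothetical expression matrix: rows = probes, columns = samples.
expr = pd.DataFrame(np.random.rand(1000, 6),
                    columns=[f"s{i}" for i in range(6)])
batch = pd.Series(["A", "A", "A", "B", "B", "B"], index=expr.columns)

# Mean-centring per batch: remove each batch's probe-wise mean, then restore the
# overall probe-wise mean so expression levels stay on the original scale.
overall_mean = expr.mean(axis=1)
corrected = expr.copy()
for b in batch.unique():
    cols = batch[batch == b].index
    batch_mean = expr[cols].mean(axis=1)
    corrected[cols] = expr[cols].sub(batch_mean, axis=0).add(overall_mean, axis=0)
```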

  2. Improved electron probe microanalysis of trace elements in quartz

    USGS Publications Warehouse

    Donovan, John J.; Lowers, Heather; Rusk, Brian G.

    2011-01-01

    Quartz occurs in a wide range of geologic environments throughout the Earth's crust. The concentration and distribution of trace elements in quartz provide information such as temperature and other physical conditions of formation. Trace element analyses with modern electron-probe microanalysis (EPMA) instruments can achieve 99% confidence detection of ~100 ppm with fairly minimal effort for many elements in samples of low to moderate average atomic number such as many common oxides and silicates. However, trace element measurements below 100 ppm in many materials are limited, not only by the precision of the background measurement, but also by the accuracy with which background levels are determined. A new "blank" correction algorithm has been developed and tested on both Cameca and JEOL instruments, which applies a quantitative correction to the emitted X-ray intensities during the iteration of the sample matrix correction based on a zero level (or known trace) abundance calibration standard. This iterated blank correction, when combined with improved background fit models, and an "aggregate" intensity calculation utilizing multiple spectrometer intensities in software for greater geometric efficiency, yields a detection limit of 2 to 3 ppm for Ti and 6 to 7 ppm for Al in quartz at 99% t-test confidence with similar levels for absolute accuracy.

  3. Computer-assisted determination of left ventricular endocardial borders reduces variability in the echocardiographic assessment of ejection fraction

    PubMed Central

    Maret, Eva; Brudin, Lars; Lindstrom, Lena; Nylander, Eva; Ohlsson, Jan L; Engvall, Jan E

    2008-01-01

    Background Left ventricular size and function are important prognostic factors in heart disease. Their measurement is the most frequent reason for sending patients to the echo lab. These measurements have important implications for therapy but are sensitive to the skill of the operator. Earlier automated echo-based methods have not become widely used. The aim of our study was to evaluate an automatic echocardiographic method (with manual correction if needed) for determining left ventricular ejection fraction (LVEF) based on an active appearance model of the left ventricle (syngo®AutoEF, Siemens Medical Solutions). Comparisons were made with manual planimetry (manual Simpson), visual assessment and automatically determined LVEF from quantitative myocardial gated single photon emission computed tomography (SPECT). Methods 60 consecutive patients referred for myocardial perfusion imaging (MPI) were included in the study. Two-dimensional echocardiography was performed within one hour of MPI at rest. Image quality did not constitute an exclusion criterion. Analysis was performed by five experienced observers and by two novices. Results LVEF (%), end-diastolic and end-systolic volume/BSA (ml/m2) were for uncorrected AutoEF 54 ± 10, 51 ± 16, 24 ± 13, for corrected AutoEF 53 ± 10, 53 ± 18, 26 ± 14, for manual Simpson 51 ± 11, 56 ± 20, 28 ± 15, and for MPI 52 ± 12, 67 ± 26, 35 ± 23. The required time for analysis was significantly different for all four echocardiographic methods and was for uncorrected AutoEF 79 ± 5 s, for corrected AutoEF 159 ± 46 s, for manual Simpson 177 ± 66 s, and for visual assessment 33 ± 14 s. Compared with the expert manual Simpson, limits of agreement for novice corrected AutoEF were lower than for novice manual Simpson (0.8 ± 10.5 vs. -3.2 ± 11.4 LVEF percentage points). Calculated for experts and with LVEF (%) categorized into < 30, 30–44, 45–54 and ≥ 55, kappa measure of agreement was moderate (0.44–0.53) for all method comparisons (uncorrected AutoEF not evaluated). Conclusion Corrected AutoEF reduces the variation in measurements compared with manual planimetry, without increasing the time required. The method seems especially suited for inexperienced readers. PMID:19014461

  4. [Study on the method for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples by direct current arc full spectrum direct reading atomic emission spectroscopy (DC-Arc-AES)].

    PubMed

    Hao, Zhi-hong; Yao, Jian-zhen; Tang, Rui-ling; Zhang, Xue-mei; Li, Wen-ge; Zhang, Qin

    2015-02-01

    A method for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples by direct current arc full spectrum direct reading atomic emission spectroscopy (DC-Arc-AES) was established. The direct current arc full spectrum direct reading atomic emission spectrometer, equipped with a large-area solid-state detector, provides full spectrum direct reading and real-time background correction. New electrodes and a new buffer recipe were proposed in this paper, and a national patent has been applied for. Suitable analytical line pairs and background correction points were selected for each element, the internal standard method was used, and Ge served as the internal standard. A multistage current program was adopted, with a different holding time at each current, to ensure that each element has a good signal-to-noise ratio. A continuously rising current mode was chosen, which effectively eliminates splashing of the sample. Argon as a shielding gas suppresses CN band formation, reduces the spectral background and helps stabilize the arc; an argon flow of 3.5 L x min(-1) was selected. The evaporation curve of each element was measured, showing that the evaporation behavior of the elements is consistent; combined with the effect of different spectrographic times on intensity and background, a spectrographic time of 35 s was selected. National standard substances were selected as the standard series, covering different matrices and different contents appropriate to the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples. Under the optimum experimental conditions, the detection limits for B, Mo, Ag, Sn and Pb are 1.1, 0.09, 0.01, 0.41, and 0.56 microg x g(-1) respectively, and the precisions (RSD, n=12) for B, Mo, Ag, Sn and Pb are 4.57%-7.63%, 5.14%-7.75%, 5.48%-12.30%, 3.97%-10.46%, and 4.26%-9.21% respectively. The analytical accuracy was validated with national standard substances, and the results agree with the certified values. The method is simple, rapid and practical, and provides an advanced analytical approach for determining trace boron, molybdenum, silver, tin and lead in geochemical samples.

  5. The Swiss cheese model of safety incidents: are there holes in the metaphor?

    PubMed Central

    Perneger, Thomas V

    2005-01-01

    Background Reason's Swiss cheese model has become the dominant paradigm for analysing medical errors and patient safety incidents. The aim of this study was to determine if the components of the model are understood in the same way by quality and safety professionals. Methods Survey of a volunteer sample of persons who claimed familiarity with the model, recruited at a conference on quality in health care, and on the internet through quality-related websites. The questionnaire proposed several interpretations of components of the Swiss cheese model: a) slice of cheese, b) hole, c) arrow, d) active error, e) how to make the system safer. Eleven interpretations were compatible with this author's interpretation of the model, 12 were not. Results Eighty-five respondents stated that they were very or quite familiar with the model. They gave on average 15.3 (SD 2.3, range 10 to 21) "correct" answers out of 23 (66.5%) – significantly more than the 11.5 "correct" answers that would be expected by chance (p < 0.001). Respondents gave on average 2.4 "correct" answers regarding the slice of cheese (out of 4), 2.7 "correct" answers about holes (out of 5), 2.8 "correct" answers about the arrow (out of 4), 3.3 "correct" answers about the active error (out of 5), and 4.1 "correct" answers about improving safety (out of 5). Conclusion The interpretations of specific features of the Swiss cheese model varied considerably among quality and safety professionals. Reaching consensus about concepts of patient safety requires further work. PMID:16280077

  6. Efficient error correction for next-generation sequencing of viral amplicons

    PubMed Central

    2012-01-01

    Background Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. Results In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Conclusions Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm PMID:22759430
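
    As a rough, hypothetical illustration of the k-mer idea behind KEC (not the published implementation), error-containing reads can be flagged by counting k-mers across all amplicon reads and marking those whose frequency falls below a threshold:

```python
from collections import Counter

def rare_kmers(reads, k=15, min_count=3):
    """Count k-mers across amplicon reads and return those below a frequency
    threshold, which are likely to contain sequencing errors."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return {kmer for kmer, c in counts.items() if c < min_count}

# Toy reads; real data would come from the 454 amplicon sequencing run.
reads = ["ACGTACGTACGTACGTACGT", "ACGTACGTACGTACGTACGA"]
print(len(rare_kmers(reads, k=8, min_count=2)))
```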

  7. Resuscitator’s perceptions and time for corrective ventilation steps during neonatal resuscitation☆

    PubMed Central

    Sharma, Vinay; Lakshminrusimha, Satyan; Carrion, Vivien; Mathew, Bobby

    2016-01-01

    Background The 2010 neonatal resuscitation program (NRP) guidelines incorporate ventilation corrective steps (using the mnemonic MRSOPA) into the resuscitation algorithm. The perception of neonatal providers, the time taken to perform these maneuvers, and the effectiveness of these additional steps have not been evaluated. Methods Using two simulated clinical scenarios of varying degrees of cardiovascular compromise - perinatal asphyxia with (i) bradycardia (heart rate 40 min−1) and (ii) cardiac arrest - 35 NRP-certified providers were evaluated for their preference for performing these corrective measures, the time taken to perform these steps, and the time to onset of chest compressions. Results The average time taken to perform the ventilation corrective steps (MRSOPA) was 48.9 ± 21.4 s. Providers were less likely to perform corrective steps and proceeded directly to endotracheal intubation in the scenario of cardiac arrest as compared with bradycardia. Cardiac compressions were initiated significantly sooner in the scenario of cardiac arrest (89 ± 24 s) than in severe bradycardia (122 ± 23 s), p < 0.0001. There were no differences in the time taken to initiation of chest compressions between physicians and mid-level care providers or with the level of experience of the provider. Conclusions Effective ventilation of the lungs with corrective steps using a mask is important in most cases of neonatal resuscitation. Neonatal resuscitators prefer early endotracheal intubation and initiation of chest compressions in the presence of asystolic cardiac arrest. Corrective ventilation steps can potentially postpone initiation of chest compressions and may delay return of spontaneous circulation in the presence of severe cardiovascular compromise. PMID:25796996

  8. Using Eye-tracking to Examine How Embedding Risk Corrective Statements Improves Cigarette Risk Beliefs: Implications for Tobacco Regulatory Policy

    PubMed Central

    Lochbuehler, Kirsten; Tang, Kathy Z.; Souprountchouk, Valentina; Campetti, Dana; Cappella, Joseph N.; Kozlowski, Lynn T.; Strasser, Andrew A.

    2016-01-01

    Background Tobacco companies have deliberately used explicit and implicit misleading information in marketing campaigns. The aim of the current study was to experimentally investigate whether editing the explicit and implicit content of a print advertisement improves smokers’ risk beliefs and smokers’ knowledge of explicit and implicit information. Methods Using a 2 (explicit/implicit) × 2 (accurate/misleading) between-subject design, 203 smokers were randomly assigned to one of four advertisement conditions. The manipulation of graphic content was examined as an implicit factor to convey product harm. The inclusion of a text corrective in the body of the ad was defined as the manipulated explicit factor. Participants’ eye movements and risk beliefs/recall were measured during and after ad exposure, respectively. Results Results indicate that exposure to a text corrective decreases false beliefs about the product (p < .01) and improves correct recall of information provided by the corrective (p < .05). Accurate graphic content did not alter perceived harmfulness of the product. Independent of condition, smokers who focused longer on the warning label made fewer false inferences about the product (p = .01) and were more likely to correctly recall the warning information (p < .01). Nonetheless, most smokers largely ignored the text warning. Conclusions Embedding a corrective statement in the body of the ad is an effective strategy to convey health information to consumers, which can be mandated under the Tobacco Control Act (2009). Eye-tracking results objectively demonstrate that text-only warnings are not viewed by smokers, thus minimizing their effectiveness for conveying risk information. PMID:27160034

  9. Platysma Flap with Z-Plasty for Correction of Post-Thyroidectomy Swallowing Deformity

    PubMed Central

    Jeon, Min Kyeong; Kang, Seok Joo

    2013-01-01

    Background Recently, the number of thyroid surgery cases has been increasing; consequently, the number of patients who visit plastic surgery departments with a chief complaint of swallowing deformity has also increased. We performed a scar correction technique on post-thyroidectomy swallowing deformity via platysma flap with Z-plasty and obtained satisfactory aesthetic and functional outcomes. Methods The authors performed operations on 18 patients who presented, after thyroidectomy between January 2009 and June 2012, either a definitive retraction on the swallowing mechanism as an objective sign of swallowing deformity, or throat or neck discomfort on the swallowing mechanism, such as a sensation of throat traction, as a subjective sign. The scar tissue that adhered to the subcutaneous tissue layer was completely excised. A platysma flap acting as mobile interference was applied to remove the continuity of the scar adhesion, and additionally, Z-plasty was performed to prevent midline platysma banding. Results The follow-up results of the 18 patients indicated that the definitive retraction on the swallowing mechanism was completely removed. Throat or neck discomfort on the swallowing mechanism, such as a sensation of throat traction, was also alleviated in all 18 patients. When preoperative and postoperative Vancouver scar scales were compared, the scale decreased significantly after surgery (P<0.05). Conclusions Our simple surgical method involved the formation of a platysma flap with Z-plasty as mobile interference for the correction of post-thyroidectomy swallowing deformity. This method resulted in aesthetically and functionally satisfying outcomes. PMID:23898442

  10. Estimating extinction using unsupervised machine learning

    NASA Astrophysics Data System (ADS)

    Meingast, Stefan; Lombardi, Marco; Alves, João

    2017-05-01

    Dust extinction is the most robust tracer of the gas distribution in the interstellar medium, but measuring extinction is limited by the systematic uncertainties involved in estimating the intrinsic colors to background stars. In this paper we present a new technique, Pnicer, that estimates intrinsic colors and extinction for individual stars using unsupervised machine learning algorithms. This new method aims to be free from any priors with respect to the column density and intrinsic color distribution. It is applicable to any combination of parameters and works in arbitrary numbers of dimensions. Furthermore, it is not restricted to color space. Extinction toward single sources is determined by fitting Gaussian mixture models along the extinction vector to (extinction-free) control field observations. In this way it becomes possible to describe the extinction for observed sources with probability densities, rather than a single value. Pnicer effectively eliminates known biases found in similar methods and outperforms them in cases of deep observational data where the number of background galaxies is significant, or when a large number of parameters is used to break degeneracies in the intrinsic color distributions. This new method remains computationally competitive, making it possible to correctly de-redden millions of sources within a matter of seconds. With the ever-increasing number of large-scale high-sensitivity imaging surveys, Pnicer offers a fast and reliable way to efficiently calculate extinction for arbitrary parameter combinations without prior information on source characteristics. The Pnicer software package also offers access to the well-established Nicer technique in a simple unified interface and is capable of building extinction maps including the Nicest correction for cloud substructure. Pnicer is offered to the community as an open-source software solution and is entirely written in Python.
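
    As a simplified, one-dimensional sketch of the idea (not the Pnicer implementation itself), a Gaussian mixture model can be fitted to control-field colours and the extinction of a science-field star estimated as the shift along the reddening direction that maximises its likelihood under that model; all numbers below are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic control-field colours (extinction-free) and science-field colours.
rng = np.random.default_rng(0)
control_colours = rng.normal(loc=0.4, scale=0.10, size=(5000, 1))
science_colours = rng.normal(loc=0.9, scale=0.15, size=(200, 1))

# Model the intrinsic colour distribution of the control field with a GMM.
gmm = GaussianMixture(n_components=3).fit(control_colours)

# In 1D the extinction estimate is simply the de-reddening shift that maximises
# the likelihood of each science star under the intrinsic-colour model.
shifts = np.linspace(0.0, 2.0, 201)

def best_shift(colour):
    logl = [gmm.score_samples([[colour - s]])[0] for s in shifts]
    return shifts[int(np.argmax(logl))]

extinctions = np.array([best_shift(c[0]) for c in science_colours])
```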

  11. A method for automatically extracting infectious disease-related primers and probes from the literature

    PubMed Central

    2010-01-01

    Background Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences. Therefore, it is becoming increasingly important for researchers to navigate this important information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and prescribe different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch. PMID:20682041
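
    A crude stand-in for phase (2), the candidate-sequence detection, is a regular expression over DNA letters (optionally IUPAC ambiguity codes) of typical primer/probe length; this is only an illustration, not the finite-state recognisers used in the paper:

```python
import re

# Flag runs of nucleotide (and IUPAC ambiguity) letters of typical primer length.
PRIMER_RE = re.compile(r"\b[ACGTRYSWKMBDHVN]{15,35}\b")

text = ("The forward primer 5'-ACGTTGCAGGTCCATGCAAT-3' and the probe "
        "FAM-TGGCATCGATCGATCGGCTA-TAMRA were used for qPCR.")
candidates = PRIMER_RE.findall(text.upper())
print(candidates)   # two 20-mer candidate sequences
```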

  12. Method for auto-alignment of digital optical phase conjugation systems based on digital propagation

    PubMed Central

    Jang, Mooseok; Ruan, Haowen; Zhou, Haojiang; Judkewitz, Benjamin; Yang, Changhuei

    2014-01-01

    Optical phase conjugation (OPC) has enabled many optical applications such as aberration correction and image transmission through fiber. In recent years, implementation of digital optical phase conjugation (DOPC) has opened up the possibility of its use in biomedical optics (e.g. deep-tissue optical focusing) due to its ability to provide greater-than-unity OPC reflectivity (the power ratio of the phase conjugated beam and input beam to the OPC system) and its flexibility to accommodate additional wavefront manipulations. However, the requirement for precise (pixel-to-pixel matching) alignment of the wavefront sensor and the spatial light modulator (SLM) limits the practical usability of DOPC systems. Here, we report a method for auto-alignment of a DOPC system by which the misalignment between the sensor and the SLM is auto-corrected through digital light propagation. With this method, we were able to accomplish OPC playback with a DOPC system with gross sensor-SLM misalignment by an axial displacement of up to ~1.5 cm, rotation and tip/tilt of ~5°, and in-plane displacement of ~5 mm (dependent on the physical size of the sensor and the SLM). Our auto-alignment method robustly achieved a DOPC playback peak-to-background ratio (PBR) corresponding to more than ~30% of the theoretical maximum. As an additional advantage, the auto-alignment procedure can be easily performed at will and, as such, allows us to correct for small mechanical drifts within the DOPC systems, thus overcoming a previously major DOPC system vulnerability. We believe that this reported method for implementing robust DOPC systems will broaden the practical utility of DOPC systems. PMID:24977504
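
    The digital propagation underlying this kind of auto-alignment is typically free-space propagation of the recorded field, for example by the angular spectrum method. The sketch below (generic parameters, not those of the reported system) propagates a complex field by an axial distance dz:

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, pixel_size):
    """Digitally propagate a complex field by dz (metres) with the angular
    spectrum method; a sketch of the free-space propagation used to
    compensate an axial sensor-SLM misalignment."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))          # drop evanescent components
    transfer = np.exp(1j * kz * dz)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Example: propagate a recorded wavefront by 1.5 cm to the SLM plane.
field = np.ones((512, 512), dtype=complex)
propagated = angular_spectrum_propagate(field, dz=0.015,
                                         wavelength=532e-9, pixel_size=8e-6)
```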

  13. Method for auto-alignment of digital optical phase conjugation systems based on digital propagation.

    PubMed

    Jang, Mooseok; Ruan, Haowen; Zhou, Haojiang; Judkewitz, Benjamin; Yang, Changhuei

    2014-06-16

    Optical phase conjugation (OPC) has enabled many optical applications such as aberration correction and image transmission through fiber. In recent years, implementation of digital optical phase conjugation (DOPC) has opened up the possibility of its use in biomedical optics (e.g. deep-tissue optical focusing) due to its ability to provide greater-than-unity OPC reflectivity (the power ratio of the phase conjugated beam and input beam to the OPC system) and its flexibility to accommodate additional wavefront manipulations. However, the requirement for precise (pixel-to-pixel matching) alignment of the wavefront sensor and the spatial light modulator (SLM) limits the practical usability of DOPC systems. Here, we report a method for auto-alignment of a DOPC system by which the misalignment between the sensor and the SLM is auto-corrected through digital light propagation. With this method, we were able to accomplish OPC playback with a DOPC system with gross sensor-SLM misalignment by an axial displacement of up to ~1.5 cm, rotation and tip/tilt of ~5° and in-plane displacement of ~5 mm (dependent on the physical size of the sensor and the SLM). Our auto-alignment method robustly achieved a DOPC playback peak-to-background ratio (PBR) corresponding to more than ~30% of the theoretical maximum. As an additional advantage, the auto-alignment procedure can be easily performed at will and, as such, allows us to correct for small mechanical drifts within the DOPC systems, thus overcoming a previously major DOPC system vulnerability. We believe that this reported method for implementing robust DOPC systems will broaden the practical utility of DOPC systems.

  14. Automatic analysis of ciliary beat frequency using optical flow

    NASA Astrophysics Data System (ADS)

    Figl, Michael; Lechner, Manuel; Werther, Tobias; Horak, Fritz; Hummel, Johann; Birkfellner, Wolfgang

    2012-02-01

    Ciliary beat frequency (CBF) can be a useful parameter for the diagnosis of several diseases, e.g. primary ciliary dyskinesia (PCD). CBF computation is usually done by manual evaluation of high-speed video sequences, a tedious, observer-dependent, and not very accurate procedure. We used OpenCV's pyramidal implementation of the Lucas-Kanade algorithm for optical flow computation and applied it to selected objects to follow their movements. The objects were chosen by their contrast using the corner detection of Shi and Tomasi. Discrimination between background/noise and cilia by a frequency histogram allowed the CBF to be computed. Frequency analysis was done using the Fourier transform in MATLAB. The correct number of Fourier summands was found from the slope of an approximation curve. The method proved usable for distinguishing between healthy and diseased samples. However, difficulties remain in automatically identifying the cilia and in finding enough high-contrast cilia in the image. Furthermore, some of the higher-contrast cilia are lost (and sometimes recovered) by the method, and an easy way to identify the correct sub-path of a point's trajectory has yet to be found for cases where the slope method does not work.
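
    A hedged sketch of the pipeline described above, using OpenCV's Shi-Tomasi corner detector and pyramidal Lucas-Kanade tracker followed by an FFT of each tracked point's displacement (synthetic frames stand in for the high-speed video; all parameters are illustrative):

```python
import cv2
import numpy as np

# Synthetic stand-in for a high-speed video of ciliated cells; in practice the
# frames would be read with cv2.VideoCapture and fps taken from the camera.
fps = 500.0
frames = [np.random.randint(0, 255, (240, 320), dtype=np.uint8) for _ in range(256)]

# Pick high-contrast points on the first frame (Shi-Tomasi corner detection).
p0 = cv2.goodFeaturesToTrack(frames[0], maxCorners=100,
                             qualityLevel=0.01, minDistance=5)

# Track the points through the sequence with pyramidal Lucas-Kanade optical flow.
tracks = [p0.reshape(-1, 2)]
prev = frames[0]
for frame in frames[1:]:
    pts = tracks[-1].astype(np.float32).reshape(-1, 1, 2)
    p1, st, err = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
    tracks.append(p1.reshape(-1, 2))
    prev = frame

# Beat frequency: dominant FFT peak of each point's vertical displacement.
disp = np.stack(tracks)[:, :, 1]                  # (n_frames, n_points)
spectrum = np.abs(np.fft.rfft(disp - disp.mean(axis=0), axis=0))
freqs = np.fft.rfftfreq(disp.shape[0], d=1.0 / fps)
cbf = freqs[1 + np.argmax(spectrum[1:], axis=0)]  # skip the DC bin
```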

  15. Detecting benzoyl peroxide in wheat flour by line-scan macro-scale Raman chemical imaging

    NASA Astrophysics Data System (ADS)

    Qin, Jianwei; Kim, Moon S.; Chao, Kuanglin; Gonzalez, Maria; Cho, Byoung-Kwan

    2017-05-01

    Excessive use of benzoyl peroxide (BPO, a bleaching agent) in wheat flour can destroy flour nutrients and cause disease in consumers. A macro-scale Raman chemical imaging method was developed for direct detection of BPO mixed into wheat flour. A 785 nm line laser was used in a line-scan hyperspectral Raman imaging system. Raman images were collected from wheat flour mixed with BPO at eight concentrations (w/w) from 50 to 6,400 ppm. A sample holder (150×100×2 mm3) was used to present a thin layer (2 mm thick) of the powdered sample for image acquisition. A baseline correction method was used to correct the fluctuating fluorescence signals from the wheat flour. To isolate BPO particles from the flour background, a simple thresholding method was applied to the single-band fluorescence-free images at a unique Raman peak wavenumber (i.e., 1001 cm-1) preselected for the BPO detection. Chemical images were created to detect and map the BPO particles. The limit of detection for BPO was estimated to be on the order of 50 ppm, which is comparable to regulatory limits.
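
    As an illustration of the processing chain (simplified relative to the paper), each pixel spectrum can be baseline-corrected with an iterative low-order polynomial fit and the fluorescence-free intensity at the 1001 cm-1 BPO peak thresholded to form the chemical image; the cube and threshold below are synthetic:

```python
import numpy as np

def polynomial_baseline(spectrum, wavenumbers, order=5):
    """Iteratively fit a low-order polynomial baseline that stays below the
    spectrum; a simple stand-in for the fluorescence-correction step."""
    baseline = spectrum.copy()
    for _ in range(20):
        coeffs = np.polyfit(wavenumbers, baseline, order)
        fit = np.polyval(coeffs, wavenumbers)
        baseline = np.minimum(baseline, fit)   # clamp to stay below the peaks
    return fit

# Synthetic hyperspectral cube: (rows, cols, bands) with a matching wavenumber axis.
wavenumbers = np.linspace(200, 1800, 512)
cube = np.random.rand(100, 150, 512)

band_1001 = np.argmin(np.abs(wavenumbers - 1001))   # BPO Raman peak
corrected_peak = np.empty(cube.shape[:2])
for r in range(cube.shape[0]):
    for c in range(cube.shape[1]):
        base = polynomial_baseline(cube[r, c], wavenumbers)
        corrected_peak[r, c] = cube[r, c, band_1001] - base[band_1001]

# Simple global threshold on the fluorescence-free peak image to map BPO particles.
bpo_mask = corrected_peak > corrected_peak.mean() + 3 * corrected_peak.std()
```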

  16. A three-dimensional model-based partial volume correction strategy for gated cardiac mouse PET imaging

    NASA Astrophysics Data System (ADS)

    Dumouchel, Tyler; Thorn, Stephanie; Kordos, Myra; DaSilva, Jean; Beanlands, Rob S. B.; deKemp, Robert A.

    2012-07-01

    Quantification in cardiac mouse positron emission tomography (PET) imaging is limited by the imaging spatial resolution. Spillover of left ventricle (LV) myocardial activity into adjacent organs results in partial volume (PV) losses leading to underestimation of myocardial activity. A PV correction method was developed to restore accuracy of the activity distribution for FDG mouse imaging. The PV correction model was based on convolving an LV image estimate with a 3D point spread function. The LV model was described regionally by a five-parameter profile including myocardial, background and blood activities which were separated into three compartments by the endocardial radius and myocardium wall thickness. The PV correction was tested with digital simulations and a physical 3D mouse LV phantom. In vivo cardiac FDG mouse PET imaging was also performed. Following imaging, the mice were sacrificed and the tracer biodistribution in the LV and liver tissue was measured using a gamma-counter. The PV correction algorithm improved recovery from 50% to within 5% of the truth for the simulated and measured phantom data and image uniformity by 5-13%. The PV correction algorithm improved the mean myocardial LV recovery from 0.56 (0.54) to 1.13 (1.10) without (with) scatter and attenuation corrections. The mean image uniformity was improved from 26% (26%) to 17% (16%) without (with) scatter and attenuation corrections applied. Scatter and attenuation corrections were not observed to significantly impact PV-corrected myocardial recovery or image uniformity. Image-based PV correction algorithm can increase the accuracy of PET image activity and improve the uniformity of the activity distribution in normal mice. The algorithm may be applied using different tracers, in transgenic models that affect myocardial uptake, or in different species provided there is sufficient image quality and similar contrast between the myocardium and surrounding structures.

  17. Improvement of Aerosol Optical Depth Retrieval over Hong Kong from a Geostationary Meteorological Satellite Using Critical Reflectance with Background Optical Depth Correction

    NASA Technical Reports Server (NTRS)

    Kim, Mijin; Kim, Jhoon; Wong, Man Sing; Yoon, Jongmin; Lee, Jaehwa; Wu, Dong L.; Chan, P.W.; Nichol, Janet E.; Chung, Chu-Yong; Ou, Mi-Lim

    2014-01-01

    Despite continuous efforts to retrieve aerosol optical depth (AOD) using a conventional 5-channel meteorological imager in geostationary orbit, the accuracy in urban areas has been poorer than in other areas, primarily due to complex urban surface properties and mixed aerosol types from different emission sources. The two largest error sources in aerosol retrieval have been aerosol type selection and surface reflectance. In selecting the aerosol type from a single visible channel, season-dependent aerosol optical properties were adopted from long-term measurements of Aerosol Robotic Network (AERONET) sun-photometers. With the aerosol optical properties obtained from the AERONET inversion data, look-up tables were calculated using a radiative transfer code, the Second Simulation of the Satellite Signal in the Solar Spectrum (6S). Surface reflectance was estimated using the clear-sky composite method, a widely used technique for geostationary retrievals. Over East Asia, the AOD retrieved from the Meteorological Imager showed good agreement, although the values were affected by cloud contamination errors. However, the conventional retrieval of AOD over Hong Kong was largely underestimated due to the lack of information on aerosol type and surface properties. To detect spatial and temporal variation of the aerosol type over the area, the critical reflectance method, a technique to retrieve single scattering albedo (SSA), was applied. Additionally, the background aerosol effect was corrected to improve the accuracy of the surface reflectance over Hong Kong. The AOD retrieved from the modified algorithm was compared with collocated data measured by AERONET in Hong Kong. The comparison showed that the new aerosol type selection using the critical reflectance, together with the corrected surface reflectance, significantly improved the accuracy of AOD over Hong Kong, with the correlation coefficient increasing from 0.65 to 0.76 and the regression line changing from τMI[basic algorithm] = 0.41 τAERONET + 0.16 to τMI[new algorithm] = 0.70 τAERONET + 0.01.

  18. A Study on Micropipetting Detection Technology of Automatic Enzyme Immunoassay Analyzer.

    PubMed

    Shang, Zhiwu; Zhou, Xiangping; Li, Cheng; Tsai, Sang-Bing

    2018-04-10

    In order to improve the accuracy and reliability of micropipetting, a method for micropipette detection and calibration was proposed that combines dynamic pressure monitoring during the pipetting process with quantitative identification of the pipetted volume by image processing. First, a normalized pressure model for the pipetting process was established from the kinematic model of the pipetting operation, and the pressure model was corrected experimentally. The pipetting pressure and its first derivative are monitored in real time, a segmented double-threshold method is used as the criterion for pipetting fault evaluation, and the pressure sensor data are processed by Kalman filtering, which improves the accuracy of fault diagnosis. When a fault occurs, an image of the pipette tip is acquired with a camera, the boundary of the liquid region is extracted by a background-contrast method, and the liquid volume in the tip is obtained from the geometric characteristics of the pipette tip. The pipetting deviation is fed back to the automatic pipetting module and a deviation correction is carried out. Titration tests show that combining the segmented pipetting kinematic model with double-threshold pressure monitoring can effectively judge and classify pipetting faults in real time. The closed-loop adjustment of pipetting volume can effectively improve the accuracy and reliability of the pipetting system.
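
    A minimal sketch of the monitoring idea (illustrative signal and limits, not the published model) combines a scalar Kalman filter on the pressure trace with a double-threshold check on the smoothed pressure and its first derivative:

```python
import numpy as np

def kalman_1d(z, q=1e-4, r=1e-2):
    """Minimal scalar Kalman filter used to smooth the pipetting pressure signal."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for i, meas in enumerate(z):
        p = p + q                      # predict
        k = p / (p + r)                # Kalman gain
        x = x + k * (meas - x)         # update
        p = (1 - k) * p
        out[i] = x
    return out

# Hypothetical pressure trace sampled at 1 kHz during aspiration.
t = np.arange(0, 1.0, 0.001)
pressure = -5.0 * t + 0.05 * np.random.randn(t.size)

smoothed = kalman_1d(pressure)
dpdt = np.gradient(smoothed, t)

# Segmented double-threshold check: flag a fault if either the pressure or its
# first derivative leaves its expected band (limits are purely illustrative).
fault = (np.abs(smoothed - (-5.0 * t)) > 0.5) | (np.abs(dpdt - (-5.0)) > 3.0)
print("fault samples:", int(fault.sum()))
```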

  19. Effect of metal artifact reduction software on image quality of C-arm cone-beam computed tomography during intracranial aneurysm treatment.

    PubMed

    Enomoto, Yukiko; Yamauchi, Keita; Asano, Takahiko; Otani, Katharina; Iwama, Toru

    2018-01-01

    Background and purpose C-arm cone-beam computed tomography (CBCT) has the drawback that image quality is degraded by artifacts caused by implanted metal objects. We evaluated whether metal artifact reduction (MAR) prototype software can improve the subjective image quality of CBCT images of patients with intracranial aneurysms treated with coils or clips. Materials and methods Forty-four patients with intracranial aneurysms implanted with coils (40 patients) or clips (four patients) underwent one CBCT scan, from which uncorrected and MAR-corrected CBCT image datasets were reconstructed. Three blinded readers evaluated the image quality of the image sets using a four-point scale (1: Excellent, 2: Good, 3: Poor, 4: Bad). The median scores of the three readers for uncorrected and MAR-corrected images were compared with the paired Wilcoxon signed-rank test, and inter-reader agreement of change scores was assessed by weighted kappa statistics. The readers also recorded new clinical findings, such as intracranial hemorrhage, air, or surrounding anatomical structures, on MAR-corrected images. Results The image quality of MAR-corrected CBCT images was significantly improved compared with the uncorrected CBCT images (p < 0.001). Additional clinical findings were seen on the CBCT images of 70.4% of patients after MAR correction. Conclusion MAR software improved the quality of CBCT images degraded by metal artifacts.

  20. 77 FR 18914 - National Motor Vehicle Title Information System (NMVTIS): Technical Corrections

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-29

    ... 1121-AA79 National Motor Vehicle Title Information System (NMVTIS): Technical Corrections AGENCY... (OJP) is promulgating this direct final rule for its National Motor Vehicle Title Information System... INFORMATION CONTACT paragraph. II. Background The National Motor Vehicle Title Information System was...

  1. MO-F-CAMPUS-J-05: Toward MRI-Only Radiotherapy: Novel Tissue Segmentation and Pseudo-CT Generation Techniques Based On T1 MRI Sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aouadi, S; McGarry, M; Hammoud, R

    Purpose: To develop and validate a four-class tissue segmentation approach (air cavities, background, bone and soft tissue) on T1-weighted brain MRI and to create a pseudo-CT for MRI-only radiation therapy verification. Methods: Contrast-enhanced T1-weighted fast-spin-echo sequences (TR = 756 ms, TE = 7.152 ms), acquired on a 1.5 T GE MRI-Simulator, are used. MRIs are first pre-processed to correct for non-uniformity using the nonparametric non-uniform intensity normalization algorithm. Subsequently, a logarithmic inverse scaling, log(1/image), is applied prior to segmentation to better differentiate bone and air from soft tissue. Finally, the following method is employed to classify intensities into air cavities, background, bone and soft tissue: thresholded region growing with seed points in the image corners is applied to obtain a mask of air + bone + background. The background is then separated by the scan-line filling algorithm. The air mask is extracted by morphological opening followed by post-processing based on knowledge of air-region geometry. The remaining rough bone pre-segmentation is refined by applying 3D geodesic active contours; the bone segmentation evolves under the sum of internal forces from the contour geometry and an external force derived from the image gradient magnitude. The pseudo-CT is obtained by assigning -1000 HU to air and background voxels, performing a linear mapping of soft-tissue MR intensities into [-400 HU, 200 HU] and an inverse linear mapping of bone MR intensities into [200 HU, 1000 HU]. Results: Three brain patients with registered MRI and CT are used for validation. CT intensities are classified into the four classes by thresholding, and Dice indices and misclassification errors are quantified. Correct classification rates for soft tissue, bone, and air are 89.67%, 77.8%, and 64.5%, respectively. Dice indices are acceptable for bone (0.74) and soft tissue (0.91) but low for air regions (0.48). The pseudo-CT produces DRRs with acceptable clinical visual agreement with CT-based DRRs. Conclusion: The proposed approach makes it possible to use T1-weighted MRI to generate an accurate pseudo-CT from the four-class segmentation.
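
    The final intensity-to-HU mapping is straightforward to express; the sketch below (toy volume, assumed soft-tissue and bone intensity ranges) follows the assignment described in the abstract:

```python
import numpy as np

def pseudo_ct(mri, labels, soft_range, bone_range):
    """Assign Hounsfield units from T1 intensities and a 4-class segmentation
    (0=background, 1=air, 2=soft tissue, 3=bone): air/background -> -1000 HU,
    soft tissue mapped linearly to [-400, 200] HU, bone mapped inversely to
    [200, 1000] HU."""
    hu = np.full(mri.shape, -1000.0)

    soft = labels == 2
    lo, hi = soft_range
    hu[soft] = -400 + 600 * (mri[soft] - lo) / (hi - lo)

    bone = labels == 3
    lo, hi = bone_range
    hu[bone] = 1000 - 800 * (mri[bone] - lo) / (hi - lo)   # inverse mapping
    return hu

# Toy volume and segmentation for illustration; intensity ranges are assumptions.
mri = np.random.rand(32, 32, 32) * 1000
labels = np.random.randint(0, 4, size=mri.shape)
ct = pseudo_ct(mri, labels, soft_range=(200, 800), bone_range=(100, 600))
```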

  2. Effects of ocular aberrations on contrast detection in noise.

    PubMed

    Liang, Bo; Liu, Rong; Dai, Yun; Zhou, Jiawei; Zhou, Yifeng; Zhang, Yudong

    2012-08-06

    We use adaptive optics (AO) techniques to manipulate ocular aberrations and elucidate their effects on contrast detection against a noisy background. The detectability of sine-wave gratings at spatial frequencies of 4, 8, and 16 cycles per degree (cpd) was measured in a standard two-interval forced-choice staircase procedure against backgrounds of various levels of white noise. The observer's ocular aberrations were either corrected with AO or left uncorrected. At low levels of external noise, contrast detection thresholds are always lowered by AO correction, whereas at high levels of external noise they are generally elevated by AO correction. Higher levels of external noise are required to make this threshold elevation observable when the signal spatial frequency increases from 4 to 16 cpd. A linear-amplifier-model fit shows that both sampling efficiency and equivalent noise generally decrease with AO correction. Our findings indicate that ocular aberrations can be beneficial for contrast detection in high levels of noise. The implications of these findings are discussed.

  3. The use of polymethyl-methacrylate (Artecoll) as an adjunct to facial reconstruction

    PubMed Central

    Mok, David; Schwarz, Jorge

    2004-01-01

    BACKGROUND: Injectable polymethyl-methacrylate (PMMA) microspheres, or Artecoll, have been used for the last few years in aesthetic surgery as a long-term tissue filler for the correction of wrinkles and for lip augmentation. This paper presents three cases of the use of PMMA microsphere injection for reconstructive patients with defects of varying etiologies. These cases provide examples of a novel adjunct to the repertoire of the reconstructive surgeon. OBJECTIVES: To evaluate the effectiveness (short- and long-term) of PMMA injection for the correction of small soft tissue defects of the face. METHODS: Three case histories are presented. They include the origin of the defect; previous reconstructions of the defect; and the area, volume, timing and technical particularities of PMMA administration. RESULTS: All three cases showed improvement of the defect with the PMMA injection with respect to both objective evidence and patient satisfaction. The improvements can still be seen after several years. CONCLUSIONS: PMMA microsphere injection can be effectively used to correct selected small facial defects in reconstructive cases and the results are long lasting. PMID:24115873

  4. Creating trauma-informed correctional care: a balance of goals and environment

    PubMed Central

    Miller, Niki A.; Najavits, Lisa M.

    2012-01-01

    Background Rates of posttraumatic stress disorder and exposure to violence among incarcerated males and females in the US are exponentially higher than rates among the general population; yet, abrupt detoxification from substances, the pervasive authoritative presence and sensory and environmental trauma triggers can pose a threat to individual and institutional stability during incarceration. Objective The authors explore the unique challenges and promises of trauma-informed correctional care and suggest strategies for administrative support, staff development, programming, and relevant clinical approaches. Method A review of literature includes a comparison of gendered responses, implications for men's facilities, and the compatibility of trauma recovery goals and forensic programming goals. Results Trauma-informed care demonstrates promise in increasing offender responsivity to evidence-based cognitive behavioral programming that reduces criminal risk factors and in supporting integrated programming for offenders with substance abuse and co-occurring disorders. Conclusions Incorporating trauma recovery principles into correctional environments requires an understanding of criminal justice priorities, workforce development, and specific approaches to screening, assessment, and programming that unify the goals of clinical and security staff. PMID:22893828

  5. Electrovacuum solutions in nonlocal gravity

    NASA Astrophysics Data System (ADS)

    Fernandes, Karan; Mitra, Arpita

    2018-05-01

    We consider the coupling of the electromagnetic field to a nonlocal gravity theory comprising the Einstein-Hilbert action together with a nonlocal R□⁻²R term associated with a mass scale m. We demonstrate that, in the case of the minimally coupled electromagnetic field, real corrections about the Reissner-Nordström background only exist between the inner Cauchy horizon and the event horizon of the black hole. This motivates us to consider the modified coupling of electromagnetism to this theory via the Kaluza ansatz. The Kaluza reduction introduces nonlocal terms involving the electromagnetic field into the purely gravitational nonlocal theory. An iterative approach is provided to perturbatively solve the equations of motion to arbitrary order in m² about any known solution of general relativity. We derive the first-order corrections and demonstrate that the higher-order corrections are real and perturbative about the external background of a Reissner-Nordström black hole. We also discuss how the Kaluza-reduced action, through the inclusion of nonlocal electromagnetic fields, could also be relevant to quantum effects on curved backgrounds with horizons.

  6. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy-Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly, with the Lucy-Richardson deconvolution algorithm applied to the current estimation of the image at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application post-reconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a post-reconstruction process. The addition of the denoising step limited the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy-Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
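
    For illustration only: the two ingredients, Lucy-Richardson deconvolution and wavelet denoising, are available in scikit-image and can be chained on a toy blurred image as below. Note that the paper embeds both steps inside each OSEM iteration, whereas this sketch applies them post-reconstruction:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy, denoise_wavelet

# Toy PET-like image: a bright ring (myocardium) blurred by the scanner PSF.
truth = np.zeros((128, 128))
yy, xx = np.mgrid[:128, :128]
r = np.hypot(yy - 64, xx - 64)
truth[(r > 20) & (r < 26)] = 1.0

psf_sigma = 3.0
blurred = gaussian_filter(truth, psf_sigma)
noisy = blurred + 0.01 * np.random.randn(*blurred.shape)

# Build an explicit PSF kernel for the deconvolution.
k = np.zeros((31, 31)); k[15, 15] = 1.0
psf = gaussian_filter(k, psf_sigma)
psf /= psf.sum()

# Lucy-Richardson deconvolution followed by wavelet denoising.
deconvolved = richardson_lucy(np.clip(noisy, 0, None), psf, 30)
restored = denoise_wavelet(deconvolved, rescale_sigma=True)
```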

  7. Measuring Patient Adherence to Malaria Treatment: A Comparison of Results from Self-Report and a Customised Electronic Monitoring Device

    PubMed Central

    Bruxvoort, Katia; Festo, Charles; Cairns, Matthew; Kalolella, Admirabilis; Mayaya, Frank; Kachur, S. Patrick; Schellenberg, David; Goodman, Catherine

    2015-01-01

    Background Self-report is the most common and feasible method for assessing patient adherence to medication, but can be prone to recall bias and social desirability bias. Most studies assessing adherence to artemisinin-based combination therapies (ACTs) have relied on self-report. In this study, we use a novel customised electronic monitoring device—termed smart blister packs—to examine the validity of self-reported adherence to artemether-lumefantrine (AL) in southern Tanzania. Methods Smart blister packs were designed to look identical to locally available AL blister packs and to record the date and time each tablet was removed from packaging. Patients obtaining AL at randomly selected health facilities and drug stores were followed up at home three days later and interviewed about each dose of AL taken. Blister packs were requested for pill count and extraction of smart blister pack data. Results Data on adherence from both self-report verified by pill count and smart blister packs were available for 696 of 1,204 patients. There was no difference between methods in the proportion of patients assessed to have completed treatment (64% and 67%, respectively). However, the percentage taking the correct number of pills for each dose at the correct times (timely completion) was higher by self-report than smart blister packs (37% vs. 24%; p<0.0001). By smart blister packs, 64% of patients completing treatment did not take the correct number of pills per dose or did not take each dose at the correct time interval. Conclusion Smart blister packs resulted in lower estimates of timely completion of AL and may be less prone to recall and social desirability bias. They may be useful when data on patterns of adherence are desirable to evaluate treatment outcomes. Improved methods of collecting self-reported data are needed to minimise bias and maximise comparability between studies. PMID:26214848

  8. Data-driven approach for creating synthetic electronic medical records.

    PubMed

    Buczak, Anna L; Babin, Steven; Moniz, Linda

    2010-10-14

    New algorithms for disease outbreak detection are being developed to take advantage of full electronic medical records (EMRs) that contain a wealth of patient information. However, due to privacy concerns, even anonymized EMRs cannot be shared among researchers, resulting in great difficulty in comparing the effectiveness of these algorithms. To bridge the gap between novel bio-surveillance algorithms operating on full EMRs and the lack of non-identifiable EMR data, a method for generating complete and synthetic EMRs was developed. This paper describes a novel methodology for generating complete synthetic EMRs both for an outbreak illness of interest (tularemia) and for background records. The method developed has three major steps: 1) synthetic patient identity and basic information generation; 2) identification of care patterns that the synthetic patients would receive based on the information present in real EMR data for similar health problems; 3) adaptation of these care patterns to the synthetic patient population. We generated EMRs, including visit records, clinical activity, laboratory orders/results and radiology orders/results for 203 synthetic tularemia outbreak patients. Validation of the records by a medical expert revealed problems in 19% of the records; these were subsequently corrected. We also generated background EMRs for over 3000 patients in the 4-11 yr age group. Validation of those records by a medical expert revealed problems in fewer than 3% of these background patient EMRs and the errors were subsequently rectified. A data-driven method was developed for generating fully synthetic EMRs. The method is general and can be applied to any data set that has similar data elements (such as laboratory and radiology orders and results, clinical activity, prescription orders). The pilot synthetic outbreak records were for tularemia but our approach may be adapted to other infectious diseases. The pilot synthetic background records were in the 4-11 year old age group. The adaptations that must be made to the algorithms to produce synthetic background EMRs for other age groups are indicated.

  9. 40 CFR 1065.805 - Sampling system.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Sampling system. 1065.805 Section 1065.805 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS... background samples for correcting dilution air for background concentrations of alcohols and carbonyls. (c...

  10. 40 CFR 1065.805 - Sampling system.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Sampling system. 1065.805 Section 1065.805 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS... background samples for correcting dilution air for background concentrations of alcohols and carbonyls. (c...

  11. 40 CFR 1065.805 - Sampling system.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Sampling system. 1065.805 Section 1065.805 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS... background samples for correcting dilution air for background concentrations of alcohols and carbonyls. (c...

  12. Ratios of W and Z cross sections at large boson $$p_T$$ as a constraint on PDFs and background to new physics

    DOE PAGES

    Malik, Sarah Alam; Watt, Graeme

    2014-02-05

    We motivate a measurement of various ratios of W and Z cross sections at the Large Hadron Collider (LHC) at large values of the boson transverse momentum (pT ≳ MW,Z). We study the dependence of predictions for these cross-section ratios on the multiplicity of associated jets, the boson pT and the LHC centre-of-mass energy. We present the flavour decomposition of the initial-state partons and an evaluation of the theoretical uncertainties. We also show that the W+/W- ratio is sensitive to the up-quark to down-quark ratio of parton distribution functions (PDFs), while other theoretical uncertainties are negligible, meaning that a precise measurement of the W+/W- ratio at large boson pT values could constrain the PDFs at larger momentum fractions x than the usual inclusive W charge asymmetry. The W±/Z ratio is insensitive to PDFs and most other theoretical uncertainties, other than possibly electroweak corrections, and a precise measurement will therefore be useful in validating theoretical predictions needed in data-driven methods, such as using W(→ ℓν) + jets events to estimate the Z(→ νν̄) + jets background in searches for new physics at the LHC. Furthermore, the differential W and Z cross sections themselves, dσ/dpT, have the potential to constrain the gluon distribution, provided that theoretical uncertainties from higher-order QCD and electroweak corrections are brought under control, such as by inclusion of anticipated next-to-next-to-leading-order QCD corrections.

  13. Quantitative analysis of synthetic dyes in lipstick by micellar electrokinetic capillary chromatography.

    PubMed

    Desiderio, C; Marra, C; Fanali, S

    1998-06-01

    The separation of synthetic dyes, used as color additives in cosmetics, by micellar electrokinetic capillary chromatography (MEKC) is described in this study. The separation of seven dyes, namely eosine, erythrosine, cyanosine, rhodamine B, orange II, chromotrope FB and tartrazine has been achieved in about 3 min in an untreated fused silica capillary containing as background electrolyte a 25 mM tetraborate/phosphate buffer, pH 8.0, and 30 mM sodium dodecyl sulfate. The electrophoretic method exhibits precision and relatively high sensitivity. A detection limit (LOD, signal/noise = 3) in the range of 5-7.5 X 10(-7) M of standard compounds was recorded. Intra-day repeatability of all the studied dye determinations (8 runs) gave the following results (limit values), % standard deviation: 0.24-1.54% for migration time, 0.99-1.24% for corrected peak areas, 0.99-1.24% for corrected peak area ratio (analyte/internal standard) and 1.56-2.74% for peak areas. The optimized method was successfully applied to the analysis of a lipstick sample where eosine and cyanosine were present.

  14. A Data Assimilation System For Operational Weather Forecast In Galicia Region (nw Spain)

    NASA Astrophysics Data System (ADS)

    Balseiro, C. F.; Souto, M. J.; Pérez-Muñuzuri, V.; Brewster, K.; Xue, M.

    Regional weather forecast models, such as the Advanced Regional Prediction System (ARPS), over complex environments with varying local influences require an accurate meteorological analysis that should include all local meteorological measurements available. In this work, the ARPS Data Analysis System (ADAS) (Xue et al. 2001) is applied as a three-dimensional weather analysis tool to include surface station and rawinsonde data with the NCEP AVN forecasts as the analysis background. Currently in ADAS, a set of five meteorological variables are considered during the analysis: horizontal grid-relative wind components, pressure, potential temperature and specific humidity. The analysis is used for high resolution numerical weather prediction for the Galicia region. The analysis method used in ADAS is based on the successive correction scheme of Bratseth (1986), which asymptotically approaches the result of a statistical (optimal) interpolation, but at lower computational cost. As in the optimal interpolation scheme, the Bratseth interpolation method can take into account the relative error between background and observational data, so it is relatively insensitive to large variations in data density and can integrate data of mixed accuracy. This method can be applied economically in an operational setting, providing significant improvement over the background model forecast as well as over any analysis without high-resolution local observations. One-way nesting is applied for the weather forecast in the Galicia region, and the use of this assimilation system in both domains shows better results not only in initial conditions but also in all forecast periods. Bratseth, A.M. (1986): "Statistical interpolation by means of successive corrections." Tellus, 38A, 439-447. Souto, M. J., Balseiro, C. F., Pérez-Muñuzuri, V., Xue, M., Brewster, K., (2001): "Impact of cloud analysis on numerical weather prediction in the Galician region of Spain". Submitted to Journal of Applied Meteorology. Xue, M., Wang, D., Gao, J., Brewster, K., Droegemeier, K. K., (2001): "The Advanced Regional Prediction System (ARPS), storm-scale numerical weather prediction and data assimilation". Meteor. Atmos. Physics. Accepted
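
    To make the successive-correction idea concrete, the following is a schematic one-dimensional sketch in the spirit of Bratseth (1986), not the ADAS implementation; the Gaussian structure function, length scale and observation-to-background error ratio are illustrative assumptions.

        import numpy as np

        def bratseth_analysis(grid_x, background, obs_x, obs_y, length=50.0,
                              obs_over_bg_error=0.25, n_iter=10):
            """Schematic 1-D successive-correction analysis: residuals at the
            observations are spread to the grid with Gaussian weights, each
            observation normalized by the summed weights at that observation
            plus the error ratio, so the iteration approaches the optimal
            interpolation result."""
            analysis = background.copy()
            obs_analysis = np.interp(obs_x, grid_x, analysis)   # analysis at obs sites
            w_grid = np.exp(-0.5 * ((grid_x[:, None] - obs_x[None, :]) / length) ** 2)
            w_obs = np.exp(-0.5 * ((obs_x[:, None] - obs_x[None, :]) / length) ** 2)
            norm = w_obs.sum(axis=0) + obs_over_bg_error        # per-observation factor
            for _ in range(n_iter):
                residual = (obs_y - obs_analysis) / norm
                analysis += w_grid @ residual
                obs_analysis += w_obs @ residual
            return analysis

        if __name__ == "__main__":
            x = np.linspace(0.0, 500.0, 101)          # km
            bg = np.full_like(x, 285.0)               # first-guess temperature (K)
            xo = np.array([120.0, 260.0, 400.0])      # station locations
            yo = np.array([287.5, 284.0, 286.0])      # observed temperatures
            print(np.round(bratseth_analysis(x, bg, xo, yo)[::20], 2))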

  15. [A practical procedure to improve the accuracy of radiochromic film dosimetry: integration of a uniformity correction method and a red/blue correction method].

    PubMed

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-06-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements made with radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and by light scattering was also evaluated, as was the efficacy of this correction method integrated with the red/blue correction method. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, based on a single film exposure, was applied to the readout of the films, and corrected dose distribution data were subsequently created. The correction method improved the pass ratio in the dose difference evaluation by more than 10% compared with uncorrected data. The red/blue correction method yielded a further 5% improvement over the standard procedure that uses the red channel only. The correction method with EBT2 proved able to rapidly correct the non-uniformity and has potential for routine clinical IMRT dose verification when EBT2 is required to approach the accuracy of EDR2. The red/blue correction method may improve accuracy, but it should be used carefully, with an understanding of the characteristics of EBT2 both for the red channel alone and with the red/blue correction.
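
    The abstract does not give the correction formulas. Purely as a generic illustration of the two ingredients it names, the sketch below divides a scan by a lateral-response map derived from a uniformly exposed film and normalizes the red-channel net optical density by the blue channel; all names and numbers are placeholders, not the authors' procedure.

        import numpy as np

        def lateral_response_map(uniform_scan):
            # Relative scanner response from a scan of a uniformly exposed film;
            # dividing later scans by this map flattens the lamp-axis non-uniformity.
            return uniform_scan / uniform_scan.mean()

        def net_od(channel, unexposed_channel):
            # Net optical density of one color channel relative to unexposed film.
            return np.log10(unexposed_channel / channel)

        def red_blue_index(scan, unexposed, response_map):
            # Normalize the dose-sensitive red channel by the weakly sensitive blue
            # channel so that non-uniformities common to both channels largely cancel.
            corrected = scan / response_map
            red = net_od(corrected[..., 0], unexposed[..., 0])
            blue = net_od(corrected[..., 2], unexposed[..., 2])
            return red / blue

        # Toy arrays standing in for scanner images (height x width x RGB).
        rng = np.random.default_rng(1)
        flat = rng.uniform(0.9, 1.1, (4, 4, 3))
        film = rng.uniform(0.3, 0.6, (4, 4, 3))
        blank = np.full((4, 4, 3), 0.9)
        print(red_blue_index(film, blank, lateral_response_map(flat)).shape)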

  16. Central Stars of Planetary Nebulae in the LMC

    NASA Technical Reports Server (NTRS)

    Bianchi, Luciana

    2004-01-01

    In FUSE cycle 2's program B001 we studied Central Stars of Planetary Nebulae (CSPN) in the Large Magellanic Cloud. All FUSE observations have been successfully completed and have been reduced, analyzed and published. The analysis and the results are summarized below. The FUSE data were reduced using the latest available version of the FUSE calibration pipeline (CALFUSE v2.2.2). The flux of these LMC post-AGB objects is at the threshold of FUSE's sensitivity, and thus special care in the background subtraction was needed during the reduction. Because of their faintness, the targets required many orbit-long exposures, each of which typically had low (target) count-rates. Each calibrated extracted sequence was checked for unacceptable count-rate variations (a sign of detector drift), misplaced extraction windows, and other anomalies. All the good calibrated exposures were combined using FUSE pipeline routines. The default FUSE pipeline attempts to model the background measured off-target and subtracts it from the target spectrum. We found that, for these faint objects, the background appeared to be over-estimated by this method, particularly at shorter wavelengths (i.e., < 1000 A). We therefore tried two other reductions. In the first method, subtraction of the measured background is turned off and the background is taken to be the model scattered light scaled by the exposure time. In the second one, the first few steps of the pipeline were run on the individual exposures (correcting for effects unique to each exposure such as Doppler shift, grating motions, etc). Then the photon lists from the individual exposures were combined, and the remaining steps of the pipeline run on the combined file. Thus, more total counts for both the target and background allowed for a better extraction.

  17. The effect of changes in core body temperature on the QT interval in beagle dogs: a previously ignored phenomenon, with a method for correction

    PubMed Central

    van der Linde, H J; Van Deuren, B; Teisman, A; Towart, R; Gallacher, D J

    2008-01-01

    Background and purpose: Body core temperature (Tc) changes affect the QT interval, but correction for this has not been systematically investigated. It may be important to correct QT intervals for drug-induced changes in Tc. Experimental approach: Anaesthetized beagle dogs were artificially cooled (34.2 °C) or warmed (42.1 °C). The relationship between corrected QT intervals (QTcV; QT interval corrected according to the Van de Water formula) and Tc was analysed. This relationship was also examined in conscious dogs where Tc was increased by exercise. Key results: When QTcV intervals were plotted against changes in Tc, linear correlations were observed in all individual dogs. The slopes did not significantly differ between cooling (−14.85±2.08) or heating (−13.12±3.46) protocols. We propose a correction formula to compensate for the influence of Tc changes and standardize the QTcV duration to 37.5 °C: QTcVcT (QTcV corrected for changes in core temperature)=QTcV–14 (37.5 – Tc). Furthermore, cooled dogs were re-warmed (from 34.2 to 40.0 °C) and marked QTcV shortening (−29%) was induced. After Tc correction, using the above formula, this decrease was abolished. In these re-warmed dogs, we observed significant increases in T-wave amplitude and in serum [K+] levels. No arrhythmias or increase in pro-arrhythmic biomarkers were observed. In exercising dogs, the above formula completely compensated QTcV for the temperature increase. Conclusions and implications: This study shows the importance of correcting QTcV intervals for changes in Tc, to avoid misleading interpretations of apparent QTcV interval changes. We recommend that all ICH S7A, conscious animal safety studies should routinely measure core body temperature and correct QTcV appropriately, if body temperature and heart rate changes are observed. PMID:18574451
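
    A minimal sketch of the quoted correction; the 14 ms/°C slope and the 37.5 °C reference temperature come from the abstract, while the function name and example values are ours.

        def qtcv_ct(qtcv_ms, tc_celsius, slope=14.0, t_ref=37.5):
            """Temperature-standardized QTcV following the formula in the abstract:
            QTcVcT = QTcV - 14 * (37.5 - Tc), i.e. the Van de Water-corrected QT
            interval referred to a core temperature of 37.5 degrees C."""
            return qtcv_ms - slope * (t_ref - tc_celsius)

        # Example: a QTcV of 300 ms measured in a cooled dog at Tc = 34.2 degrees C
        # standardizes to about 254 ms at 37.5 degrees C, removing the
        # cooling-induced apparent prolongation.
        print(qtcv_ct(300.0, 34.2))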

  18. Theoretical analysis and experimental study of constraint boundary conditions for acquiring the beacon in satellite-ground laser communications

    NASA Astrophysics Data System (ADS)

    Yu, Siyuan; Wu, Feng; Wang, Qiang; Tan, Liying; Ma, Jing

    2017-11-01

    Acquisition and recognition of the beacon are core technologies for establishing a satellite optical link. To acquire the beacon correctly, the beacon image must first be recognized while excluding the influence of background light. Many factors influence the recognition precision of the beacon in this processing. This paper studies the constraint boundary conditions for acquiring the beacon from both theoretical and experimental perspectives, and an adaptive segmentation method suited to satellite-ground laser communications is also proposed. Finally, a long-distance laser communication experiment (11.16 km) verifies the validity of the method; the tracking error obtained with it is the smallest compared with traditional approaches. The method thus helps to greatly improve tracking precision in satellite-ground laser communications.

  19. The effect of illustrations on patient comprehension of medication instruction labels

    PubMed Central

    Hwang, Stephen W; Tram, Carolyn QN; Knarr, Nadia

    2005-01-01

    Background Labels with special instructions regarding how a prescription medication should be taken or its possible side effects are often applied to pill bottles. The goal of this study was to determine whether the addition of illustrations to these labels affects patient comprehension. Methods Study participants (N = 130) were enrolled by approaching patients at three family practice clinics in Toronto, Canada. Participants were asked to interpret two sets of medication instruction labels, the first with text only and the second with the same text accompanied by illustrations. Two investigators coded participants' responses as incorrect, partially correct, or completely correct. Health literacy levels of participants were measured using a validated instrument, the REALM test. Results All participants gave a completely correct interpretation for three out of five instruction labels, regardless of whether illustrations were present or not. For the two most complex labels, only 34–55% of interpretations of the text-only version were completely correct. The addition of illustrations was associated with improved performance in 5–7% of subjects and worsened performance in 7–9% of subjects. Conclusion The commonly-used illustrations on the medication labels used in this study were of little or no use in improving patients' comprehension of the accompanying written instructions. PMID:15960849

  20. Insights into Inpatients with Poor Vision: A High Value Proposition

    PubMed Central

    Press, Valerie G.; Matthiesen, Madeleine I.; Ranadive, Alisha; Hariprasad, Seenu M.; Meltzer, David O.; Arora, Vineet M.

    2015-01-01

    Background Vision impairment is an under-recognized risk factor for adverse events among hospitalized patients, yet vision is neither routinely tested nor documented for inpatients. Low-cost ($8 and up) non-prescription ‘readers’ may be a simple, high-value intervention to improve inpatients’ vision. We aimed to study the initial feasibility and efficacy of screening and correcting inpatients’ vision. Methods From June 2012 through January 2014, research assistants (RAs) screened participants’ vision with a Snellen chart and tested whether non-prescription lenses corrected the vision of eligible participants who failed the screen. Descriptive statistics and tests of comparison, including t-tests and chi-squared tests, were used when appropriate. All analyses were performed using Stata version 12 (StataCorp, College Station, TX). Results The vision of over 800 participants was screened (n=853). Older (≥65 years; 56%) participants were more likely to have insufficient vision than younger (<65 years; 28%; p<0.001). Non-prescription readers corrected the majority of eligible participants’ vision (82%, 95/116). Discussion Among an easily identified sub-group of inpatients with poor vision, low-cost ‘readers’ successfully corrected most participants’ vision. Hospitalists and other clinicians working in the inpatient setting can play an important role in identifying opportunities to provide high-value care related to patients’ vision. PMID:25755206

  1. Investigation of the halo-artifact in 68Ga-PSMA-11-PET/MRI

    PubMed Central

    Rank, Christopher M.; Schäfer, Martin; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Hadaschik, Boris A.; Kopka, Klaus; Bachert, Peter; Kachelrieß, Marc

    2017-01-01

    Objectives Combined positron emission tomography (PET) and magnetic resonance imaging (MRI) targeting the prostate-specific membrane antigen (PSMA) with a 68Ga-labelled PSMA-analog (68Ga-PSMA-11) is discussed as a promising diagnostic method for patients with suspicion or history of prostate cancer. One potential drawback of this method is the occurrence of severe photopenic (halo-) artifacts surrounding the bladder and the kidneys in the scatter-corrected PET images, which have been reported to occur frequently in clinical practice. The goal of this work was to investigate the occurrence and impact of these artifacts and, secondly, to evaluate variants of the standard scatter correction method with regard to halo-artifact suppression. Methods Experiments using a dedicated pelvis phantom were conducted to investigate whether the halo-artifact is modality-, tracer-, and/or concentration-dependent. Furthermore, 31 patients with history of prostate cancer were selected from an ongoing 68Ga-PSMA-11-PET/MRI study. For each patient, PET raw data were reconstructed employing six different variants of PET scatter correction: absolute scatter scaling, relative scatter scaling, and relative scatter scaling combined with prompt gamma correction, each of which was combined with a maximum scatter fraction (MaxSF) of MaxSF = 75% or MaxSF = 40%. Evaluation of the reconstructed images with regard to halo-artifact suppression was performed both quantitatively using statistical analysis and qualitatively by two independent readers. Results The phantom experiments did not reveal any modality-dependency (PET/MRI vs. PET/CT) or tracer-dependency (68Ga vs. 18F-FDG). Patient- and phantom-based data indicated that halo-artifacts derive from high organ-to-background activity ratios (OBR) between bladder/kidneys and surrounding soft tissue, with a positive correlation between OBR and halo size. Comparing different variants of scatter correction, reducing the maximum scatter fraction from the default value MaxSF = 75% to MaxSF = 40% was found to efficiently suppress halo-artifacts in both phantom and patient data. In 1 of 31 patients, reducing the maximum scatter fraction provided new PET-based information changing the patient’s diagnosis. Conclusion Halo-artifacts are particularly observed for 68Ga-PSMA-11-PET/MRI due to 1) the biodistribution of the PSMA-11-tracer resulting in large OBRs for bladder and kidneys and 2) inaccurate scatter correction methods currently used in clinical routine, which tend to overestimate the scatter contribution. If not compensated for, 68Ga-PSMA-11 uptake pathologies may be masked by halo-artifacts leading to false-negative diagnoses. Reducing the maximum scatter fraction was found to efficiently suppress halo-artifacts. PMID:28817656

  2. Massless spectra and gauge couplings at one-loop on non-factorisable toroidal orientifolds

    NASA Astrophysics Data System (ADS)

    Berasaluce-González, Mikel; Honecker, Gabriele; Seifert, Alexander

    2018-01-01

    So-called 'non-factorisable' toroidal orbifolds can be rewritten in a factorised form as a product of three two-tori by imposing an additional shift symmetry. This finding of Blaszczyk et al. [1] provides a new avenue to Conformal Field Theory methods, by which the vector-like massless matter spectrum - and thereby the type of gauge group enhancement on orientifold invariant fractional D6-branes - and the one-loop corrections to the gauge couplings in Type IIA orientifold theories can be computed in addition to the well-established chiral matter spectrum derived from topological intersection numbers among three-cycles. We demonstrate this framework for the Z4 × ΩR orientifolds on the A3 ×A1 ×B2-type torus. As observed before for factorisable backgrounds, also here the one-loop correction can drive the gauge groups to stronger coupling as demonstrated by means of a four-generation Pati-Salam example.

  3. Color standardization in whole slide imaging using a color calibration slide

    PubMed Central

    Bautista, Pinky A.; Hashimoto, Noriaki; Yagi, Yukako

    2014-01-01

    Background: Color consistency in histology images is still an issue in digital pathology. Different imaging systems reproduce the colors of a histological slide differently. Materials and Methods: Color correction was implemented using the color information of the nine color patches of a color calibration slide. The inherent spectral colors of these patches along with their scanned colors were used to derive a color correction matrix whose coefficients were used to convert the pixels’ colors to their target colors. Results: There was a significant reduction of 3.42 units in the CIELAB color difference between images of the same H & E histological slide produced by two different whole slide scanners (P < 0.001 at the 95% confidence level). Conclusion: Color variations in histological images brought about by whole slide scanning can be effectively normalized with the use of the color calibration slide. PMID:24672739
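
    One common way to derive such a correction matrix is an ordinary least-squares fit of the target patch colors on the scanned patch colors; the sketch below assumes that approach (the paper's exact derivation may differ) and uses placeholder patch values.

        import numpy as np

        # Scanned RGB values of the nine calibration patches (rows) and their
        # target colors derived from the patches' known spectra; the numbers
        # here are placeholders, not the actual calibration-slide values.
        scanned = np.random.rand(9, 3)
        target = np.random.rand(9, 3)

        # Augment with a constant term and solve least squares for a 4x3
        # correction matrix mapping scanned colors to target colors.
        A = np.hstack([scanned, np.ones((9, 1))])
        M, *_ = np.linalg.lstsq(A, target, rcond=None)

        def correct_pixels(rgb_image):
            """Apply the correction matrix to every pixel of an H x W x 3 image."""
            flat = rgb_image.reshape(-1, 3)
            flat_aug = np.hstack([flat, np.ones((flat.shape[0], 1))])
            return (flat_aug @ M).reshape(rgb_image.shape)

        print(correct_pixels(np.random.rand(4, 4, 3)).shape)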

  4. Estimation of scattering object characteristics for image reconstruction using a nonzero background.

    PubMed

    Jin, Jing; Astheimer, Jeffrey; Waag, Robert

    2010-06-01

    Two methods are described to estimate the boundary of a 2-D penetrable object and the average sound speed in the object. One method is for circular objects centered in the coordinate system of the scattering observation. This method uses an orthogonal function expansion for the scattering. The other method is for noncircular, essentially convex objects. This method uses cross correlation to obtain time differences that determine a family of parabolas whose envelope is the boundary of the object. A curve-fitting method and a phase-based method are described to estimate and correct the offset of an uncentered radial or elliptical object. A method based on the extinction theorem is described to estimate absorption in the object. The methods are applied to calculated scattering from a circular object with an offset and to measured scattering from an offset noncircular object. The results show that the estimated boundaries, sound speeds, and absorption slopes agree very well with independently measured or true values when the assumptions of the methods are reasonably satisfied.
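
    As a small illustration of the cross-correlation step used to obtain the time differences, the sketch below recovers a known delay between two signals; it is generic and not the authors' code.

        import numpy as np

        def delay_samples(sig_a, sig_b):
            """Estimate the shift (in samples) of sig_b relative to sig_a from
            the peak of their cross-correlation; such time differences define
            the family of parabolas whose envelope gives the object boundary."""
            corr = np.correlate(sig_b, sig_a, mode="full")
            return int(np.argmax(corr)) - (len(sig_a) - 1)

        # Toy check: a pulse delayed by 25 samples is recovered.
        t = np.arange(512)
        pulse = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)
        delayed = np.roll(pulse, 25)
        print(delay_samples(pulse, delayed))   # -> 25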

  5. A sentence sliding window approach to extract protein annotations from biomedical articles

    PubMed Central

    Krallinger, Martin; Padron, Maria; Valencia, Alfonso

    2005-01-01

    Background Within the emerging field of text mining and statistical natural language processing (NLP) applied to biomedical articles, a broad variety of techniques have been developed during the past years. Nevertheless, there is still a great need for comparative assessment of the performance of the proposed methods and for the development of common evaluation criteria. This issue was addressed by the Critical Assessment of Text Mining Methods in Molecular Biology (BioCreative) contest. The aim of this contest was to assess the performance of text mining systems applied to biomedical texts including tools which recognize named entities such as genes and proteins, and tools which automatically extract protein annotations. Results The "sentence sliding window" approach proposed here was found to efficiently extract text fragments from full text articles containing annotations on proteins, providing the highest number of correctly predicted annotations. Moreover, the number of correct extractions of individual entities (i.e. proteins and GO terms) involved in the relationships used for the annotations was significantly higher than the correct extractions of the complete annotations (protein-function relations). Conclusion We explored the use of averaging sentence sliding windows for information extraction, especially in a context where conventional training data is unavailable. The combination of our approach with more refined statistical estimators and machine learning techniques might be a way to improve annotation extraction for future biomedical text mining applications. PMID:15960831
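
    A toy sketch of the sliding-window idea: overlapping windows of consecutive sentences are scored for co-occurrence of a protein name and annotation keywords. The sentence splitter and the scoring rule are deliberately naive stand-ins, not the BioCreative system.

        import re

        def sliding_windows(text, size=3, step=1):
            """Split an article into sentences (naive splitter) and yield
            overlapping windows of consecutive sentences."""
            sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
            for start in range(0, max(len(sentences) - size + 1, 1), step):
                yield sentences[start:start + size]

        def score_window(window, protein, keywords=("binds", "phosphorylates", "localizes")):
            """Toy relevance score: count co-occurrences of the protein name and
            annotation keywords inside the window."""
            joined = " ".join(window).lower()
            return joined.count(protein.lower()) + sum(joined.count(k) for k in keywords)

        text = ("BRCA1 localizes to nuclear foci. It binds BARD1. "
                "The weather was discussed. BRCA1 phosphorylates nothing here.")
        best = max(sliding_windows(text, size=2), key=lambda w: score_window(w, "BRCA1"))
        print(best)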

  6. Inferior Pedicle Autoaugmentation Mastopexy After Breast Implant Removal

    PubMed Central

    Frey, Hans Peter; Hasse, Frank Michael; Hasselberg, Jens

    2010-01-01

    Background A new method of autoaugmentation mammaplasty is presented to correct ptosis and to increase the projection and volume of the breast in patients who would like a reposition augmentation mammaplasty after breast implant removal but do not want a new implant. Methods Between 1999 and 2007, a total of 27 patients (age = 54 ± 7.3 years) underwent mammaplasty using an inferior-based flap of deepithelialized subcutaneous and breast tissue modularized to its pedicle which was inserted beneath a superior pedicle used for correction of ptosis and to increase the projection and apparent volume of the breast. Results The results confirmed that autoaugmentation mammaplasty of the breast following removal of the implant yields longstanding results. It corrects ptosis and increases the projection and apparent volume of the breast when mastopexy is planned without use of a new implant. Twelve months after surgery the degree of descent of the inframammary fold generally parallels that of the nipple. The mean level of the inframammary fold was below the mean level of the nipple. Postoperatively, the optimum distance had been largely achieved. Conclusion The advantages of the technique presented here are that it minimizes the skin scar in cases using vertical mammaplasty techniques and optimizes the breast shape after breast implant removal in patients who do not want a new implant. PMID:20174800

  7. The usefulness of “corrected” body mass index vs. self-reported body mass index: comparing the population distributions, sensitivity, specificity, and predictive utility of three correction equations using Canadian population-based data

    PubMed Central

    2014-01-01

    Background National data on body mass index (BMI), computed from self-reported height and weight, is readily available for many populations including the Canadian population. Because self-reported weight is found to be systematically under-reported, it has been proposed that the bias in self-reported BMI can be corrected using equations derived from data sets which include both self-reported and measured height and weight. Such correction equations have been developed and adopted. We aim to evaluate the usefulness (i.e., distributional similarity; sensitivity and specificity; and predictive utility vis-à-vis disease outcomes) of existing and new correction equations in population-based research. Methods The Canadian Community Health Surveys from 2005 and 2008 include both measured and self-reported values of height and weight, which allows for construction and evaluation of correction equations. We focused on adults age 18–65, and compared three correction equations (two correcting weight only, and one correcting BMI) against self-reported and measured BMI. We first compared population distributions of BMI. Second, we compared the sensitivity and specificity of self-reported BMI and corrected BMI against measured BMI. Third, we compared the self-reported and corrected BMI in terms of association with health outcomes using logistic regression. Results All corrections outperformed self-report when estimating the full BMI distribution; the weight-only correction outperformed the BMI-only correction for females in the 23–28 kg/m2 BMI range. In terms of sensitivity/specificity, when estimating obesity prevalence, corrected values of BMI (from any equation) were superior to self-report. In terms of modelling BMI-disease outcome associations, findings were mixed, with no correction proving consistently superior to self-report. Conclusions If researchers are interested in modelling the full population distribution of BMI, or estimating the prevalence of obesity in a population, then a correction of any kind included in this study is recommended. If the researcher is interested in using BMI as a predictor variable for modelling disease, then both self-reported and corrected BMI result in biased estimates of association. PMID:24885210
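
    For illustration, a correction equation of the kind evaluated here is just a linear map applied to self-reported BMI (or to self-reported weight); the coefficients below are placeholders, not the published Canadian equations.

        def corrected_bmi(self_reported_bmi, sex):
            """Apply a linear correction equation to self-reported BMI. The
            coefficients are illustrative placeholders; published equations have
            their own sex-specific intercepts and slopes."""
            intercept, slope = (0.2, 1.02) if sex == "F" else (0.1, 1.03)
            return intercept + slope * self_reported_bmi

        # A self-reported BMI of 27.0 maps to roughly 27.7-27.9 kg/m^2 after
        # correction, reflecting the systematic under-reporting of weight.
        print(corrected_bmi(27.0, "F"), corrected_bmi(27.0, "M"))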

  8. Improving MAVEN-IUVS Lyman-Alpha Apoapsis Images

    NASA Astrophysics Data System (ADS)

    Chaffin, M.; AlMannaei, A. S.; Jain, S.; Chaufray, J. Y.; Deighan, J.; Schneider, N. M.; Thiemann, E.; Mayyasi, M.; Clarke, J. T.; Crismani, M. M. J.; Stiepen, A.; Montmessin, F.; Epavier, F.; McClintock, B.; Stewart, I. F.; Holsclaw, G.; Jakosky, B. M.

    2017-12-01

    In 2013, the Mars Atmosphere and Volatile EvolutioN (MAVEN) mission was launched to study the Martian upper atmosphere and ionosphere. MAVEN orbits through a very thin cloud of hydrogen gas, known as the hydrogen corona, that has been used to explore the planet's geologic evolution by detecting the loss of hydrogen from the atmosphere. Here we present various methods of extracting properties of the hydrogen corona from observations using MAVEN's Imaging Ultraviolet Spectrograph (IUVS) instrument. The analysis presented here uses the IUVS Far Ultraviolet mode apoapse data. From apoapse, IUVS is able to obtain images of the hydrogen corona by detecting the Lyman-alpha airglow using a combination of instrument scan mirror and spacecraft motion. To complete one apoapse observation, eight scan swaths are performed to collect the observations and construct a coronal image. However, these images require further processing to account for the atmospheric MUV background that hinders the quality of the data. Here, we present new techniques for correcting instrument data. For the background subtraction, a multi-linear regression (MLR) routine of the first order MUV radiance was used to improve the images. A flat field correction was also applied by fitting a polynomial to periapse radiance observations. The apoapse data were re-binned using this fit. The results are presented as images to demonstrate the improvements in the data reduction. Implementing these methods for more orbits will improve our understanding of seasonal variability and H loss. Asymmetries in the Martian hydrogen corona can also be assessed to improve current model estimates of coronal H in the Martian atmosphere.
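
    A schematic version of the MLR background removal described above might regress the observed Lyman-alpha brightness on a constant plus the first-order MUV radiance and subtract the MUV-correlated component; the sketch below is generic and not the IUVS pipeline.

        import numpy as np

        def remove_muv_background(lya_counts, muv_radiance):
            """Regress the observed Lyman-alpha brightness on a constant term and
            the MUV radiance, then subtract the fitted MUV-correlated component."""
            design = np.column_stack([np.ones_like(muv_radiance), muv_radiance])
            coeffs, *_ = np.linalg.lstsq(design, lya_counts, rcond=None)
            background = design[:, 1] * coeffs[1]     # MUV-correlated part only
            return lya_counts - background, coeffs

        # Toy data: a constant 5 kR signal plus a background proportional to MUV.
        rng = np.random.default_rng(0)
        muv = rng.uniform(1.0, 3.0, 200)
        obs = 5.0 + 0.8 * muv + rng.normal(0.0, 0.05, 200)
        cleaned, coeffs = remove_muv_background(obs, muv)
        print(round(cleaned.mean(), 2), np.round(coeffs, 2))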

  9. Influence of the partial volume correction method on 18F-fluorodeoxyglucose brain kinetic modelling from dynamic PET images reconstructed with resolution model based OSEM

    PubMed Central

    Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian

    2014-01-01

    Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For OSEM, image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-fluorodeoxyglucose dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation GTM PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in CMRGlc estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters. PMID:24052021

  10. Improving the outcome of infants born at <30 weeks' gestation - a randomized controlled trial of preventative care at home

    PubMed Central

    2009-01-01

    Background Early developmental interventions to prevent the high rate of neurodevelopmental problems in very preterm children, including cognitive, motor and behavioral impairments, are urgently needed. These interventions should be multi-faceted and include modules for caregivers given their high rates of mental health problems. Methods/Design We have designed a randomized controlled trial to assess the effectiveness of a preventative care program delivered at home over the first 12 months of life for infants born very preterm (<30 weeks of gestational age) and their families, compared with standard medical follow-up. The aim of the program, delivered over nine sessions by a team comprising a physiotherapist and psychologist, is to improve infant development (cognitive, motor and language), behavioral regulation, caregiver-child interactions and caregiver mental health at 24 months' corrected age. The infants will be stratified by severity of brain white matter injury (assessed by magnetic resonance imaging) at term equivalent age, and then randomized. At 12 months' corrected age interim outcome measures will include motor development assessed using the Alberta Infant Motor Scale and the Neurological Sensory Motor Developmental Assessment. Caregivers will also complete a questionnaire at this time to obtain information on behavior, parenting, caregiver mental health, and social support. The primary outcomes are at 24 months' corrected age and include cognitive, motor and language development assessed with the Bayley Scales of Infant and Toddler Development (Bayley-III). Secondary outcomes at 24 months include caregiver-child interaction measured using an observational task, and infant behavior, parenting, caregiver mental health and social support measured via standardized parental questionnaires. Discussion This paper presents the background, study design and protocol for a randomized controlled trial in very preterm infants utilizing a preventative care program in the first year after discharge home designed to improve cognitive, motor and behavioral outcomes of very preterm children and caregiver mental health at two-years' corrected age. Clinical Trial Registration Number ACTRN12605000492651 PMID:19954550

  11. Resolution of the COBE Earth sensor anomaly

    NASA Technical Reports Server (NTRS)

    Sedler, J.

    1993-01-01

    Since its launch on November 18, 1989, the Earth sensors on the Cosmic Background Explorer (COBE) have shown much greater noise than expected. The problem was traced to an error in Earth horizon acquisition-of-signal (AOS) times. Due to this error, the AOS timing correction was ignored, causing Earth sensor split-to-index (SI) angles to be incorrectly time-tagged to minor frame synchronization times. Resulting Earth sensor residuals, based on gyro-propagated fine attitude solutions, were as large as plus or minus 0.45 deg (much greater than plus or minus 0.10 deg from scanner specifications (Reference 1)). Also, discontinuities in single-frame coarse attitude pitch and roll angles (as large as 0.80 and 0.30 deg, respectively) were noted several times during each orbit. However, over the course of the mission, each Earth sensor was observed to independently and unexpectedly reset and then reactivate into a new configuration. Although the telemetered AOS timing corrections are still in error, a procedure has been developed to approximate and apply these corrections. This paper describes the approach, analysis, and results of approximating and applying AOS timing adjustments to correct Earth scanner data. Furthermore, due to the continuing degradation of COBE's gyroscopes, gyro-propagated fine attitude solutions may soon become unavailable, requiring an alternative method for attitude determination. By correcting Earth scanner AOS telemetry, as described in this paper, more accurate single-frame attitude solutions are obtained. All aforementioned pitch and roll discontinuities are removed. When proper AOS corrections are applied, the standard deviation of pitch residuals between coarse attitude and gyro-propagated fine attitude solutions decrease by a factor of 3. Also, the overall standard deviation of SI residuals from fine attitude solutions decrease by a factor of 4 (meeting sensor specifications) when AOS corrections are applied.

  12. Comparison of self-refraction using a simple device, USee, with manifest refraction in adults

    PubMed Central

    Annadanam, Anvesh; Mudie, Lucy I.; Liu, Alice; Plum, William G.; White, J. Kevin; Collins, Megan E.; Friedman, David S.

    2018-01-01

    Background The USee device is a new self-refraction tool that allows users to determine their own refractive error. We evaluated the ease of use of USee in adults, and compared the refractive error correction achieved with USee to clinical manifest refraction. Methods Sixty adults with uncorrected visual acuity <20/30 and spherical equivalent between –6.00 and +6.00 diopters completed manifest refraction and self-refraction. Results Subjects had a mean (±SD) age of 53.1 (±18.6) years, and 27 (45.0%) were male. Mean (±SD) spherical equivalent measured by manifest refraction and self-refraction were –0.90 D (±2.53) and –1.22 diopters (±2.42), respectively (p = 0.001). The proportion of subjects correctable to ≥20/30 in the better eye was higher for manifest refraction (96.7%) than self-refraction (83.3%, p = 0.005). Failure to achieve visual acuity ≥20/30 with self-refraction in right eyes was associated with increasing age (per year, OR: 1.05; 95% CI: 1.00–1.10) and higher cylindrical power (per diopter, OR: 7.26; 95% CI: 1.88–28.1). Subjectively, 95% of participants thought USee was easy to use, 85% thought self-refraction correction was better than being uncorrected, 57% thought vision with self-refraction correction was similar to their current corrective lenses, and 53% rated their vision as “very good” or “excellent” with self-refraction. Conclusion Self-refraction provides acceptable refractive error correction in the majority of adults. Programs targeting resource-poor settings could potentially use USee to provide easy on-site refractive error correction. PMID:29390026

  13. Is the NIHSS Certification Process Too Lenient?

    PubMed Central

    Hills, Nancy K.; Josephson, S. Andrew; Lyden, Patrick D.; Johnston, S. Claiborne

    2009-01-01

    Background and Purpose The National Institutes of Health Stroke Scale (NIHSS) is a widely used measure of neurological function in clinical trials and patient assessment; inter-rater scoring variability could impact communications and trial power. The manner in which the rater certification test is scored yields multiple correct answers that have changed over time. We examined the range of possible total NIHSS scores from answers given in certification tests by over 7,000 individual raters who were certified. Methods We analyzed the results of all raters who completed one of two standard multiple-patient videotaped certification examinations between 1998 and 2004. The range for the correct score, calculated using NIHSS ‘correct answers’, was determined for each patient. The distribution of scores derived from those who passed the certification test was then examined. Results A total of 6,268 raters scored 5 patients on Test 1; 1,240 scored 6 patients on Test 2. Using a National Stroke Association (NSA) answer key, we found that the number of different correct total scores for a patient ranged from 2 to as many as 12. Among raters who achieved a passing score and were therefore qualified to administer the NIHSS, score distributions were even wider, with 1 certification patient receiving 18 different correct total scores. Conclusions Allowing multiple acceptable answers for questions on the NIHSS certification test introduces scoring variability. It seems reasonable to assume that the wider the range of acceptable answers in the certification test, the greater the variability in the performance of the test in trials and clinical practice by certified examiners. Greater consistency may be achieved by deriving a set of ‘best’ answers through expert consensus on all questions where this is possible, then teaching raters how to derive these answers using a required interactive training module. PMID:19295205

  14. WebArray: an online platform for microarray data analysis

    PubMed Central

    Xia, Xiaoqin; McClelland, Michael; Wang, Yipeng

    2005-01-01

    Background Many cutting-edge microarray analysis tools and algorithms, including the commonly used limma and affy packages in Bioconductor, need sophisticated knowledge of mathematics, statistics and computer skills for implementation. Commercially available software can provide a user-friendly interface at considerable cost. To facilitate the use of these tools for microarray data analysis on an open platform, we developed an online microarray data analysis platform, WebArray, for bench biologists to utilize these tools to explore data from single/dual color microarray experiments. Results The currently implemented functions were based on the limma and affy packages from Bioconductor, the spacings LOESS histogram (SPLOSH) method, the PCA-assisted normalization method and the genome mapping method. WebArray incorporates these packages and provides a user-friendly interface for accessing a wide range of key functions of limma and others, such as spot quality weight, background correction, graphical plotting, normalization, linear modeling, empirical Bayes statistical analysis, false discovery rate (FDR) estimation, and chromosomal mapping for genome comparison. Conclusion WebArray offers a convenient platform for bench biologists to access several cutting-edge microarray data analysis tools. The website is freely available at . It runs on a Linux server with Apache and MySQL. PMID:16371165

  15. Efficient Methods to Assimilate Satellite Retrievals Based on Information Content. Part 2; Suboptimal Retrieval Assimilation

    NASA Technical Reports Server (NTRS)

    Joiner, J.; Dee, D. P.

    1998-01-01

    One of the outstanding problems in data assimilation has been and continues to be how best to utilize satellite data while balancing the tradeoff between accuracy and computational cost. A number of weather prediction centers have recently achieved remarkable success in improving their forecast skill by changing the method by which satellite data are assimilated into the forecast model from the traditional approach of assimilating retrievals to the direct assimilation of radiances in a variational framework. The operational implementation of such a substantial change in methodology involves a great number of technical details, e.g., pertaining to quality control procedures, systematic error correction techniques, and tuning of the statistical parameters in the analysis algorithm. Although there are clear theoretical advantages to the direct radiance assimilation approach, it is not obvious at all to what extent the improvements that have been obtained so far can be attributed to the change in methodology, or to various technical aspects of the implementation. The issue is of interest because retrieval assimilation retains many practical and logistical advantages which may become even more significant in the near future when increasingly high-volume data sources become available. The central question we address here is: how much improvement can we expect from assimilating radiances rather than retrievals, all other things being equal? We compare the two approaches in a simplified one-dimensional theoretical framework, in which problems related to quality control and systematic error correction are conveniently absent. By assuming a perfect radiative transfer model and perfect knowledge of radiance and background error covariances, we are able to formulate a nonlinear local error analysis for each assimilation method. Direct radiance assimilation is optimal in this idealized context, while the traditional method of assimilating retrievals is suboptimal because it ignores the cross-covariances between background errors and retrieval errors. We show that interactive retrieval assimilation (where the same background used for assimilation is also used in the retrieval step) is equivalent to direct assimilation of radiances with suboptimal analysis weights. We illustrate and extend these theoretical arguments with several one-dimensional assimilation experiments, where we estimate vertical atmospheric profiles using simulated data from both the High-resolution InfraRed Sounder 2 (HIRS2) and the future Atmospheric InfraRed Sounder (AIRS).
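
    In standard linear-analysis notation (a reminder, not taken from the paper; x_b is the background profile, y the radiance observation, H the radiative transfer operator, and B and R the background and observation error covariances), the update whose optimality is at stake is

        x_a = x_b + K\,(y - H x_b), \qquad
        K_{\mathrm{opt}} = B H^{\top}\,\bigl(H B H^{\top} + R\bigr)^{-1}.

    Direct radiance assimilation uses K_opt, whereas assimilating an interactive retrieval reuses the same background in the retrieval step and, by ignoring the resulting cross-covariance between background and retrieval errors, effectively applies a suboptimal gain in the same update.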

  16. PET/CT detectability and classification of simulated pulmonary lesions using an SUV correction scheme

    NASA Astrophysics Data System (ADS)

    Morrow, Andrew N.; Matthews, Kenneth L., II; Bujenovic, Steven

    2008-03-01

    Positron emission tomography (PET) and computed tomography (CT) together are a powerful diagnostic tool, but imperfect image quality allows false positive and false negative diagnoses to be made by any observer despite experience and training. This work investigates PET acquisition mode, reconstruction method and a standard uptake value (SUV) correction scheme on the classification of lesions as benign or malignant in PET/CT images, in an anthropomorphic phantom. The scheme accounts for partial volume effect (PVE) and PET resolution. The observer draws a region of interest (ROI) around the lesion using the CT dataset. A simulated homogenous PET lesion of the same shape as the drawn ROI is blurred with the point spread function (PSF) of the PET scanner to estimate the PVE, providing a scaling factor to produce a corrected SUV. Computer simulations showed that the accuracy of the corrected PET values depends on variations in the CT-drawn boundary and the position of the lesion with respect to the PET image matrix, especially for smaller lesions. Correction accuracy was affected slightly by mismatch of the simulation PSF and the actual scanner PSF. The receiver operating characteristic (ROC) study resulted in several observations. Using observer drawn ROIs, scaled tumor-background ratios (TBRs) more accurately represented actual TBRs than unscaled TBRs. For the PET images, 3D OSEM outperformed 2D OSEM, 3D OSEM outperformed 3D FBP, and 2D OSEM outperformed 2D FBP. The correction scheme significantly increased sensitivity and slightly increased accuracy for all acquisition and reconstruction modes at the cost of a small decrease in specificity.
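
    A minimal sketch of the described partial-volume scaling: simulate a unit-activity lesion with the CT-drawn ROI shape, blur it with the scanner PSF (approximated here by a Gaussian, which is an assumption), and divide the measured SUV by the resulting recovery factor.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def recovery_coefficient(roi_mask, psf_sigma_vox):
            """Blur a homogeneous lesion with the shape of the drawn ROI by the
            scanner PSF and return its mean inside the ROI; dividing the measured
            SUV by this factor compensates the partial volume effect."""
            lesion = roi_mask.astype(float)             # unit activity inside the ROI
            blurred = gaussian_filter(lesion, sigma=psf_sigma_vox)
            return blurred[roi_mask].mean()

        # Toy example: a 6-voxel-radius spherical ROI and a PSF of sigma = 2 voxels.
        z, y, x = np.ogrid[-16:16, -16:16, -16:16]
        roi = (x**2 + y**2 + z**2) <= 6**2
        rc = recovery_coefficient(roi, 2.0)
        measured_suv = 3.1
        print(round(rc, 2), round(measured_suv / rc, 2))   # corrected SUV = measured / RC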

  17. Quantum corrections for spinning particles in de Sitter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fröb, Markus B.; Verdaguer, Enric, E-mail: mbf503@york.ac.uk, E-mail: enric.verdaguer@ub.edu

    We compute the one-loop quantum corrections to the gravitational potentials of a spinning point particle in a de Sitter background, due to the vacuum polarisation induced by conformal fields in an effective field theory approach. We consider arbitrary conformal field theories, assuming only that the theory contains a large number N of fields in order to separate their contribution from the one induced by virtual gravitons. The corrections are described in a gauge-invariant way, classifying the induced metric perturbations around the de Sitter background according to their behaviour under transformations on equal-time hypersurfaces. There are six gauge-invariant modes: two scalar Bardeen potentials, one transverse vector and one transverse traceless tensor, of which one scalar and the vector couple to the spinning particle. The quantum corrections consist of three different parts: a generalisation of the flat-space correction, which is only significant at distances of the order of the Planck length; a constant correction depending on the undetermined parameters of the renormalised effective action; and a term which grows logarithmically with the distance from the particle. This last term is the most interesting, and when resummed gives a modified power law, enhancing the gravitational force at large distances. As a check on the accuracy of our calculation, we recover the linearised Kerr-de Sitter metric in the classical limit and the flat-space quantum correction in the limit of vanishing Hubble constant.

  18. caCORRECT2: Improving the accuracy and reliability of microarray data in the presence of artifacts

    PubMed Central

    2011-01-01

    Background In previous work, we reported the development of caCORRECT, a novel microarray quality control system built to identify and correct spatial artifacts commonly found on Affymetrix arrays. We have made recent improvements to caCORRECT, including the development of a model-based data-replacement strategy and integration with typical microarray workflows via caCORRECT's web portal and caBIG grid services. In this report, we demonstrate that caCORRECT improves the reproducibility and reliability of experimental results across several common Affymetrix microarray platforms. caCORRECT represents an advance over state-of-the-art quality control methods such as Harshlighting, and acts to improve gene expression calculation techniques such as PLIER, RMA and MAS5.0, because it incorporates spatial information into outlier detection as well as outlier information into probe normalization. The ability of caCORRECT to recover accurate gene expressions from low quality probe intensity data is assessed using a combination of real and synthetic artifacts with PCR follow-up confirmation and the affycomp spike-in data. The caCORRECT tool can be accessed at the website: http://cacorrect.bme.gatech.edu. Results We demonstrate that (1) caCORRECT's artifact-aware normalization avoids the undesirable global data warping that happens when any damaged chips are processed without caCORRECT; (2) when used upstream of RMA, PLIER, or MAS5.0, the data imputation of caCORRECT generally improves the accuracy of microarray gene expression in the presence of artifacts more than using Harshlighting or not using any quality control; (3) biomarkers selected from artifactual microarray data which have undergone the quality control procedures of caCORRECT are more likely to be reliable, as shown by both spike-in and PCR validation experiments. Finally, we present a case study of the use of caCORRECT to reliably identify biomarkers for renal cell carcinoma, yielding two diagnostic biomarkers with potential clinical utility, PRKAB1 and NNMT. Conclusions caCORRECT is shown to improve the accuracy of gene expression, and the reproducibility of experimental results in clinical application. This study suggests that caCORRECT will be useful to clean up possible artifacts in new as well as archived microarray data. PMID:21957981

  19. Screening tests for assessing the anaerobic biodegradation of pollutant chemicals in subsurface environments

    USGS Publications Warehouse

    Suflita, Joseph M.; Concannon, Frank

    1995-01-01

    Screening methods were developed to assess the susceptibility of ground water contaminants to anaerobic biodegradation. One method was an extrapolation of a procedure previously used to measure biodegradation activity in dilute sewage sludge. Aquifer solids and ground water with no additional nutritive media were incubated anaerobically in 160-ml serum bottles containing 250 mg·l−1 carbon of the substrate of interest. This method relied on the detection of gas pressure or methane production in substrate-amended microcosms relative to background controls. Other screening procedures involved the consumption of stoichiometrically required amounts of sulfate or nitrate from the same type of incubations. Close agreement was obtained between the measured and calculated amounts of substrate bioconversion based on the measured biogas pressure in methanogenic microcosms. Storage of the microcosms for up to 6 months did not adversely influence the onset or rate of benzoic acid mineralization. The lower detection limits of the methanogenic assay were found to be a function of the size of the microcosm headspace, the mean oxidation state of the substrate carbon, and the method used to correct for background temperature fluctuations. Using these simple screening procedures, biodegradation information of regulatory interest could be generated, including (i) the length of the adaptation period, (ii) the rate of substrate decay, and (iii) the completeness of the bioconversion.

  20. Functional leg length discrepancy between theories and reliable instrumental assessment: a study about newly invented NPoS system

    PubMed Central

    Mahmoud, Asmaa; Abundo, Paolo; Basile, Luisanna; Albensi, Caterina; Marasco, Morena; Bellizzi, Letizia; Galasso, Franco; Foti, Calogero

    2017-01-01

    Summary Background In spite of the distinct social and financial impact of Leg Length Discrepancy (LLD), controversial and conflicting results still exist regarding a reliable assessment/correction method. For proper management it is essential to discriminate between anatomical and functional Leg Length Discrepancy (FLLD). With the newly invented NPoS (New Postural Solution), developed through the collaboration of the PRM Department, Tor Vergata University, with Baro Postural Instruments srl, positive results were observed in both measuring and compensating the hemi-pelvic antero-medial rotation in FLLD through a personalized bilateral heel raise, using two NPoS components: the Foot Image System (FIS) and the Postural Optimizer System (POS). This led us to test the validity of NPoS as a preliminary step before evaluating its applications in postural disorders. Methods After clinical evaluation, 4 subjects with FLLD were assessed with NPoS. Over a period of 2 months, every subject was evaluated 12 times by two different operators (48 measurements in total), and the results were verified against BTS GaitLab results. Results Intra-operator and inter-operator variability analyses showed statistically insignificant differences, while inter-method variability between NPoS and BTS parameters expressed a linear correlation. Conclusion The results suggest a significant validity of NPoS in the assessment and correction of FLLD, with a high degree of reproducibility and minimal operator dependency. This can be considered a basis for promising clinical applications of NPoS as a reliable, cost-effective postural assessment and correction tool. Level of evidence V. PMID:29264341

  1. Comparison of 2 correction methods for absolute values of esophageal pressure in subjects with acute hypoxemic respiratory failure, mechanically ventilated in the ICU.

    PubMed

    Guérin, Claude; Richard, Jean-Christophe

    2012-12-01

    A recent trial showed that setting PEEP according to end-expiratory transpulmonary pressure (P(pl,ee)) in acute lung injury/acute respiratory distress syndrome (ALI/ARDS) might improve patient outcome. P(pl,ee) was obtained from the absolute value of esophageal pressure (P(es)) by subtracting an invariant value of 5 cm H(2)O. The goal of the present study was to compare 2 methods for correcting absolute P(es) values in terms of resulting P(pl,ee) and recommended PEEP. Measurements collected prospectively from 42 subjects with various forms of acute hypoxemic respiratory failure receiving mechanical ventilation in ICU were analyzed. P(es) was measured at PEEP (P(es,ee)) and at relaxation volume of the respiratory system Vr (P(es,Vr)), obtained by allowing the subject to exhale into the atmosphere (zero PEEP). Two methods for correcting P(es) were compared: Talmor method (P(pl,ee,Talmor) = P(es,ee) - 5 cm H(2)O), and Vr method (P(es,ee,Vr) = P(es,ee) - P(es,Vr)). The rationale was that P(es,Vr) was a more physiologically based correction factor than an invariant value of 5 cm H(2)O applied to all subjects. Over the 42 subjects, median and interquartile range of P(es,ee) and P(es,Vr) were 11 (7-14) cm H(2)O and 8 (4-11) cm H(2)O, respectively. P(pl,ee,Talmor) was 6 (1-8) cm H(2)O, and P(es,ee,Vr) was 2 (1-5) cm H(2)O (P = .008). Two groups of subjects were defined, based on the difference between the 2 corrected values. In 28 subjects P(pl,ee,Talmor) was ≥ P(es,ee,Vr) (7 [5-9] cm H(2)O vs 2 [1-5] cm H(2)O, respectively), while in 14 subjects P(es,ee,Vr) was > P(pl,ee,Talmor) (2 [0-4] cm H(2)O vs -1 [-3 to 2] cm H(2)O, respectively). P(pl,ee,Vr) was significantly greater than P(pl,ee,Talmor) (7 [5-11] cm H(2)O vs 5 [2-7] cm H(2)O) in the former, and significantly lower in the latter (1 [-2 to 6] cm H(2)O vs 6 [4-9] cm H(2)O). Referring absolute P(es) values to Vr rather than to an invariant value would be better adapted to a patient's physiological background. Further studies are required to determine whether this correction method might improve patient outcome.
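
    The two corrections reduce to the one-line formulas quoted in the abstract; a trivial sketch, using the abstract's median values as example input (the median of the per-patient Vr-corrected values, 2 cm H2O, need not equal the difference of the medians computed here):

        def ppl_ee_talmor(pes_ee):
            """Talmor method from the abstract: subtract an invariant 5 cm H2O."""
            return pes_ee - 5.0

        def ppl_ee_vr(pes_ee, pes_vr):
            """Vr method from the abstract: subtract Pes measured at the relaxation
            volume of the respiratory system (zero PEEP) for the same patient."""
            return pes_ee - pes_vr

        # Median values reported in the abstract: Pes,ee = 11, Pes,Vr = 8 cm H2O.
        print(ppl_ee_talmor(11.0), ppl_ee_vr(11.0, 8.0))   # 6.0 vs 3.0 cm H2O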

  2. Cosmic Strings Stabilized by Quantum Fluctuations

    NASA Astrophysics Data System (ADS)

    Weigel, H.

    2017-03-01

    Fermion quantum corrections to the energy of cosmic strings are computed. A number of rather technical tools are needed to formulate this correction, and isospin and gauge invariance are employed to verify consistency of these tools. These corrections must also be included when computing the energy of strings that are charged by populating fermion bound states in its background. It is found that charged strings are dynamically stabilized in theories similar to the standard model of particle physics.

  3. Observational constraints on loop quantum cosmology.

    PubMed

    Bojowald, Martin; Calcagni, Gianluca; Tsujikawa, Shinji

    2011-11-18

    In the inflationary scenario of loop quantum cosmology in the presence of inverse-volume corrections, we give analytic formulas for the power spectra of scalar and tensor perturbations convenient to compare with observations. Since inverse-volume corrections can provide strong contributions to the running spectral indices, inclusion of terms higher than the second-order runnings in the power spectra is crucially important. Using the recent data of cosmic microwave background and other cosmological experiments, we place bounds on the quantum corrections.

  4. Nanowire growth kinetics in aberration corrected environmental transmission electron microscopy

    DOE PAGES

    Chou, Yi -Chia; Panciera, Federico; Reuter, Mark C.; ...

    2016-03-15

    Here, we visualize atomic level dynamics during Si nanowire growth using aberration corrected environmental transmission electron microscopy, and compare with lower pressure results from ultra-high vacuum microscopy. We discuss the importance of higher pressure observations for understanding growth mechanisms and describe protocols to minimize effects of the higher pressure background gas.

  5. A National Survey of Mental Health Screening and Assessment Practices in Juvenile Correctional Facilities

    ERIC Educational Resources Information Center

    Swank, Jacqueline M.; Gagnon, Joseph C.

    2017-01-01

    Background: Mental health screening and assessment is crucial within juvenile correctional facilities (JC). However, limited information is available about the current screening and assessment procedures specifically within JC. Objective: The purpose of the current study was to obtain information about the mental health screening and assessment…

  6. Impact of Next-to-Leading Order Contributions to Cosmic Microwave Background Lensing.

    PubMed

    Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth

    2017-05-26

    In this Letter we study the impact on cosmological parameter estimation, from present and future surveys, due to lensing corrections on cosmic microwave background temperature and polarization anisotropies beyond leading order. In particular, we show how post-Born corrections, large-scale structure effects, and the correction due to the change in the polarization direction between the emission at the source and the detection at the observer are non-negligible in the determination of the polarization spectra. They have to be taken into account for an accurate estimation of cosmological parameters sensitive to or even based on these spectra. We study in detail the impact of higher order lensing on the determination of the tensor-to-scalar ratio r and on the estimation of the effective number of relativistic species N_{eff}. We find that neglecting higher order lensing terms can lead to misinterpreting these corrections as a primordial tensor-to-scalar ratio of about O(10^{-3}). Furthermore, it leads to a shift of the parameter N_{eff} by nearly 2σ at the level of accuracy targeted by future S4 surveys.

  7. Quantum gravitational contributions to the cosmic microwave background anisotropy spectrum.

    PubMed

    Kiefer, Claus; Krämer, Manuel

    2012-01-13

    We derive the primordial power spectrum of density fluctuations in the framework of quantum cosmology. For this purpose we perform a Born-Oppenheimer approximation to the Wheeler-DeWitt equation for an inflationary universe with a scalar field. In this way, we first recover the scale-invariant power spectrum that is found as an approximation in the simplest inflationary models. We then obtain quantum gravitational corrections to this spectrum and discuss whether they lead to measurable signatures in the cosmic microwave background anisotropy spectrum. The nonobservation so far of such corrections translates into an upper bound on the energy scale of inflation.

  8. Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers

    PubMed Central

    Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat

    2008-01-01

    Background Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. Methods In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Results Simulation studies showed that repeated 10-fold cross-validation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Conclusion Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided. PMID:19036144
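
    For readers unfamiliar with decision curves, the quantity plotted is the net benefit of a prediction model across threshold probabilities. The sketch below illustrates only that core calculation, not the authors' software or the extensions described in the record; the function name and toy data are assumptions.

    ```python
    # Illustrative sketch of the net-benefit calculation underlying a decision
    # curve (assumed implementation, not the authors' software).
    import numpy as np

    def net_benefit(y_true, y_prob, thresholds):
        """Net benefit = TP/n - FP/n * pt/(1-pt) at each threshold pt."""
        y_true = np.asarray(y_true, dtype=float)
        y_prob = np.asarray(y_prob, dtype=float)
        n = len(y_true)
        out = []
        for pt in thresholds:
            treat = y_prob >= pt                     # patients the model would treat
            tp = np.sum(treat & (y_true == 1))       # true positives
            fp = np.sum(treat & (y_true == 0))       # false positives
            out.append(tp / n - fp / n * pt / (1.0 - pt))
        return np.array(out)

    # Toy data for illustration only.
    thresholds = np.linspace(0.05, 0.5, 10)
    y = np.array([0, 1, 1, 0, 1, 0, 0, 1])
    p = np.array([0.2, 0.8, 0.6, 0.3, 0.9, 0.1, 0.4, 0.7])
    print(net_benefit(y, p, thresholds))
    ```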

  9. Acquisition and processing of data for isotope-ratio-monitoring mass spectrometry

    NASA Technical Reports Server (NTRS)

    Ricci, M. P.; Merritt, D. A.; Freeman, K. H.; Hayes, J. M.

    1994-01-01

    Methods are described for continuous monitoring of signals required for precise analyses of 13C, 18O, and 15N in gas streams containing varying quantities of CO2 and N2. The quantitative resolution (i.e. maximum performance in the absence of random errors) of these methods is adequate for determination of isotope ratios with an uncertainty of one part in 10(5); the precision actually obtained is often better than one part in 10(4). This report describes data-processing operations including definition of beginning and ending points of chromatographic peaks and quantitation of background levels, allowance for effects of chromatographic separation of isotopically substituted species, integration of signals related to specific masses, correction for effects of mass discrimination, recognition of drifts in mass spectrometer performance, and calculation of isotopic delta values. Characteristics of a system allowing off-line revision of parameters used in data reduction are described and an algorithm for identification of background levels in complex chromatograms is outlined. Effects of imperfect chromatographic resolution are demonstrated and discussed, and an approach to deconvolution of signals from coeluting substances is described.
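
    As a rough illustration of two of the operations listed (quantitation of background levels and integration of mass-specific signals), the sketch below estimates a chromatographic baseline with a rolling low percentile and integrates a background-corrected peak. This is a generic stand-in, not the algorithm described in the report; all names, parameters, and data are assumptions.

    ```python
    # Illustrative only: crude baseline estimate and peak integration for a
    # chromatographic trace (not the report's background-identification algorithm).
    import numpy as np

    def subtract_background(signal, window=51, q=10):
        """Estimate background as a rolling low percentile and subtract it."""
        half = window // 2
        padded = np.pad(signal, half, mode="edge")
        baseline = np.array([np.percentile(padded[i:i + window], q)
                             for i in range(len(signal))])
        return signal - baseline, baseline

    def integrate_peak(signal, start, stop, dt=1.0):
        """Trapezoidal integration of a background-corrected peak."""
        return np.trapz(signal[start:stop], dx=dt)

    # Synthetic trace: drifting background plus one Gaussian peak.
    t = np.arange(500)
    trace = 50 + 0.02 * t + 300 * np.exp(-0.5 * ((t - 250) / 8.0) ** 2)
    corrected, baseline = subtract_background(trace)
    print(integrate_peak(corrected, 220, 280))
    ```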

  10. A Fundamental Study on Spectrum Center Estimation of Solar Spectral Irradiation by the Statistical Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Iijima, Aya; Suzuki, Kazumi; Wakao, Shinji; Kawasaki, Norihiro; Usami, Akira

    Against the background of environmental problems and energy issues, PV systems are expected to be introduced rapidly and connected to power grids on a large scale in the future. This raises concern about how PV power generation will affect the adjustment of electricity supply and demand, so techniques for accurately estimating PV power generation are becoming increasingly important. PV power generation depends on solar irradiance, module temperature and solar spectral irradiance. Solar spectral irradiance is the distribution of light intensity over wavelength. Because the spectral sensitivity differs among types of solar cells, solar spectral irradiance is important for an exact estimate of PV power generation. However, preparing solar spectral irradiance data is not easy because the instruments needed to observe it are expensive. With this background, in this paper we propose a new method based on statistical pattern recognition for estimating the spectrum center, which is a representative index of solar spectral irradiance. Some numerical examples obtained by the proposed method are also presented.

  11. Decision support in psychiatry – a comparison between the diagnostic outcomes using a computerized decision support system versus manual diagnosis

    PubMed Central

    Bergman, Lars G; Fors, Uno GH

    2008-01-01

    Background Correct diagnosis in psychiatry may be improved by novel diagnostic procedures. Computerized Decision Support Systems (CDSS) are suggested to be able to improve diagnostic procedures, but some studies indicate possible problems. Therefore, it could be important to investigate CDSS systems with regard to their feasibility to improve diagnostic procedures as well as to save time. Methods This study was undertaken to compare the traditional 'paper and pencil' diagnostic method SCID1 with the computer-aided diagnostic system CB-SCID1 to ascertain processing time and accuracy of diagnoses suggested. Sixty-three clinicians volunteered to participate in the study and to solve two paper-based cases either with a CDSS or manually. Results No major difference between paper and pencil and computer-supported diagnosis was found. Where a difference was found it was in favour of paper and pencil. For example, a significantly shorter time was found for paper and pencil for the difficult case, as compared to computer support. A significantly higher number of correct diagnoses were found in the difficult case for the diagnosis 'Depression' using the paper and pencil method. Although a majority of the clinicians found the computer method supportive and easy to use, it took a longer time and yielded fewer correct diagnoses than with paper and pencil. Conclusion This study could not detect any major difference in diagnostic outcome between traditional paper and pencil methods and computer support for psychiatric diagnosis. Where there were significant differences, traditional paper and pencil methods were better than the tested CDSS and thus we conclude that CDSS for diagnostic procedures may interfere with diagnosis accuracy. A limitation was that most clinicians had not previously used the CDSS system under study. The results of this study, however, confirm that CDSS development for diagnostic purposes in psychiatry has much to deal with before it can be used for routine clinical purposes. PMID:18261222

  12. Saturated linkage map construction in Rubus idaeus using genotyping by sequencing and genome-independent imputation

    PubMed Central

    2013-01-01

    Background Rapid development of highly saturated genetic maps aids molecular breeding, which can accelerate gain per breeding cycle in woody perennial plants such as Rubus idaeus (red raspberry). Recently, robust genotyping methods based on high-throughput sequencing were developed, which provide high marker density, but result in some genotype errors and a large number of missing genotype values. Imputation can reduce the number of missing values and can correct genotyping errors, but current methods of imputation require a reference genome and thus are not an option for most species. Results Genotyping by Sequencing (GBS) was used to produce highly saturated maps for a R. idaeus pseudo-testcross progeny. While low coverage and high variance in sequencing resulted in a large number of missing values for some individuals, a novel method of imputation based on maximum likelihood marker ordering from initial marker segregation overcame the challenge of missing values, and made map construction computationally tractable. The two resulting parental maps contained 4521 and 2391 molecular markers spanning 462.7 and 376.6 cM respectively over seven linkage groups. Detection of precise genomic regions with segregation distortion was possible because of map saturation. Microsatellites (SSRs) linked these results to published maps for cross-validation and map comparison. Conclusions GBS together with genome-independent imputation provides a rapid method for genetic map construction in any pseudo-testcross progeny. Our method of imputation estimates the correct genotype call of missing values and corrects genotyping errors that lead to inflated map size and reduced precision in marker placement. Comparison of SSRs to published R. idaeus maps showed that the linkage maps constructed with GBS and our method of imputation were robust, and marker positioning reliable. The high marker density allowed identification of genomic regions with segregation distortion in R. idaeus, which may help to identify deleterious alleles that are the basis of inbreeding depression in the species. PMID:23324311

  13. HST/WFC3: Understanding and Mitigating Radiation Damage Effects in the CCD Detectors

    NASA Astrophysics Data System (ADS)

    Baggett, S.; Anderson, J.; Sosey, M.; MacKenty, J.; Gosmeyer, C.; Noeske, K.; Gunning, H.; Bourque, M.

    2015-09-01

    At the heart of the Hubble Space Telescope Wide Field Camera 3 (HST/WFC3) UVIS channel resides a 4096x4096 pixel e2v CCD array. While these detectors are performing extremely well after more than 5 years in low-earth orbit, the cumulative effects of radiation damage cause a continual growth in the hot pixel population and a progressive loss in charge transfer efficiency (CTE) over time. The decline in CTE has two effects: (1) it reduces the detected source flux as the defects trap charge during readout and (2) it systematically shifts source centroids as the trapped charge is later released. The flux losses can be significant, particularly for faint sources in low background images. Several mitigation options exist, including target placement within the field of view, empirical stellar photometric corrections, post-flash mode and an empirical pixel-based CTE correction. The application of a post-flash has been remarkably effective in WFC3 at reducing CTE losses in low background images for a relatively small noise penalty. Currently all WFC3 observers are encouraged to post-flash images with low backgrounds. Another powerful option in mitigating CTE losses is the pixel-based CTE correction. Analogous to the CTE correction software currently in use in the HST Advanced Camera for Surveys (ACS) pipeline, the algorithm employs an empirical observationally-constrained model of how much charge is captured and released in order to reconstruct the image. Applied to images (with or without post-flash) after they are acquired, the software is currently available as a standalone routine. The correction will be incorporated into the standard WFC3 calibration pipeline.

  14. Corner detection and sorting method based on improved Harris algorithm in camera calibration

    NASA Astrophysics Data System (ADS)

    Xiao, Ying; Wang, Yonghong; Dan, Xizuo; Huang, Anqi; Hu, Yue; Yang, Lianxiang

    2016-11-01

    In the traditional Harris corner detection algorithm, the threshold used to eliminate false corners is selected manually. In order to detect corners automatically, an improved algorithm that combines Harris with the circular boundary theory of corners is proposed in this paper. After detecting accurate corner coordinates using the Harris and Forstner algorithms, false corners within the chessboard pattern of the calibration plate can be eliminated automatically using circular boundary theory. Moreover, a corner sorting method based on an improved calibration plate is proposed to eliminate false background corners and sort the remaining corners in order. Experimental results show that the proposed algorithms can eliminate all false corners and sort the remaining corners correctly and automatically.
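
    For context, the sketch below shows the baseline this record starts from: standard Harris corner detection with a manually chosen threshold (the step the proposed method automates). It uses OpenCV; the Forstner refinement, circular-boundary elimination, and corner sorting of the paper are not reproduced, and the file name and parameter values are assumptions.

    ```python
    # Baseline Harris corner detection with a manually chosen threshold.
    # Sketch only; the improved automatic elimination/sorting is not shown.
    import cv2
    import numpy as np

    img = cv2.imread("chessboard.png")                      # placeholder file name
    gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

    # Harris response with typical defaults: blockSize=2, Sobel aperture=3, k=0.04.
    response = cv2.cornerHarris(gray, 2, 3, 0.04)

    # Manual threshold as a fraction of the maximum response.
    threshold = 0.01 * response.max()
    corners = np.argwhere(response > threshold)             # (row, col) candidates
    print(f"{len(corners)} candidate corners above threshold")
    ```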

  15. Enhanced confinement in electron cyclotron resonance ion source plasma.

    PubMed

    Schachter, L; Stiebing, K E; Dobrescu, S

    2010-02-01

    Power loss by plasma-wall interactions may become a limitation for the performance of ECR and fusion plasma devices. Building on our research to optimize the performance of electron cyclotron resonance ion source (ECRIS) devices by the use of metal-dielectric (MD) structures, the method presented here significantly improves the confinement of plasma electrons and hence reduces losses. Dedicated measurements were performed at the Frankfurt 14 GHz ECRIS using argon and helium as working gases and a high-temperature resistive material for the MD structures. The analyzed charge state distributions and bremsstrahlung radiation spectra (corrected for background) also clearly verify the anticipated increase in the plasma-electron density and hence demonstrate the advantage of the MD method.

  16. 49 CFR Appendix E to Part 227 - Use of Insert Earphones for Audiometric Testing

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION OCCUPATIONAL NOISE EXPOSURE Pt. 227, App. E Appendix.... B. Technicians who conduct audiometric tests must be trained to insert the earphones correctly into... audiometer. IV. Background Noise Levels Testing shall be conducted in a room where the background ambient...

  17. FAST Modularization Framework for Wind Turbine Simulation: Full-System Linearization

    DOE PAGES

    Jonkman, Jason M.; Jonkman, Bonnie J.

    2016-10-03

    The wind engineering community relies on multiphysics engineering software to run nonlinear time-domain simulations e.g. for design-standards-based loads analysis. Although most physics involved in wind energy are nonlinear, linearization of the underlying nonlinear system equations is often advantageous to understand the system response and exploit well-established methods and tools for analyzing linear systems. Here, this paper presents the development and verification of the new linearization functionality of the open-source engineering tool FAST v8 for land-based wind turbines, as well as the concepts and mathematical background needed to understand and apply it correctly.

  18. FAST modularization framework for wind turbine simulation: full-system linearization

    NASA Astrophysics Data System (ADS)

    Jonkman, J. M.; Jonkman, B. J.

    2016-09-01

    The wind engineering community relies on multiphysics engineering software to run nonlinear time-domain simulations e.g. for design-standards-based loads analysis. Although most physics involved in wind energy are nonlinear, linearization of the underlying nonlinear system equations is often advantageous to understand the system response and exploit well-established methods and tools for analyzing linear systems. This paper presents the development and verification of the new linearization functionality of the open-source engineering tool FAST v8 for land-based wind turbines, as well as the concepts and mathematical background needed to understand and apply it correctly.

  19. Mariner-Venus-Mercury optical navigation demonstration - Results and implications for future missions

    NASA Technical Reports Server (NTRS)

    Acton, C. H., Jr.; Ohtakay, H.

    1975-01-01

    Optical navigation uses spacecraft television pictures of a target body against a known star background in a process which relates the spacecraft trajectory to the target body. This technology was used in the Mariner-Venus-Mercury mission, with the optical data processed in near-real-time, simulating a mission critical environment. Optical data error sources were identified, and a star location error analysis was carried out. Several methods for selecting limb crossing coordinates were used, and a limb smear compensation was introduced. Omission of planetary aberration corrections was the source of large optical residuals.

  20. A method to correct coordinate distortion in EBSD maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y.B., E-mail: yubz@dtu.dk; Elbrønd, A.; Lin, F.X.

    2014-10-15

    Drift during electron backscatter diffraction mapping leads to coordinate distortions in the resulting orientation maps, which affects, in some cases significantly, the accuracy of analysis. A method, the thin plate spline, is introduced and tested to correct such coordinate distortions in the maps after the electron backscatter diffraction measurements. The accuracy of the correction as well as theoretical and practical aspects of using the thin plate spline method are discussed in detail. By comparison with other correction methods, it is shown that the thin plate spline method is the most efficient at correcting different local distortions in the electron backscatter diffraction maps. Highlights: • A new method is suggested to correct nonlinear spatial distortion in EBSD maps. • The method corrects EBSD maps more precisely than presently available methods. • Errors of less than 1–2 pixels are typically obtained. • Direct quantitative analysis of dynamic data is available after this correction.
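
    As an illustration of the general idea (not the authors' implementation), a thin plate spline mapping can be fitted from control points whose true positions are known and then applied to every pixel coordinate of the distorted map. The sketch below uses SciPy's RBF interpolator with a thin-plate-spline kernel; the control-point coordinates are made up.

    ```python
    # Sketch of thin-plate-spline coordinate correction from matched control
    # points (illustrative; not the authors' code).
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Control points: coordinates in the distorted map and their true positions.
    distorted_pts = np.array([[10, 12], [200, 15], [15, 180], [210, 190], [100, 95]], float)
    reference_pts = np.array([[10, 10], [198, 11], [12, 182], [205, 193], [99, 97]], float)

    # One thin-plate-spline model mapping distorted (x, y) -> corrected (x, y).
    tps = RBFInterpolator(distorted_pts, reference_pts, kernel="thin_plate_spline")

    # Apply the correction to every pixel coordinate of a (ny, nx) map.
    ny, nx = 256, 256
    yy, xx = np.mgrid[0:ny, 0:nx]
    coords = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
    corrected = tps(coords).reshape(ny, nx, 2)   # corrected (x, y) for each pixel
    ```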

  1. Aberration corrections for free-space optical communications in atmosphere turbulence using orbital angular momentum states.

    PubMed

    Zhao, S M; Leach, J; Gong, L Y; Ding, J; Zheng, B Y

    2012-01-02

    The effect of atmospheric turbulence on light's spatial structure compromises the information capacity of photons carrying Orbital Angular Momentum (OAM) in free-space optical (FSO) communications. In this paper, we study two aberration correction methods to mitigate this effect. The first is the Shack-Hartmann wavefront correction method, which is based on Zernike polynomials, and the second is a phase correction method specific to OAM states. Our numerical results show that the phase correction method for OAM states outperforms the Shack-Hartmann wavefront correction method, although both methods significantly improve the purity of a single OAM state and the channel capacity of the FSO communication link. At the same time, our experimental results show that the values of the participation function decrease when the phase correction method for OAM states is applied, i.e., the correction method effectively mitigates the adverse effect of atmospheric turbulence.

  2. Assessing population genetic structure via the maximisation of genetic distance

    PubMed Central

    2009-01-01

    Background The inference of the hidden structure of a population is an essential issue in population genetics. Several methods have recently been proposed to infer population structure. Methods In this study, a new method (MGD, based on the Maximisation of Genetic Distance) to infer the number of clusters and to assign individuals to the inferred populations is proposed. This approach does not make any assumption on Hardy-Weinberg and linkage equilibrium. The implemented criterion is the maximisation (via a simulated annealing algorithm) of the averaged genetic distance between a predefined number of clusters. The performance of this method is compared with two Bayesian approaches: STRUCTURE and BAPS, using simulated data and also a real human data set. Results The simulations show that with a reduced number of markers, BAPS overestimates the number of clusters and presents a reduced proportion of correct groupings. The accuracy of the new method is approximately the same as for STRUCTURE. Also, in Hardy-Weinberg and linkage disequilibrium cases, BAPS performs incorrectly. In these situations, STRUCTURE and the new method show an equivalent behaviour with respect to the number of inferred clusters, although the proportion of correct groupings is slightly better with the new method. Re-establishing equilibrium with the randomisation procedures improves the precision of the Bayesian approaches. All methods have a good precision for FST ≥ 0.03, but only STRUCTURE estimates the correct number of clusters for FST as low as 0.01. In situations with a high number of clusters or a more complex population structure, MGD performs better than STRUCTURE and BAPS. The results for a human data set analysed with the new method are congruent with the geographical regions previously found. Conclusion This new method, based on the maximisation of the genetic distance and not relying on any assumption about Hardy-Weinberg and linkage equilibrium, performs well under different simulated scenarios and with real data. Therefore, it could be a useful tool to determine genetically homogeneous groups, especially in situations where the number of clusters is high, the population structure is complex, and Hardy-Weinberg and/or linkage disequilibrium are present. PMID:19900278
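
    The optimisation idea can be illustrated with a toy sketch: simulated annealing over cluster labels that maximises the mean between-cluster distance. This is not the MGD software; it uses a plain Euclidean distance matrix as a stand-in for a genetic distance, and all parameter choices and data are assumptions.

    ```python
    # Toy sketch of the optimisation idea only (not the MGD software):
    # simulated annealing over cluster labels to maximise the mean
    # between-cluster distance, using a precomputed pairwise distance matrix D.
    import numpy as np

    def mean_between_cluster_distance(D, labels):
        mask = labels[:, None] != labels[None, :]       # pairs in different clusters
        return D[mask].mean() if mask.any() else 0.0

    def anneal(D, k, n_iter=20000, t0=1.0, cooling=0.9995, seed=0):
        rng = np.random.default_rng(seed)
        n = D.shape[0]
        labels = rng.integers(0, k, n)
        score = mean_between_cluster_distance(D, labels)
        t = t0
        for _ in range(n_iter):
            i = rng.integers(n)
            new_labels = labels.copy()
            new_labels[i] = rng.integers(k)             # reassign one individual
            new_score = mean_between_cluster_distance(D, new_labels)
            if new_score > score or rng.random() < np.exp((new_score - score) / t):
                labels, score = new_labels, new_score
            t *= cooling
        return labels, score

    # Synthetic data: two groups of individuals described by 5 variables each.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    labels, score = anneal(D, k=2)
    print(score, labels)
    ```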

  3. Safety of telephone triage in general practitioner cooperatives: do triage nurses correctly estimate urgency?

    PubMed Central

    Giesen, Paul; Ferwerda, Rosa; Tijssen, Roelie; Mokkink, Henk; Drijver, Roeland; van den Bosch, Wil; Grol, Richard

    2007-01-01

    Background In recent years, there has been a growth in the use of triage nurses to decrease general practitioner (GP) workloads and increase the efficiency of telephone triage. The actual safety of decisions made by triage nurses has not yet been assessed. Objectives To investigate whether triage nurses accurately estimate the urgency level of health complaints when using the national telephone guidelines, and to examine the relationship between the performance of triage nurses and their education and training. Method A cross‐sectional, multicentre, observational study employing five mystery (simulated) patients who telephoned triage nurses in four GP cooperatives. The mystery patients played standardised roles. Each role had one of four urgency levels as determined by experts. The triage nurses called were asked to estimate the level of urgency after the contact. This level of urgency was compared with a gold standard. Results Triage nurses estimated the level of urgency of 69% of the 352 contacts correctly and underestimated the level of urgency of 19% of the contacts. The sensitivity and specificity of the urgency estimates provided by the triage nurses were found to be 0.76 and 0.95, respectively. The positive and negative predictive values of the urgency estimates were 0.83 and 0.93, respectively. A significant correlation was found between correct estimation of urgency and specific training on the use of the guidelines. The educational background (primary or secondary care) of the nurses had no significant relationship with the rate of underestimation. Conclusion Telephone triage by triage nurses is efficient but possibly not safe, with potentially severe consequences for the patient. An educational programme for triage nurses is recommended. Also, a direct second safety check of all cases by a specially trained GP telephone doctor is advisable. PMID:17545343

  4. Far Infrared Spectrometry of the Cosmic Background Radiation

    DOE R&D Accomplishments Database

    Mather, J. C.

    1974-01-01

    I describe two experiments to measure the cosmic background radiation near 1 mm wavelength. The first was a ground-based search for spectral lines, made with a Fabry-Perot interferometer and an InSb detector. The second is a measurement of the spectrum from 3 to 18 cm⁻¹, made with a balloon-borne Fourier transform spectrometer. It is a polarizing Michelson interferometer, cooled in liquid helium, and operated with a germanium bolometer. I give the theory of operation, construction details, and experimental results. The first experiment was successfully completed but the second suffered equipment malfunction on its first flight. I describe the theory of Fourier transformations and give a new understanding of convolutional phase correction computations. I discuss far-infrared bolometer calibration procedures, and tabulate test results on nine detectors. I describe methods of improving bolometer sensitivity with immersion optics and with conductive film blackening.

  5. Guidelines for Developing and Reporting Machine Learning Predictive Models in Biomedical Research: A Multidisciplinary View

    PubMed Central

    2016-01-01

    Background As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. Objective To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. Methods A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. Results The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. Conclusions A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. PMID:27986644

  6. Analysis of in vivo correction of defined mismatches in the DNA mismatch repair mutants msh2, msh3 and msh6 of Saccharomyces cerevisiae.

    PubMed

    Lühr, B; Scheller, J; Meyer, P; Kramer, W

    1998-02-01

    We have analysed the correction of defined mismatches in wild-type and msh2, msh3, msh6 and msh3 msh6 mutants of Saccharomyces cerevisiae in two different yeast strain backgrounds by transformation with plasmid heteroduplex DNA constructs. Ten different base/base mismatches, two single-nucleotide loops and a 38-nucleotide loop were tested. Repair of all types of mismatches was severely impaired in msh2 and msh3 msh6 mutants. In msh6 mutants, repair efficiency of most base/base mismatches was reduced to a similar extent as in msh3 msh6 double mutants. G/T and A/C mismatches, however, displayed residual repair in msh6 mutants in one strain background, implying a role for Msh3p in recognition of base/base mismatches. Furthermore, the efficiency of repair of base/base mismatches was considerably reduced in msh3 mutants in one strain background, indicating a requirement for MSH3 for fully efficient mismatch correction. Also the efficiency of repair of the 38-nucleotide loop was reduced in msh3 mutants, and to a lesser extent in msh6 mutants. The single-nucleotide loop with an unpaired A was less efficiently repaired in msh3 mutants and that with an unpaired T was less efficiently corrected in msh6 mutants, indicating non-redundant functions for the two proteins in the recognition of single-nucleotide loops.

  7. Impact of time-of-flight PET on quantification errors in MR imaging-based attenuation correction.

    PubMed

    Mehranian, Abolfazl; Zaidi, Habib

    2015-04-01

    Time-of-flight (TOF) PET/MR imaging is an emerging imaging technology with great capabilities offered by TOF to improve image quality and lesion detectability. We assessed, for the first time, the impact of TOF image reconstruction on PET quantification errors induced by MR imaging-based attenuation correction (MRAC) using simulation and clinical PET/CT studies. Standard 4-class attenuation maps were derived by segmentation of CT images of 27 patients undergoing PET/CT examinations into background air, lung, soft-tissue, and fat tissue classes, followed by the assignment of predefined attenuation coefficients to each class. For each patient, 4 PET images were reconstructed: non-TOF and TOF both corrected for attenuation using reference CT-based attenuation correction and the resulting 4-class MRAC maps. The relative errors between non-TOF and TOF MRAC reconstructions were compared with their reference CT-based attenuation correction reconstructions. The bias was locally and globally evaluated using volumes of interest (VOIs) defined on lesions and normal tissues and CT-derived tissue classes containing all voxels in a given tissue, respectively. The impact of TOF on reducing the errors induced by metal-susceptibility and respiratory-phase mismatch artifacts was also evaluated using clinical and simulation studies. Our results show that TOF PET can remarkably reduce attenuation correction artifacts and quantification errors in the lungs and bone tissues. Using classwise analysis, it was found that the non-TOF MRAC method results in an error of -3.4% ± 11.5% in the lungs and -21.8% ± 2.9% in bones, whereas its TOF counterpart reduced the errors to -2.9% ± 7.1% and -15.3% ± 2.3%, respectively. The VOI-based analysis revealed that the non-TOF and TOF methods resulted in an average overestimation of 7.5% and 3.9% in or near lung lesions (n = 23) and underestimation of less than 5% for soft tissue and in or near bone lesions (n = 91). Simulation results showed that as TOF resolution improves, artifacts and quantification errors are substantially reduced. TOF PET substantially reduces artifacts and improves significantly the quantitative accuracy of standard MRAC methods. Therefore, MRAC should be less of a concern on future TOF PET/MR scanners with improved timing resolution. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  8. A velocity-correction projection method based immersed boundary method for incompressible flows

    NASA Astrophysics Data System (ADS)

    Cai, Shanggui

    2014-11-01

    In the present work we propose a novel direct forcing immersed boundary method based on the velocity-correction projection method of [J.L. Guermond, J. Shen, Velocity-correction projection methods for incompressible flows, SIAM J. Numer. Anal., 41 (1) (2003) 112]. The principal idea of the immersed boundary method is to correct the velocity in the vicinity of the immersed object by using an artificial force to mimic the presence of the physical boundaries. Therefore, the velocity-correction projection method is preferred to its pressure-correction counterpart in the present work. Since the velocity-correction projection method can be regarded as the dual of the pressure-correction method, the proposed method can also be interpreted as follows: first the pressure is predicted by treating the viscous term explicitly without consideration of the immersed boundary, and the solenoidal velocity is used to determine the volume force on the Lagrangian points; then the no-slip boundary condition is enforced by correcting the velocity with the implicit viscous term. To demonstrate the efficiency and accuracy of the proposed method, several numerical simulations are performed and compared with results in the literature. Supported by the China Scholarship Council.
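
    Schematically, and only as an assumed illustration of the splitting described (the exact operators and time levels of the cited works may differ), the two steps can be written as:

    ```latex
    % Assumed schematic of a velocity-correction splitting (illustration only;
    % see Guermond & Shen (2003) for the precise scheme).
    % Step 1: predict the pressure with the viscous term treated explicitly,
    %         enforcing incompressibility on the intermediate velocity.
    \[
    \frac{\tilde{\mathbf{u}}^{\,n+1}-\mathbf{u}^{n}}{\Delta t}
      + \nabla p^{\,n+1}
      = -(\mathbf{u}\cdot\nabla\mathbf{u})^{n} + \nu\,\nabla^{2}\mathbf{u}^{n},
    \qquad \nabla\cdot\tilde{\mathbf{u}}^{\,n+1}=0 .
    \]
    % Step 2: correct the velocity with the viscous term implicit; the
    %         immersed-boundary force f_IB enforces the no-slip condition.
    \[
    \frac{\mathbf{u}^{n+1}-\tilde{\mathbf{u}}^{\,n+1}}{\Delta t}
      - \nu\,\nabla^{2}\bigl(\mathbf{u}^{n+1}-\mathbf{u}^{n}\bigr)
      = \mathbf{f}_{\mathrm{IB}}^{\,n+1}.
    \]
    ```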

  9. Did we describe what you meant? Findings and methodological discussion of an empirical validation study for a systematic review of reasons

    PubMed Central

    2014-01-01

    Background The systematic review of reasons is a new way to obtain comprehensive information about specific ethical topics. One such review was carried out for the question of why post-trial access to trial drugs should or need not be provided. The objective of this study was to empirically validate this review using an author check method. The article also reports on methodological challenges faced by our study. Methods We emailed a questionnaire to the 64 corresponding authors of those papers that were assessed in the review of reasons on post-trial access. The questionnaire consisted of all quotations (“reason mentions”) that were identified by the review to represent a reason in a given author’s publication, together with a set of codings for the quotations. The authors were asked to rate the correctness of the codings. Results We received 19 responses, from which only 13 were completed questionnaires. In total, 98 quotations and their related codes in the 13 questionnaires were checked by the addressees. For 77 quotations (79%), all codings were deemed correct, for 21 quotations (21%), some codings were deemed to need correction. Most corrections were minor and did not imply a complete misunderstanding of the citation. Conclusions This first attempt to validate a review of reasons leads to four crucial methodological questions relevant to the future conduct of such validation studies: 1) How can a description of a reason be deemed incorrect? 2) Do the limited findings of this author check study enable us to determine whether the core results of the analysed SRR are valid? 3) Why did the majority of surveyed authors refrain from commenting on our understanding of their reasoning? 4) How can the method for validating reviews of reasons be improved? PMID:25262532

  10. Investigation of the halo-artifact in 68Ga-PSMA-11-PET/MRI.

    PubMed

    Heußer, Thorsten; Mann, Philipp; Rank, Christopher M; Schäfer, Martin; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Hadaschik, Boris A; Kopka, Klaus; Bachert, Peter; Kachelrieß, Marc; Freitag, Martin T

    2017-01-01

    Combined positron emission tomography (PET) and magnetic resonance imaging (MRI) targeting the prostate-specific membrane antigen (PSMA) with a 68Ga-labelled PSMA-analog (68Ga-PSMA-11) is discussed as a promising diagnostic method for patients with suspicion or history of prostate cancer. One potential drawback of this method is the occurrence of severe photopenic (halo) artifacts surrounding the bladder and the kidneys in the scatter-corrected PET images, which have been reported to occur frequently in clinical practice. The goal of this work was to investigate the occurrence and impact of these artifacts and, secondly, to evaluate variants of the standard scatter correction method with regard to halo-artifact suppression. Experiments using a dedicated pelvis phantom were conducted to investigate whether the halo-artifact is modality-, tracer-, and/or concentration-dependent. Furthermore, 31 patients with a history of prostate cancer were selected from an ongoing 68Ga-PSMA-11-PET/MRI study. For each patient, PET raw data were reconstructed employing six different variants of PET scatter correction: absolute scatter scaling, relative scatter scaling, and relative scatter scaling combined with prompt gamma correction, each of which was combined with a maximum scatter fraction (MaxSF) of MaxSF = 75% or MaxSF = 40%. Evaluation of the reconstructed images with regard to halo-artifact suppression was performed both quantitatively using statistical analysis and qualitatively by two independent readers. The phantom experiments did not reveal any modality-dependency (PET/MRI vs. PET/CT) or tracer-dependency (68Ga vs. 18F-FDG). Patient- and phantom-based data indicated that halo-artifacts derive from high organ-to-background activity ratios (OBR) between bladder/kidneys and surrounding soft tissue, with a positive correlation between OBR and halo size. Comparing different variants of scatter correction, reducing the maximum scatter fraction from the default value MaxSF = 75% to MaxSF = 40% was found to efficiently suppress halo-artifacts in both phantom and patient data. In 1 of 31 patients, reducing the maximum scatter fraction provided new PET-based information changing the patient's diagnosis. Halo-artifacts are particularly observed for 68Ga-PSMA-11-PET/MRI due to 1) the biodistribution of the PSMA-11-tracer resulting in large OBRs for bladder and kidneys and 2) inaccurate scatter correction methods currently used in clinical routine, which tend to overestimate the scatter contribution. If not compensated for, 68Ga-PSMA-11 uptake pathologies may be masked by halo-artifacts leading to false-negative diagnoses. Reducing the maximum scatter fraction was found to efficiently suppress halo-artifacts.

  11. Finite temperature corrections to tachyon mass in intersecting D-branes

    NASA Astrophysics Data System (ADS)

    Sethi, Varun; Chowdhury, Sudipto Paul; Sarkar, Swarnendu

    2017-04-01

    We continue the analysis of finite temperature corrections to the tachyon mass in intersecting branes that was initiated in [1]. In this paper we extend the computation to the case of intersecting D3 branes by considering a setup of two intersecting branes in a flat-space background. A holographic model dual to a BCS superconductor, consisting of intersecting D8 branes in a D4 brane background, was proposed in [2]. The background considered here is a simplified configuration of this dual model. We compute the one-loop tachyon amplitude in the Yang-Mills approximation and show that the result is finite. Analyzing the amplitudes further, we numerically compute the transition temperature at which the tachyon becomes massless. The analytic expressions for the one-loop amplitudes obtained here reduce to those for intersecting D1 branes obtained in [1] as well as those for intersecting D2 branes.

  12. Effect of an Ergonomics-Based Educational Intervention Based on Transtheoretical Model in Adopting Correct Body Posture Among Operating Room Nurses

    PubMed Central

    Moazzami, Zeinab; Dehdari, Tahere; Taghdisi, Mohammad Hosein; Soltanian, Alireza

    2016-01-01

    Background: One of the preventive strategies for chronic low back pain among operating room nurses is instructing proper body mechanics and postural behavior, for which the use of the Transtheoretical Model (TTM) has been recommended. Methods: Eighty two nurses who were in the contemplation and preparation stages for adopting correct body posture were randomly selected (control group = 40, intervention group = 42). TTM variables and body posture were measured at baseline and again after 1 and 6 months after the intervention. A four-week ergonomics educational intervention based on TTM variables was designed and conducted for the nurses in the intervention group. Results: Following the intervention, a higher proportion of nurses in the intervention group moved into the action stage (p < 0.05). Mean scores of self-efficacy, pros, experimental processes and correct body posture were also significantly higher in the intervention group (p < 0.05). No significant differences were found in the cons and behavioral processes, except for self-liberation, between the two groups (p > 0.05) after the intervention. Conclusions: The TTM provides a suitable framework for developing stage-based ergonomics interventions for postural behavior. PMID:26925897

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiller, Britta

    Higher order QCD corrections to W and Z boson production not only manifest themselves in the generation of high transverse momenta of the weak bosons; these QCD effects also become directly visible in the production of jets in association with the weak bosons. Studying these processes is interesting not only from the perspective of testing perturbative QCD, but also to constrain a major background to many Standard Model (SM) and non-SM physics signals, e.g. top pair and single top production, searches for the Higgs boson, leptoquarks and supersymmetric particles. This thesis describes a measurement of Z/γ* + jets production in p$$\bar{p}$$ collisions at √s = 1.96 TeV in the decay channel Z/γ* → μ⁺μ⁻. An integrated luminosity of L ≈ 1 fb⁻¹ collected by the D0 detector at the Tevatron between August 2002 and February 2006 has been used. Differential production cross sections as a function of the transverse energy of the first, second and third leading jet are measured. The distributions are corrected for acceptance and migration effects back to hadron level using an iterative unfolding method. Comparisons of the measured cross sections to event generators, which include part of the higher order corrections, are presented.

  14. Surgical Treatment of Angular Pott’s Kyphosis with Posterior Approach, Pedicular Wedge Osteotomy and Canal Widening

    PubMed Central

    Kinkpe, CV; Onimus, M; Sarr, L; Niane, MM; Traore, MM; Daffe, M; Gueye, AB

    2017-01-01

    Background: It has been observed that the correction of severe posttuberculous angular kyphosis is still a challenge, mainly because of the neurologic risk. Methods: Nine patients were reviewed after surgery (mean follow-up 18 months). There were 2 thoracic, 4 thoraco-lumbar and 3 lumbar kyphosis. The mean age at surgery was 23. Clinical results were evaluated by the Oswestry Disability Index (ODI) and by the neurologic evaluation. Preoperative, postoperative and final follow-up X-rays were assessed. The surgery included a posterior approach with cord release and correction by transpedicular wedge osteotomy and widening of the spinal canal. Results: Average kyphotic angulation was 72° before surgery, 10° after surgery and 12° at follow-up. Three out of four patients with neural deficit showed improvement. Neurologic complications included a transitory quadriceps paralysis, likely by foraminal compression of the root. Conclusion: A posterior transpedicular wedge osteotomy allows a substantial correction of the kyphosis, more by deflexion than by elongation, with limited neurologic risks. However it is mandatory to widely enlarge the spinal canal on the levels adjacent to the osteotomy, in order to allow the dura to expand backwards. PMID:28567156

  15. No association between oxytocin or prolactin gene variants and childhood-onset mood disorders

    PubMed Central

    Strauss, John S.; Freeman, Natalie L.; Shaikh, Sajid A.; Vetró, Ágnes; Kiss, Enikő; Kapornai, Krisztina; Daróczi, Gabriella; Rimay, Timea; Kothencné, Viola Osváth; Dombovári, Edit; Kaczvinszk, Emília; Tamás, Zsuzsa; Baji, Ildikó; Besny, Márta; Gádoros, Julia; DeLuca, Vincenzo; George, Charles J.; Dempster, Emma; Barr, Cathy L.; Kovacs, Maria; Kennedy, James L.

    2010-01-01

    Background Oxytocin (OXT) and prolactin (PRL) are neuropeptide hormones that interact with the serotonin system and are involved in the stress response and social affiliation. In human studies, serum OXT and PRL levels have been associated with depression and related phenotypes. Our purpose was to determine if single nucleotide polymorphisms (SNPs) at the loci for OXT, PRL and their receptors, OXTR and PRLR, were associated with childhood-onset mood disorders (COMD). Methods Using 678 families in a family-based association design, we genotyped sixteen SNPs at OXT, PRL, OXTR and PRLR to test for association with COMD. Results No significant associations were found for SNPs in the OXTR, PRL, or PRLR genes. Two of three SNPs 3' of the OXT gene were associated with COMD (p ≤ 0.02), significant after spectral decomposition, but were not significant after additionally correcting for the number of genes tested. Supplementary analyses of parent-of-origin and proband sex effects for OXT SNPs by Fisher’s Exact test were not significant after Bonferroni correction. Conclusions We have examined sixteen OXT and PRL system gene variants, with no evidence of statistically significant association after correction for multiple tests. PMID:20547007

  16. Completely automated open-path FT-IR spectrometry.

    PubMed

    Griffiths, Peter R; Shao, Limin; Leytem, April B

    2009-01-01

    Atmospheric analysis by open-path Fourier-transform infrared (OP/FT-IR) spectrometry has been possible for over two decades but has not been widely used because of the limitations of the software of commercial instruments. In this paper, we describe the current state-of-the-art of the hardware and software that constitutes a contemporary OP/FT-IR spectrometer. We then describe advances that have been made in our laboratory that have enabled many of the limitations of this type of instrument to be overcome. These include not having to acquire a single-beam background spectrum that compensates for absorption features in the spectra of atmospheric water vapor and carbon dioxide. Instead, an easily measured "short path-length" background spectrum is used for calculation of each absorbance spectrum that is measured over a long path-length. To accomplish this goal, the algorithm used to calculate the concentrations of trace atmospheric molecules was changed from classical least-squares regression (CLS) to partial least-squares regression (PLS). For calibration, OP/FT-IR spectra are measured in pristine air over a wide variety of path-lengths, temperatures, and humidities, ratioed against a short-path background, and converted to absorbance; the reference spectrum of each analyte is then multiplied by randomly selected coefficients and added to these background spectra. Automatic baseline correction for small molecules with resolved rotational fine structure, such as ammonia and methane, is effected using wavelet transforms. A novel method of correcting for the effect of the nonlinear response of mercury cadmium telluride detectors is also incorporated. Finally, target factor analysis may be used to detect the onset of a given pollutant when its concentration exceeds a certain threshold. In this way, the concentration of atmospheric species has been obtained from OP/FT-IR spectra measured at intervals of 1 min over a period of many hours with no operator intervention.
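
    The calibration strategy described above (synthetic training spectra built from clean-air background spectra plus scaled reference spectra of the analyte, then a PLS model) can be sketched as follows. This is an illustrative stand-in using scikit-learn, not the authors' software; the spectra are synthetic placeholders and all parameter values are assumptions.

    ```python
    # Sketch of PLS calibration from synthetic spectra (illustrative only).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n_points, n_train = 600, 200

    # Placeholder inputs: background absorbance spectra and an analyte reference
    # spectrum (in practice these come from field measurements and a library).
    backgrounds = 0.01 * rng.standard_normal((n_train, n_points))
    reference = np.exp(-0.5 * ((np.arange(n_points) - 300) / 5.0) ** 2)

    # Random analyte amounts (arbitrary units) used as regression targets.
    conc = rng.uniform(0.0, 10.0, n_train)
    train_spectra = backgrounds + conc[:, None] * reference[None, :]

    pls = PLSRegression(n_components=5)
    pls.fit(train_spectra, conc)

    # Predict the analyte amount in a new (synthetic) field spectrum.
    field = 0.01 * rng.standard_normal(n_points) + 4.2 * reference
    print(pls.predict(field[None, :]))   # expected to be close to 4.2
    ```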

  17. RE: Request for Correction, Technical Support Document, Greenhouse Gas Emissions Reporting from the Petroleum and Natural Gas Industry

    EPA Pesticide Factsheets

    The Industrial Energy Consumers of America (IECA) joins the U.S. Chamber of Commerce in its request for correction of information developed by the Environmental Protection Agency (EPA) in a background technical support document titled "Greenhouse Gas Emissions Reporting from the Petroleum and Natural Gas Industry."

  18. 75 FR 44901 - Extended Carryback of Losses to or From a Consolidated Group; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-30

    .... 7805 * * * 0 Par. 2. Section 1.1502-21T(b)(3)(v) is amended by revising paragraphs (B), (C)(1), (C)(2...: Grid Glyer, (202) 622-7930 (not a toll-free number). SUPPLEMENTARY INFORMATION: Background The final... in 26 CFR Part 1 Income taxes, Reporting and recordkeeping requirements. Correction of Publication 0...

  19. 75 FR 38979 - Endangered and Threatened Species; Initiation of a 5-Year Review of the Eastern Distinct...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-07

    ... correction is effective July 7, 2010. FOR FURTHER INFORMATION CONTACT: Dr. Lisa Rotterman (907-271-1692), lisa[email protected] . SUPPLEMENTARY INFORMATION: Background On June 29, 2010, NMFS published a... lion (75 FR 37385). NMFS inadvertently gave incorrect e-mail and fax information. The correct email is...

  20. "As Real as It Gets": A Grounded Theory Study of a Reading Intervention in A Juvenile Correctional School

    ERIC Educational Resources Information Center

    McCray, Erica D.; Ribuffo, Cecelia; Lane, Holly; Murphy, Kristin M.; Gagnon, Joseph C.; Houchins, David E.; Lambert, Richard G.

    2018-01-01

    Background: The well-documented statistics regarding the academic struggles of incarcerated youth are disconcerting, and efforts to improve reading performance among this population are greatly needed. There is a dearth of research that provides rich and detailed accounts of reading intervention implementation in the juvenile corrections setting.…

  1. Monitoring of left ventricular ejection fraction with a miniature, nonimaging nuclear detector: accuracy and reliability over time with special reference to blood labeling.

    PubMed

    Lindhardt, T B; Hesse, B; Gadsbøll, N

    1997-01-01

    The purpose of this study was to determine the accuracy of determinations of left ventricular ejection fraction (LVEF) by a nonimaging miniature nuclear detector system (Cardioscint) and to evaluate the feasibility of long-term LVEF monitoring in patients admitted to the coronary care unit, with special reference to the blood-labeling technique. Cardioscint LVEF values were compared with measurements of LVEF by conventional gamma camera radionuclide ventriculography in 33 patients with a wide range of LVEF values. In 21 of the 33 patients, long-term monitoring was carried out for 1 to 4 hours (mean 186 minutes), with three different kits: one for in vivo and two for in vitro red blood cell labeling. The stability of the labeling was assessed by determination of the activity of blood samples taken during the first 24 hours after blood labeling. The agreement between Cardioscint LVEF and gamma camera LVEF was good with automatic background correction (r = 0.82; regression equation y = 1.04x + 3.88) but poor with manual background correction (r = 0.50; y = 0.88x - 0.55). The agreement was highest in patients without wall motion abnormalities. The long-term monitoring showed no difference between morning and afternoon Cardioscint LVEF values. Short-lasting fluctuations in LVEFs greater than 10 EF units were observed in the majority of the patients. After 24 hours, the mean reduction in the physical decay-corrected count rate of the blood samples was most pronounced for the two in vitro blood-labeling kits (57% +/- 9% and 41% +/- 3%) and less for the in vivo blood-labeling kit (32% +/- 26%). This "biologic decay" had a marked influence on the Cardioscint monitoring results, demanding frequent background correction. A fairly accurate estimate of LVEF can be obtained with the nonimaging Cardioscint system, and continuous bedside LVEF monitoring can proceed for hours with little inconvenience to the patients. Instability of the red blood cell labeling during long-term monitoring necessitates frequent background correction.

  2. Method of wavefront tilt correction for optical heterodyne detection systems under strong turbulence

    NASA Astrophysics Data System (ADS)

    Xiang, Jing-song; Tian, Xin; Pan, Le-chun

    2014-07-01

    Atmospheric turbulence decreases the heterodyne mixing efficiency of optical heterodyne detection systems. Wavefront tilt correction is often used to improve the optical heterodyne mixing efficiency, but the performance of traditional centroid tracking tilt correction is poor under strong turbulence conditions. In this paper, a tilt correction method that tracks the peak value of the laser spot on the focal plane is proposed. Simulation results show that, under strong turbulence conditions, the performance of peak value tracking tilt correction is distinctly better than that of the traditional centroid tracking tilt correction method, and the phenomenon in which a large antenna performs worse than a small antenna, which may occur with centroid tracking tilt correction, can also be avoided with peak value tracking tilt correction.
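
    The difference between the two tracking approaches can be illustrated with a toy focal-plane image (not the authors' simulation): a centroid estimate is pulled toward off-axis speckle, whereas the peak-value estimate stays on the brightest lobe. All values below are made up for illustration.

    ```python
    # Minimal illustration: spot-position estimates by centroid tracking versus
    # peak-value tracking on a synthetic, speckled focal-plane image.
    import numpy as np

    def centroid(img):
        """Intensity-weighted centroid (traditional tilt estimate)."""
        total = img.sum()
        yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        return (yy * img).sum() / total, (xx * img).sum() / total

    def peak(img):
        """Location of the brightest pixel (peak-value tilt estimate)."""
        return np.unravel_index(np.argmax(img), img.shape)

    # Toy focal-plane image: a main lobe plus a weaker off-axis speckle.
    yy, xx = np.mgrid[0:64, 0:64]
    spot = np.exp(-((yy - 30) ** 2 + (xx - 34) ** 2) / 20.0)
    speckle = 0.6 * np.exp(-((yy - 10) ** 2 + (xx - 50) ** 2) / 10.0)
    img = spot + speckle

    print("centroid estimate:", centroid(img))
    print("peak estimate:    ", peak(img))
    ```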

  3. A new corrective technique for adolescent idiopathic scoliosis: convex manipulation using 6.35 mm diameter pure titanium rod followed by concave fixation using 6.35 mm diameter titanium alloy

    PubMed Central

    2015-01-01

    Background It has been thought that corrective posterior surgery for adolescent idiopathic scoliosis (AIS) should be started on the concave side because initial convex manipulation would increase the risk of vertebral malrotation, worsening the rib hump. With the many new materials, implants, and manipulation techniques (e.g., direct vertebral rotation) now available, we hypothesized that manipulating the convex side first is no longer taboo. Methods Our technique has two major facets. (1) Curve correction is started from the convex side with a derotation maneuver and in situ bending followed by concave rod application. (2) A 6.35 mm diameter pure titanium rod is used on the convex side and a 6.35 mm diameter titanium alloy rod on the concave side. Altogether, 52 patients were divided into two groups. Group N included 40 patients (3 male, 37 female; average age 15.9 years) of Lenke type 1 (23 patients), type 2 (2), type 3 (3), type 5 (10), type 6 (2). They were treated with a new technique using 6.35 mm diameter different-stiffness titanium rods. Group C included 12 patients (all female, average age 18.8 years) of Lenke type 1 (6 patients), type 2 (3), type 3 (1), type 5 (1), type 6 (1). They were treated with conventional methods using 5.5 mm diameter titanium alloy rods. Radiographic parameters (Cobb angle/thoracic kyphosis/correction rates) and perioperative data were retrospectively collected and analyzed. Results Preoperative main Cobb angles (groups N/C) were 56.8°/60.0°, which had improved to 15.2°/17.1° at the latest follow-up. Thoracic kyphosis increased from 16.8° to 21.3° in group N and from 16.0° to 23.4° in group C. Correction rates were 73.2% in group N and 71.7% in group C. There were no significant differences for either parameter. Mean operating time, however, was significantly shorter in group N (364 min) than in group C (456 min). Conclusion We developed a new corrective surgical technique for AIS using a 6.35 mm diameter pure titanium rod initially on the convex side. Correction rates in the coronal, sagittal, and axial planes were the same as those achieved with conventional methods, but the operation time was significantly shorter. PMID:25815053

  4. Evaluation of the malaria rapid diagnostic test VIKIA malaria Ag Pf/Pan™ in endemic and non-endemic settings

    PubMed Central

    2013-01-01

    Background Malaria rapid diagnostic tests (RDTs) are a useful tool in endemic malaria countries, where light microscopy is not feasible. In non-endemic countries they can be used as complementary tests to provide timely results in case of microscopy inexperience. This study aims to compare the new VIKIA Malaria Ag Pf/Pan™ RDT with PCR-corrected microscopy results and the commonly used CareStart™ RDT to diagnose falciparum and non-falciparum malaria in the endemic setting of Bamako, Mali and the non-endemic setting of Lyon, France. Methods Blood samples were collected in 2011, over a 12-month period in Lyon and a six-month period in Bamako, from patients suspected of having malaria. The samples were examined by light microscopy, the VIKIA Malaria Ag Pf/Pan™ test and, in Bamako, additionally with the CareStart™ RDT. Discordant results were corrected by real-time PCR. Sensitivity, specificity, positive predictive value and negative predictive value were used to evaluate test performance. Results Samples of 877 patients from both sites were included. The VIKIA Malaria Ag Pf/Pan™ had a sensitivity of 98% and 96% for Plasmodium falciparum in Lyon and Bamako, respectively, performing similarly to PCR-corrected microscopy. Conclusions The VIKIA Malaria Ag Pf/Pan™ performs similarly to PCR-corrected microscopy for the detection of P. falciparum, making it a valuable tool in malaria endemic and non-endemic regions. PMID:23742633

  5. Contact lens rehabilitation following repaired corneal perforations

    PubMed Central

    Titiyal, Jeewan S; Sinha, Rajesh; Sharma, Namrata; Sreenivas, V; Vajpayee, Rasik B

    2006-01-01

    Background Visual outcome following repair of post-traumatic corneal perforation may not be optimal due to the presence of irregular keratometric astigmatism. We performed a study to evaluate and compare rigid gas permeable contact lenses and spectacles for visual rehabilitation following perforating corneal injuries. Method Eyes that had undergone repair for corneal perforating injuries with or without lens aspiration were fitted with rigid gas permeable contact lenses. The fitting pattern and the improvement in visual acuity with the contact lens over spectacle correction were noted. Results Forty eyes of 40 patients that had undergone surgical repair of posttraumatic corneal perforations were fitted with rigid gas permeable contact lenses for visual rehabilitation. Twenty-four eyes (60%) required aphakic contact lenses. A best corrected visual acuity (BCVA) of ≥ 6/18 on the Snellen acuity chart was seen in 10 (25%) eyes with spectacle correction and 37 (92.5%) eyes with the contact lens (p < 0.001). The best-corrected visual acuity with spectacles was 0.20 ± 0.13, while that with the contact lens was 0.58 ± 0.26. All the patients showed an improvement of ≥ 2 lines on the Snellen acuity chart with the contact lens over spectacles. Conclusion Rigid gas permeable contact lenses are a better means of rehabilitation in eyes that have an irregular cornea due to scars caused by perforating corneal injuries. PMID:16536877

  6. Corrected Cephalometric Analysis to Determine the Distance and Vector of Distraction Osteogenesis for Syndromic Craniosynostosis

    PubMed Central

    Fukawa, Toshihiko; Hirakawa, Takashi; Satake, Toshihiko; Maegawa, Jiro

    2017-01-01

    Background: The purpose of this study was to confirm the utility of a corrected cephalometric analysis to facilitate the planning of distraction osteogenesis with Le Fort III osteotomy for syndromic craniosynostosis. Methods: This prospective study involved 4 male and 2 female patients (mean patient age, 8 years 9 months; age range, 4 years 6 months to 13 years 2 months) with Crouzon syndrome who were treated with Le Fort III maxillary distraction using our previously described system of analysis of a corrected cephalogram and who underwent clinical follow-up. Lateral cephalograms were obtained immediately after device removal. Results: Distraction of orbitale moved the vector downward to the adult profile, but there was slightly less elongation than the adult profile for the distraction distance. The desired and real mean angles after distraction of point A were 29.2 ± 7.9° and 6.1 ± 8.5°, respectively, and the desired and the real mean distances after distraction of point A were 30.6 ± 12.7 mm and 29.4 ± 4.1 mm, respectively. Conclusions: Using the corrected cephalometric analysis, the distance and vector of distraction osteogenesis with Le Fort III osteotomy could be determined in patients with syndromic craniosynostosis. The distraction system brought the patients' facial bones to the planned position using controlling devices. PMID:29062650

  7. SU-E-I-38: Improved Metal Artifact Correction Using Adaptive Dual Energy Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Elder, E; Roper, J

    2015-06-15

    Purpose: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts, but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Results: Highly attenuating copper rods cause severe streaking artifacts on standard CT images. EDEC improves the image quality, but cannot eliminate the streaking artifacts. Compared to EDEC, the proposed ADEC method further reduces the streaking resulting from metallic inserts and beam-hardening effects and obtains material decomposition images with significantly improved accuracy. Conclusion: We propose an adaptive dual energy calibration method to correct for metal artifacts. ADEC is evaluated with the Shepp-Logan phantom, and shows superior metal artifact correction performance. In the future, we will further evaluate the performance of the proposed method with phantom and patient data.
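
    The abstract does not detail the calibration step itself. As background, the sketch below shows the generic empirical dual-energy calibration idea that EDEC-style methods build on: polynomial coefficients are fitted by least squares so that a combination of the low- and high-energy projection values reproduces a known basis-material thickness on calibration data. The data, polynomial order, and function names are assumptions, not the ADEC algorithm.

```python
# Hedged sketch of an empirical dual-energy calibration step (in the spirit of
# EDEC, not the proposed ADEC method): fit c_ij so that
# sum_ij c_ij * p1^i * p2^j approximates a known material thickness.
import numpy as np

def design_matrix(p1, p2, order=3):
    """Columns p1^i * p2^j for all i + j <= order."""
    cols = [p1 ** i * p2 ** j
            for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.column_stack(cols)

def fit_calibration(p1, p2, thickness, order=3):
    """Least-squares fit of the polynomial coefficients."""
    A = design_matrix(p1, p2, order)
    coeffs, *_ = np.linalg.lstsq(A, thickness, rcond=None)
    return coeffs

def apply_calibration(p1, p2, coeffs, order=3):
    return design_matrix(p1, p2, order) @ coeffs

# Synthetic calibration data standing in for projections of a known phantom.
rng = np.random.default_rng(0)
t_true = rng.uniform(0.0, 5.0, size=500)          # cm of the target material
p_low = 0.5 * t_true + 0.02 * t_true ** 2         # toy low-energy projection
p_high = 0.3 * t_true + 0.005 * t_true ** 2       # toy high-energy projection

coeffs = fit_calibration(p_low, p_high, t_true)
print("max decomposition error:",
      np.abs(apply_calibration(p_low, p_high, coeffs) - t_true).max())
```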

  8. Adult Smokers' Responses to “Corrective Statements” Regarding Tobacco Industry Deception

    PubMed Central

    Kollath-Cattano, Christy L.; Abad-Vivero, Erika N.; Thrasher, James F.; Bansal-Travers, Maansi; O'Connor, Richard J.; Krugman, Dean M.; Berg, Carla J.; Hardin, James W.

    2014-01-01

    Background To inform consumers, U.S. Federal Courts have ordered the tobacco industry to disseminate “corrective statements” (CSs) about their deception regarding five topics: smoker health effects, nonsmoker health effects, cigarette addictiveness, design of cigarettes to increase addiction, and relative safety of light cigarettes. Purpose To determine how smokers from diverse backgrounds respond to the final, court-mandated wording of these CSs. Methods Data were analyzed from an online consumer panel of 1,404 adult smokers who evaluated one of five CS topics (n=280–281) by reporting novelty, relevance, anger at the industry, and motivation to quit because of the CS. Logistic and linear regression models assessed main and interactive effects of race/ethnicity, gender, education, and CS topic on these responses. Data were collected in January 2013 and analyzed in March 2013. Results Thirty percent to 54% of participants reported that each CS provided novel information, and novelty was associated with greater relevance, anger at the industry, and motivation to quit because of the message. African Americans and Latinos were more likely than non-Hispanic whites to report that CSs were novel, and they had stronger responses to CSs across all indicators. Compared to men, women reported that CSs were more relevant and motivated them to quit. Conclusions This study suggests that smokers would value and respond to CSs, particularly smokers from groups that suffer from tobacco–related health disparities. PMID:24746372

  9. Scripts for TRUMP data analyses. Part II (HLA-related data): statistical analyses specific for hematopoietic stem cell transplantation.

    PubMed

    Kanda, Junya

    2016-01-01

    The Transplant Registry Unified Management Program (TRUMP) made it possible for members of the Japan Society for Hematopoietic Cell Transplantation (JSHCT) to analyze large sets of national registry data on autologous and allogeneic hematopoietic stem cell transplantation. However, as the processes used to collect transplantation information are complex and differed over time, the background of these processes should be understood when using TRUMP data. Previously, information on the HLA locus of patients and donors had been collected using a questionnaire-based free-description method, resulting in some input errors. To correct minor but significant errors and provide accurate HLA matching data, the use of a Stata or EZR/R script offered by the JSHCT is strongly recommended when analyzing HLA data in the TRUMP dataset. The HLA mismatch direction, mismatch counting method, and different impacts of HLA mismatches by stem cell source are other important factors in the analysis of HLA data. Additionally, researchers should understand the statistical analyses specific for hematopoietic stem cell transplantation, such as competing risk, landmark analysis, and time-dependent analysis, to correctly analyze transplant data. The data center of the JSHCT can be contacted if statistical assistance is required.

  10. Femoral Reconstruction Using External Fixation

    PubMed Central

    Palatnik, Yevgeniy; Rozbruch, S. Robert

    2011-01-01

    Background. The use of an external fixator for the purpose of distraction osteogenesis has been applied to a wide range of orthopedic problems caused by such diverse etiologies as congenital disease, metabolic conditions, infections, traumatic injuries, and congenital short stature. The purpose of this study was to analyze our experience of utilizing this method in patients undergoing a variety of orthopedic procedures of the femur. Methods. We retrospectively reviewed our experience of using external fixation for femoral reconstruction. Three subgroups were defined based on the primary reconstruction goal: lengthening, deformity correction, and repair of nonunion/bone defect. Factors such as leg length discrepancy (LLD), limb alignment, and external fixation time and complications were evaluated for the entire group and the 3 subgroups. Results. There was substantial improvement in the overall LLD, femoral length discrepancy, and limb alignment as measured by mechanical axis deviation (MAD) and lateral distal femoral angle (LDFA) for the entire group as well as the subgroups. Conclusions. The Ilizarov external fixator allows for decreased surgical exposure and preservation of blood supply to bone, avoidance of bone grafting and internal fixation, and simultaneous lengthening and deformity correction, making it a very useful technique for femoral reconstruction. PMID:21991425

  11. Comparison of Various Equations for Estimating GFR in Malawi: How to Determine Renal Function in Resource Limited Settings?

    PubMed Central

    Phiri, Sam; Rothenbacher, Dietrich; Neuhann, Florian

    2015-01-01

    Background Chronic kidney disease (CKD) is a probably underrated public health problem in Sub-Saharan Africa, in particular in combination with HIV infection. Knowledge about CKD prevalence is scarce, and in the available literature different methods are used to classify CKD, impeding comparison and general prevalence estimates. Methods This study assessed different serum-creatinine based equations for the glomerular filtration rate (eGFR) and compared them to a cystatin C based equation. The study was conducted in Lilongwe, Malawi, enrolling a population of 363 adults of which 32% were HIV-positive. Results Comparison of the formulae based on Bland-Altman plots and accuracy revealed the best performance for the CKD-EPI equation without the correction factor for black Americans. Analyzing the differences between HIV-positive and HIV-negative individuals, CKD-EPI systematically overestimated eGFR in comparison with cystatin C and therefore led to underestimation of CKD in HIV-positive individuals. Conclusions Our findings underline the importance of standardizing eGFR calculation in a Sub-Saharan African setting, of further investigating the differences with regard to HIV status, and of developing potential correction factors as established for age and sex. PMID:26083345
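
    For readers unfamiliar with the equation compared here, the following is a minimal sketch of the 2009 CKD-EPI creatinine formula as commonly published, with the race factor exposed as an optional multiplier; the study's exact implementation is not given in the abstract, and serum creatinine is assumed to be in mg/dL.

```python
# Hedged sketch of the 2009 CKD-EPI creatinine equation (textbook form, not
# taken from the study). The 1.159 factor is the "black American" correction
# that the abstract says the best-performing variant omits.
def ckd_epi_2009(scr_mg_dl, age_years, female, black=False, use_race_factor=False):
    """Estimated GFR in mL/min/1.73 m^2 from serum creatinine (mg/dL)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** (-1.209)
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black and use_race_factor:
        egfr *= 1.159
    return egfr

# Example: the same measurement with and without the race correction factor.
print(round(ckd_epi_2009(1.1, 45, female=True), 1))
print(round(ckd_epi_2009(1.1, 45, female=True, black=True, use_race_factor=True), 1))
```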

  12. Occupations at Case Closure for Vocational Rehabilitation Applicants with Criminal Backgrounds

    ERIC Educational Resources Information Center

    Whitfield, Harold Wayne

    2009-01-01

    The purpose of this study was to identify industries that hire persons with disabilities and criminal backgrounds. The researcher obtained data on 1,355 applicants for vocational rehabilitation services who were living in adult correctional facilities at the time of application. Service-based industries hired the most ex-inmates with disabilities…

  13. Graviton propagator from background-independent quantum gravity.

    PubMed

    Rovelli, Carlo

    2006-10-13

    We study the graviton propagator in Euclidean loop quantum gravity. We use spin foam, boundary-amplitude, and group-field-theory techniques. We compute a component of the propagator to first order, under some approximations, obtaining the correct large-distance behavior. This indicates a way for deriving conventional spacetime quantities from a background-independent theory.

  14. An evaluation of random analysis methods for the determination of panel damping

    NASA Technical Reports Server (NTRS)

    Bhat, W. V.; Wilby, J. F.

    1972-01-01

    An analysis is made of steady-state and non-steady-state methods for the measurement of panel damping. Particular emphasis is placed on the use of random process techniques in conjunction with digital data reduction methods. The steady-state methods considered use the response power spectral density, response autocorrelation, excitation-response crosspower spectral density, or single-sided Fourier transform (SSFT) of the response autocorrelation function. Non-steady-state methods are associated mainly with the use of rapid frequency sweep excitation. Problems associated with the practical application of each method are evaluated with specific reference to the case of a panel exposed to a turbulent airflow, and two methods, the power spectral density and the single-sided Fourier transform methods, are selected as being the most suitable. These two methods are demonstrated experimentally, and it is shown that the power spectral density method is satisfactory under most conditions, provided that appropriate corrections are applied to account for filter bandwidth and background noise errors. Thus, the response power spectral density method is recommended for the measurement of the damping of panels exposed to a moving airflow.
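
    One common way to implement the recommended power-spectral-density approach is the half-power (3 dB) bandwidth estimate of modal damping. The sketch below illustrates it on a synthetic single-degree-of-freedom response PSD; the frequency grid, the true damping value, and the flat-excitation assumption are illustrative, not the paper's data.

```python
# Hedged sketch: half-power bandwidth damping estimate from a response PSD.
# zeta = (f2 - f1) / (2 * fn), where f1, f2 bracket the peak at half its height.
import numpy as np

def half_power_damping(freq, psd):
    """Damping ratio from the half-power bandwidth of the PSD peak."""
    i_peak = int(np.argmax(psd))
    fn, half = freq[i_peak], psd[i_peak] / 2.0
    # Interpolate the half-power crossings on each side of the peak.
    left = np.interp(half, psd[: i_peak + 1], freq[: i_peak + 1])
    right = np.interp(half, psd[i_peak:][::-1], freq[i_peak:][::-1])
    return (right - left) / (2.0 * fn)

# Synthetic SDOF response PSD with 2% damping under flat (white) excitation.
fn_true, zeta_true = 120.0, 0.02
f = np.linspace(80.0, 160.0, 4000)
H2 = 1.0 / ((1 - (f / fn_true) ** 2) ** 2 + (2 * zeta_true * f / fn_true) ** 2)
print("estimated damping ratio:", round(half_power_damping(f, H2), 4))
```

    In practice, as the abstract notes, the raw PSD would first be corrected for analysis-filter bandwidth and background noise before the bandwidth is read off.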

  15. The QT Interval and Risk of Incident Atrial Fibrillation

    PubMed Central

    Mandyam, Mala C.; Soliman, Elsayed Z.; Alonso, Alvaro; Dewland, Thomas A.; Heckbert, Susan R.; Vittinghoff, Eric; Cummings, Steven R.; Ellinor, Patrick T.; Chaitman, Bernard R.; Stocke, Karen; Applegate, William B.; Arking, Dan E.; Butler, Javed; Loehr, Laura R.; Magnani, Jared W.; Murphy, Rachel A.; Satterfield, Suzanne; Newman, Anne B.; Marcus, Gregory M.

    2013-01-01

    BACKGROUND Abnormal atrial repolarization is important in the development of atrial fibrillation (AF), but no direct measurement is available in clinical medicine. OBJECTIVE To determine whether the QT interval, a marker of ventricular repolarization, could be used to predict incident AF. METHODS We examined a prolonged QT corrected by the Framingham formula (QTFram) as a predictor of incident AF in the Atherosclerosis Risk in Communities (ARIC) study. The Cardiovascular Health Study (CHS) and Health, Aging, and Body Composition (Health ABC) study were used for validation. Secondary predictors included QT duration as a continuous variable, a short QT interval, and QT intervals corrected by other formulae. RESULTS Among 14,538 ARIC participants, a prolonged QTFram predicted a roughly two-fold increased risk of AF (hazard ratio [HR] 2.05, 95% confidence interval [CI] 1.42–2.96, p<0.001). No substantive attenuation was observed after adjustment for age, race, sex, study center, body mass index, hypertension, diabetes, coronary disease, and heart failure. The findings were validated in CHS and Health ABC and were similar across various QT correction methods. Also in ARIC, each 10-ms increase in QTFram was associated with an increased unadjusted (HR 1.14, 95%CI 1.10–1.17, p<0.001) and adjusted (HR 1.11, 95%CI 1.07–1.14, p<0.001) risk of AF. Findings regarding a short QT were inconsistent across cohorts. CONCLUSIONS A prolonged QT interval is associated with an increased risk of incident AF. PMID:23872693
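
    For reference, below is a minimal sketch of common QT correction formulas, including the Framingham linear correction used as the primary exposure in this abstract; the cohort-specific processing is not described there, and these are the textbook forms with QT in milliseconds and the RR interval in seconds.

```python
# Hedged sketch of standard QT correction formulas (textbook forms).
def qtc_framingham(qt_ms, rr_s):
    """Framingham linear correction: QTc = QT + 154 * (1 - RR)."""
    return qt_ms + 154.0 * (1.0 - rr_s)

def qtc_bazett(qt_ms, rr_s):
    """Bazett correction: QTc = QT / sqrt(RR)."""
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms, rr_s):
    """Fridericia correction: QTc = QT / RR^(1/3)."""
    return qt_ms / rr_s ** (1.0 / 3.0)

# Example: QT = 400 ms at a heart rate of 75 bpm (RR = 0.8 s).
for name, fn in [("Framingham", qtc_framingham),
                 ("Bazett", qtc_bazett),
                 ("Fridericia", qtc_fridericia)]:
    print(name, round(fn(400.0, 0.8), 1))
```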

  16. Local bounds preserving stabilization for continuous Galerkin discretization of hyperbolic systems

    NASA Astrophysics Data System (ADS)

    Mabuza, Sibusiso; Shadid, John N.; Kuzmin, Dmitri

    2018-05-01

    The objective of this paper is to present a local bounds preserving stabilized finite element scheme for hyperbolic systems on unstructured meshes based on continuous Galerkin (CG) discretization in space. A CG semi-discrete scheme with low order artificial dissipation that satisfies the local extremum diminishing (LED) condition for systems is used to discretize a system of conservation equations in space. The low order artificial diffusion is based on approximate Riemann solvers for hyperbolic conservation laws. In this case we consider both Rusanov and Roe artificial diffusion operators. In the Rusanov case, two designs are considered, a nodal based diffusion operator and a local projection stabilization operator. The result is a discretization that is LED and has first order convergence behavior. To achieve high resolution, limited antidiffusion is added back to the semi-discrete form where the limiter is constructed from a linearity preserving local projection stabilization operator. The procedure follows the algebraic flux correction procedure usually used in flux corrected transport algorithms. To further deal with phase errors (or terracing) common in FCT type methods, high order background dissipation is added to the antidiffusive correction. The resulting stabilized semi-discrete scheme can be discretized in time using a wide variety of time integrators. Numerical examples involving nonlinear scalar Burgers equation, and several shock hydrodynamics simulations for the Euler system are considered to demonstrate the performance of the method. For time discretization, Crank-Nicolson scheme and backward Euler scheme are utilized.
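
    As a small illustration of the low-order building block mentioned above, the sketch below advances the scalar Burgers equation with a first-order finite-volume scheme whose artificial dissipation comes from the Rusanov (local Lax-Friedrichs) flux; the grid, time step, and initial data are assumptions, and none of the paper's continuous-Galerkin or flux-corrected-transport machinery is reproduced here.

```python
# Hedged sketch: first-order Rusanov scheme for u_t + (u^2/2)_x = 0 on a
# periodic grid, the kind of monotone low-order scheme that FCT-style methods
# start from before adding limited antidiffusion.
import numpy as np

def rusanov_flux(ul, ur):
    """Rusanov flux for f(u) = u^2 / 2 with wave speed max(|ul|, |ur|)."""
    fl, fr = 0.5 * ul ** 2, 0.5 * ur ** 2
    a = np.maximum(np.abs(ul), np.abs(ur))
    return 0.5 * (fl + fr) - 0.5 * a * (ur - ul)

def step(u, dx, dt):
    """One forward-Euler step on a periodic grid."""
    ur = np.roll(u, -1)                  # right neighbour (periodic)
    flux = rusanov_flux(u, ur)           # interface flux at i + 1/2
    return u - dt / dx * (flux - np.roll(flux, 1))

n, dx = 200, 1.0 / 200
x = (np.arange(n) + 0.5) * dx
u = np.sin(2 * np.pi * x)                # smooth data steepening into a shock
dt = 0.4 * dx / max(np.abs(u).max(), 1e-12)
for _ in range(200):
    u = step(u, dx, dt)
print("solution bounds after 200 steps:", u.min(), u.max())
```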

  17. Adaptive optics in spinning disk microscopy: improved contrast and brightness by a simple and fast method.

    PubMed

    Fraisier, V; Clouvel, G; Jasaitis, A; Dimitrov, A; Piolot, T; Salamero, J

    2015-09-01

    Multiconfocal microscopy gives a good compromise between fast imaging and reasonable resolution. However, the low intensity of live fluorescent emitters is a major limitation to this technique. Aberrations induced by the optical setup, especially the mismatch of the refractive index and the biological sample itself, distort the point spread function and further reduce the amount of detected photons. Altogether, this leads to impaired image quality, preventing accurate analysis of molecular processes in biological samples and imaging deep in the sample. The amount of detected fluorescence can be improved with adaptive optics. Here, we used a compact adaptive optics module (adaptive optics box for sectioning optical microscopy), which was specifically designed for spinning disk confocal microscopy. The module overcomes undesired anomalies by correcting for most of the aberrations in confocal imaging. Existing aberration detection methods require prior illumination, which bleaches the sample. To avoid multiple exposures of the sample, we established an experimental model describing the depth dependence of major aberrations. This model allows us to correct for those aberrations when performing a z-stack, gradually increasing the amplitude of the correction with depth. It does not require illumination of the sample for aberration detection, thus minimizing photobleaching and phototoxicity. With this model, we improved both signal-to-background ratio and image contrast. Here, we present comparative studies on a variety of biological samples.
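
    The module's interface is not described in the abstract; the sketch below only illustrates the general idea of a pre-calibrated depth model: fit a simple relation between imaging depth and the amplitude of one correction mode from a one-time calibration, then evaluate it at each plane of a z-stack so the specimen need not be re-illuminated for aberration sensing. All numbers, the linear model, and the mode choice are assumptions.

```python
# Hedged sketch of a depth-dependent correction model (not the authors' code):
# calibrate once, fit amplitude(z) = a*z + b, evaluate during the z-stack.
import numpy as np

# One-time calibration: best correction amplitude (arbitrary units) for a
# single aberration mode (e.g., spherical) measured at a few depths (um).
calib_depth_um = np.array([0.0, 10.0, 20.0, 40.0])
calib_amplitude = np.array([0.00, 0.12, 0.25, 0.52])

# Fit a first-order model amplitude(z) = a * z + b.
a, b = np.polyfit(calib_depth_um, calib_amplitude, deg=1)

def correction_for_depth(z_um):
    """Predicted correction amplitude for a plane at depth z_um."""
    return a * z_um + b

# During the z-stack, gradually increase the correction with depth.
for z in np.arange(0.0, 41.0, 8.0):
    print(f"z = {z:4.1f} um -> correction amplitude {correction_for_depth(z):.3f}")
```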

  18. HIV Testing, HIV Positivity, and Linkage and Referral Services in Correctional Facilities in the United States, 2009–2013

    PubMed Central

    Seth, Puja; Figueroa, Argelia; Wang, Guoshen; Reid, Laurie; Belcher, Lisa

    2016-01-01

    Background Because of health disparities, incarcerated persons are at higher risk for multiple health issues, including HIV. Correctional facilities have an opportunity to provide HIV services to an underserved population. This article describes Centers for Disease Control and Prevention (CDC)–funded HIV testing and service delivery in correctional facilities. Methods Data on HIV testing and service delivery were submitted to CDC by 61 health department jurisdictions in 2013. HIV testing, HIV positivity, receipt of test results, linkage, and referral services were described, and differences across demographic characteristics for linkage and referral services were assessed. Finally, trends were examined for HIV testing, HIV positivity, and linkage from 2009 to 2013. Results Of CDC-funded tests in 2013 among persons 18 years and older, 254,719 (7.9%) were conducted in correctional facilities. HIV positivity was 0.9%, and HIV positivity for newly diagnosed persons was 0.3%. Blacks accounted for the highest percentage of HIV-infected persons (1.3%) and newly diagnosed persons (0.5%). Only 37.9% of newly diagnosed persons were linked within 90 days; 67.5% were linked within any time frame; 49.7% were referred to partner services; and 45.2% were referred to HIV prevention services. There was a significant percent increase in HIV testing, overall HIV positivity, and linkage from 2009 to 2013. However, trends were stable for newly diagnosed persons. Conclusions Identification of newly diagnosed persons in correctional facilities has remained stable from 2009 to 2013. Correctional facilities seem to be reaching blacks, likely due to higher incarceration rates. The current findings indicate that improvements are needed in HIV testing strategies, service delivery during incarceration, and linkage to care postrelease. PMID:26462190

  19. Analysis of radiographic bone parameters throughout the surgical lengthening and deformity correction of extremities.

    PubMed

    Atanasov, Nenad; Poposka, Anastasika; Samardziski, Milan; Kamnar, Viktor

    2014-01-01

    Radiographic examination of extremities in surgical lengthening and/or correction of deformities is of crucial importance for the assessment of new bone formation. The purpose of this study is to confirm the diagnostic value of radiography in precise detection of bone parameters in various lengthening or correction stages in patients treated by limb-lengthening and deformity correction. 50 patients were treated by the Ilizarov method of limb lengthening or deformity correction at the University Orthopaedic Surgery Clinic in Skopje, and analysed over the period from 2006 to 2012. The patients were divided into two groups. The first group consisted of 27 patients with limb-lengthening because of congenital shortening. The second group consisted of 23 patients treated for acquired limb deformities. The results in both groups were obtained in three stages of new bone formation and were based on the appearance of 3 radiographic parameters at the distraction/compression site. The differences in the presence of all radiographic bone parameters across the stages of new bone formation were statistically significant in both groups, especially for the presence of the cortical margin in the first group (Cochran Q=34.43, df=2, p=0.00000). The comparative analysis between the two groups showed a statistically significant difference in the presence of initial bone elements and cystic formations only in the first stage. The near-absence of statistically significant differences between the two groups with regard to the 3 radiographic parameters across the 3 stages of new bone formation indicates a minor influence of the etiopathogenetic background on new bone formation in patients treated by gradual lengthening or correction of limb deformities.

  20. A two-dimensional ACAR study of untwinned YBa2Cu3O(7-x)

    NASA Astrophysics Data System (ADS)

    Smedskjaer, L. C.; Bansil, A.

    1991-12-01

    We have carried out 2D-ACAR measurements on an untwinned single crystal of YBa2Cu3O(7-x) as a function of temperature, for five temperatures ranging from 30 K to 300 K. We show that these temperature-dependent 2D-ACAR spectra can be described to a good approximation as a superposition of two temperature-independent spectra with temperature-dependent weighting factors. We show further how the data can be used to correct for the 'background' in the experimental spectrum. Such a 'background corrected' spectrum is in remarkable accord with the corresponding band theory predictions, and displays, in particular, clear signatures of the electron ridge Fermi surface.
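
    A hedged sketch of the kind of two-component description quoted above: each spectrum S(T) is modeled as w(T)·A + (1−w(T))·B with temperature-independent components A and B, and the weight is obtained by a closed-form least-squares fit. The component spectra here are synthetic stand-ins taken as the end-point measurements, not ACAR data.

```python
# Hedged sketch: fit the temperature-dependent weight of a two-component
# spectral decomposition by linear least squares.
import numpy as np

def fit_weight(spectrum, comp_a, comp_b):
    """Closed-form least-squares w minimizing ||S - (w*A + (1-w)*B)||."""
    diff = comp_a - comp_b
    return float(np.dot(spectrum - comp_b, diff) / np.dot(diff, diff))

rng = np.random.default_rng(1)
npix = 1000
A = rng.random(npix)                      # stands in for the low-T component
B = rng.random(npix)                      # stands in for the high-T component
true_w = {30: 1.0, 100: 0.8, 200: 0.45, 250: 0.2, 300: 0.0}

for T, w in true_w.items():
    s = w * A + (1 - w) * B + 0.01 * rng.standard_normal(npix)  # noisy spectrum
    print(f"T = {T:3d} K  fitted weight = {fit_weight(s, A, B):.3f}")
```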

  1. Revised radiometric calibration technique for LANDSAT-4 Thematic Mapper data

    NASA Technical Reports Server (NTRS)

    Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.

    1984-01-01

    Depending on detector number, there are random fluctuations in the background level for spectral band 1 of magnitudes ranging from 2 to 3.5 digital numbers (DN). Similar variability is observed in all the other reflective bands, but with smaller magnitude in the range 0.5 to 2.5 DN. Observations of background reference levels show that line dependent variations in raw TM image data and in the associated calibration data can be measured and corrected within an operational environment by applying simple offset corrections on a line-by-line basis. The radiometric calibration procedure defined by the Canadian Center for Remote Sensing was revised accordingly in order to prevent striping in the output product.
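
    A minimal sketch of a line-by-line offset correction of the kind described: estimate each scan line's background level from dark reference samples and subtract it, so line-dependent offset variations do not appear as striping. The array shapes, the number of reference samples, and the synthetic data are assumptions, not the CCRS procedure itself.

```python
# Hedged sketch: per-line background offset correction using dark reference
# samples associated with each scan line.
import numpy as np

def correct_line_offsets(band_dn, dark_ref_dn, nominal_offset=0.0):
    """Subtract per-line background offsets estimated from dark reference data.

    band_dn     : (lines, samples) raw image digital numbers
    dark_ref_dn : (lines, n_ref) background reference samples for each line
    """
    line_offsets = dark_ref_dn.mean(axis=1) - nominal_offset  # one value per line
    return band_dn - line_offsets[:, None]

rng = np.random.default_rng(2)
lines, samples = 400, 600
scene = rng.uniform(20, 120, size=(lines, samples))
offsets = rng.uniform(-3.5, 3.5, size=lines)                  # striping source
raw = scene + offsets[:, None]
dark = offsets[:, None] + rng.normal(0.0, 0.2, size=(lines, 8))

corrected = correct_line_offsets(raw, dark)
print("residual line-to-line offset spread (DN):",
      round(float((corrected - scene).std()), 3))
```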

  2. WE-H-207A-02: Attenuation Correction in 4D-PET Using a Single-Phase Attenuation Map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalantari, F; Wang, J

    2016-06-15

    Purpose: 4D-PET imaging has been proposed as a potential solution to the respiratory motion effect in the thoracic region. CT-based attenuation correction (AC) is an essential step toward quantitative PET imaging. However, because of the temporal difference between 4D-PET and a single breath-hold CT, motion artifacts are observed in the attenuation-corrected PET images that can lead to errors in tumor shape and uptake. We introduce a practical method for aligning a single-phase CT to all other 4D-PET phases using a penalized non-rigid demons registration. Methods: Individual 4D-PET frames were reconstructed without AC. Non-rigid demons registration was used to derive deformation vector fields (DVFs) between the PET frame matched with the CT phase and the other 4D-PET images. While non-attenuation-corrected PET images provide enough useful data for organ borders such as lung and liver, tumors are not distinguishable from the background due to loss of contrast. To preserve tumor shape in different phases, an ROI covering the tumor, defined from the CT image, was excluded from the non-rigid transformation. The mean DVF of the central region of the tumor was assigned to all voxels in the ROI. This process mimics a rigid transformation of the tumor along with a non-rigid transformation of the other organs. A 4D XCAT phantom with spherical lung tumors with diameters ranging from 10 to 40 mm was used to evaluate the algorithm. Results: Motion-related artifacts in attenuation-corrected 4D-PET images were significantly reduced. For tumors smaller than 20 mm, the non-rigid transformation alone was capable of providing quantitative results. However, for larger tumors, where tumor self-attenuation is considerable, our combined method yields superior results. Conclusion: We introduced a practical method for deforming a single CT to match all 4D-PET images for accurate AC. Although 4D-PET data include little anatomical information, we showed that they are still useful for estimating DVFs to align the attenuation map and achieve accurate AC.
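
    As a rough 2D illustration of the ROI handling described (not the authors' code), the sketch below warps an image with a deformation field while forcing a single mean displacement inside a tumour ROI, so the tumour moves rigidly while the surroundings deform non-rigidly. The toy image, field, and masks are assumptions; a real implementation would operate on 3D volumes with a demons-derived field.

```python
# Hedged 2D sketch: warp an attenuation map with a DVF, overriding the field
# inside a tumour ROI with its mean over the tumour core.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_roi_override(image, dvf, roi_mask, core_mask):
    """Backward-warp `image` with `dvf` (shape (2, H, W)), forcing a uniform
    displacement (the mean over `core_mask`) everywhere inside `roi_mask`."""
    dvf = dvf.copy()
    mean_dvf = dvf[:, core_mask].mean(axis=1)        # mean DVF over tumour core
    for axis in range(2):
        dvf[axis][roi_mask] = mean_dvf[axis]
    ys, xs = np.indices(image.shape, dtype=float)
    coords = np.stack([ys + dvf[0], xs + dvf[1]])
    return map_coordinates(image, coords, order=1, mode="nearest")

# Toy attenuation map with a bright "tumour" disc.
h = w = 64
ys, xs = np.indices((h, w))
image = np.where((ys - 32) ** 2 + (xs - 32) ** 2 < 36, 2.0, 1.0)
roi = (ys - 32) ** 2 + (xs - 32) ** 2 < 100
core = (ys - 32) ** 2 + (xs - 32) ** 2 < 16

# Smoothly varying toy deformation (a few pixels of respiratory-like shift).
dvf = np.stack([3.0 * np.sin(np.pi * xs / w), np.zeros((h, w))])
warped = warp_with_roi_override(image, dvf, roi, core)
print("warped map range:", warped.min(), warped.max())
```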

  3. Highway proximity associated with cardiovascular disease risk: the influence of individual-level confounders and exposure misclassification

    PubMed Central

    2013-01-01

    Background Elevated cardiovascular disease risk has been reported with proximity to highways or busy roadways, but proximity measures can be challenging to interpret given potential confounders and exposure error. Methods We conducted a cross sectional analysis of plasma levels of C-Reactive Protein (hsCRP), Interleukin-6 (IL-6), Tumor Necrosis Factor alpha receptor II (TNF-RII) and fibrinogen with distance of residence to a highway in and around Boston, Massachusetts. Distance was assigned using ortho-photo corrected parcel matching, as well as less precise approaches such as simple parcel matching and geocoding addresses to street networks. We used a combined random and convenience sample of 260 adults >40 years old. We screened a large number of individual-level variables including some infrequently collected for assessment of highway proximity, and included a subset in our final regression models. We monitored ultrafine particle (UFP) levels in the study areas to help interpret proximity measures. Results Using the orthophoto corrected geocoding, in a fully adjusted model, hsCRP and IL-6 differed by distance category relative to urban background: 43% (-16%,141%) and 49% (6%,110%) increase for 0-50 m; 7% (-39%,45%) and 41% (6%,86%) for 50-150 m; 54% (-2%,142%) and 18% (-11%,57%) for 150-250 m, and 49% (-4%, 131%) and 42% (6%, 89%) for 250-450 m. There was little evidence for association for TNF-RII or fibrinogen. Ortho-photo corrected geocoding resulted in stronger associations than traditional methods which introduced differential misclassification. Restricted analysis found the effect of proximity on biomarkers was mostly downwind from the highway or upwind where there was considerable local street traffic, consistent with patterns of monitored UFP levels. Conclusion We found associations between highway proximity and both hsCRP and IL-6, with non-monotonic patterns explained partly by individual-level factors and differences between proximity and UFP concentrations. Our analyses emphasize the importance of controlling for the risk of differential exposure misclassification from geocoding error. PMID:24090339

  4. Net Improvement of Correct Answers to Therapy Questions After PubMed Searches: Pre/Post Comparison

    PubMed Central

    Keepanasseril, Arun

    2013-01-01

    Background Clinicians search PubMed for answers to clinical questions although it is time consuming and not always successful. Objective To determine if PubMed used with its Clinical Queries feature to filter results based on study quality would improve search success (more correct answers to clinical questions related to therapy). Methods We invited 528 primary care physicians to participate, 143 (27.1%) consented, and 111 (21.0% of the total and 77.6% of those who consented) completed the study. Participants answered 14 yes/no therapy questions and were given 4 of these (2 originally answered correctly and 2 originally answered incorrectly) to search using either the PubMed main screen or PubMed Clinical Queries narrow therapy filter via a purpose-built system with identical search screens. Participants also picked 3 of the first 20 retrieved citations that best addressed each question. They were then asked to re-answer the original 14 questions. Results We found no statistically significant differences in the rates of correct or incorrect answers using the PubMed main screen or PubMed Clinical Queries. The rate of correct answers increased from 50.0% to 61.4% (95% CI 55.0%-67.8%) for the PubMed main screen searches and from 50.0% to 59.1% (95% CI 52.6%-65.6%) for Clinical Queries searches. These net absolute increases of 11.4% and 9.1%, respectively, included previously correct answers changing to incorrect at a rate of 9.5% (95% CI 5.6%-13.4%) for PubMed main screen searches and 9.1% (95% CI 5.3%-12.9%) for Clinical Queries searches, combined with increases in the rate of being correct of 20.5% (95% CI 15.2%-25.8%) for PubMed main screen searches and 17.7% (95% CI 12.7%-22.7%) for Clinical Queries searches. Conclusions PubMed can assist clinicians answering clinical questions with an approximately 10% absolute rate of improvement in correct answers. This small increase includes more correct answers partially offset by a decrease in previously correct answers. PMID:24217329

  5. piscope - A Python based software package for the analysis of volcanic SO2 emissions using UV SO2 cameras

    NASA Astrophysics Data System (ADS)

    Gliss, Jonas; Stebel, Kerstin; Kylling, Arve; Solvejg Dinger, Anna; Sihler, Holger; Sudbø, Aasmund

    2017-04-01

    UV SO2 cameras have become a common method for monitoring SO2 emission rates from volcanoes. Scattered solar UV radiation is measured in two wavelength windows, typically around 310 nm and 330 nm (distinct / weak SO2 absorption), using interference filters. The data analysis comprises the retrieval of plume background intensities (to calculate plume optical densities), the camera calibration (to convert optical densities into SO2 column densities), and the retrieval of gas velocities within the plume as well as plume distances. SO2 emission rates are then typically retrieved along a projected plume cross section, for instance a straight line perpendicular to the plume propagation direction. Today, for most of the required analysis steps, several alternatives exist due to ongoing developments and improvements related to the measurement technique. We present piscope, a cross-platform, open-source software toolbox for the analysis of UV SO2 camera data. The code is written in the Python programming language and emerged from the idea of a common analysis platform incorporating a selection of the most prevalent methods found in the literature. piscope includes several routines for plume background retrievals and routines for cell- and DOAS-based camera calibration, including two individual methods to identify the DOAS field of view (shape and position) within the camera images. Gas velocities can be retrieved either based on an optical flow analysis or using signal cross correlation. A correction for signal dilution (due to atmospheric scattering) can be performed based on topographic features in the images. The latter requires distance retrievals to the topographic features used for the correction. These distances can be retrieved automatically on a pixel basis using intersections of individual pixel viewing directions with the local topography. The main features of piscope are presented based on a dataset recorded at Mt. Etna, Italy, in September 2015.
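
    Two of the analysis steps listed above can be sketched generically (this is not piscope's API; the arrays, cell values, and function names are assumptions): computing apparent absorbances from on-band/off-band plume and background images, and converting them to SO2 column densities with a linear calibration against gas cells of known column density.

```python
# Hedged sketch of generic SO2-camera optical-density and cell-calibration
# steps, using NumPy only.
import numpy as np

def apparent_absorbance(plume_on, bg_on, plume_off, bg_off):
    """tau_on - tau_off, with tau = -ln(I_plume / I_background)."""
    tau_on = -np.log(plume_on / bg_on)
    tau_off = -np.log(plume_off / bg_off)
    return tau_on - tau_off

def fit_cell_calibration(cell_absorbances, cell_columns):
    """Linear fit: SO2 column density = slope * absorbance + offset."""
    slope, offset = np.polyfit(cell_absorbances, cell_columns, deg=1)
    return slope, offset

# Calibration-cell measurements (absorbance vs. column density, molec/cm^2).
cell_aa = np.array([0.05, 0.12, 0.28])
cell_cd = np.array([4.0e17, 1.0e18, 2.4e18])
slope, offset = fit_cell_calibration(cell_aa, cell_cd)

# Toy plume pixel intensities (counts) in the two filter windows.
aa = apparent_absorbance(plume_on=np.array([820.0, 760.0]),
                         bg_on=np.array([1000.0, 1000.0]),
                         plume_off=np.array([980.0, 975.0]),
                         bg_off=np.array([1000.0, 1000.0]))
print("SO2 column densities (molec/cm^2):", slope * aa + offset)
```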

  6. Determination of Cd in urine by cloud point extraction-tungsten coil atomic absorption spectrometry.

    PubMed

    Donati, George L; Pharr, Kathryn E; Calloway, Clifton P; Nóbrega, Joaquim A; Jones, Bradley T

    2008-09-15

    Cadmium concentrations in human urine are typically at or below the 1 microgL(-1) level, so only a handful of techniques may be appropriate for this application. These include sophisticated methods such as graphite furnace atomic absorption spectrometry and inductively coupled plasma mass spectrometry. While tungsten coil atomic absorption spectrometry is a simpler and less expensive technique, its practical detection limits often prohibit the detection of Cd in normal urine samples. In addition, the nature of the urine matrix often necessitates accurate background correction techniques, which would add expense and complexity to the tungsten coil instrument. This manuscript describes a cloud point extraction method that reduces matrix interference while preconcentrating Cd by a factor of 15. Ammonium pyrrolidinedithiocarbamate and Triton X-114 are used as complexing agent and surfactant, respectively, in the extraction procedure. Triton X-114 forms an extractant coacervate surfactant-rich phase that is denser than water, so the aqueous supernatant is easily removed leaving the metal-containing surfactant layer intact. A 25 microL aliquot of this preconcentrated sample is placed directly onto the tungsten coil for analysis. The cloud point extraction procedure allows for simple background correction based either on the measurement of absorption at a nearby wavelength, or measurement of absorption at a time in the atomization step immediately prior to the onset of the Cd signal. Seven human urine samples are analyzed by this technique and the results are compared to those found by the inductively coupled plasma mass spectrometry analysis of the same samples performed at a different institution. The limit of detection for Cd in urine is 5 ngL(-1) for cloud point extraction tungsten coil atomic absorption spectrometry. The accuracy of the method is determined with a standard reference material (toxic metals in freeze-dried urine) and the determined values agree with the reported levels at the 95% confidence level.

  7. HST/WFC3: Evolution of the UVIS Channel's Charge Transfer Efficiency

    NASA Astrophysics Data System (ADS)

    Gosmeyer, Catherine; Baggett, Sylvia M.; Anderson, Jay; WFC3 Team

    2016-06-01

    The Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST) contains both an IR and a UVIS channel. After more than six years on orbit, the UVIS channel performance remains stable; however, on-orbit radiation damage has caused the charge transfer efficiency (CTE) of UVIS's two CCDs to degrade. This degradation is seen as vertical charge 'bleeding' from sources during readout and its effect evolves as the CCDs age. The WFC3 team has developed software to perform corrections that push the charge back to the sources, although it cannot recover faint sources that have been bled out entirely. Observers can mitigate this effect in various ways such as by placing sources near the amplifiers, observing bright targets, and by increasing the total background to at least 12 electrons, either by using a broader filter, lengthening exposure time, or post-flashing. We present results from six years of calibration data to re-evaluate the best level of total background for mitigating CTE loss and to re-verify that the pixel-based CTE correction software is performing optimally over various background levels. In addition, we alert observers that CTE-corrected products are now available for retrieval from MAST as part of the CALWF3 v3.3 pipeline upgrade.

  8. ELLIPTICAL WEIGHTED HOLICs FOR WEAK LENSING SHEAR MEASUREMENT. III. THE EFFECT OF RANDOM COUNT NOISE ON IMAGE MOMENTS IN WEAK LENSING ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@nao.ac.jp, E-mail: tof@astr.tohoku.ac.jp

    This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise introduces additional moments and a centroid-shift error; the first-order effects cancel on averaging, but the second-order effects do not. We derive formulae that correct for this systematic error due to the random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.

  9. Item and source memory for emotional associates is mediated by different retrieval processes.

    PubMed

    Ventura-Bort, Carlos; Dolcos, Florin; Wendt, Julia; Wirkner, Janine; Hamm, Alfons O; Weymar, Mathias

    2017-12-12

    Recent event-related potential (ERP) data showed that neutral objects encoded in emotional background pictures were better remembered than objects encoded in neutral contexts, when recognition memory was tested one week later. In the present study, we investigated whether this long-term memory advantage for items is also associated with correct memory for contextual source details. Furthermore, we were interested in the possibly dissociable contribution of familiarity and recollection processes (using a Remember/Know procedure). The results revealed that item memory performance was mainly driven by the subjective experience of familiarity, irrespective of whether the objects were previously encoded in emotional or neutral contexts. Correct source memory for the associated background picture, however, was driven by recollection and enhanced when the content was emotional. In ERPs, correctly recognized old objects evoked frontal ERP Old/New effects (300-500ms), irrespective of context category. As in our previous study (Ventura-Bort et al., 2016b), retrieval for objects from emotional contexts was associated with larger parietal Old/New differences (600-800ms), indicating stronger involvement of recollection. Thus, the results suggest a stronger contribution of recollection-based retrieval to item and contextual background source memory for neutral information associated with an emotional event.

  10. Interstellar cyanogen and the temperature of the cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Roth, Katherine C.; Meyer, David M.; Hawkins, Isabel

    1993-01-01

    We present the results of a recently completed effort to determine the amount of CN rotational excitation in five diffuse interstellar clouds for the purpose of accurately measuring the temperature of the cosmic microwave background radiation (CMBR). In addition, we report a new detection of emission from the strongest hyperfine component of the 2.64 mm CN rotational transition (N = 1-0) in the direction toward HD 21483. We have used this result in combination with existing emission measurements toward our other stars to correct for local excitation effects within diffuse clouds which raise the measured CN rotational temperature above that of the CMBR. After making this correction, we find a weighted mean value of T(CMBR) = 2.729 (+0.023, -0.031) K. This temperature is in excellent agreement with the new COBE measurement of 2.726 +/- 0.010 K (Mather et al., 1993). Our result, which samples the CMBR far from the near-Earth environment, attests to the accuracy of the COBE measurement and reaffirms the cosmic nature of this background radiation. From the observed agreement between our CMBR temperature and the COBE result, we conclude that corrections for local CN excitation based on millimeter emission measurements provide an accurate adjustment to the measured rotational excitation.

  11. Small-scale electrical resistivity tomography of wet fractured rocks.

    PubMed

    LaBrecque, Douglas J; Sharpe, Roger; Wood, Thomas; Heath, Gail

    2004-01-01

    This paper describes a series of experiments that tested the ability of the electrical resistivity tomography (ERT) method to locate correctly wet and dry fractures in a meso-scale model. The goal was to develop a method of monitoring the flow of water through a fractured rock matrix. The model was a four by six array of limestone blocks equipped with 28 stainless steel electrodes. Dry fractures were created by placing pieces of vinyl between one or more blocks. Wet fractures were created by injecting tap water into a joint between blocks. In electrical terms, the dry fractures are resistive and the wet fractures are conductive. The quantities measured by the ERT system are current and voltage around the outside edge of the model. The raw ERT data were translated to resistivity values inside the model using a three-dimensional Occam's inversion routine. This routine was one of the key components of ERT being tested. The model presented several challenges. First, the resistivity of both the blocks and the joints was highly variable. Second, the resistive targets introduced extreme changes the software could not precisely quantify. Third, the abrupt changes inherent in a fracture system were contrary to the smoothly varying changes expected by the Occam's inversion routine. Fourth, the response of the conductive fractures was small compared to the background variability. In general, ERT was able to locate correctly resistive fractures. Problems occurred, however, when the resistive fracture was near the edges of the model or when multiple fractures were close together. In particular, ERT tended to position the fracture closer to the model center than its true location. Conductive fractures yielded much smaller responses than the resistive case. A difference-inversion method was able to correctly locate these targets.

  12. Image segregation in strabismic amblyopia.

    PubMed

    Levi, Dennis M

    2007-06-01

    Humans with naturally occurring amblyopia show deficits thought to involve mechanisms downstream of V1. These include excessive crowding, abnormal global image processing, spatial sampling and symmetry detection and undercounting. Several recent studies suggest that humans with naturally occurring amblyopia show deficits in global image segregation. The current experiments were designed to study figure-ground segregation in amblyopic observers with documented deficits in crowding, symmetry detection, spatial sampling and counting, using similar stimuli. Observers had to discriminate the orientation of a figure (an "E"-like pattern made up of 17 horizontal Gabor patches), embedded in a 7x7 array of Gabor patches. When the 32 "background" patches are vertical, the "E" pops-out, due to segregation by orientation and performance is perfect; however, if the background patches are all, or mostly horizontal, the "E" is camouflaged, and performance is random. Using a method of constant stimuli, we varied the number of "background" patches that were vertical and measured the probability of correct discrimination of the global orientation of the E (up/down/left/right). Surprisingly, amblyopes who showed strong crowding and deficits in symmetry detection and counting, perform normally or very nearly so in this segregation task. I therefore conclude that these deficits are not a consequence of abnormal segregation of figure from background.

  13. International survey of knowledge of food-induced anaphylaxis

    PubMed Central

    Wang, Julie; Young, Michael C.; Nowak-Węgrzyn, Anna

    2014-01-01

    Background Studies show that anaphylaxis is under-recognized and epinephrine (adrenaline) is under-used by medical personnel as well as patients and their families. This study assesses the knowledge of food-induced anaphylaxis diagnosis and management across different populations of providers and caregivers and other interested respondents. Methods An online survey embedded in a case discussion of food-induced anaphylaxis was distributed by Medscape to registered members. Results 7822 responders who started the activity chose to answer at least some of the questions presented (response rate 39.5%). Over 80% of responders in all groups correctly identified the case of anaphylaxis with prominent skin and respiratory symptoms; however, only 55% correctly recognized the case without skin symptoms as anaphylaxis. Only 23% of responders correctly selected risk factors for anaphylaxis, with physicians significantly more likely to choose the correct answers as compared to allied health professionals, other health professionals, and medical students (p<0.001). Ninety-five percent selected epinephrine (adrenaline) as the most appropriate treatment for anaphylaxis, and 81% correctly indicated that there are no absolute contraindications for epinephrine (adrenaline) in the setting of anaphylaxis. When presented with a case of a child with no documented history of allergies who has symptoms of anaphylaxis, more physicians than any other group chose to administer stock epinephrine (adrenaline) (73% vs 60%, p<0.001). Conclusion Specific knowledge deficits for food-induced anaphylaxis persist across all groups. Further educational efforts should be aimed not only at the medical community but also at the entire caregiver community and general public, to optimize care for food-allergic individuals. PMID:25263184

  14. Differential and relaxed image foresting transform for graph-cut segmentation of multiple 3D objects.

    PubMed

    Moya, Nikolas; Falcão, Alexandre X; Ciesielski, Krzysztof C; Udupa, Jayaram K

    2014-01-01

    Graph-cut algorithms have been extensively investigated for interactive binary segmentation, when the simultaneous delineation of multiple objects can save considerable user's time. We present an algorithm (named DRIFT) for 3D multiple object segmentation based on seed voxels and Differential Image Foresting Transforms (DIFTs) with relaxation. DRIFT stands behind efficient implementations of some state-of-the-art methods. The user can add/remove markers (seed voxels) along a sequence of executions of the DRIFT algorithm to improve segmentation. Its first execution takes linear time with the image's size, while the subsequent executions for corrections take sublinear time in practice. At each execution, DRIFT first runs the DIFT algorithm, then it applies diffusion filtering to smooth boundaries between objects (and background) and, finally, it corrects possible objects' disconnection occurrences with respect to their seeds. We evaluate DRIFT in 3D CT-images of the thorax for segmenting the arterial system, esophagus, left pleural cavity, right pleural cavity, trachea and bronchi, and the venous system.

  15. Atmospheric Precorrected Differential Absorption technique to retrieve columnar water vapor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlaepfer, D.; Itten, K.I.; Borel, C.C.

    1998-09-01

    Differential absorption techniques are suitable for retrieving the total column water vapor content from imaging spectroscopy data. A technique called Atmospheric Precorrected Differential Absorption (APDA) is derived directly from simplified radiative transfer equations. It combines a partial atmospheric correction with a differential absorption technique. The atmospheric path radiance term is iteratively corrected during the retrieval of water vapor. This improves the results, especially over low background albedos. The error of the method is below 7% for most of the tested ground reflectance spectra. The channel combinations for two test cases are then defined using a quantitative procedure, which is based on MODTRAN simulations and the image itself. An error analysis indicates that the influence of aerosols and channel calibration is minimal. The APDA technique is then applied to two AVIRIS images acquired in 1991 and 1995. The accuracy of the measured water vapor columns is within ±5% compared to ground truth radiosonde data.
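
    A hedged sketch of the general shape of such a retrieval (illustrative only; the numbers and lookup table are invented, not MODTRAN output or the published APDA coefficients): subtract an estimated path radiance from the absorption-channel radiance and from a continuum interpolated between window channels, form their ratio, and invert the ratio to precipitable water via a simulation-derived table.

```python
# Hedged sketch of a precorrected differential-absorption ratio and its
# inversion with a lookup table; all values are illustrative assumptions.
import numpy as np

def apda_ratio(l_abs, l_win1, l_win2, w1, w2, path_abs, path_win1, path_win2):
    """Ratio of the path-radiance-corrected absorption radiance to the
    continuum interpolated (weights w1, w2) from two window channels."""
    continuum = w1 * (l_win1 - path_win1) + w2 * (l_win2 - path_win2)
    return (l_abs - path_abs) / continuum

# Illustrative lookup: ratio as a monotonically decreasing function of
# precipitable water (cm), as it would come from forward simulations.
pw_grid = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
ratio_grid = np.array([0.85, 0.74, 0.58, 0.47, 0.39])

def retrieve_pw(ratio):
    # np.interp needs increasing x, so invert the decreasing table.
    return float(np.interp(ratio, ratio_grid[::-1], pw_grid[::-1]))

r = apda_ratio(l_abs=5.1, l_win1=8.0, l_win2=7.4, w1=0.5, w2=0.5,
               path_abs=1.0, path_win1=1.2, path_win2=1.1)
print("ratio:", round(r, 3), "-> precipitable water (cm):", round(retrieve_pw(r), 2))
```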

  16. Convergent genetic and expression data implicate immunity in Alzheimer's disease

    PubMed Central

    Jones, Lesley; Lambert, Jean-Charles; Wang, Li-San; Choi, Seung-Hoan; Harold, Denise; Vedernikov, Alexey; Escott-Price, Valentina; Stone, Timothy; Richards, Alexander; Bellenguez, Céline; Ibrahim-Verbaas, Carla A; Naj, Adam C; Sims, Rebecca; Gerrish, Amy; Jun, Gyungah; DeStefano, Anita L; Bis, Joshua C; Beecham, Gary W; Grenier-Boley, Benjamin; Russo, Giancarlo; Thornton-Wells, Tricia A; Jones, Nicola; Smith, Albert V; Chouraki, Vincent; Thomas, Charlene; Ikram, M Arfan; Zelenika, Diana; Vardarajan, Badri N; Kamatani, Yoichiro; Lin, Chiao-Feng; Schmidt, Helena; Kunkle, Brian; Dunstan, Melanie L; Ruiz, Agustin; Bihoreau, Marie-Thérèse; Reitz, Christiane; Pasquier, Florence; Hollingworth, Paul; Hanon, Olivier; Fitzpatrick, Annette L; Buxbaum, Joseph D; Campion, Dominique; Crane, Paul K; Becker, Tim; Gudnason, Vilmundur; Cruchaga, Carlos; Craig, David; Amin, Najaf; Berr, Claudine; Lopez, Oscar L; De Jager, Philip L; Deramecourt, Vincent; Johnston, Janet A; Evans, Denis; Lovestone, Simon; Letteneur, Luc; Kornhuber, Johanes; Tárraga, Lluís; Rubinsztein, David C; Eiriksdottir, Gudny; Sleegers, Kristel; Goate, Alison M; Fiévet, Nathalie; Huentelman, Matthew J; Gill, Michael; Emilsson, Valur; Brown, Kristelle; Kamboh, M Ilyas; Keller, Lina; Barberger-Gateau, Pascale; McGuinness, Bernadette; Larson, Eric B; Myers, Amanda J; Dufouil, Carole; Todd, Stephen; Wallon, David; Love, Seth; Kehoe, Pat; Rogaeva, Ekaterina; Gallacher, John; George-Hyslop, Peter St; Clarimon, Jordi; Lleὀ, Alberti; Bayer, Anthony; Tsuang, Debby W; Yu, Lei; Tsolaki, Magda; Bossù, Paola; Spalletta, Gianfranco; Proitsi, Petra; Collinge, John; Sorbi, Sandro; Garcia, Florentino Sanchez; Fox, Nick; Hardy, John; Naranjo, Maria Candida Deniz; Razquin, Cristina; Bosco, Paola; Clarke, Robert; Brayne, Carol; Galimberti, Daniela; Mancuso, Michelangelo; Moebus, Susanne; Mecocci, Patrizia; del Zompo, Maria; Maier, Wolfgang; Hampel, Harald; Pilotto, Alberto; Bullido, Maria; Panza, Francesco; Caffarra, Paolo; Nacmias, Benedetta; Gilbert, John R; Mayhaus, Manuel; Jessen, Frank; Dichgans, Martin; Lannfelt, Lars; Hakonarson, Hakon; Pichler, Sabrina; Carrasquillo, Minerva M; Ingelsson, Martin; Beekly, Duane; Alavarez, Victoria; Zou, Fanggeng; Valladares, Otto; Younkin, Steven G; Coto, Eliecer; Hamilton-Nelson, Kara L; Mateo, Ignacio; Owen, Michael J; Faber, Kelley M; Jonsson, Palmi V; Combarros, Onofre; O'Donovan, Michael C; Cantwell, Laura B; Soininen, Hilkka; Blacker, Deborah; Mead, Simon; Mosley, Thomas H; Bennett, David A; Harris, Tamara B; Fratiglioni, Laura; Holmes, Clive; de Bruijn, Renee FAG; Passmore, Peter; Montine, Thomas J; Bettens, Karolien; Rotter, Jerome I; Brice, Alexis; Morgan, Kevin; Foroud, Tatiana M; Kukull, Walter A; Hannequin, Didier; Powell, John F; Nalls, Michael A; Ritchie, Karen; Lunetta, Kathryn L; Kauwe, John SK; Boerwinkle, Eric; Riemenschneider, Matthias; Boada, Mercè; Hiltunen, Mikko; Martin, Eden R; Pastor, Pau; Schmidt, Reinhold; Rujescu, Dan; Dartigues, Jean-François; Mayeux, Richard; Tzourio, Christophe; Hofman, Albert; Nöthen, Markus M; Graff, Caroline; Psaty, Bruce M; Haines, Jonathan L; Lathrop, Mark; Pericak-Vance, Margaret A; Launer, Lenore J; Farrer, Lindsay A; van Duijn, Cornelia M; Van Broekhoven, Christine; Ramirez, Alfredo; Schellenberg, Gerard D; Seshadri, Sudha; Amouyel, Philippe; Holmans, Peter A

    2015-01-01

    Background Late-onset Alzheimer's disease (AD) is heritable with 20 genes showing genome wide association in the International Genomics of Alzheimer's Project (IGAP). To identify the biology underlying the disease we extended these genetic data in a pathway analysis. Methods The ALIGATOR and GSEA algorithms were used in the IGAP data to identify associated functional pathways and correlated gene expression networks in human brain. Results ALIGATOR identified an excess of curated biological pathways showing enrichment of association. Enriched areas of biology included the immune response (p = 3.27×10^-12 after multiple testing correction for pathways), regulation of endocytosis (p = 1.31×10^-11), cholesterol transport (p = 2.96×10^-9) and proteasome-ubiquitin activity (p = 1.34×10^-6). Correlated gene expression analysis identified four significant network modules, all related to the immune response (corrected p = 0.002–0.05). Conclusions The immune response, regulation of endocytosis, cholesterol transport and protein ubiquitination represent prime targets for AD therapeutics. PMID:25533204

  17. Cardiac auscultation training of medical students: a comparison of electronic sensor-based and acoustic stethoscopes

    PubMed Central

    Høyte, Henning; Jensen, Torstein; Gjesdal, Knut

    2005-01-01

    Background To determine whether the use of an electronic, sensor-based stethoscope affects the cardiac auscultation skills of undergraduate medical students. Methods Forty-eight third-year medical students were randomized to use either an electronic stethoscope or a conventional acoustic stethoscope during clinical auscultation training. After a training period of four months, cardiac auscultation skills were evaluated using four patients with different cardiac murmurs. Two experienced cardiologists determined the correct answers. The students completed a questionnaire for each patient. The thirteen questions were weighted according to their relative importance, and a correct answer was credited from one to six points. Results No difference in mean score was found between the two groups (p = 0.65). Grading and characterisation of murmurs and, if present, reports of non-existent murmurs were also rated. None of these yielded any significant differences between the groups. Conclusion Whether an electronic or a conventional stethoscope was used during training and testing did not affect the students' performance on a cardiac auscultation test. PMID:15882458

  18. Electroconvulsive therapy use in adolescents: a systematic review

    PubMed Central

    2013-01-01

    Background Adolescence, a period of psychological vulnerability, is a notably risky period for the development of psychopathologies, and the choice of the correct therapeutic approach is crucial for achieving remission. One of the therapies investigated in this context is electroconvulsive therapy (ECT). The present study reviews recent and classical aspects of ECT use in adolescents. Methods Systematic review, performed in November 2012, conforming to the PRISMA statement. Results Of the 212 retrieved articles, only 39 were included in the final sample. The reviewed studies report indications for ECT use in adolescents, evaluate the efficacy of this therapy with regard to remission, and explore the potential risks and complications of the procedure. Conclusions ECT use in adolescents is considered a highly effective option for treating several psychiatric disorders, achieving high remission rates and presenting few and relatively benign adverse effects. Risks can be mitigated by correct use of the technique and are considered minimal when compared with the efficacy of ECT in treating psychopathologies. PMID:23718899

  19. Assessment of Three “WHO” Patient Safety Solutions: Where Do We Stand and What Can We Do?

    PubMed Central

    Banihashemi, Sheida; Hatam, Nahid; Zand, Farid; Kharazmi, Erfan; Nasimi, Soheila; Askarian, Mehrdad

    2015-01-01

    Background: Most medical errors are preventable. The aim of this study was to compare the current implementation of three WHO patient safety solutions with the WHO-suggested actions and standards. Methods: Data collection forms and direct observation were used to determine the status of implementation of existing protocols, resources, and tools. Results: In the field of patient hand-over, there was no standardized approach. In the field of performance of the correct procedure at the correct body site, there were no safety checklists, guidelines, or educational content for informing patients and their families about the procedure. In the field of hand hygiene (HH), although the availability of necessary resources was acceptable, the availability of promotional HH posters and reminders was substandard. Conclusions: There are limitations of resources, protocols, and standard checklists in all three areas. We designed tools that will help both wards to improve patient safety by implementing the adapted WHO-suggested actions. PMID:26900434

  20. Wavefront Measurement in Ophthalmology

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl

    Wavefront sensing or aberration measurement in the eye is a key problem in refractive surgery and vision correction with laser. The accuracy of these measurements is critical for the outcome of the surgery. Practically all clinical methods use laser as a source of light. To better understand the background, we analyze the pre-laser techniques developed over centuries. They allowed new discoveries of the nature of the optical system of the eye, and many served as prototypes for laser-based wavefront sensing technologies. Hartmann's test was strengthened by Platt's lenslet matrix and the CCD two-dimensional photodetector acquired a new life as a Hartmann-Shack sensor in Heidelberg. Tscherning's aberroscope, invented in France, was transformed into a laser device known as a Dresden aberrometer, having seen its reincarnation in Germany with Seiler's help. The clinical ray tracing technique was brought to life by Molebny in Ukraine, and skiascopy was created by Fujieda in Japan. With the maturation of these technologies, new demands now arise for their wider implementation in optometry and vision correction with customized contact and intraocular lenses.

  1. An adaptive tensor voting algorithm combined with texture spectrum

    NASA Astrophysics Data System (ADS)

    Wang, Gang; Su, Qing-tang; Lü, Gao-huan; Zhang, Xiao-feng; Liu, Yu-huan; He, An-zhi

    2015-01-01

    An adaptive tensor voting algorithm combined with the texture spectrum is proposed. The image texture spectrum is used to obtain an adaptive scale parameter for the voting field. The texture information then modifies both the attenuation coefficient and the attenuation field, so that the algorithm extracts more salient and correct structures from the original image, consistent with human visual perception. At the same time, the proposed method improves edge extraction quality, efficiently reducing flocculent regions and making the image clearer. In an experiment on extracting pavement cracks, the original pavement image is processed by the proposed method combined with a significant-curve-feature thresholding procedure, and the resulting image displays the faint crack signals submerged in the complicated background efficiently and clearly.

  2. An AK-LDMeans algorithm based on image clustering

    NASA Astrophysics Data System (ADS)

    Chen, Huimin; Li, Xingwei; Zhang, Yongbin; Chen, Nan

    2018-03-01

    Clustering is an effective analytical technique for mining value from unlabelled data; its ultimate goal is to label unclassified data quickly and correctly. We use a road map intended for image processing as the experimental background. In this paper, we propose an AK-LDMeans algorithm that automatically locks the K value by constructing a Kcost polyline (an elbow-style cost curve), and then uses a long-distance, high-density method to select the clustering centres, replacing the traditional initial-centre selection and further improving the efficiency and accuracy of the traditional K-Means algorithm. The experimental results are compared with current clustering algorithms. The algorithm can provide an effective reference in the fields of image processing, machine vision and data mining.
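    As an illustration of the elbow-style K selection that the Kcost polyline suggests, the sketch below picks K at the point of maximum deviation from the chord of a normalised cost curve. This is only a minimal Python sketch under that interpretation; the authors' AK-LDMeans Kcost construction and long-distance high-density seeding are not reproduced here.

        import numpy as np
        from sklearn.cluster import KMeans

        def pick_k_by_elbow(X, k_max=10):
            # Within-cluster cost (inertia) for each candidate K.
            ks = np.arange(1, k_max + 1)
            costs = np.array([KMeans(n_clusters=k, n_init=10, random_state=0)
                              .fit(X).inertia_ for k in ks])
            # Normalise both axes, then take the point farthest from the chord
            # joining the first and last points of the cost polyline (the "elbow").
            kn = (ks - ks[0]) / (ks[-1] - ks[0])
            cn = (costs - costs.min()) / (costs.max() - costs.min())
            p1, p2 = np.array([kn[0], cn[0]]), np.array([kn[-1], cn[-1]])
            d = np.abs((p2[0] - p1[0]) * (cn - p1[1]) - (p2[1] - p1[1]) * (kn - p1[0]))
            d = d / np.linalg.norm(p2 - p1)
            return int(ks[np.argmax(d)])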

  3. HST/WFC3: understanding and mitigating radiation damage effects in the CCD detectors

    NASA Astrophysics Data System (ADS)

    Baggett, S. M.; Anderson, J.; Sosey, M.; Gosmeyer, C.; Bourque, M.; Bajaj, V.; Khandrika, H.; Martlin, C.

    2016-07-01

    At the heart of the Hubble Space Telescope Wide Field Camera 3 (HST/WFC3) UVIS channel is a 4096x4096 pixel e2v CCD array. While these detectors continue to perform extremely well after more than 7 years in low-earth orbit, the cumulative effects of radiation damage are becoming increasingly evident. The result is a continual increase of the hotpixel population and the progressive loss in charge-transfer efficiency (CTE) over time. The decline in CTE has two effects: (1) it reduces the detected source flux as the defects trap charge during readout and (2) it systematically shifts source centroids as the trapped charge is later released. The flux losses can be significant, particularly for faint sources in low background images. In this report, we summarize the radiation damage effects seen in WFC3/UVIS and the evolution of the CTE losses as a function of time, source brightness, and image-background level. In addition, we discuss the available mitigation options, including target placement within the field of view, empirical stellar photometric corrections, post-flash mode and an empirical pixel-based CTE correction. The application of a post-flash has been remarkably effective in WFC3 at reducing CTE losses in low-background images for a relatively small noise penalty. Currently, all WFC3 observers are encouraged to consider post-flash for images with low backgrounds. Finally, a pixel-based CTE correction is available for use after the images have been acquired. Similar to the software in use in the HST Advanced Camera for Surveys (ACS) pipeline, the algorithm employs an observationally-defined model of how much charge is captured and released in order to reconstruct the image. As of Feb 2016, the pixel-based CTE correction is part of the automated WFC3 calibration pipeline. Observers with pre-existing data may request their images from MAST (Mikulski Archive for Space Telescopes) to obtain the improved products.
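    The forward-model idea behind pixel-based CTE correction can be sketched generically: push a trial image through a model of how traps redistribute charge during readout, and nudge the trial image by the residual against the observation. The Python snippet below is a hedged illustration of that generic fixed-point scheme only; cte_forward_model is a placeholder, and the observationally calibrated trap model actually used for WFC3/ACS is far more detailed.

        import numpy as np

        def correct_cte(observed, cte_forward_model, n_iter=5):
            # Iteratively invert the readout-damage model: re-damage the current
            # estimate and add back the residual against the observed image.
            corrected = observed.astype(float).copy()
            for _ in range(n_iter):
                redamaged = cte_forward_model(corrected)
                corrected = corrected + (observed - redamaged)
            return corrected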

  4. Lipase-catalysed acylation of starch and determination of the degree of substitution by methanolysis and GC

    PubMed Central

    2010-01-01

    Background Natural polysaccharides such as starch are becoming increasingly interesting as renewable starting materials for the synthesis of biodegradable polymers using chemical or enzymatic methods. Given the complexity of polysaccharides, the analysis of reaction products is challenging. Results Esterification of starch with fatty acids has traditionally been monitored by saponification and back-titration, but in our experience this method is unreliable. Here we report a novel GC-based method for the fast and reliable quantitative determination of esterification. The method was used to monitor the enzymatic esterification of different starches with decanoic acid, using lipase from Thermomyces lanuginosus. The reaction showed a pronounced optimal water content of 1.25 mL per g starch, where a degree of substitution (DS) of 0.018 was obtained. Incomplete gelatinization probably accounts for lower conversion with less water. Conclusions Lipase-catalysed esterification of starch is feasible in aqueous gel systems, but attention to analytical methods is important to obtain correct DS values. PMID:21114817

  5. Security management techniques and evaluative checklists for security force effectiveness. Technical report (final) Sep 80-Jul 81

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schurman, D.L.; Datesman, G.H. Jr; Truitt, J.O.

    The report presents a system for evaluating and correcting deficiencies in security-force effectiveness in licensed nuclear facilities. There are four checklists which security managers can copy directly, or can use as guidelines for developing their own checklists. The checklists are keyed to corrective-action guides found in the body of the report. In addition to the corrective-action guides, the report gives background information on the nature of security systems and discussions of various special problems of the licensed nuclear industry.

  6. Electroweak radiative corrections to the top quark decay

    NASA Astrophysics Data System (ADS)

    Kuruma, Toshiyuki

    1993-12-01

    The top quark, once produced, should be an important window onto the electroweak symmetry-breaking sector. We compute electroweak radiative corrections to the decay process t → b + W⁺ in order to extract information on the Higgs sector and to fix the background in searches for a possible new physics contribution. The large Yukawa coupling of the top quark induces a new form factor through vertex corrections and causes a deviation from the tree-level longitudinal W-boson production fraction, but the effect is of order 1% or less for m_H < 1 TeV.

  7. Chandra ACIS-I particle background: an analytical model

    NASA Astrophysics Data System (ADS)

    Bartalucci, I.; Mazzotta, P.; Bourdin, H.; Vikhlinin, A.

    2014-06-01

    Aims: Imaging and spectroscopy of X-ray extended sources require a proper characterisation of the spatially unresolved background signal. This background includes sky and instrumental components, each of which is characterised by its own spatial and spectral behaviour. While the X-ray sky background has been extensively studied in previous work, here we analyse and model the instrumental background of the ACIS-I detector on board the Chandra X-ray observatory in very faint mode. Methods: Caused by the interaction of highly energetic particles with the detector, the ACIS-I instrumental background is spectrally characterised by the superposition of several fluorescence emission lines onto a continuum. To isolate its flux from any sky component, we fitted an analytical model of the continuum to observations performed in very faint mode with the detector in the stowed position, shielded from the sky, gathered over the eight-year period starting in 2001. The remaining emission lines were fitted to blank-sky observations of the same period. We found 11 emission lines. Analysing the spatial variation of the amplitude, energy and width of these lines further allowed us to infer that three of these lines are presumably due to an energy-correction artefact produced in the frame store. Results: We provide an analytical model that predicts the instrumental background with a precision of 2% in the continuum and 5% in the lines. We use this model to measure the flux of the unresolved cosmic X-ray background in the Chandra Deep Field South. We obtain a flux of 10.2^{+0.5}_{-0.4} × 10^-13 erg cm^-2 deg^-2 s^-1 for the [1-2] keV band and (3.8 ± 0.2) × 10^-12 erg cm^-2 deg^-2 s^-1 for the [2-8] keV band.

  8. Retrieval of background surface reflectance with BRD components from pre-running BRDF

    NASA Astrophysics Data System (ADS)

    Choi, Sungwon; Lee, Kyeong-Sang; Jin, Donghyun; Lee, Darae; Han, Kyung-Soo

    2016-10-01

    Many countries launch satellites to observe the Earth's surface. As the importance of surface remote sensing grows, surface reflectance has become a core parameter of the ground climate. However, observing surface reflectance from satellites has weaknesses, such as limited temporal resolution and sensitivity to viewing and solar angles. Bidirectional effects on surface reflectance introduce noise into the time series, and this noise can lead to errors when determining surface reflectance. To correct the bidirectional error, a correction model that normalizes the sensor data is necessary. The Bidirectional Reflectance Distribution Function (BRDF) provides a higher-accuracy way to correct the scattering components (isotropic, geometric and volumetric scattering). In this study, BRDF is applied to retrieve the Background Surface Reflectance (BSR) in two steps. The first step retrieves the Bidirectional Reflectance Distribution (BRD) coefficients: a pre-running BRDF is performed with observed surface reflectance from SPOT/VEGETATION (VGT-S1) and angular data to obtain the BRD coefficients describing the scattering terms. In the second step, BRDF is applied in the opposite direction with the BRD coefficients and angular data to retrieve the BSR. The resulting BSR is very similar to the VGT-S1 reflectance and appears adequate; the highest BSR reflectance does not exceed 0.4 in the blue channel, 0.45 in the red channel and 0.55 in the NIR channel. For validation, the BSR was compared with clear-sky pixels identified from the SPOT/VGT status map: the bias ranges from 0.0116 to 0.0158 and the RMSE from 0.0459 to 0.0545, which are reasonable results, confirming that the BSR is similar to VGT-S1. A weakness of this study is the presence of missing pixels in the BSR where too few observations were available to retrieve the BRD components. If these missing pixels are filled, the BSR should support more accurate retrieval of surface products derived from surface reflectance, such as cloud masking and aerosol retrieval.
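    The two-step retrieval described above amounts to a linear fit of a kernel-driven reflectance model followed by prediction at a reference geometry. The Python sketch below shows that pattern with generic isotropic/volumetric/geometric kernels; the kernel functions, weighting and quality screening used by the authors are not specified in this record, so k_vol and k_geo here are placeholders supplied by the caller.

        import numpy as np

        def fit_brd_coefficients(reflectance, k_vol, k_geo):
            # reflectance, k_vol, k_geo: 1-D arrays over the multi-angular observations.
            A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
            coeffs, *_ = np.linalg.lstsq(A, reflectance, rcond=None)
            return coeffs  # (f_iso, f_vol, f_geo)

        def predict_reflectance(coeffs, k_vol, k_geo):
            # Apply the fitted coefficients at any (e.g. normalized) geometry.
            f_iso, f_vol, f_geo = coeffs
            return f_iso + f_vol * k_vol + f_geo * k_geo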

  9. AutoCellSeg: robust automatic colony forming unit (CFU)/cell analysis using adaptive image segmentation and easy-to-use post-editing techniques.

    PubMed

    Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert

    2018-05-08

    In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems arising from drifting image-acquisition conditions, background noise and high variation in colony features demand a user-friendly, adaptive and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised, automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm that takes segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation. It also allows the user to correct results with a graphical interface. This publicly available tool outperforms tools such as OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
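    A minimal Python sketch of the multi-thresholding plus watershed idea is given below for orientation. It is not the AutoCellSeg implementation (which is MATLAB-based and adds feedback-driven plausibility checks and interactive post-editing); the class count, bright-colony assumption and peak spacing are illustrative choices.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import threshold_multiotsu
        from skimage.segmentation import watershed
        from skimage.feature import peak_local_max

        def segment_colonies(gray):
            # Foreground mask from the highest multi-Otsu class (assumes bright colonies).
            thresholds = threshold_multiotsu(gray, classes=3)
            mask = gray > thresholds[-1]
            # Split touching colonies with a distance-transform watershed.
            distance = ndi.distance_transform_edt(mask)
            peaks = peak_local_max(distance, labels=mask, min_distance=5)
            markers = np.zeros(gray.shape, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            return watershed(-distance, markers, mask=mask)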

  10. Peroxydisulfate Oxidation of L-Ascorbic Acid for Its Direct Spectrophotometric Determination in Dietary Supplements

    NASA Astrophysics Data System (ADS)

    Salkić, M.; Selimović, A.; Pašalić, H.; Keran, H.

    2014-03-01

    A selective and accurate direct spectrophotometric method was developed for the determination of L-ascorbic acid in dietary supplements. Background correction was based on the oxidation of L-ascorbic acid by potassium peroxydisulfate in an acidic medium. The molar absorptivity of the proposed method was 1.41 · 10^4 L/(mol · cm) at 265 nm. The method response was linear up to an L-ascorbic acid concentration of 12.00 μg/ml. The detection limit was 0.11 μg/ml, and the relative standard deviation was 0.9% (n = 7) for 8.00 μg/ml L-ascorbic acid. Other compounds commonly found in dietary supplements did not interfere with the detection of L-ascorbic acid. The proposed procedure was successfully applied to the determination of L-ascorbic acid in these supplements, and the results obtained agreed with those obtained by iodine titration.
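    A small worked illustration of the underlying background-corrected Beer-Lambert calculation is sketched below in Python, assuming the absorbance of the peroxydisulfate-oxidized solution serves as the blank at 265 nm; the function and its inputs are illustrative, with only the molar absorptivity taken from the abstract.

        # Constants: molar absorptivity from the abstract; path length and molar
        # mass of ascorbic acid are standard assumptions for this illustration.
        MOLAR_ABSORPTIVITY = 1.41e4   # L mol^-1 cm^-1 at 265 nm
        PATH_LENGTH_CM = 1.0
        MW_ASCORBIC_ACID = 176.12     # g/mol

        def ascorbic_acid_ug_per_ml(a_sample, a_oxidized_blank):
            # Background correction: subtract the absorbance of the oxidized solution.
            a_corrected = a_sample - a_oxidized_blank
            conc_mol_l = a_corrected / (MOLAR_ABSORPTIVITY * PATH_LENGTH_CM)
            # mol/L -> g/L, then g/L -> ug/mL (1 g/L = 1000 ug/mL).
            return conc_mol_l * MW_ASCORBIC_ACID * 1e3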

  11. Photographic photometry with Iris diaphragm photometers

    NASA Technical Reports Server (NTRS)

    Schaefer, B. E.

    1981-01-01

    A general method is presented for solving problems encountered in the analysis of Iris diaphragm photometer (IDP) data. The method is used to derive the general shape of the calibration curve, allowing both a more accurate fit to the IDP data for comparison stars and extrapolation to magnitude ranges for which no comparison stars are measured. The profile of incident starlight and the characteristic curve of the plate are both assumed and then used to derive the profile of the star image. An IDP reading is then determined for each star image. A procedure for correcting the effects of a nonconstant background fog level on the plate is also demonstrated. Additional applications of the method are made in the appendix to determine the relation between the radius of a photographic star image and the star's magnitude, and to predict the IDP reading of the 'point of optimum density'.

  12. Intra-individual variation in urinary iodine concentration: effect of statistical correction on population distribution using seasonal three-consecutive-day spot urine in children

    PubMed Central

    Ji, Xiaohong; Liu, Peng; Sun, Zhenqi; Su, Xiaohui; Wang, Wei; Gao, Yanhui; Sun, Dianjun

    2016-01-01

    Objective To determine the effect of statistical correction for intra-individual variation on estimated urinary iodine concentration (UIC) by sampling on 3 consecutive days in four seasons in children. Setting School-aged children from urban and rural primary schools in Harbin, Heilongjiang, China. Participants 748 and 640 children aged 8–11 years were recruited from urban and rural schools, respectively, in Harbin. Primary and secondary outcome measures Spot urine samples were collected once a day for 3 consecutive days in each season over 1 year. The UIC of the first day was corrected by two statistical correction methods: the average correction method (average of days 1 and 2; average of days 1, 2 and 3) and the variance correction method (UIC of day 1 corrected by two replicates and by three replicates). The variance correction method determined the SD between subjects (Sb) and within subjects (Sw) and calculated the correction coefficient Fi = Sb/(Sb + Sw/di), where di is the number of observations; the UIC of day 1 was then corrected using this coefficient. Results The variance correction method gave an overall Fi of 0.742 for 2 days' correction and 0.829 for 3 days' correction; the values for spring, summer, autumn and winter were 0.730, 0.684, 0.706 and 0.703 for 2 days' correction and 0.809, 0.742, 0.796 and 0.804 for 3 days' correction, respectively. After removal of the individual effect, the correlation coefficient between consecutive days was 0.224, and between non-consecutive days 0.050. Conclusions The variance correction method is effective for correcting intra-individual variation in estimated UIC following sampling on 3 consecutive days in four seasons in children. The method varies little between ages, sexes and urban or rural setting, but does vary between seasons. PMID:26920442
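    The correction coefficient can be sketched in a few lines of Python. The shrinkage-toward-the-group-mean form used below is a common regression-to-the-mean style correction and is an assumption here, since the full correction equation is not reproduced in this record; Sb and Sw are taken as the between- and within-subject SDs, as stated above.

        import numpy as np

        def correct_day1_uic(uic_days, di=3):
            # uic_days: array of shape (n_children, n_days) of spot UIC values.
            sb = uic_days.mean(axis=1).std(ddof=1)                 # between-subject SD
            sw = np.sqrt(np.mean(uic_days.var(axis=1, ddof=1)))    # within-subject SD
            fi = sb / (sb + sw / di)                               # correction coefficient
            day1 = uic_days[:, 0]
            group_mean = day1.mean()
            # Assumed form: shrink each day-1 value toward the group mean by Fi.
            return group_mean + fi * (day1 - group_mean)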

  13. Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.

    PubMed

    Kangasmaa, Tuija S; Sohlberg, Antti O

    2014-07-01

    Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved to be the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementational details. Two slightly different implementations of reconstruction-reprojection-based motion correction techniques were optimised for effective, good-quality motion correction and then compared with each other. The first of these methods (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, where the motion correction is done in projection space, whereas the second algorithm (Method 2) performed motion correction in reconstruction space. The parameters that were optimised include the type of cost function (squared difference, normalised cross-correlation and mutual information) that was used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupt projection datasets, which were generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small. The execution time for Method 2 was much longer than for Method 1, which limits its clinical usefulness. The mutual information cost function gave clearly the best results for all three motion sets for both correction methods. Three iterations were sufficient for a good-quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and a mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.
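    For reference, a standard histogram-based mutual-information cost between a measured projection and its reprojection can be sketched in Python as below; the bin count and other implementation details are illustrative and may differ from those optimised in the paper.

        import numpy as np

        def mutual_information(measured, reprojected, bins=32):
            # Joint histogram of the two projections, normalised to a joint pmf.
            joint, _, _ = np.histogram2d(measured.ravel(), reprojected.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nonzero = pxy > 0
            return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))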

  14. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    PubMed

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the image I_μb^AC reconstructed with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_μb^AC with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.
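    The scatter estimate itself reduces to a convolution and a scaling, as sketched in Python below. The Gaussian scatter kernel and the constant scatter fraction are placeholder assumptions; in the published method both the scatter function and the image-based scatter fraction are derived from physical parameters rather than fixed constants.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def ibsc_correct(image_ac, kernel_sigma=8.0, scatter_fraction=0.3):
            # Estimate scatter as a blurred, scaled copy of the attenuation-corrected
            # image, then subtract it (clipping negative values to zero).
            scatter_estimate = scatter_fraction * gaussian_filter(image_ac, kernel_sigma)
            return np.clip(image_ac - scatter_estimate, 0, None)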

  15. New decoding methods of interleaved burst error-correcting codes

    NASA Astrophysics Data System (ADS)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single-burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high burst-error-correction capability with less decoding delay. By generalizing this method, a probabilistic method of multiple (m-fold) burst error correction is obtained. After estimating the burst error positions using the syndrome correlation of subcodes which are interleaved m-fold burst-error-detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.

  16. Performance of the Tariff Method: validation of a simple additive algorithm for analysis of verbal autopsies

    PubMed Central

    2011-01-01

    Background Verbal autopsies provide valuable information for studying mortality patterns in populations that lack reliable vital registration data. Methods for transforming verbal autopsy results into meaningful information for health workers and policymakers, however, are often costly or complicated to use. We present a simple additive algorithm, the Tariff Method (termed Tariff), which can be used for assigning individual cause of death and for determining cause-specific mortality fractions (CSMFs) from verbal autopsy data. Methods Tariff calculates a score, or "tariff," for each cause, for each sign/symptom, across a pool of validated verbal autopsy data. The tariffs are summed for a given response pattern in a verbal autopsy, and this sum (score) provides the basis for predicting the cause of death in a dataset. We implemented this algorithm and evaluated the method's predictive ability, both in terms of chance-corrected concordance at the individual cause assignment level and in terms of CSMF accuracy at the population level. The analysis was conducted separately for adult, child, and neonatal verbal autopsies across 500 pairs of train-test validation verbal autopsy data. Results Tariff is capable of outperforming physician-certified verbal autopsy in most cases. In terms of chance-corrected concordance, the method achieves 44.5% in adults, 39% in children, and 23.9% in neonates. CSMF accuracy was 0.745 in adults, 0.709 in children, and 0.679 in neonates. Conclusions Verbal autopsies can be an efficient means of obtaining cause of death data, and Tariff provides an intuitive, reliable method for generating individual cause assignment and CSMFs. The method is transparent and flexible and can be readily implemented by users without training in statistics or computer science. PMID:21816107
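    The additive scoring step at the heart of Tariff is simple to sketch: sum the tariffs of the endorsed signs/symptoms for each cause and assign the highest-scoring cause. The Python snippet below illustrates only that step, with a caller-supplied tariff matrix; deriving the tariffs from validated data and the population-level CSMF estimation are not shown.

        import numpy as np

        def assign_cause(responses, tariff_matrix, causes):
            # responses: binary vector of length n_symptoms for one verbal autopsy.
            # tariff_matrix: array of shape (n_causes, n_symptoms) of tariff scores.
            scores = tariff_matrix @ responses
            return causes[int(np.argmax(scores))], scores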

  17. One output function: a misconception of students studying digital systems - a case study

    NASA Astrophysics Data System (ADS)

    Trotskovsky, E.; Sabag, N.

    2015-05-01

    Background: Learning processes are usually characterized by students' misunderstandings and misconceptions. Engineering educators intend to help their students overcome their misconceptions and achieve correct understanding of the concept. This paper describes a misconception in digital systems held by many students who believe that combinational logic circuits should have only one output. Purpose: The current study aims to investigate the roots of the misconception about the one-output function and the pedagogical methods that can help students overcome it. Sample: Three hundred and eighty-one students in the Departments of Electrical and Electronics and Mechanical Engineering at an academic engineering college, who learned the same topics of a digital combinational system, participated in the research. Design and method: In the initial research stage, students were taught according to the traditional method - first to design a one-output combinational logic system, and then to implement a system with a number of output functions. In the main stage, an experimental group was taught using a new method whereby they were shown how to implement a system with several output functions prior to learning about one-output systems. A control group was taught using the traditional method. In the replication stage (the third stage), an experimental group was taught using the new method. A mixed research methodology was used to examine the results of the new learning method. Results: Quantitative research showed that the new teaching approach resulted in a statistically significant decrease in student errors, and qualitative research revealed students' erroneous thinking patterns. Conclusions: It can be assumed that the traditional teaching method generates an incorrect mental model of the one-output function among students. The new pedagogical approach prevented the creation of an erroneous mental model and helped students develop the correct conceptual understanding.

  18. Analysis of IAEA Environmental Samples for Plutonium and Uranium by ICP/MS in Support Of International Safeguards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farmer, Orville T.; Olsen, Khris B.; Thomas, May-Lin P.

    2008-05-01

    A method for the separation and determination of total and isotopic uranium and plutonium by ICP-MS was developed for IAEA samples on cellulose-based media. Preparation of the IAEA samples involved a series of redox chemistries and separations using TRU® resin (Eichrom). The sample introduction system, an APEX nebulizer (Elemental Scientific, Inc), provided enhanced nebulization for a several-fold increase in sensitivity and reduction in background. Application of mass bias (ALPHA) correction factors greatly improved the precision of the data. By combining the enhancements of chemical separation, instrumentation and data processing, detection levels for uranium and plutonium approached high attogram levels.

  19. Determination of vitamin B6 by means of differential spectrophotometry in pharmaceutical preparations in the presence of magnesium compounds.

    PubMed

    Muszalska, Izabela; Puchalska, Marta; Sobczak, Agnieszka

    2011-01-01

    The content of pyridoxine hydrochloride in two-component pharmaceutical preparations containing various magnesium compounds was examined. A UV derivative (differential) spectrophotometric method was devised and compared with the reference method of high-performance liquid chromatography (HPLC). Analysis of the absorbance spectra (A) and their first (D1) and second (D2) derivatives made it possible to establish the appropriate analytical wavelengths (A: 290 nm; D1: 302 nm; D2: 308 nm). It was shown that spectrum differentiation significantly corrects errors resulting from the overlapping background, especially when magnesium hydroaspartate, lactate or lactogluconate is present together with vitamin B6.
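    Forming the first- and second-derivative spectra (D1, D2) numerically is commonly done with a Savitzky-Golay filter, as in the Python sketch below; the window length and polynomial order are illustrative choices, not parameters reported by the authors.

        from scipy.signal import savgol_filter

        def derivative_spectra(absorbance, wavelength_step_nm, window=11, polyorder=3):
            # First and second derivatives of the absorbance spectrum with respect
            # to wavelength, smoothed by a Savitzky-Golay filter.
            d1 = savgol_filter(absorbance, window, polyorder, deriv=1, delta=wavelength_step_nm)
            d2 = savgol_filter(absorbance, window, polyorder, deriv=2, delta=wavelength_step_nm)
            return d1, d2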

  20. Hawking radiation of charged Dirac particles from a Kerr-Newman black hole

    NASA Astrophysics Data System (ADS)

    Zhou, Shiwei; Liu, Wenbiao

    2008-05-01

    Charged Dirac particles' Hawking radiation from a Kerr-Newman black hole is calculated using Damour-Ruffini's method. When energy conservation and the backreaction of particles on the space-time are considered, the emission spectrum is no longer purely thermal. The leading term is exactly the Boltzmann factor, and the deviation from a purely thermal spectrum can carry some information out, which can be treated as a possible resolution of the information loss paradox. The result can also be interpreted as a quantum-corrected radiation temperature, which depends on the black hole background and on the radiated particle's energy, angular momentum, and charge.

  1. Spotting effect in microarray experiments

    PubMed Central

    Mary-Huard, Tristan; Daudin, Jean-Jacques; Robin, Stéphane; Bitton, Frédérique; Cabannes, Eric; Hilson, Pierre

    2004-01-01

    Background Microarray data must be normalized because they suffer from multiple biases. We have identified a source of spatial experimental variability that significantly affects data obtained with Cy3/Cy5 spotted glass arrays. It yields a periodic pattern altering both signal (Cy3/Cy5 ratio) and intensity across the array. Results Using the variogram, a geostatistical tool, we characterized the observed variability, called here the spotting effect because it most probably arises during steps in the array printing procedure. Conclusions The spotting effect is not appropriately corrected by current normalization methods, even by those addressing spatial variability. Importantly, the spotting effect may alter differential and clustering analysis. PMID:15151695

  2. Variations of the liver standardized uptake value in relation to background blood metabolism: An 2-[18F]Fluoro-2-deoxy-D-glucose positron emission tomography/computed tomography study in a large population from China.

    PubMed

    Liu, Guobing; Hu, Yan; Zhao, Yanzhao; Yu, Haojun; Hu, Pengcheng; Shi, Hongcheng

    2018-05-01

    To investigate the influence of background blood metabolism on liver uptake of 2-[18F]fluoro-2-deoxy-D-glucose (18F-FDG) and search for an appropriate corrective method. Positron emission tomography/computed tomography (PET/CT) and common serological biochemical tests of 633 healthy people were collected retrospectively. The mean standardized uptake values (SUV) of the liver, hepatic artery, and portal vein (i.e., SUVL, SUVA, and SUVP) were measured. SUVL/A was calculated as SUVL/SUVA, and SUVL/P as SUVL/SUVP. The SUV of liver parenchyma (SUVLP) was calculated as SUVL - 0.3 × (0.75 × SUVP + 0.25 × SUVA). The coefficients of variation (CV) of SUVL, SUVL/A, SUVL/P, and SUVLP were compared to assess their interindividual variations. Univariate and multivariate analyses were performed to identify the vulnerability of these SUV indexes to common factors assessed by serological liver function tests. SUVLP was significantly larger than SUVL (2.19 ± 0.497 vs 1.88 ± 0.495, P < .001), while SUVL/P was significantly smaller than SUVL (1.72 ± 0.454 vs 1.88 ± 0.495, P < .001). The difference between SUVL/A and SUVL was not significant (1.83 ± 0.500 vs 1.88 ± 0.495, P = .130). The CV of SUVLP (22.7%) was significantly smaller than that of SUVL (26.3%, P < .001), while the CVs of SUVL/A (27.2%) and SUVL/P (26.4%) did not differ from that of SUVL (P = .429 and .929, respectively). Fewer variables independently influenced SUVLP than influenced SUVL, SUVL/A, and SUVL/P: only aspartate aminotransferase, body mass index, and total cholesterol (all P < .05). The activity of background blood influences the variation of the liver SUV. SUVLP may be an alternative corrective index to reduce this influence, as its interindividual variation and its vulnerability to common factors from serological liver function tests are lower than those of the commonly used SUVL.
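    The corrective indexes defined in the abstract can be restated compactly, as in the Python sketch below; the blood term 0.75 × SUVP + 0.25 × SUVA and the 0.3 factor are taken directly from the definition of SUVLP given above, while the helper function itself is only an illustration.

        def liver_suv_indexes(suv_liver, suv_portal, suv_artery):
            # Weighted blood-pool SUV (portal vein and hepatic artery contributions).
            suv_blood = 0.75 * suv_portal + 0.25 * suv_artery
            suv_la_ratio = suv_liver / suv_artery          # SUVL/A
            suv_lp_ratio = suv_liver / suv_portal          # SUVL/P
            suv_parenchyma = suv_liver - 0.3 * suv_blood   # SUVLP
            return suv_parenchyma, suv_la_ratio, suv_lp_ratio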

  3. A Quantitative and Standardized Method for the Evaluation of Choroidal Neovascularization Using MICRON III Fluorescein Angiograms in Rats

    PubMed Central

    Wigg, Jonathan P.; Zhang, Hong; Yang, Dong

    2015-01-01

    Introduction In-vivo imaging of choroidal neovascularization (CNV) has been increasingly recognized as a valuable tool in the investigation of age-related macular degeneration (AMD) in both clinical and basic research applications. Arguably the most widely utilised model replicating AMD is laser-generated CNV by rupture of Bruch's membrane in rodents. Heretofore, CNV evaluation via in-vivo imaging techniques has been hamstrung by the lack of an appropriate rodent fundus camera and of a standardised analysis method. The aim of this study was to establish a simple, quantifiable method of fluorescein fundus angiogram (FFA) image analysis for CNV lesions. Methods Laser was applied to 32 Brown Norway rats; FFA images were taken using a rodent-specific fundus camera (Micron III, Phoenix Laboratories) over 3 weeks and compared to conventional ex-vivo CNV assessment. FFA images acquired with fluorescein administered by intraperitoneal injection and by intravenous injection were compared, and the administration route was shown to greatly influence lesion properties. Using commonly available software packages, FFA images were assessed for CNV and chorioretinal burn lesion area by manually outlining the maximum border of each lesion and normalising against the optic nerve head. Net fluorescence above background and the derived area-corrected lesion intensity were calculated. Results CNV lesions of rats treated with anti-VEGF antibody were significantly smaller in normalised lesion area (p<0.001) and fluorescent intensity (p<0.001) than those of the PBS-treated controls two weeks post laser. The calculated area-corrected lesion intensity was significantly smaller (p<0.001) in anti-VEGF-treated animals at 2 and 3 weeks post laser. The results obtained using FFA correlated with, and were confirmed by, conventional lesion area measurements from isolectin-stained choroidal flatmounts, where lesions of anti-VEGF-treated rats were significantly smaller at 2 weeks (p = 0.049) and 3 weeks (p<0.001) post laser. Conclusion The presented method of in-vivo FFA quantification of CNV, including corrections for acquisition variables, using the Micron III system and commonly available software establishes a reliable method for detecting and quantifying CNV, enabling longitudinal studies, and represents an important alternative to conventional CNV quantification methods. PMID:26024231

  4. Comparing bias correction methods in downscaling meteorological variables for a hydrologic impact study in an arid area in China

    NASA Astrophysics Data System (ADS)

    Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.

    2015-06-01

    Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for a regional impact study of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. Precipitation correction methods applied include linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, prior to being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. The results show (1) streamflows are sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased from observed meteorological data, and its use for streamflow simulations results in large biases from observed streamflow, and all bias correction methods effectively improved these simulations; (3) for precipitation, PT and QM methods performed equally best in correcting the frequency-based indices (e.g., standard deviation, percentile values) while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R2); (4) for temperature, all correction methods performed equally well in correcting raw temperature; and (5) for simulated streamflow, precipitation correction methods have more significant influence than temperature correction methods and the performances of streamflow simulations are consistent with those of corrected precipitation; i.e., the PT and QM methods performed equally best in correcting flow duration curve and peak flow while the LOCI method performed best in terms of the time-series-based indices. The case study is for an arid area in China based on a specific RCM and hydrologic model, but the methodology and some results can be applied to other areas and models.
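    As an example of one of the compared precipitation methods, empirical quantile mapping (QM) can be sketched in Python as below: each simulated value is passed through the simulated calibration-period CDF and read back through the observed CDF. This is a generic sketch only; wet-day thresholds, parametric fits and the exact QM variant used in the study are omitted.

        import numpy as np

        def quantile_map(sim_future, sim_calibration, obs_calibration):
            # Empirical quantiles of the calibration-period simulation and observations.
            quantiles = np.linspace(0.01, 0.99, 99)
            sim_q = np.quantile(sim_calibration, quantiles)
            obs_q = np.quantile(obs_calibration, quantiles)
            # Look up each future simulated value's quantile in the simulated CDF,
            # then read the observed value at the same quantile.
            ranks = np.interp(sim_future, sim_q, quantiles)
            return np.interp(ranks, quantiles, obs_q)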

  5. Electroweak Corrections to pp→μ^{+}μ^{-}e^{+}e^{-}+X at the LHC: A Higgs Boson Background Study.

    PubMed

    Biedermann, B; Denner, A; Dittmaier, S; Hofer, L; Jäger, B

    2016-04-22

    The first complete calculation of the next-to-leading-order electroweak corrections to four-lepton production at the LHC is presented, where all off-shell effects of intermediate Z bosons and photons are taken into account. Focusing on the mixed final state μ^{+}μ^{-}e^{+}e^{-}, we study differential cross sections that are particularly interesting for Higgs boson analyses. The electroweak corrections are divided into photonic and purely weak corrections. The former exhibit patterns familiar from similar W- or Z-boson production processes with very large radiative tails near resonances and kinematical shoulders. The weak corrections are of the generic size of 5% and show interesting variations, in particular, a sign change between the regions of resonant Z-pair production and the Higgs signal.

  6. [Microhemocirculation and its correction in duodenal ulcer during period of rehabilitation].

    PubMed

    Parpibaeva, D A; Zakirkhodzhaev, Sh Ia; Sagatov, T A; Shakirova, D T; Narziev, N M

    2009-01-01

    The aim of this research was to study the morphological and functional state of the microcirculatory bed in duodenal ulcer during the rehabilitation period, against the background of standard antiulcer therapy (group 1) and of additional treatment with Vazonit (group 2), under clinical conditions. EDU in animals results in marked microcirculatory disturbances in the duodenum that depend on the duration of the ulcer process. Hypoxia, associated with capillary stasis and venous congestion, is a significant factor. Impairment of organ blood flow results in metabolic damage to tissue structures. The results obtained are evidence of a significant correction of the state of the microcirculatory bed and of improved regeneration and reparation processes. Vazonit alleviates the microcirculatory disorder and improves the rheological properties of blood, restoring the macro- and microangiopathic changes of the hemocirculatory bed.

  7. Average luminosity distance in inhomogeneous universes

    NASA Astrophysics Data System (ADS)

    Kostov, Valentin Angelov

    Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic averaging), and is thus more directly applicable to our observations. Unlike previous studies, the averaging is exact, non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovae inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. For voids aligned in a certain direction, there is a cumulative gravitational lensing correction to the distance modulus that increases with redshift. That correction is present even for small voids and depends on the density contrast of the voids, not on their radius. Averaging over all directions destroys the cumulative correction even in a non-randomized simple cubic lattice of voids. Despite the well-known argument for photon flux conservation, the average distance modulus correction at low redshifts is not zero, due to the peculiar velocities. A formula for the maximum possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that (1) have approximately constant densities in their interior and walls, and (2) are not in a deep non-linear regime. The actual average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximum. That is traced to cancellations between the corrections coming from the fronts and backs of different voids at the same redshift from the observer. The calculated correction at low redshifts allows one to readily predict the redshift at which the averaged fluctuation in the Hubble diagram is below a required precision, and suggests a method to extract the background Hubble constant from low-redshift data without the need to correct for peculiar velocities.

  8. Eye-motion-corrected optical coherence tomography angiography using Lissajous scanning.

    PubMed

    Chen, Yiwei; Hong, Young-Joo; Makita, Shuichi; Yasuno, Yoshiaki

    2018-03-01

    To correct eye motion artifacts in en face optical coherence tomography angiography (OCT-A) images, a Lissajous scanning method with subsequent software-based motion correction is proposed. The standard Lissajous scanning pattern is modified to be compatible with OCT-A and a corresponding motion correction algorithm is designed. The effectiveness of our method was demonstrated by comparing en face OCT-A images with and without motion correction. The method was further validated by comparing motion-corrected images with scanning laser ophthalmoscopy images, and the repeatability of the method was evaluated using a checkerboard image. A motion-corrected en face OCT-A image from a blinking case is presented to demonstrate the ability of the method to deal with eye blinking. Results show that the method can produce accurate motion-free en face OCT-A images of the posterior segment of the eye in vivo.

  9. Organizational stressors associated with job stress and burnout in correctional officers: a systematic review

    PubMed Central

    2013-01-01

    Background In adult correctional facilities, correctional officers (COs) are responsible for the safety and security of the facility in addition to aiding in offender rehabilitation and preventing recidivism. COs experience higher rates of job stress and burnout that stem from organizational stressors, leading to negative outcomes for not only the CO but the organization as well. Effective interventions could aim at targeting organizational stressors in order to reduce these negative outcomes as well as COs’ job stress and burnout. This paper fills a gap in the organizational stress literature among COs by systematically reviewing the relationship between organizational stressors and CO stress and burnout in adult correctional facilities. In doing so, the present review identifies areas that organizational interventions can target in order to reduce CO job stress and burnout. Methods A systematic search of the literature was conducted using Medline, PsycINFO, Criminal Justice Abstracts, and Sociological Abstracts. All retrieved articles were independently screened based on criteria developed a priori. All included articles underwent quality assessment. Organizational stressors were categorized according to Cooper and Marshall’s (1976) model of job stress. Results The systematic review yielded 8 studies that met all inclusion and quality assessment criteria. The five categories of organizational stressors among correctional officers are: stressors intrinsic to the job, role in the organization, rewards at work, supervisory relationships at work and the organizational structure and climate. The organizational structure and climate was demonstrated to have the most consistent relationship with CO job stress and burnout. Conclusions The results of this review indicate that the organizational structure and climate of correctional institutions has the most consistent relationship with COs’ job stress and burnout. Limitations of the studies reviewed include the cross-sectional design and the use of varying measures for organizational stressors. The results of this review indicate that interventions should aim to improve the organizational structure and climate of the correctional facility by improving communication between management and COs. PMID:23356379

  10. Assessment of the knowledge of graphical symbols labelled on malaria rapid diagnostic tests in four international settings

    PubMed Central

    2011-01-01

    Background Graphical symbols on in vitro diagnostics (IVD symbols) replace the need for text in different languages and are used on malaria rapid diagnostic tests (RDTs) marketed worldwide. The present study assessed the comprehension of IVD symbols labelled on malaria RDT kits among laboratory staff in four different countries. Methods Participants (n = 293) in Belgium (n = 96), the Democratic Republic of the Congo (DRC, n = 87), Cambodia (n = 59) and Cuba (n = 51) were presented with an anonymous questionnaire with IVD symbols extracted from ISO 15223 and EN 980 presented as stand-alone symbols (n = 18) and in context (affixed on RDT packages, n = 16). Responses were open-ended and scored for correctness by local professionals. Results Presented as stand-alone, three and five IVD symbols were correctly scored for comprehension by 67% and 50% of participants; when contextually presented, five and seven symbols reached the 67% and 50% correct score respectively. 'Batch code' scored best (correctly scored by 71.3% of participants when presented as stand-alone), 'Authorized representative in the European Community' scored worst (1.4% correct). Another six IVD symbols were scored correctly by less than 10% of participants: 'Do not reuse', 'In vitro diagnostic medical device', 'Sufficient for', 'Date of manufacture', 'Authorised representative in EC', and 'Do not use if package is damaged'. Participants in Belgium and Cuba both scored six symbols above the 67% criterion, participants from DRC and Cambodia scored only two and one symbols above this criterion. Low correct scores were observed for safety-related IVD symbols, such as for 'Biological Risk' (42.7%) and 'Do not reuse' (10.9%). Conclusion Comprehension of IVD symbols on RDTs among laboratory staff in four international settings was unsatisfactory. Administrative and outreach procedures should be undertaken to assure their acquaintance by end-users. PMID:22047089

  11. Influence of uncorrected refractive error and unmet refractive error on visual impairment in a Brazilian population

    PubMed Central

    2014-01-01

    Background The World Health Organization (WHO) definitions of blindness and visual impairment have largely been based on best-corrected visual acuity, excluding uncorrected refractive error (URE) as a cause of visual impairment. Recently, URE was included as a cause of visual impairment, emphasizing that the burden of visual impairment due to refractive error (RE) worldwide is substantially higher. The purpose of the present study is to determine the reversal of visual impairment and blindness in the population after correcting RE, and possible associations between RE and individual characteristics. Methods A cross-sectional study was conducted in nine counties of the western region of the state of São Paulo, using systematic and random sampling of households between March 2004 and July 2005. Individuals aged more than 1 year were included and were evaluated for demographic data, eye complaints, history, and eye examination, including non-corrected visual acuity (NCVA), best-corrected visual acuity (BCVA), and automatic and manual refractive examination. URE was defined as NCVA > 0.15 logMAR with BCVA ≤ 0.15 logMAR after refractive correction; unmet refractive error (UREN) was defined as visual impairment or blindness (NCVA > 0.5 logMAR) with BCVA ≤ 0.5 logMAR after optical correction. Results A total of 70.2% of subjects had normal NCVA. URE was detected in 13.8%. A prevalence of 4.6% of optically reversible low vision and 1.8% of blindness reversible by optical correction was found. UREN was detected in 6.5% of individuals, more frequently observed in women over the age of 50 and in carriers of higher RE. Visual impairment related to eye diseases is not reversible with spectacles. Using multivariate analysis, associations of URE and UREN with sex, age and RE were observed. Conclusion RE is an important cause of reversible blindness and low vision in the Brazilian population. PMID:24965318

  12. Measuring 226Ra/228Ra in Oceanic Lavas by MC-ICPMS

    NASA Astrophysics Data System (ADS)

    Standish, J. J.; Sims, K.; Ball, L.; Blusztajn, J.

    2007-12-01

    238U-230Th-226Ra disequilibrium in volcanic rocks provides an important and unique tool to evaluate timescales of recent magmatic processes. Determination of 230Th-226Ra disequilibria requires measurement of U and Th isotopes and concentrations as well as measurement of 226Ra. While measurement of U and Th by ICPMS is now well established, few published studies documenting 226Ra measurement via ICPMS exist. Using 228Ra as an isotope spike we have investigated two ion-counting methods: a 'peak-hopping' routine, where 226Ra and 228Ra are measured in sequence on the central discrete-dynode ETP secondary electron multiplier (SEM), and simultaneous measurement of 226Ra and 228Ra on two multiple ion-counter system (MICS) channeltron-type detectors mounted on the low end of the collector block. Here we present 226Ra measurement by isotope dilution using the Thermo Fisher NEPTUNE MC-ICPMS. Analysis of external rock standards TML and AThO along with mid-ocean ridge basalt (MORB) and ocean island basalt (OIB) samples shows three issues that need to be considered when making precise and accurate Ra measurements: 1) mass bias, 2) background, and 3) relative efficiencies of the detectors when measuring in MICS mode. Due to the absence of an established 226Ra/228Ra standard, we have used U reference material NBL-112A to monitor mass bias. Although Ball et al. (in press) have shown that U does not serve as an adequate proxy for Th (and thus likely not for Ra either), measurements of rock standards TML and AThO are repeatedly in equilibrium within the uncertainty of the measurements (where total uncertainty includes propagation of the uncertainty in the 226Ra standard used for calibrating the 228Ra spike). For this application, U is an adequate proxy for Ra mass bias at the 1% uncertainty level. The more important issue is the background correction. Because of the extensive chemistry required to separate and purify Ra (typically at the fg/g level in volcanic rocks), we observe large ambient backgrounds using both ion-counting techniques, which can significantly influence the measured 226Ra/228Ra ratio. Ra off-peak backgrounds need to be measured explicitly and quantitatively corrected. One advantage of using a 'peak-hopping' routine on the central SEM is the optional use of the high abundance sensitivity lens or repelling potential quadrupole (RPQ). This lens virtually eliminates the ambient background and significantly enhances the signal-to-noise ratio with only a small decrease in Ra ion transmission. Even with the diminished background levels observed using 'peak-hopping' on the SEM with the RPQ, accurate measurement of Ra isotopes requires off-peak background measurement. Finally, when using MICS it is important to account for the relative efficiency of the detectors. Multiple ion counting is, in principle, preferable to 'peak-hopping' because more time is spent counting each individual isotope. However, our results illustrate that proper calibration of detector yields requires dynamic switching of 226Ra between the two ion counters. This negates the inherent advantage of multiple ion counting. Therefore, when considering mass bias, background correction, and detector gain calibration, we conclude that 'peak-hopping' on the central SEM with the RPQ abundance filter is the preferred technique for 226Ra/228Ra isotopic measurement on the Neptune MC-ICPMS.
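    The bookkeeping for the off-peak background subtraction and ratio calculation can be sketched in Python as below; the function and its arguments are illustrative, with the mass-bias and relative detector-gain factors assumed to be determined externally (e.g., from NBL-112A and the dynamic ion-counter switching described above).

        def background_corrected_ratio(cps_226, bkg_226, cps_228, bkg_228,
                                       mass_bias=1.0, gain_228_over_226=1.0):
            # Net count rates: on-peak minus explicitly measured off-peak background.
            net_226 = cps_226 - bkg_226
            net_228 = cps_228 - bkg_228
            # Correct for the relative efficiency of the two detectors, then apply
            # the externally determined mass-bias factor.
            raw_ratio = net_226 / (net_228 * gain_228_over_226)
            return raw_ratio * mass_bias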

  13. Evidence-Based Efficacy of Autologous Grated Cartilage in Primary and Secondary Rhinoplasty

    PubMed Central

    Manafi, Ali; Kaviani, Ali; Hamedi, Zahra Sadat; Rajabiani, Afsaneh; Manafi, Navid

    2017-01-01

    BACKGROUND There are numerous methods to mold and shape cartilage grafts for use in rhinoplasty, each with its own advantages and disadvantages. We introduce a new method for cartilage shaping with long-lasting effects, confirmed by follow-up examination and pathologic evaluation. METHODS Grated cartilage was used in 483 patients. In 89 cases it was wrapped in fascia, and in 394 patients it was used as a filler per se or in contiguity with solid structural grafts. In 51 patients the operation was a primary rhinoplasty, and 432 patients underwent secondary rhinoplasty. The mean postoperative follow-up was 2.8 years. Graft viability, capability to maintain nearly the original volume, and general durability were assessed. RESULTS Out of 483 patients, only 23 cases (4.7%) needed later correction. In 11 cases (2%), this was due to overcorrection and some minor imperfections. In the remaining 12 cases (2%), more augmentation was needed, probably due to some degree of graft resorption. Three of these 12 patients were corrected by outpatient shaved-cartilage injection. CONCLUSION Given the very low revision rate (less than 5%), we strongly recommend our grated cartilage graft for use in primary and secondary rhinoplasty. Our study showed that patient and surgeon satisfaction can be achieved with a high degree of confidence. PMID:28713702

  14. NURD: an implementation of a new method to estimate isoform expression from non-uniform RNA-seq data

    PubMed Central

    2013-01-01

    Background RNA-Seq technology has been used widely in transcriptome studies, and one of its most important applications is to estimate the expression level of genes and their alternative splicing isoforms. Several algorithms have been published to estimate expression based on different models. Recently, Wu et al. published a method that can accurately estimate isoform-level expression by considering position-related sequencing biases using nonparametric models. The method has advantages in handling different read distributions, but there has not been an efficient program implementing this algorithm. Results We developed an efficient implementation of the algorithm in the program NURD. It uses a binary interval search algorithm. The program can correct both the global tendency of sequencing bias in the data and the local sequencing bias specific to each gene. The correction makes the isoform expression estimation more reliable under various read distributions. The implementation is computationally efficient in both memory cost and running time and can be readily scaled up for huge datasets. Conclusion NURD is an efficient and reliable tool for estimating isoform expression levels. Given the read mapping results and a gene annotation file, NURD outputs the expression estimates. The package is freely available for academic use at http://bioinfo.au.tsinghua.edu.cn/software/NURD/. PMID:23837734

  15. Development and Validation of a Qualitative Method for Target Screening of 448 Pesticide Residues in Fruits and Vegetables Using UHPLC/ESI Q-Orbitrap Based on Data-Independent Acquisition and Compound Database.

    PubMed

    Wang, Jian; Chow, Willis; Chang, James; Wong, Jon W

    2017-01-18

    A semiautomated qualitative method for target screening of 448 pesticide residues in fruits and vegetables was developed and validated using ultrahigh-performance liquid chromatography coupled with electrospray ionization quadrupole Orbitrap high-resolution mass spectrometry (UHPLC/ESI Q-Orbitrap). The Q-Orbitrap Full MS/dd-MS2 (data-dependent acquisition) mode was used to acquire product-ion spectra of individual pesticides to build a compound database or MS library, while its Full MS/DIA (data-independent acquisition) mode was utilized for sample data acquisition from fruit and vegetable matrices fortified with pesticides at 10 and 100 μg/kg for target screening purposes. Accurate mass, retention time, and response threshold were the three key parameters in the compound database used to detect incurred pesticide residues in samples. The concepts and practical aspects of in-spectrum mass correction or solvent background lock-mass correction, retention time alignment, and response threshold adjustment are discussed in the context of building a functional and working compound database for target screening. The validated target screening method is capable of screening at least 94% and 99% of the 448 pesticides at 10 and 100 μg/kg, respectively, in fruits and vegetables without having to evaluate every compound manually during data processing, which significantly reduces the workload in routine practice.
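
    As a rough illustration of how the three database parameters (accurate mass, retention time, response threshold) drive detection, a minimal matching sketch is given below; the data structures, tolerances, and field names are assumptions, not the validated workflow.

```python
# A minimal sketch of database-driven target screening against a simple in-memory
# compound list; tolerances and dictionary keys are illustrative assumptions.
def screen(features, database, ppm_tol=5.0, rt_tol=0.5):
    """Flag database compounds whose accurate mass and retention time match a detected feature."""
    hits = []
    for cmpd in database:            # each entry: {"name", "mz", "rt", "threshold"}
        for feat in features:        # each entry: {"mz", "rt", "intensity"}
            ppm_err = abs(feat["mz"] - cmpd["mz"]) / cmpd["mz"] * 1e6
            if (ppm_err <= ppm_tol
                    and abs(feat["rt"] - cmpd["rt"]) <= rt_tol
                    and feat["intensity"] >= cmpd["threshold"]):
                hits.append((cmpd["name"], feat["mz"], feat["rt"]))
                break                # one matching feature is enough to report the compound
    return hits
```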

  16. Preferred color correction for digital LCD TVs

    NASA Astrophysics Data System (ADS)

    Kim, Kyoung Tae; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho

    2009-01-01

    Instead of colorimetric color reproduction, preferred color correction is applied in digital TVs to improve subjective image quality. The first step of preferred color correction is to survey the preferred color coordinates of memory colors, which can be achieved by off-line human visual tests. The next step is to extract pixels of memory colors representing skin, grass, and sky. For the detected pixels, colors are shifted towards the desired coordinates identified in advance. This correction process may result in undesirable contours on the boundaries between corrected and uncorrected areas. For digital TV applications, the extraction and correction process must be applied in every frame of the moving images. This paper presents a preferred color correction method in LCH color space. Values of chroma and hue are corrected independently, and undesirable contours on the boundaries of correction are minimized. The proposed method changes the coordinates of memory-color pixels towards the target color coordinates, with the amount of correction determined from the averaged coordinate of the extracted pixels. The proposed method maintains the relative color differences within memory-color areas. Performance is evaluated using paired comparison. Experimental results indicate that the proposed method can reproduce perceptually pleasing images for viewers.
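
    A minimal sketch of the correction idea, assuming the memory-color pixels have already been detected: every detected pixel receives the same chroma/hue offset, computed from the difference between the region's average coordinate and the preferred target, which keeps relative differences inside the region intact. The blending rule and the strength parameter are assumptions, not the paper's model.

```python
import numpy as np

# Illustrative only: pull detected memory-color pixels toward a preferred target in
# LCH space by an amount based on the mean coordinate of the detected pixels.
def correct_memory_color(L, C, H, mask, target_C, target_H, strength=0.5):
    mean_C = C[mask].mean()
    mean_H = H[mask].mean()
    dC = strength * (target_C - mean_C)   # one global shift derived from the average pixel
    dH = strength * (target_H - mean_H)
    C_out, H_out = C.copy(), H.copy()
    C_out[mask] += dC                     # same offset for every detected pixel,
    H_out[mask] += dH                     # preserving relative differences inside the region
    return L, C_out, H_out
```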

  17. The Generalized Centroid Difference method for lifetime measurements via γ-γ coincidences using large fast-timing arrays

    NASA Astrophysics Data System (ADS)

    Régis, J.-M.; Jolie, J.; Mach, H.; Simpson, G. S.; Blazhev, A.; Pascovici, G.; Pfeiffer, M.; Rudigier, M.; Saed-Samii, N.; Warr, N.; Blanc, A.; de France, G.; Jentschel, M.; Köster, U.; Mutti, P.; Soldner, T.; Ur, C. A.; Urban, W.; Bruce, A. M.; Drouet, F.; Fraile, L. M.; Ilieva, S.; Korten, W.; Kröll, T.; Lalkovski, S.; Mărginean, S.; Paziy, V.; Podolyák, Zs.; Regan, P. H.; Stezowski, O.; Vancraeyenest, A.

    2015-05-01

    A novel method for direct electronic "fast-timing" lifetime measurements of nuclear excited states via γ-γ coincidences using an array equipped with N very fast high-resolution LaBr3(Ce) scintillator detectors is presented. The generalized centroid difference method provides two independent "start" and "stop" time spectra obtained without any correction by a superposition of the N(N - 1)/2 calibrated γ-γ time difference spectra of the N detector fast-timing system. The two fast-timing array time spectra correspond to a forward and reverse gating of a specific γ-γ cascade and the centroid difference as the time shift between the centroids of the two time spectra provides a picosecond-sensitive mirror-symmetric observable of the set-up. The energy-dependent mean prompt response difference between the start and stop events is calibrated and used as a single correction for lifetime determination. These combined fast-timing array mean γ-γ zero-time responses can be determined for 40 keV < Eγ < 1.4 MeV with a precision better than 10 ps using a 152Eu γ-ray source. The new method is described with examples of (n,γ) and (n,f,γ) experiments performed at the intense cold-neutron beam facility PF1B of the Institut Laue-Langevin in Grenoble, France, using 16 LaBr3(Ce) detectors within the EXILL&FATIMA campaign in 2013. The results are discussed with respect to possible systematic errors induced by background contributions.
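
    A minimal numerical sketch of the centroid-difference idea, assuming the commonly quoted GCD relation ΔC = PRD + 2τ between the centroid difference of the two gated time spectra, the calibrated prompt response difference (PRD), and the mean lifetime τ; spectra, binning, and units are illustrative.

```python
import numpy as np

# Hedged sketch of the centroid-difference evaluation; not the full calibration chain.
def centroid(time_spectrum, time_axis):
    return np.sum(time_axis * time_spectrum) / np.sum(time_spectrum)

def lifetime_from_gcd(delayed, antidelayed, time_axis, prd):
    """delayed/antidelayed: the two gated time spectra; prd: calibrated prompt response difference."""
    delta_c = centroid(delayed, time_axis) - centroid(antidelayed, time_axis)
    return 0.5 * (delta_c - prd)   # mean lifetime tau, in the same units as time_axis
```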

  18. Challenges in microarray class discovery: a comprehensive examination of normalization, gene selection and clustering

    PubMed Central

    2010-01-01

    Background Cluster analysis, and in particular hierarchical clustering, is widely used to extract information from gene expression data. The aim is to discover new classes, or sub-classes, of either individuals or genes. Performing a cluster analysis commonly involves decisions on how to handle missing values, standardize the data, and select genes. In addition, pre-processing, involving various types of filtration and normalization procedures, can affect the ability to discover biologically relevant classes. Here we consider cluster analysis in a broad sense and perform a comprehensive evaluation that covers several aspects of cluster analyses, including normalization. Results We evaluated 2780 cluster analysis methods on seven publicly available 2-channel microarray data sets with common reference designs. Each cluster analysis method differed in data normalization (5 normalizations were considered), missing value imputation (2), standardization of data (2), gene selection (19), or clustering method (11). The cluster analyses are evaluated using known classes, such as cancer types, and the adjusted Rand index. The performances of the different analyses vary between the data sets and it is difficult to give general recommendations. However, normalization, gene selection, and clustering method are all variables that have a significant impact on performance. In particular, gene selection is important, and it is generally necessary to include a relatively large number of genes to get good performance. Selecting genes with high standard deviation or using principal component analysis are shown to be the preferred gene selection methods. Hierarchical clustering using Ward's method, k-means clustering, and Mclust are the clustering methods considered in this paper that achieve the highest adjusted Rand index. Normalization can have a significant positive impact on the ability to cluster individuals, and there are indications that background correction is preferable, in particular if the gene selection is successful. However, this is an area that needs to be studied further before any general conclusions can be drawn. Conclusions The choice of cluster analysis, and in particular gene selection, has a large impact on the ability to cluster individuals correctly based on expression profiles. Normalization has a positive effect, but the relative performance of different normalizations is an area that needs more research. In summary, although clustering, gene selection, and normalization are considered standard methods in bioinformatics, our comprehensive analysis shows that selecting the right methods, and the right combinations of methods, is far from trivial, and that much is still unexplored in what is considered to be the most basic analysis of genomic data. PMID:20937082
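
    As an illustration of the evaluation loop described (gene selection, clustering, scoring against known classes with the adjusted Rand index), a small self-contained sketch using scikit-learn is shown below; the stand-in data, the number of selected genes, and the clustering settings are assumptions.

```python
# Illustrative only: evaluating one clustering pipeline with the adjusted Rand index.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
expression = rng.normal(size=(60, 500))        # 60 samples x 500 genes (stand-in data)
known_classes = np.repeat([0, 1, 2], 20)       # e.g. known cancer types

# Gene selection by highest standard deviation, one of the preferred options reported.
top_genes = np.argsort(expression.std(axis=0))[-100:]
labels = AgglomerativeClustering(n_clusters=3, linkage="ward").fit_predict(expression[:, top_genes])
print("adjusted Rand index:", adjusted_rand_score(known_classes, labels))
```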

  19. 90Y-PET imaging: Exploring limitations and accuracy under conditions of low counts and high random fraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlier, Thomas, E-mail: thomas.carlier@chu-nantes.fr; Willowson, Kathy P.; Fourkal, Eugene

    Purpose: 90Y positron emission tomography (PET) imaging is becoming a recognized modality for postinfusion quantitative assessment following radioembolization therapy. However, the extremely low counts and high random fraction associated with 90Y-PET may significantly impair both qualitative and quantitative results. The aim of this work was to study image quality and noise level in relation to the quantification and bias performance of two types of Siemens PET scanners when imaging 90Y and to compare experimental results with clinical data from two types of commercially available 90Y microspheres. Methods: Data were acquired on both Siemens Biograph TruePoint [non-time-of-flight (TOF)] and Biograph mCT (TOF) PET/CT scanners. The study was conducted in three phases. The first aimed to assess quantification and bias for different reconstruction methods according to random fraction and number of true counts in the scan. The NEMA 1994 PET phantom was filled with water with one cylindrical insert left empty (air) and the other filled with a solution of 90Y. The phantom was scanned for 60 min in the PET/CT scanner every one or two days. The second phase used the NEMA 2001 PET phantom to derive noise and image quality metrics. The spheres and the background were filled with a 90Y solution in an 8:1 contrast ratio and four 30 min acquisitions were performed over a one week period. Finally, 32 patient data sets (8 treated with TheraSphere® and 24 with SIR-Spheres®) were retrospectively reconstructed and activity in the whole field of view and the liver was compared to the theoretical injected activity. Results: The contribution of both bremsstrahlung and LSO trues was found to be negligible, allowing data to be decay corrected to obtain correct quantification. In general, the recovered activity for all reconstruction methods was stable over the range studied, with a small bias appearing at extremely high random fraction and low counts for iterative algorithms. Point spread function (PSF) correction and TOF reconstruction in general reduce background variability and noise and increase recovered concentration. Results for patient data indicated a good correlation between the expected and PET-reconstructed activities. A linear relationship between the expected and the measured activities in the organ of interest was observed for all reconstruction methods used: a linearity coefficient of 0.89 ± 0.05 for the Biograph mCT and 0.81 ± 0.05 for the Biograph TruePoint. Conclusions: Due to the low counts and high random fraction, accurate image quantification of 90Y during selective internal radionuclide therapy is affected by random coincidence estimation, scatter correction, and any positivity constraint of the algorithm. Nevertheless, phantom and patient studies showed that the impact of the number of true and random coincidences on quantitative results was limited as long as ordinary Poisson ordered-subsets expectation maximization reconstruction algorithms with random smoothing are used. Adding PSF correction and TOF information to the reconstruction greatly improves the image quality in terms of bias, variability, noise reduction, and detectability. In the patient studies, the total activity in the field of view was in general accurately measured by the Biograph mCT and slightly overestimated by the Biograph TruePoint.
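
    The decay correction mentioned in the results is straightforward; a hedged sketch is shown below, where the roughly 64.1 h half-life of 90Y is a physical constant and everything else (reference time, activity values) is illustrative.

```python
import numpy as np

# Hedged sketch of decay correction back to a reference time (e.g. injection).
HALF_LIFE_HOURS = 64.1   # approximate physical half-life of 90Y

def decay_correct(measured_activity, hours_since_reference):
    lam = np.log(2.0) / HALF_LIFE_HOURS
    return measured_activity * np.exp(lam * hours_since_reference)

print(decay_correct(1.0e9, 24.0))   # activity measured 24 h later, corrected back
```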

  20. InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)

    NASA Astrophysics Data System (ADS)

    Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.

    2018-04-01

    Unwrapping errors are common in InSAR processing and can seriously degrade the accuracy of the monitoring results. Based on a gross-error correction method, quasi-accurate detection (QUAD), a method for the automatic correction of unwrapping errors is established in this paper. The method identifies and corrects unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented. The method is then compared with the L1-norm method using simulated data. Results show that both methods can effectively suppress unwrapping errors when the proportion of unwrapping errors is low, and that the two methods can complement each other when the proportion is relatively high. Finally, real SAR data are used to test the phase unwrapping error correction. Results show that the new method can correct phase unwrapping errors successfully in practical applications.

  1. Detection of cyst using image segmentation and building knowledge-based intelligent decision support system as an aid to telemedicine

    NASA Astrophysics Data System (ADS)

    Janet, J.; Natesan, T. R.; Santhosh, Ramamurthy; Ibramsha, Mohideen

    2005-02-01

    An intelligent decision support tool for the radiologist in telemedicine is described. Medical prescriptions are given based on images of cysts transmitted over computer networks to a remote medical center. The digital image, acquired by sonography, is converted into an intensity image. This image is then subjected to preprocessing, which involves correction methods to eliminate specific artifacts. The image is resized to a 256 x 256 matrix using bilinear interpolation. The background area is detected using a distinct block operation. The area of the cyst is calculated by removing the background area from the original image. Boundary enhancement and morphological operations are performed to remove unrelated pixels. This yields the cyst volume. The segmented image of the cyst is sent to the remote medical center for analysis by the Knowledge-based artificial Intelligent Decision Support System (KIDSS). The type of cyst is detected and reported to the control mechanism of KIDSS. The inference engine then compares this with the knowledge base and gives appropriate medical prescriptions or treatment recommendations by applying reasoning mechanisms at the remote medical center.
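
    A minimal sketch of the preprocessing chain named above (bilinear resize to 256 x 256, block-wise background detection, morphological cleanup), assuming scikit-image; the block size and threshold are placeholder values, not those of the original system.

```python
# Illustrative preprocessing sketch; thresholds and block size are assumptions.
import numpy as np
from skimage.transform import resize
from skimage.morphology import binary_opening, disk

def segment_cyst(intensity_image, block=16, bg_thresh=0.15):
    img = resize(intensity_image, (256, 256), order=1)      # order=1 -> bilinear interpolation
    mask = np.zeros_like(img, dtype=bool)
    for r in range(0, 256, block):                           # distinct (non-overlapping) blocks
        for c in range(0, 256, block):
            tile = img[r:r + block, c:c + block]
            if tile.mean() > bg_thresh:                      # block is brighter than background
                mask[r:r + block, c:c + block] = True
    return binary_opening(mask, disk(3))                     # remove unrelated pixels
```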

  2. Calculation of the compounded uncertainty of 14C AMS measurements

    NASA Astrophysics Data System (ADS)

    Nadeau, Marie-Josée; Grootes, Pieter M.

    2013-01-01

    The correct method to calculate conventional 14C ages from carbon isotopic ratios was summarised 35 years ago by Stuiver and Polach (1977) and is now accepted as the only method to calculate 14C ages. There is, however, no consensus regarding the treatment of AMS data, mainly concerning the uncertainty of the final result. The estimation and treatment of machine background, process blank, and/or in situ contamination is not uniform between laboratories, leading to differences in 14C results, mainly for older ages. As Donahue (1987) and Currie (1994), among others, mentioned, some laboratories find it important to use the scatter of several measurements as the uncertainty, while others prefer to use Poisson statistics. The contribution of the scatter of the standards, machine background, process blank, and in situ contamination to the uncertainty of the final 14C result is also treated in different ways. In the early years of AMS, several laboratories found it important to describe their calculation process in detail. In recent years, this practice has declined. We present an overview of the calculation process for 14C AMS measurements, looking at calculation practices published from the beginning of AMS until the present.
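
    To illustrate one of the choices discussed (scatter-based versus Poisson uncertainties, plus blank propagation), here is a hedged sketch of a compounded-uncertainty calculation; it is an illustration of the issues raised, not any particular laboratory's protocol.

```python
import numpy as np

# Hedged sketch: take the larger of the Poisson and scatter-based relative errors for
# the sample, then propagate a blank subtraction in quadrature.
def compounded_uncertainty(ratio, counts, repeat_ratios, blank, blank_err):
    poisson_rel = 1.0 / np.sqrt(counts)                       # relative counting error
    scatter_rel = np.std(repeat_ratios, ddof=1) / np.mean(repeat_ratios)
    sample_err = ratio * max(poisson_rel, scatter_rel)
    corrected = ratio - blank                                  # simple blank subtraction
    return corrected, np.hypot(sample_err, blank_err)          # value and combined uncertainty
```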

  3. Anomaly-corrected supersymmetry algebra and supersymmetric holographic renormalization

    NASA Astrophysics Data System (ADS)

    An, Ok Song

    2017-12-01

    We present a systematic approach to supersymmetric holographic renormalization for a generic 5D N=2 gauged supergravity theory with matter multiplets, including its fermionic sector, with all gauge fields consistently set to zero. We determine the complete set of supersymmetric local boundary counterterms, including the finite counterterms that parameterize the choice of supersymmetric renormalization scheme. This allows us to derive holographically the superconformal Ward identities of a 4D superconformal field theory on a generic background, including the Weyl and super-Weyl anomalies. Moreover, we show that these anomalies satisfy the Wess-Zumino consistency condition. The super-Weyl anomaly implies that the fermionic operators of the dual field theory, such as the supercurrent, do not transform as tensors under rigid supersymmetry on backgrounds that admit a conformal Killing spinor, and their anticommutator with the conserved supercharge contains anomalous terms. This property is explicitly checked for a toy model. Finally, using the anomalous transformation of the supercurrent, we obtain the anomaly-corrected supersymmetry algebra on curved backgrounds admitting a conformal Killing spinor.

  4. LWIR pupil imaging and prospects for background compensation

    NASA Astrophysics Data System (ADS)

    LeVan, Paul; Sakoglu, Ünal; Stegall, Mark; Pierce, Greg

    2015-08-01

    A previous paper described LWIR pupil imaging with a sensitive, low-flux focal plane array, and the behavior of this type of system in higher-flux operation as understood at the time. We continue this investigation and report on a more detailed characterization of the system over a broad range of pixel fluxes. This characterization is then shown to enable non-uniformity correction over the flux range, using a standard approach. Since many commercial tracking platforms include a "guider port" that accepts pulse width modulation (PWM) error signals, we have also investigated a variation on the use of this port to "dither" the tracking platform in synchronization with the continuous collection of infrared images. The resulting capability has a broad range of applications, extending from generating scene motion in the laboratory for quantifying the performance of real-time, scene-based non-uniformity correction approaches, to effecting subtraction of bright backgrounds by alternating the viewing aspect between a point source and adjacent, source-free backgrounds.
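
    The "standard approach" to non-uniformity correction is commonly a two-point gain/offset calibration; a minimal sketch under that assumption is shown below, with flux levels and array shapes as placeholders rather than measurements from this system.

```python
import numpy as np

# Minimal sketch of a two-point non-uniformity correction (NUC): per-pixel gain and
# offset derived from frames collected at two uniform flux levels.
def two_point_nuc(frames_low, frames_high, flux_low, flux_high):
    mean_low = frames_low.mean(axis=0)         # per-pixel response at the low flux
    mean_high = frames_high.mean(axis=0)
    gain = (flux_high - flux_low) / (mean_high - mean_low)
    offset = flux_low - gain * mean_low
    return gain, offset                        # corrected frame = gain * raw + offset
```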

  5. Influence of the partial volume correction method on 18F-fluorodeoxyglucose brain kinetic modelling from dynamic PET images reconstructed with resolution model based OSEM

    NASA Astrophysics Data System (ADS)

    Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian

    2013-10-01

    Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose (18F-FDG) PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For ordered subsets expectation maximization (OSEM), image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-FDG dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation geometric transfer matrix PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in cerebral metabolic rate of glucose estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters.
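
    For readers unfamiliar with the geometric transfer matrix (GTM) approach referenced here, a hedged sketch of the basic unmixing step follows: regional spill-over fractions are estimated by blurring region masks with the scanner PSF, and the observed regional means are then unmixed by matrix inversion. The Gaussian PSF, region masks, and array sizes are illustrative, and the frame-dependent RSF handling discussed in the abstract is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hedged sketch of GTM-style partial volume correction: element (i, j) of the matrix is
# the fraction of region j's signal that spills into region i after PSF blurring.
def gtm_correct(region_masks, observed_means, psf_sigma):
    n = len(region_masks)
    gtm = np.zeros((n, n))
    blurred = [gaussian_filter(m.astype(float), psf_sigma) for m in region_masks]
    for i, mask_i in enumerate(region_masks):
        for j in range(n):
            gtm[i, j] = blurred[j][mask_i].mean()   # spill of region j into region i
    return np.linalg.solve(gtm, observed_means)      # estimated true regional means
```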

  6. Influence of the partial volume correction method on (18)F-fluorodeoxyglucose brain kinetic modelling from dynamic PET images reconstructed with resolution model based OSEM.

    PubMed

    Bowen, Spencer L; Byars, Larry G; Michel, Christian J; Chonde, Daniel B; Catana, Ciprian

    2013-10-21

    Kinetic parameters estimated from dynamic (18)F-fluorodeoxyglucose ((18)F-FDG) PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For ordered subsets expectation maximization (OSEM), image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting (18)F-FDG dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation geometric transfer matrix PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in cerebral metabolic rate of glucose estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters.

  7. Gold in natural water: A method of determination by solvent extraction and electrothermal atomization

    USGS Publications Warehouse

    McHugh, J.B.

    1984-01-01

    A method has been developed using electrothermal atomization to effectively determine the amount of gold in natural water within the nanogram range. The method has four basic steps: (1) evaporating a 1-L sample; (2) putting it in hydrobromic acid-bromine solution; (3) extracting the sample with methyl-isobutyl-ketone; and (4) determining the amount of gold using an atomic absorption spectrophotometer. The limit of detection is 0.001 μg gold per liter. Results from three studies indicate, respectively, that the method is precise, effective, and free of interference. Specifically, a precision study indicates that the method has a relative standard deviation of 16-18%; a recovery study indicates that the method recovers gold at an average of 93%; and an interference study indicates that the interference effects are eliminated with solvent extraction and background correction techniques. Application of the method to water samples collected from 41 sites throughout the Western United States and Alaska shows a gold concentration range of < 0.001 to 0.036 μg gold per liter, with an average of 0.005 μg/L. © 1984.

  8. Validation of a T1 and T2* leakage correction method based on multi-echo DSC-MRI using MION as a reference standard

    PubMed Central

    Stokes, Ashley M.; Semmineh, Natenael; Quarles, C. Chad

    2015-01-01

    Purpose A combined biophysical- and pharmacokinetic-based method is proposed to separate, quantify, and correct for both T1 and T2* leakage effects using dual-echo DSC acquisitions to provide more accurate hemodynamic measures, as validated by a reference intravascular contrast agent (CA). Methods Dual-echo DSC-MRI data were acquired in two rodent glioma models. The T1 leakage effects were removed and also quantified in order to subsequently correct for the remaining T2* leakage effects. Pharmacokinetic, biophysical, and combined biophysical and pharmacokinetic models were used to obtain corrected cerebral blood volume (CBV) and cerebral blood flow (CBF), and these were compared with CBV and CBF from an intravascular CA. Results T1-corrected CBV was significantly overestimated compared to MION CBV, while T1+T2*-correction yielded CBV values closer to the reference values. The pharmacokinetic and simplified biophysical methods showed similar results and underestimated CBV in tumors exhibiting strong T2* leakage effects. The combined method was effective for correcting T1 and T2* leakage effects across tumor types. Conclusions Correcting for both T1 and T2* leakage effects yielded more accurate measures of CBV. The combined correction method yields more reliable CBV measures than either correction method alone, but for certain brain tumor types (e.g., gliomas) the simplified biophysical method may provide a robust and computationally efficient alternative. PMID:26362714
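
    A hedged sketch of the T1-insensitive part of a dual-echo DSC analysis follows: forming R2*(t) from the ratio of the two echoes cancels T1 leakage effects, and a relative CBV is proportional to the area under the resulting ΔR2* curve. Echo times, the baseline window, and the area computation are illustrative; the paper's remaining T2* leakage correction is not reproduced here.

```python
import numpy as np

# Hedged sketch, not the authors' combined model: only the dual-echo T1 removal step.
def delta_r2star(S_te1, S_te2, te1, te2, n_baseline=10):
    r2s = np.log(S_te1 / S_te2) / (te2 - te1)        # T1 effects cancel in the echo ratio
    return r2s - r2s[:n_baseline].mean()             # subtract the pre-bolus baseline

def relative_cbv(dr2s, dt):
    return dr2s.sum() * dt                            # area under the curve, proportional to CBV
```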

  9. Automatic identification of inertial sensor placement on human body segments during walking

    PubMed Central

    2013-01-01

    Background Current inertial motion capture systems are rarely used in biomedical applications. The attachment and connection of the sensors with cables is often a complex and time consuming task. Moreover, it is prone to errors, because each sensor has to be attached to a predefined body segment. By using wireless inertial sensors and automatic identification of their positions on the human body, the complexity of the set-up can be reduced and incorrect attachments are avoided. We present a novel method for the automatic identification of inertial sensors on human body segments during walking. This method allows the user to place (wireless) inertial sensors on arbitrary body segments. Next, the user walks for just a few seconds and the segment to which each sensor is attached is identified automatically. Methods Walking data was recorded from ten healthy subjects using an Xsens MVN Biomech system with full-body configuration (17 inertial sensors). Subjects were asked to walk for about 6 seconds at normal walking speed (about 5 km/h). After rotating the sensor data to a global coordinate frame with x-axis in walking direction, y-axis pointing left and z-axis vertical, RMS, mean, and correlation coefficient features were extracted from x-, y- and z-components and magnitudes of the accelerations, angular velocities and angular accelerations. As a classifier, a decision tree based on the C4.5 algorithm was developed using Weka (Waikato Environment for Knowledge Analysis). Results and conclusions After testing the algorithm with 10-fold cross-validation using 31 walking trials (involving 527 sensors), 514 sensors were correctly classified (97.5%). When a decision tree for a lower body plus trunk configuration (8 inertial sensors) was trained and tested using 10-fold cross-validation, 100% of the sensors were correctly identified. This decision tree was also tested on walking trials of 7 patients (17 walking trials) after anterior cruciate ligament reconstruction, which also resulted in 100% correct identification, thus illustrating the robustness of the method. PMID:23517757
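
    A small sketch of the classification step described, assuming scikit-learn (whose trees are CART-based rather than true C4.5): simple statistical features are extracted from each sensor's signals in the global frame and a decision tree predicts the body segment. Data shapes and the exact feature set are assumptions.

```python
# Illustrative feature extraction and classification; feature choices are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def features(acc, gyr):
    """acc, gyr: (n_samples, 3) arrays in a global frame (x forward, y left, z up)."""
    feats = []
    for sig in (acc, gyr):
        feats += list(np.sqrt((sig ** 2).mean(axis=0)))        # RMS per axis
        feats += list(sig.mean(axis=0))                        # mean per axis
        feats.append(np.sqrt((sig ** 2).sum(axis=1)).mean())   # mean magnitude
    feats.append(np.corrcoef(acc[:, 2], gyr[:, 1])[0, 1])      # one example correlation feature
    return feats

# X: one feature row per sensor placement; y: segment labels (e.g. "pelvis", "left shank").
# clf = DecisionTreeClassifier()
# print(cross_val_score(clf, X, y, cv=10).mean())              # 10-fold cross-validation
```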

  10. Slurry sampling high-resolution continuum source electrothermal atomic absorption spectrometry for direct beryllium determination in soil and sediment samples after elimination of SiO interference by least-squares background correction.

    PubMed

    Husáková, Lenka; Urbanová, Iva; Šafránková, Michaela; Šídová, Tereza

    2017-12-01

    In this work a simple, efficient, and environmentally-friendly method is proposed for determination of Be in soil and sediment samples employing slurry sampling and high-resolution continuum source electrothermal atomic absorption spectrometry (HR-CS-ETAAS). The spectral effects originating from SiO species were identified and successfully corrected by means of a mathematical correction algorithm. Fractional factorial design has been employed to assess the parameters affecting the analytical results and especially to help in the development of the slurry preparation and optimization of measuring conditions. The effects of seven analytical variables including particle size, concentration of glycerol and HNO3 for stabilization and analyte extraction, respectively, the effect of ultrasonic agitation for slurry homogenization, concentration of chemical modifier, pyrolysis and atomization temperature were investigated by a 2^(7-3) replicate (n = 3) design. Using the optimized experimental conditions, the proposed method allowed the determination of Be with a detection limit being 0.016 mg kg-1 and characteristic mass 1.3 pg. Optimum results were obtained after preparing the slurries by weighing 100 mg of a sample with particle size < 54 µm and adding 25 mL of 20% w/w glycerol. The use of 1 μg Rh and 50 μg citric acid was found satisfactory for the analyte stabilization. Accurate data were obtained with the use of matrix-free calibration. The accuracy of the method was confirmed by analysis of two certified reference materials (NIST SRM 2702 Inorganics in Marine Sediment and IGI BIL-1 Baikal Bottom Silt) and by comparison of the results obtained for ten real samples by slurry sampling with those determined after microwave-assisted extraction by inductively coupled plasma time of flight mass spectrometry (TOF-ICP-MS). The reported method has a precision better than 7%. Copyright © 2017 Elsevier B.V. All rights reserved.
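
    A hedged sketch of the least-squares background correction idea: the structured background around the analyte line is modelled as a scaled SiO reference spectrum plus a linear baseline, fitted over pixels away from the Be line and then subtracted. Spectra, the line mask, and the baseline model are placeholders, not the instrument's implementation.

```python
import numpy as np

# Illustrative least-squares background correction against a molecular reference spectrum.
def lsbc(measured, sio_reference, pixel_index, analyte_mask):
    design = np.column_stack([sio_reference, np.ones_like(pixel_index), pixel_index])
    fit_rows = ~analyte_mask                                   # exclude the analyte line itself
    coeffs, *_ = np.linalg.lstsq(design[fit_rows], measured[fit_rows], rcond=None)
    return measured - design @ coeffs                          # background-corrected spectrum
```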

  11. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    NASA Astrophysics Data System (ADS)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

    Regional climate models (RCMs) are used to downscale coarse-resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods and is tested on six watersheds in different climatic zones of India to assess the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data were used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed for evaluation of the different bias correction methods. The analysis suggested that the proposed method effectively corrects the daily bias in rainfall compared with using monthly factors. Methods such as local intensity scaling, modified power transformation, and distribution mapping, which adjust the wet-day frequencies, performed better than the other methods, which did not consider adjustment of wet-day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of the observed data, with NSE values above 0.81 over most parts of India. Hydrological simulations forced with the bias-corrected rainfall (distribution mapping and modified power transformation methods using the proposed daily correction factors) were similar to those simulated with the IMD rainfall. The results demonstrate that the methods and the time scales used for bias correction of RCM rainfall data have a large impact on the accuracy of the daily rainfall and consequently the simulated streamflow. The analysis suggests that distribution mapping with daily correction factors can be preferred for adjusting RCM rainfall data, irrespective of season or climate zone, for realistic simulation of streamflow.
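
    A minimal sketch of the sliding-window idea, assuming pandas time series: for each calendar day, a multiplicative correction factor is derived from observed and modelled rainfall pooled over a window of neighbouring days instead of a whole month. The +/- 15 day window and the ratio-of-means factor are assumptions, not the paper's exact formulation.

```python
import numpy as np
import pandas as pd

# Hedged sketch of sliding-window daily correction factors (ratio of means per window).
def daily_correction_factors(obs, mod, half_window=15):
    """obs, mod: pandas Series of daily rainfall indexed by date over a common calibration period."""
    factors = {}
    for d in range(1, 367):
        # circular day-of-year distance so the window wraps around the year end
        dist_obs = (obs.index.dayofyear - d + 183) % 366 - 183
        dist_mod = (mod.index.dayofyear - d + 183) % 366 - 183
        obs_win = obs[np.abs(dist_obs) <= half_window]
        mod_win = mod[np.abs(dist_mod) <= half_window]
        factors[d] = obs_win.mean() / max(mod_win.mean(), 1e-6)
    return pd.Series(factors)    # one multiplicative factor per calendar day
```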

  12. Skin Temperature Analysis and Bias Correction in a Coupled Land-Atmosphere Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Radakovich, Jon D.; daSilva, Arlindo; Todling, Ricardo; Verter, Frances

    2006-01-01

    In an initial investigation, remotely sensed surface temperature is assimilated into a coupled atmosphere/land global data assimilation system, with explicit accounting for biases in the model state. In this scheme, an incremental bias correction term is introduced in the model's surface energy budget. In its simplest form, the algorithm estimates and corrects a constant time-mean bias for each gridpoint; additional benefits are attained with a refined version of the algorithm which allows for a correction of the mean diurnal cycle. The method is validated against the assimilated observations, as well as independent near-surface air temperature observations. In many regions, not accounting for the diurnal cycle of bias caused degradation of the diurnal amplitude of the background model air temperature. Energy fluxes collected through the Coordinated Enhanced Observing Period (CEOP) are used to more closely inspect the surface energy budget. In general, sensible heat flux is improved with the surface temperature assimilation, and two stations show a reduction of bias by as much as 30 W m-2. At the Rondonia station in Amazonia, the Bowen ratio changes direction, an improvement related to the temperature assimilation. However, at many stations the monthly latent heat flux bias is slightly increased. These results show the impact of univariate assimilation of surface temperature observations on the surface energy budget, and suggest the need for multivariate land data assimilation. The results also show the need for independent validation data, especially flux stations in varied climate regimes.
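
    A hedged sketch of an incremental, per-gridpoint bias estimate in the spirit of the scheme described: the bias term is nudged toward the observation-minus-background increment at each analysis cycle and then removed from the surface energy budget. The gain value and update form are assumptions.

```python
# Illustrative incremental bias update for one gridpoint; gain and form are assumed.
def update_bias(bias, obs_skin_temp, background_skin_temp, gain=0.1):
    innovation = obs_skin_temp - background_skin_temp
    return bias + gain * innovation      # slowly varying time-mean bias estimate
```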

  13. A Binary Offset Effect in CCD Readout and Its Impact on Astronomical Data

    NASA Astrophysics Data System (ADS)

    Boone, K.; Aldering, G.; Copin, Y.; Dixon, S.; Domagalski, R. S.; Gangler, E.; Pecontal, E.; Perlmutter, S.

    2018-06-01

    We have discovered an anomalous behavior of CCD readout electronics that affects their use in many astronomical applications. An offset in the digitization of the CCD output voltage that depends on the binary encoding of one pixel is added to pixels that are read out one, two, and/or three pixels later. One result of this effect is the introduction of a differential offset in the background when comparing regions with and without flux from science targets. Conventional data reduction methods do not correct for this offset. We find this effect in 16 of 22 instruments investigated, covering a variety of telescopes and many different front-end electronics systems. The affected instruments include LRIS and DEIMOS on the Keck telescopes, WFC3 UVIS and STIS on HST, MegaCam on CFHT, SNIFS on the UH88 telescope, GMOS on the Gemini telescopes, HSC on Subaru, and FORS on VLT. The amplitude of the introduced offset is up to 4.5 ADU per pixel, and it is not directly proportional to the measured ADU level. We have developed a model that can be used to detect this “binary offset effect” in data, and correct for it. Understanding how data are affected and applying a correction for the effect is essential for precise astronomical measurements.

  14. Influence of Biochemical Features of Burkholderia pseudomallei Strains on Identification Reliability by Vitek 2 System

    PubMed Central

    Zakharova, Irina B; Lopasteyskaya, Yana A; Toporkov, Andrey V; Viktorov, Dmitry V

    2018-01-01

    Background: Burkholderia pseudomallei is a Gram-negative saprophytic soil bacterium that causes melioidosis, a potentially fatal disease endemic in wet tropical areas. The currently available biochemical identification systems can misidentify some strains of B. pseudomallei. The aim of the present study was to identify the biochemical features of B. pseudomallei, which can affect its correct identification by Vitek 2 system. Materials and Methods: The biochemical patterns of 40 B. pseudomallei strains were obtained using Vitek 2 GN cards. The average contribution of biochemical tests in overall dissimilarities between correctly and incorrectly identified strains was assessed using nonmetric multidimensional scaling. Results: It was found (R statistic of 0.836, P = 0.001) that a combination of negative N-acetyl galactosaminidase, β-N-acetyl glucosaminidase, phosphatase, and positive D-cellobiase (dCEL), tyrosine arylamidase (TyrA), and L-proline arylamidase (ProA) tests leads to low discrimination of B. pseudomallei, whereas a set of positive dCEL and negative N-acetyl galactosaminidase, TyrA, and ProA determines the wrong identification of B. pseudomallei as Burkholderia cepacia complex. Conclusion: The further expansion of the Vitek 2 identification keys is needed for correct identification of atypical or regionally distributed biochemical profiles of B. pseudomallei. PMID:29563716

  15. [Dislocation of the PIP-Joint - Treatment of a common (ball)sports injury].

    PubMed

    Müller-Seubert, Wibke; Bührer, Gregor; Horch, Raymund E

    2017-09-01

    Background  Fractures or fracture dislocations of the proximal interphalangeal joint often occur during sports or accidents. Dislocations of the PIP-joint are the most common ligamentary injuries of the hand. As this kind of injury is so frequent, hand surgeons and other physicians should be aware of the correct treatment. Objectives  This paper summarises the most common injury patterns and the correct treatment of PIP-joint dislocations. Materials and Methods  This paper reviews the current literature and describes the standardised treatment of PIP-joint dislocations. Results  What is most important is that reposition is anatomically correct, and this should be controlled by X-ray examination. Depending on the instability and possible combination with other injuries (e. g. injury to the palmar plate), early functional physiotherapy of the joint or a short immobilisation period is indicated. Conclusions  Early functional treatment of the injured PIP-joint, initially using buddy taping, is important to restore PIP-joint movement and function. Depending on the injury, joint immobilisation using a K-wire may be indicated. Detailed informed consent is necessary to explain to the patient the severity of the injury and possible complications, such as chronic functional disorders or development of arthrosis. © Georg Thieme Verlag KG Stuttgart · New York.

  16. Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection.

    PubMed

    Nguyen, Thanh; Bui, Vy; Lam, Van; Raub, Christopher B; Chang, Lin-Ching; Nehmetallah, George

    2017-06-26

    We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. Traditional DHM solves the phase aberration compensation problem by manually detecting the background for quantitative measurement. This is a drawback for real-time implementation and for dynamic processes such as cell migration phenomena. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion. However, it corrects spherical/elliptical aberration only and disregards higher-order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations. Ideally, automatic image segmentation techniques make real-time measurement possible. However, existing automatic unsupervised segmentation techniques perform poorly when applied to DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique based on a convolutional neural network (CNN) with Zernike polynomial fitting (ZPF). The deep learning CNN is implemented to perform automatic background region detection, which allows ZPF to compute the self-conjugated phase to compensate for most aberrations.
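
    As a simplified stand-in for the compensation step (a low-order 2-D polynomial surface instead of a full Zernike basis), the sketch below fits the background phase over the mask produced by the segmentation network and subtracts the fitted surface from the whole field; the polynomial order and variable names are assumptions.

```python
import numpy as np

# Simplified stand-in for Zernike polynomial fitting: a low-order polynomial surface
# fitted only over the detected background region, then removed everywhere.
def compensate_phase(phase, background_mask, order=2):
    y, x = np.mgrid[0:phase.shape[0], 0:phase.shape[1]]
    x = x / x.max() * 2 - 1                      # normalise coordinates to [-1, 1]
    y = y / y.max() * 2 - 1
    terms = [x ** i * y ** j for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([t[background_mask] for t in terms])
    coeffs, *_ = np.linalg.lstsq(A, phase[background_mask], rcond=None)
    aberration = sum(c * t for c, t in zip(coeffs, terms))
    return phase - aberration                    # aberration-compensated phase map
```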

  17. Evaluation and parameterization of ATCOR3 topographic correction method for forest cover mapping in mountain areas

    NASA Astrophysics Data System (ADS)

    Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.

    2012-08-01

    A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of model parameters that are empirically defined. This study develops a method to improve the performance of topographic corrections for forest cover change detection in mountainous terrain through an iterative tuning of model parameters based on a systematic evaluation of the performance of the correction. The evaluation was based on: (i) the general matching of reflectances between sunlit and shaded slopes and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in very poorly illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method resulted in a considerable reduction of dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that optimal parameter combinations depend on the site, sun elevation and azimuth, and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.
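
    A hedged sketch of the evaluation-driven tuning loop: candidate parameter sets are scored by (i) the mean reflectance difference between sunlit and shaded forest pixels and (ii) the fraction of outlier values in weakly illuminated areas, and the best-scoring set is retained. The scoring weights, thresholds, and the correction function itself (apply_correction) are placeholders for ATCOR3, not its actual interface.

```python
import numpy as np

# Illustrative grid search over correction parameters using the two criteria above.
def tune(image, illum, forest_mask, apply_correction, candidate_params):
    best, best_score = None, np.inf
    for params in candidate_params:
        corrected = apply_correction(image, illum, params)       # placeholder correction call
        sunlit = corrected[forest_mask & (illum > 0.7)].mean()
        shaded = corrected[forest_mask & (illum < 0.3)].mean()
        low = corrected[illum < 0.1]
        outliers = np.mean(np.abs(low - low.mean()) > 3 * low.std()) if low.size else 0.0
        score = abs(sunlit - shaded) + outliers                   # lower is better
        if score < best_score:
            best, best_score = params, score
    return best
```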

  18. [Evaluation of four dark object atmospheric correction methods based on ZY-3 CCD data].

    PubMed

    Guo, Hong; Gu, Xing-fa; Xie, Yong; Yu, Tao; Gao, Hai-liang; Wei, Xiang-qin; Liu, Qi-yue

    2014-08-01

    This paper evaluated four dark-object subtraction (DOS) atmospheric correction methods based on 2012 Inner Mongolia experimental data. The authors analyzed the impacts of the key parameters of the four DOS methods when applied to ZY-3 CCD data. The results showed that: (1) All four DOS methods have a significant atmospheric correction effect at bands 1, 2, and 3. At band 4, the atmospheric correction effect of DOS4 is the best while that of DOS2 is the worst; DOS1 and DOS3 have no obvious atmospheric correction effect. (2) The relative error (RE) of the DOS1 atmospheric correction method is larger than 10% at all four bands. The atmospheric correction effect of DOS2 is best at band 1 (absolute error (AE) = 0.0019, RE = 4.32%) and worst at band 4 (AE = 0.0464, RE = 19.12%). The RE of DOS3 is about 10% for all bands. (3) The AE of the atmospheric correction results for the DOS4 method is less than 0.02 and the RE is less than 10% for all bands. Therefore, the DOS4 method provides the best accuracy of atmospheric correction for ZY-3 images.
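
    For orientation, the simplest variant of this family (often called DOS1) subtracts a per-band dark-object value, taken here as a low percentile of the band histogram, as a haze/path-radiance estimate; the percentile choice and array layout in the sketch are assumptions.

```python
import numpy as np

# Minimal dark-object subtraction sketch; not the exact DOS1-DOS4 formulations evaluated here.
def dos1(bands, dark_percentile=0.01):
    """bands: array of shape (n_bands, rows, cols) of at-sensor DNs or radiances."""
    corrected = np.empty_like(bands, dtype=float)
    for b in range(bands.shape[0]):
        dark = np.percentile(bands[b], dark_percentile)   # dark-object (haze) estimate
        corrected[b] = np.clip(bands[b] - dark, 0, None)
    return corrected
```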

  19. Enhancing the prediction of protein pairings between interacting families using orthology information

    PubMed Central

    Izarzugaza, Jose MG; Juan, David; Pons, Carles; Pazos, Florencio; Valencia, Alfonso

    2008-01-01

    Background It has repeatedly been shown that interacting protein families tend to have similar phylogenetic trees. These similarities can be used to predict the mapping between two families of interacting proteins (i.e. which proteins from one family interact with which members of the other). The correct mapping will be the one that maximizes the similarity between the trees. The two families may comprise orthologs and paralogs if members of the two families are present in more than one organism. This fact can be exploited to restrict the possible mappings, simply by preventing links between proteins of different organisms. We present here an algorithm to predict the mapping between families of interacting proteins which is able to incorporate information regarding orthologues, or any other assignment of proteins to "classes" that may restrict possible mappings. Results For the first time among methods for predicting mappings, we have tested this new approach on a large number of interacting protein domains in order to statistically assess its performance. The method accurately predicts around 80% of the pairings in the most favourable cases. We also analysed in detail the results of the method for a well defined case of interacting families, the sensor and kinase components of the Ntr-type two-component system, for which up to 98% of the pairings predicted by the method were correct. Conclusion Based on the well established relationship between tree similarity and interactions, we developed a method for predicting the mapping between two interacting families using genomic information alone. The program is available through a web interface. PMID:18215279

  20. Method for the determination of cobalt from biological products with graphite furnace atomic absorption spectrometer

    NASA Astrophysics Data System (ADS)

    Zamfir, Oana-Liliana; Ionicǎ, Mihai; Caragea, Genica; Radu, Simona; Vlǎdescu, Marian

    2016-12-01

    Cobalt is a chemical element with symbol Co, atomic number 27, and atomic weight 58.93. 59Co is the only stable cobalt isotope and the only isotope to exist naturally on Earth. Cobalt is the active center of coenzymes called cobalamins, the most common example of which is vitamin B12 (cyanocobalamin). Vitamin B12 deficiency can potentially cause severe and irreversible damage, especially to the brain and nervous system, in the form of fatigue, depression and poor memory, or even mania and psychosis. To study the degree of cobalt deficiency in the population or the correctness of treatment with vitamin B12, a modern optoelectronic method for the determination of metals and metalloids in biological samples has been developed; the graphite furnace atomic absorption spectrometry (GF-AAS) method is recommended. The technique is based on the fact that free atoms will absorb light at wavelengths characteristic of the element of interest. Free atoms of the chemical element can be produced from samples by the application of high temperatures. The Varian GF-AAS system used blood or urine as biological samples, following digestion of the organic matrix. A high-performance GF-AAS with a D2 background correction system and a transversely heated graphite atomizer was used for the investigations. As a result of applying the method, the concentrations of Co in the blood or urine of a group of patients in Bucharest are presented. The method is sensitive, reproducible, relatively easy to apply, and of moderate cost.

  1. Long-term results of forearm lengthening and deformity correction by the Ilizarov method.

    PubMed

    Orzechowski, Wiktor; Morasiewicz, Leszek; Krawczyk, Artur; Dragan, Szymon; Czapiński, Jacek

    2002-06-30

    Background. Shortening and deformity of the forearm are most frequently caused by congenital disorders or posttraumatic injury. Given its complex anatomy and biomechanics, the forearm is clearly the most difficult segment for lengthening and deformity correction.
    Material and methods. We analyzed 16 patients with shortening and deformity of the forearm treated surgically using the Ilizarov method in our Department from 1989 to 2001. In 9 cases one-stage surgery was sufficient, while the remaining 7 patients underwent 2-5 stages of treatment. A total of 31 surgical operations was performed. The extent of forearm shortening ranged from 1.5 to 14.5 cm (5-70%). We developed a new fixator based on Schanz half-pins.
    Results. Forearm lengthening per operative stage averaged 2.35 cm. The proportion of lengthening ranged from 6% to 48%, with an average of 18.3%. The mean lengthening index was 48.15 days/cm. The per-patient complication rate was 88%, compared with 45% per stage of treatment; complications were mostly limited rotational mobility and abnormal consolidation of the regenerated bone.
    Conclusions. Despite the high complication rate, the Ilizarov method is the method of choice for patients with forearm shortenings and deformities. Treatment is particularly indicated in patients with shortening caused by disproportionate length of the ulnar and forearm bones. Treatment should be managed so as to cause the least possible damage to arm function, even at the cost of limited lengthening. Our new stabilizer based on Schanz half-pins makes it possible to preserve forearm rotation.

  2. Investigation of Super Learner Methodology on HIV-1 Small Sample: Application on Jaguar Trial Data.

    PubMed

    Houssaïni, Allal; Assoumou, Lambert; Marcelin, Anne Geneviève; Molina, Jean Michel; Calvez, Vincent; Flandre, Philippe

    2012-01-01

    Background. Many statistical models have been tested to predict phenotypic or virological response from genotypic data. A statistical framework called Super Learner has been introduced either to compare different methods/learners (discrete Super Learner) or to combine them in a Super Learner prediction method. Methods. The Jaguar trial is used to apply the Super Learner framework. The Jaguar study is an "add-on" trial comparing the efficacy of adding didanosine to an ongoing failing regimen. Our aim was also to investigate the impact of using different cross-validation strategies and different loss functions. Four different repartitions between training and validation sets were tested through two loss functions. Six statistical methods were compared. We assessed performance by evaluating R(2) values and accuracy by calculating the rates of patients correctly classified. Results. Our results indicated that the more recent Super Learner methodology of building a new predictor based on a weighted combination of different methods/learners provided good performance. A simple linear model provided results similar to those of this new predictor. A slight discrepancy arises between the two loss functions investigated, and a slight difference also arises between results based on cross-validated risks and results from the full dataset. The Super Learner methodology and the linear model correctly classified around 80% of patients. The difference between the lowest and highest rates is around 10 percent. The number of mutations retained in different learners also varies from 1 to 41. Conclusions. The more recent Super Learner methodology, combining the predictions of many learners, provided good performance on our small dataset.
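
    A rough illustration of the Super Learner idea using scikit-learn's stacking interface: several base learners are combined through cross-validated predictions and a meta-learner that weights them. The particular learners, the non-negativity-constrained meta-model, and the data are placeholders, not the learners compared in this study.

```python
# Illustrative stacking-based combination of learners; names and settings are assumptions.
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression, LassoCV
from sklearn.neighbors import KNeighborsRegressor

super_learner = StackingRegressor(
    estimators=[
        ("ols", LinearRegression()),
        ("lasso", LassoCV()),
        ("rf", RandomForestRegressor(n_estimators=200)),
        ("knn", KNeighborsRegressor()),
    ],
    final_estimator=LinearRegression(positive=True),  # weights the learners' CV predictions
    cv=10,
)
# super_learner.fit(X_genotypes, y_response)          # X_genotypes, y_response: user-supplied data
```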

  3. Fast conjugate phase image reconstruction based on a Chebyshev approximation to correct for B0 field inhomogeneity and concomitant gradients.

    PubMed

    Chen, Weitian; Sica, Christopher T; Meyer, Craig H

    2008-11-01

    Off-resonance effects can cause image blurring in spiral scanning and various forms of image degradation in other MRI methods. Off-resonance effects can be caused by both B0 inhomogeneity and concomitant gradient fields. Previously developed off-resonance correction methods focus on the correction of a single source of off-resonance. This work introduces a computationally efficient method of correcting for B0 inhomogeneity and concomitant gradients simultaneously. The method is a fast alternative to conjugate phase reconstruction, with the off-resonance phase term approximated by Chebyshev polynomials. The proposed algorithm is well suited for semiautomatic off-resonance correction, which works well even with an inaccurate or low-resolution field map. The proposed algorithm is demonstrated using phantom and in vivo data sets acquired by spiral scanning. Semiautomatic off-resonance correction alone is shown to provide a moderate amount of correction for concomitant gradient field effects, in addition to B0 inhomogeneity effects. However, better correction is provided by the proposed combined method. The best results were produced using the semiautomatic version of the proposed combined method.
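
    A hedged numerical sketch of the approximation at the heart of the method: the off-resonance phase term exp(i·2πft) is expanded in Chebyshev polynomials of the frequency f, so that the correction separates into a small number of frequency-independent terms weighted by polynomials of the field map. The field-map range, readout time point, and expansion order are illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Illustrative check of the Chebyshev approximation quality for one readout time point.
f_max = 150.0                                    # assumed field-map range (Hz)
t = 0.005                                        # one readout time point (s)
f = np.linspace(-f_max, f_max, 513)
target = np.exp(1j * 2 * np.pi * f * t)          # off-resonance phase term to approximate

order = 12
c_re = C.chebfit(f / f_max, target.real, order)  # fit real and imaginary parts separately
c_im = C.chebfit(f / f_max, target.imag, order)
approx = C.chebval(f / f_max, c_re) + 1j * C.chebval(f / f_max, c_im)
print("max approximation error:", np.max(np.abs(approx - target)))
```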

  4. Computed Intranasal Spray Penetration: Comparisons Before and After Nasal Surgery

    PubMed Central

    Frank, Dennis O.; Kimbell, Julia S.; Cannon, Daniel; Rhee, John S.

    2012-01-01

    Background Quantitative methods for comparing intranasal drug delivery efficiencies pre- and postoperatively have not been fully utilized. The objective of this study is to use computational fluid dynamics techniques to evaluate aqueous nasal spray penetration efficiencies before and after surgical correction of intranasal anatomic deformities. Methods Ten three-dimensional models of the nasal cavities were created from pre- and postoperative computed tomography scans in five subjects. Spray simulations were conducted using a particle size distribution ranging from 10 to 110 μm, a spray speed of 3 m/s, a plume angle of 68°, and steady-state, resting inspiratory airflow. Two different nozzle positions were compared. Statistical analysis was conducted using Student's t-test for matched pairs. Results On the obstructed side, posterior particle deposition after surgery increased by 118% (p = 0.036), while anterior particle deposition decreased by 13% (p = 0.020); both changes were statistically significant. The fraction of particles that bypassed the airways either pre- or postoperatively was less than 5%. Posterior particle deposition differences between obstructed and contralateral sides of the airways were 113% and 30% for pre- and post-surgery, respectively. Results showed that nozzle positions can influence spray delivery. Conclusions Simulations predicted that surgical correction of nasal anatomic deformities can improve spray penetration to areas where medications can have greater effect. Particle deposition patterns between both sides of the airways are more evenly distributed after surgery. These findings suggest that correcting anatomic deformities may improve intranasal medication delivery. For enhanced particle penetration, patients with nasal deformities may explore different nozzle positions. PMID:22927179

  5. Combining MRI With PET for Partial Volume Correction Improves Image-Derived Input Functions in Mice

    NASA Astrophysics Data System (ADS)

    Evans, Eleanor; Buonincontri, Guido; Izquierdo, David; Methner, Carmen; Hawkes, Rob C.; Ansorge, Richard E.; Krieg, Thomas; Carpenter, T. Adrian; Sawiak, Stephen J.

    2015-06-01

    Accurate kinetic modelling using dynamic PET requires knowledge of the tracer concentration in plasma, known as the arterial input function (AIF). AIFs are usually determined by invasive blood sampling, but this is prohibitive in murine studies due to low total blood volumes. As a result of the low spatial resolution of PET, image-derived input functions (IDIFs) must be extracted from left ventricular blood pool (LVBP) ROIs of the mouse heart. This is challenging because of partial volume and spillover effects between the LVBP and myocardium, contaminating IDIFs with tissue signal. We have applied the geometric transfer matrix (GTM) method of partial volume correction (PVC) to 12 mice injected with 18F-FDG and affected by myocardial infarction (MI), of which 6 were treated with a drug which reduced infarct size [1]. We utilised high-resolution MRI to assist in segmenting mouse hearts into 5 classes: LVBP, infarcted myocardium, healthy myocardium, lungs/body and background. The signal contribution from these 5 classes was convolved with the point spread function (PSF) of the Cambridge split magnet PET scanner and a non-linear fit was performed on the 5 measured signal components. The corrected IDIF was taken as the fitted LVBP component. It was found that the GTM PVC method could recover an IDIF with less contamination from spillover than an IDIF extracted from PET data alone. More realistic values of Ki were achieved using GTM IDIFs, which were shown to be significantly different (p < 0.05) between the treated and untreated groups.
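
    The geometric transfer matrix step can be pictured with a small 1D toy problem: blur each region mask with the scanner PSF, tabulate the cross-contamination fractions, and solve for the underlying regional activities. This is a hedged sketch with made-up masks and a Gaussian PSF standing in for the measured split-magnet PSF; it is not the authors' pipeline.

```python
# 1D toy illustration of GTM partial-volume correction.
import numpy as np
from scipy.ndimage import gaussian_filter1d

n = 200
masks = np.zeros((3, n))            # three "regions": blood pool, myocardium, background
masks[0, 80:100] = 1
masks[1, 60:80] = masks[1, 100:120] = 1
masks[2] = 1 - masks[0] - masks[1]

true_activity = np.array([10.0, 3.0, 0.5])       # assumed ground truth
psf_sigma = 4.0                                   # stand-in for the scanner PSF width

image = gaussian_filter1d(masks.T @ true_activity, psf_sigma)

# GTM element (i, j): mean of blurred mask j within region i.
blurred = np.array([gaussian_filter1d(m, psf_sigma) for m in masks])
gtm = np.array([[blurred[j][masks[i] > 0].mean() for j in range(3)] for i in range(3)])

observed = np.array([image[masks[i] > 0].mean() for i in range(3)])
corrected = np.linalg.solve(gtm, observed)
print("observed region means:", observed.round(2))
print("GTM-corrected:        ", corrected.round(2))   # close to true_activity
```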

  6. First Year Wilkinson Microwave Anisotropy Probe(WMAP) Observations: Data Processing Methods and Systematic Errors Limits

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

    2003-01-01

    We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.

  7. Misidentification of sex for Lampsilis teres, Yellow Sandshell, and its implications for mussel conservation and wildlife management.

    PubMed

    Hess, Megan C; Inoue, Kentaro; Tsakiris, Eric T; Hart, Michael; Morton, Jennifer; Dudding, Jack; Robertson, Clinton R; Randklev, Charles R

    2018-01-01

    Correct identification of sex is an important component of wildlife management because changes in sex ratios can affect population viability. Identification of sex often relies on external morphology, which can be biased by intermediate or nondistinctive morphotypes and observer experience. For unionid mussels, research has demonstrated that species misidentification is common, but less attention has been given to the reliability of sex identification. To evaluate whether this is an issue, we surveyed 117 researchers on their ability to correctly identify the sex of Lampsilis teres (Yellow Sandshell), a wide-ranging, sexually dimorphic species. Personal background information of each observer was analyzed to identify factors that may contribute to misidentification of sex. We found that median misidentification rates were ~20% across males and females and that observers misidentified female specimens more often (~23%) than males (~10%). Misidentification rates were partially explained by the geographic region of prior mussel experience and where observers learned how to identify mussels, but substantial variation remained among observers after controlling for these factors. We also used three morphometric methods (traditional, geometric, and Fourier) to investigate whether sex could be identified more reliably statistically and found that misidentification rates for the geometric and Fourier methods (which characterize shape) were less than 5% on average (7% and 2% for females and males, respectively). Our results show that misidentification of sex is likely common for mussels if based solely on external morphology, which raises general questions, regardless of taxonomic group, about its reliability for conservation efforts.

  8. Quantitative, Comparable Coherent Anti-Stokes Raman Scattering (CARS) Spectroscopy: Correcting Errors in Phase Retrieval

    PubMed Central

    Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.

    2017-01-01

    Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335

  9. Inverse Compton Scattering in Mildly Relativistic Plasma

    NASA Technical Reports Server (NTRS)

    Molnar, S. M.; Birkinshaw, M.

    1998-01-01

    We investigated the effect of inverse Compton scattering in mildly relativistic static and moving plasmas with low optical depth using Monte Carlo simulations, and calculated the Sunyaev-Zel'dovich effect in the cosmic background radiation. Our semi-analytic method is based on a separation of photon diffusion in frequency and real space. We use Monte Carlo simulation to derive the intensity and frequency of the scattered photons for a monochromatic incoming radiation. The outgoing spectrum is determined by integrating over the spectrum of the incoming radiation using the intensity to determine the correct weight. This method makes it possible to study the emerging radiation as a function of frequency and direction. As a first application we have studied the effects of finite optical depth and gas infall on the Sunyaev-Zel'dovich effect (not possible with the extended Kompaneets equation) and discuss the parameter range in which the Boltzmann equation and its expansions can be used. For high temperature clusters (k_B T_e ≳ 15 keV) relativistic corrections based on a fifth order expansion of the extended Kompaneets equation seriously underestimate the Sunyaev-Zel'dovich effect at high frequencies. The contribution from plasma infall is less important for reasonable velocities. We give a convenient analytical expression for the dependence of the cross-over frequency on temperature, optical depth, and gas infall speed. Optical depth effects are often more important than relativistic corrections, and should be taken into account for high-precision work, but are smaller than the typical kinematic effect from cluster radial velocities.

  10. A software package to improve image quality and isolation of objects of interest for quantitative stereology studies of rat hepatocarcinogenesis.

    PubMed

    Xu, Yihua; Pitot, Henry C

    2006-03-01

    In studies of quantitative stereology of rat hepatocarcinogenesis, we have used image analysis technology (automatic particle analysis) to obtain data such as liver tissue area, size and location of altered hepatic focal lesions (AHF), and nuclei counts. These data are then used for three-dimensional estimation of AHF occurrence and nuclear labeling index analysis. These are important parameters for quantitative studies of carcinogenesis, for screening and classifying carcinogens, and for risk estimation. To take such measurements, structures or cells of interest should be separated from the other components based on differences in color and density. Common background problems seen in the captured sample image, such as uneven light illumination or color shading, can cause severe problems in the measurement. Two application programs (BK_Correction and Pixel_Separator) have been developed to solve these problems. With BK_Correction, common background problems such as incorrect color temperature setting, color shading, and uneven light illumination can be corrected. With Pixel_Separator, different types of objects can be separated from each other according to their color, as seen with different colors in immunohistochemically stained slides. The resultant images of such objects separated from other components are then ready for particle analysis. Objects that have the same darkness but different colors can be accurately differentiated in a grayscale image analysis system after application of these programs.
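
    The two operations described — flattening uneven illumination or color shading, and isolating objects by color before particle analysis — correspond to standard image-processing steps. The sketch below is a generic NumPy/SciPy illustration of both ideas, not the BK_Correction or Pixel_Separator code; the blur scale and target color are illustrative assumptions.

```python
# Generic sketch: shading (background) correction followed by colour-based separation.
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_shading(channel, sigma=50):
    """Divide by a heavily smoothed copy of the channel to flatten slow
    illumination/colour gradients, then rescale to the original mean."""
    background = gaussian_filter(channel.astype(float), sigma)
    flat = channel / np.maximum(background, 1e-6)
    return flat * channel.mean()

def separate_by_colour(rgb, target=(0.6, 0.3, 0.4), tol=0.15):
    """Keep pixels whose normalised colour is close to a target colour
    (e.g. one stain); all other pixels are set to background."""
    norm = rgb / np.maximum(rgb.sum(axis=2, keepdims=True), 1e-6)
    mask = np.linalg.norm(norm - np.asarray(target), axis=2) < tol
    out = np.zeros_like(rgb)
    out[mask] = rgb[mask]
    return out, mask

# Usage with a synthetic image (float RGB in [0, 1]):
rgb = np.random.rand(256, 256, 3)
corrected = np.dstack([correct_shading(rgb[..., c]) for c in range(3)])
objects, mask = separate_by_colour(corrected)
```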

  11. Protocadherin α (PCDHA) as a novel susceptibility gene for autism

    PubMed Central

    Anitha, Ayyappan; Thanseem, Ismail; Nakamura, Kazuhiko; Yamada, Kazuo; Iwayama, Yoshimi; Toyota, Tomoko; Iwata, Yasuhide; Suzuki, Katsuaki; Sugiyama, Toshiro; Tsujii, Masatsugu; Yoshikawa, Takeo; Mori, Norio

    2013-01-01

    Background Synaptic dysfunction has been shown to be involved in the pathogenesis of autism. We hypothesized that the protocadherin α gene cluster (PCDHA), which is involved in synaptic specificity and in serotonergic innervation of the brain, could be a suitable candidate gene for autism. Methods We examined 14 PCDHA single nucleotide polymorphisms (SNPs) for genetic association with autism in DNA samples of 3211 individuals (841 families, including 574 multiplex families) obtained from the Autism Genetic Resource Exchange. Results Five SNPs (rs251379, rs1119032, rs17119271, rs155806 and rs17119346) showed significant associations with autism. The strongest association (p < 0.001) was observed for rs1119032 (z score of risk allele G = 3.415) in multiplex families; SNP associations withstand multiple testing correction in multiplex families (p = 0.041). Haplotypes involving rs1119032 showed very strong associations with autism, withstanding multiple testing corrections. In quantitative transmission disequilibrium testing of multiplex families, the G allele of rs1119032 showed a significant association (p = 0.033) with scores on the Autism Diagnostic Interview–Revised (ADI-R)_D (early developmental abnormalities). We also found a significant difference in the distribution of ADI-R_A (social interaction) scores between the A/A, A/G and G/G genotypes of rs17119346 (p = 0.002). Limitations Our results should be replicated in an independent population and/or in samples of different racial backgrounds. Conclusion Our study provides strong genetic evidence of PCDHA as a potential candidate gene for autism. PMID:23031252

  12. Report to the National Park Service for Permit LAKE-2014-SCI-002

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnley, Pamela C.

    The overall purpose of the study is to determine how to use existing geologic data to predict gamma-ray background levels as measured during aerial radiological surveys. Aerial radiological surveys have typically been flown for resource exploration purposes but are now also used for homeland security purposes and nuclear disaster assessment, as well as for determining the depth of snowpack. Foreknowledge of the background measured during an aerial radiological survey will be valuable for all of the above applications. The gamma-ray background comes from the rocks and soil within the first 30 cm of the earth's surface in the area where the survey is being made. The background should therefore be predictable based on an understanding of the distribution and geochemistry of the rocks on the surface. We are using a combination of geologic maps, remote sensing imagery, and geochemical data from existing databases and the scientific literature to develop a method for predicting gamma-ray backgrounds. As part of this project we have an opportunity to ground truth our technique along a survey calibration line near Lake Mojave that is used by the Remote Sensing Lab (RSL) of National Security Technologies, LLC (NSTec). RSL makes aerial measurements along this line on a regular basis, so the aerial background in the area is well known. By making ground-based measurements of the gamma-ray background and detailed observations of the geology of the ground surface as well as local topography, we will have the data we need to make corrections to the models we build based on the remote sensing and geologic data. Our project involves collaborators from the Airborne Geophysics Section of the Geological Survey of Canada as well as from NSTec's RSL.

  13. Improved volumetric measurement of brain structure with a distortion correction procedure using an ADNI phantom.

    PubMed

    Maikusa, Norihide; Yamashita, Fumio; Tanaka, Kenichiro; Abe, Osamu; Kawaguchi, Atsushi; Kabasawa, Hiroyuki; Chiba, Shoma; Kasahara, Akihiro; Kobayashi, Nobuhisa; Yuasa, Tetsuya; Sato, Noriko; Matsuda, Hiroshi; Iwatsubo, Takeshi

    2013-06-01

    Serial magnetic resonance imaging (MRI) images acquired from multisite and multivendor MRI scanners are widely used in measuring longitudinal structural changes in the brain. Precise and accurate measurements are important in understanding the natural progression of neurodegenerative disorders such as Alzheimer's disease. However, geometric distortions in MRI images decrease the accuracy and precision of volumetric or morphometric measurements. To solve this problem, the authors suggest a commercially available phantom-based distortion correction method that accommodates the variation in geometric distortion within MRI images obtained with multivendor MRI scanners. The authors' method is based on image warping using a polynomial function. The method detects fiducial points within a phantom image using phantom analysis software developed by the Mayo Clinic and calculates warping functions for distortion correction. To quantify the effectiveness of the authors' method, the authors corrected phantom images obtained from multivendor MRI scanners and calculated the root-mean-square (RMS) of fiducial errors and the circularity ratio as evaluation values. The authors also compared the performance of the authors' method with that of a distortion correction method based on a spherical harmonics description of the generic gradient design parameters. Moreover, the authors evaluated whether this correction improves the test-retest reproducibility of voxel-based morphometry in human studies. A Wilcoxon signed-rank test with uncorrected and corrected images was performed. The root-mean-square errors and circularity ratios for all slices significantly improved (p < 0.0001) after the authors' distortion correction. Additionally, the authors' method was significantly better than a distortion correction method based on a description of spherical harmonics in improving the distortion of root-mean-square errors (p < 0.001 and 0.0337, respectively). Moreover, the authors' method reduced the RMS error arising from gradient nonlinearity more than gradwarp methods. In human studies, the coefficient of variation of voxel-based morphometry analysis of the whole brain improved significantly from 3.46% to 2.70% after distortion correction of the whole gray matter using the authors' method (Wilcoxon signed-rank test, p < 0.05). The authors proposed a phantom-based distortion correction method to improve reproducibility in longitudinal structural brain analysis using multivendor MRI. The authors evaluated the authors' method for phantom images in terms of two geometrical values and for human images in terms of test-retest reproducibility. The results showed that distortion was corrected significantly using the authors' method. In human studies, the reproducibility of voxel-based morphometry analysis for the whole gray matter significantly improved after distortion correction using the authors' method.
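
    A phantom-based polynomial warping correction of this kind reduces, at its core, to fitting a low-order 2D polynomial that maps detected fiducial positions onto their known positions and checking the residual error. The sketch below shows that fitting step on synthetic fiducials; the distortion model and point layout are assumptions, and resampling of the image with the fitted polynomial is omitted.

```python
# Sketch: fit a low-order 2D polynomial mapping distorted fiducial
# coordinates to their known positions (least squares), report RMS error.
import numpy as np

grid = np.linspace(-1, 1, 7)
xt, yt = np.meshgrid(grid, grid)                 # "true" fiducial positions
xt, yt = xt.ravel(), yt.ravel()

# Synthetic barrel-like distortion plus noise stands in for gradient nonlinearity.
r2 = xt**2 + yt**2
xd = xt * (1 + 0.05 * r2) + np.random.normal(0, 0.002, xt.size)
yd = yt * (1 + 0.05 * r2) + np.random.normal(0, 0.002, yt.size)

def design(x, y):
    # low-order polynomial terms in (x, y)
    return np.column_stack([np.ones_like(x), x, y, x*y, x**2, y**2,
                            x**2*y, x*y**2, x**3, y**3])

A = design(xd, yd)
cx, *_ = np.linalg.lstsq(A, xt, rcond=None)      # distorted -> true, x component
cy, *_ = np.linalg.lstsq(A, yt, rcond=None)      # distorted -> true, y component

rms_before = np.sqrt(np.mean((xd - xt)**2 + (yd - yt)**2))
rms_after = np.sqrt(np.mean((A @ cx - xt)**2 + (A @ cy - yt)**2))
print(f"fiducial RMS error: {rms_before:.4f} -> {rms_after:.4f}")
```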

  14. Loop corrections to primordial non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Boran, Sibel; Kahya, E. O.

    2018-02-01

    We discuss quantum gravitational loop effects to observable quantities such as curvature power spectrum and primordial non-Gaussianity of cosmic microwave background (CMB) radiation. We first review the previously shown case where one gets a time dependence for zeta-zeta correlator due to loop corrections. Then we investigate the effect of loop corrections to primordial non-Gaussianity of CMB. We conclude that, even with a single scalar inflaton, one might get a huge value for non-Gaussianity which would exceed the observed value by at least 30 orders of magnitude. Finally we discuss the consequences of this result for scalar driven inflationary models.

  15. Evaluation of three methods for retrospective correction of vignetting on medical microscopy images utilizing two open source software tools.

    PubMed

    Babaloukas, Georgios; Tentolouris, Nicholas; Liatis, Stavros; Sklavounou, Alexandra; Perrea, Despoina

    2011-12-01

    Correction of vignetting on images obtained by a digital camera mounted on a microscope is essential before applying image analysis. The aim of this study is to evaluate three methods for retrospective correction of vignetting on medical microscopy images and compare them with a prospective correction method. One digital image from four different tissues was used and a vignetting effect was applied to each of these images. The resulting vignetted image was replicated four times and in each replica a different method for vignetting correction was applied with the Fiji and GIMP software tools. The highest peak signal-to-noise ratio from the comparison of each method to the original image was obtained from the prospective method in all tissues. The morphological filtering method provided the highest peak signal-to-noise ratio value amongst the retrospective methods. The prospective method is suggested as the method of choice for correction of vignetting; if it is not applicable, then morphological filtering may be suggested as the retrospective alternative. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.
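
    Retrospective vignetting correction generally estimates the smooth intensity fall-off from the image itself and divides it out. The sketch below illustrates the morphological-filtering flavour of that idea with a large grey-scale opening as the vignetting estimate; it is a generic illustration, not the Fiji/GIMP workflow evaluated in the study, and the structuring-element size is an assumption.

```python
# Generic retrospective vignetting correction via morphological filtering.
import numpy as np
from scipy.ndimage import grey_opening, gaussian_filter

def correct_vignetting(image, size=75):
    """Estimate the slowly varying illumination field with a large grey-scale
    opening (suppresses small bright objects), smooth it, and divide it out."""
    field = grey_opening(image.astype(float), size=(size, size))
    field = gaussian_filter(field, sigma=size / 4)
    field /= field.max()                          # normalise so the brightest region is ~1
    return image / np.maximum(field, 1e-3)

# Usage on a synthetic flat image with a radial vignette applied.
yy, xx = np.mgrid[0:512, 0:512]
vignette = 1 - 0.4 * (((xx - 256) ** 2 + (yy - 256) ** 2) / 256 ** 2)
observed = 200.0 * vignette
restored = correct_vignetting(observed)
print(observed.std(), restored.std())             # spread shrinks after correction
```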

  16. Earth Tide Analysis Specifics in Case of Unstable Aquifer Regime

    NASA Astrophysics Data System (ADS)

    Vinogradov, Evgeny; Gorbunova, Ella; Besedina, Alina; Kabychenko, Nikolay

    2017-06-01

    We consider the main factors that affect underground water flow, including aquifer supply, collector state, and the passage of seismic waves from distant earthquakes. In geodynamically stable conditions, changes in underground inflow can significantly distort the hydrogeological response to Earth tides, which leads to incorrect estimation of the phase shift between tidal harmonics of ground displacement and water level variations in a wellbore. Besides an original approach to phase shift estimation that allows us to obtain one value per day for the semidiurnal M2 wave, we offer an empirical method of excluding periods of time that are strongly affected by high inflow. In spite of rather strong ground motion during the passage of earthquake waves, we did not observe a corresponding phase shift change against the background of significant recurrent variations due to fluctuating inflow. Though inflow variation is not the only important parameter that must be taken into consideration when performing phase shift analysis, permeability estimation is not adequate without correction based on background alterations of aquifer parameters due to natural and anthropogenic causes.

  17. Earth Tide Analysis Specifics in Case of Unstable Aquifer Regime

    NASA Astrophysics Data System (ADS)

    Vinogradov, Evgeny; Gorbunova, Ella; Besedina, Alina; Kabychenko, Nikolay

    2018-05-01

    We consider the main factors that affect underground water flow, including aquifer supply, collector state, and the passage of seismic waves from distant earthquakes. In geodynamically stable conditions, changes in underground inflow can significantly distort the hydrogeological response to Earth tides, which leads to incorrect estimation of the phase shift between tidal harmonics of ground displacement and water level variations in a wellbore. Besides an original approach to phase shift estimation that allows us to obtain one value per day for the semidiurnal M2 wave, we offer an empirical method of excluding periods of time that are strongly affected by high inflow. In spite of rather strong ground motion during the passage of earthquake waves, we did not observe a corresponding phase shift change against the background of significant recurrent variations due to fluctuating inflow. Though inflow variation is not the only important parameter that must be taken into consideration when performing phase shift analysis, permeability estimation is not adequate without correction based on background alterations of aquifer parameters due to natural and anthropogenic causes.

  18. Establishing geochemical background levels of selected trace elements in areas having geochemical anomalies: The case study of the Orbetello lagoon (Tuscany, Italy).

    PubMed

    Romano, Elena; Bergamin, Luisa; Croudace, Ian W; Ausili, Antonella; Maggi, Chiara; Gabellini, Massimo

    2015-07-01

    The determination of background concentration values (BGVs) in areas characterised by the presence of natural geochemical anomalies and anthropogenic impact appears essential for a correct pollution assessment. For this purpose, it is necessary to establish a reliable method for determination of local BGVs. The case of the Orbetello lagoon, a geologically complex area characterized by Tertiary volcanism, is illustrated. The vertical concentration profiles of As, Cd, Cr, Cu, Hg, Ni, Pb and Zn were studied in four sediment cores. Local BGVs were determined considering exclusively samples not affected by anthropogenic influence, recognized by means of multivariate statistics and radiochronological dating ((137)Cs and (210)Pb). Results showed BGVs well comparable with mean crustal or shale values for most of the considered elements, except for Hg (0.87 mg/kg d.w.) and As (16.87 mg/kg d.w.), due to mineralization present in the catchment basin draining into the lagoon. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Rejection of randomly coinciding 2ν2β events in ZnMoO4 scintillating bolometers

    NASA Astrophysics Data System (ADS)

    Chernyak, D. M.; Danevich, F. A.; Giuliani, A.; Mancuso, M.; Nones, C.; Olivieri, E.; Tenconi, M.; Tretyak, V. I.

    2014-01-01

    Random coincidence of 2ν2β decay events could be one of the main sources of background for 0ν2β decay searches in cryogenic bolometers due to their poor time resolution. Pulse-shape discrimination using front-edge analysis, the mean-time method, and the χ2 method was applied to discriminate randomly coinciding 2ν2β events in ZnMoO4 cryogenic scintillating bolometers. The background can be effectively rejected at the level of 99% by mean-time analysis of heat signals with a rise time of about 14 ms and a signal-to-noise ratio of 900, and at the level of 98% for light signals with a 3 ms rise time and a signal-to-noise ratio of 30 (under the requirement to detect 95% of single events). The importance of the signal-to-noise ratio, of correctly finding the signal start, and of choosing an appropriate sampling frequency is discussed.
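
    The mean-time parameter used to flag randomly coinciding (piled-up) events is simply the amplitude-weighted mean arrival time of the pulse samples: single pulses cluster around one value, while unresolved double pulses are shifted. The toy below uses an assumed exponential pulse shape purely for illustration and is not the collaboration's analysis code.

```python
# Toy mean-time pulse-shape discrimination for piled-up (double) pulses.
import numpy as np

def pulse(t, t0, rise=14e-3, decay=0.3):
    """Simple bolometer-like pulse: exponential rise and decay (times in s)."""
    return np.where(t >= t0,
                    (1 - np.exp(-(t - t0) / rise)) * np.exp(-(t - t0) / decay),
                    0.0)

def mean_time(t, f):
    return np.sum(f * t) / np.sum(f)

t = np.linspace(0, 2.0, 4000)
single = pulse(t, 0.1)
pileup = pulse(t, 0.1) + 0.8 * pulse(t, 0.25)     # second event 150 ms later

print("mean time, single event:  ", round(mean_time(t, single), 3), "s")
print("mean time, piled-up event:", round(mean_time(t, pileup), 3), "s")
# A cut on the mean-time distribution of single pulses rejects the shifted pile-up events.
```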

  20. A theoretical and practical clarification on the calculation of reflection loss for microwave absorbing materials

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Zhao, Kun; Drew, Michael G. B.; Liu, Yue

    2018-01-01

    Reflection loss is usually calculated and reported as a function of the thickness of a microwave absorption material. However, misleading results are often obtained because the principles embedded in the popular methods contradict the fundamental facts that electromagnetic waves cannot be reflected within a uniform material except at an interface, and that there are important differences between the concepts of characteristic impedance and input impedance. In this paper, these inconsistencies are analyzed theoretically and corrections are provided. The problems with the calculations indicate a gap between the background knowledge of materials scientists and microwave engineers, and for that reason a concise review of transmission line theory is provided, along with the mathematical background needed for a deeper understanding of the theory of reflection loss. The expressions for the gradient, divergence, Laplacian, and curl operators in a general orthogonal coordinate system are presented, including the concept of reciprocal vectors. Gauss's and Stokes's theorems are related to Green's theorem in a novel way.
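
    For orientation, the widely used single-layer, metal-backed expressions that the paper scrutinizes are usually written as follows (standard notation quoted as background, not reproduced from the paper):

```latex
Z_{\mathrm{in}} = Z_0\,\sqrt{\frac{\mu_r}{\varepsilon_r}}\,
  \tanh\!\left(j\,\frac{2\pi f d}{c}\sqrt{\mu_r \varepsilon_r}\right),
\qquad
\mathrm{RL}\,(\mathrm{dB}) = 20\log_{10}\left|\frac{Z_{\mathrm{in}} - Z_0}{Z_{\mathrm{in}} + Z_0}\right|
```

    Here Z_0 is the free-space impedance, d the absorber thickness, f the frequency, and c the speed of light; the paper's point is that the distinction between the input impedance Z_in and the characteristic impedance of the layer is often blurred when these formulas are applied.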

  1. Molecular docking.

    PubMed

    Morris, Garrett M; Lim-Wilby, Marguerita

    2008-01-01

    Molecular docking is a key tool in structural molecular biology and computer-assisted drug design. The goal of ligand-protein docking is to predict the predominant binding mode(s) of a ligand with a protein of known three-dimensional structure. Successful docking methods search high-dimensional spaces effectively and use a scoring function that correctly ranks candidate dockings. Docking can be used to perform virtual screening on large libraries of compounds, rank the results, and propose structural hypotheses of how the ligands inhibit the target, which is invaluable in lead optimization. The setting up of the input structures for the docking is just as important as the docking itself, and analyzing the results of stochastic search methods can sometimes be unclear. This chapter discusses the background and theory of molecular docking software, and covers the usage of some of the most-cited docking software.

  2. Staircase-scene-based nonuniformity correction in aerial point target detection systems.

    PubMed

    Huo, Lijun; Zhou, Dabiao; Wang, Dejiang; Liu, Rang; He, Bin

    2016-09-01

    Focal-plane arrays (FPAs) are often affected by heavy fixed-pattern noise, which severely degrades the detection rate and increases false alarms in airborne point target detection systems. Thus, high-precision nonuniformity correction is an essential preprocessing step. In this paper, a new nonuniformity correction method is proposed based on a staircase scene. This correction method can compensate for the nonlinear response of the detector and calibrate the entire optical system with computational efficiency and implementation simplicity. A proof-of-concept point target detection system is then established with a long-wave Sofradir FPA. Finally, the local standard deviation of the corrected image and the signal-to-clutter ratio of the Airy disk of a Boeing B738 are measured to evaluate the performance of the proposed nonuniformity correction method. Our experimental results demonstrate that the proposed correction method achieves high-quality corrections.
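
    In generic terms, a scene-based nonuniformity correction of this kind fits a per-pixel response curve from a set of known uniform levels and inverts it for new frames. The sketch below is a simplified per-pixel polynomial correction on simulated data; it is not the staircase-scene algorithm of the paper, and the simulated gains, offsets, and nonlinearity are assumptions.

```python
# Generic sketch: per-pixel polynomial nonuniformity correction fitted from
# a set of uniform (staircase) scene levels.
import numpy as np

levels = np.array([0.2, 0.4, 0.6, 0.8, 1.0])          # known staircase radiances (assumed)
h, w = 64, 64
gain = 1 + 0.1 * np.random.randn(h, w)                 # simulated pixel gains
offset = 0.05 * np.random.randn(h, w)                  # simulated pixel offsets
frames = np.array([gain * (L + 0.02 * L**2) + offset for L in levels])  # nonlinear response

# Fit a 2nd-order polynomial per pixel mapping raw response -> scene level.
coeffs = np.empty((3, h, w))
for i in range(h):
    for j in range(w):
        coeffs[:, i, j] = np.polyfit(frames[:, i, j], levels, deg=2)

def correct(raw):
    return coeffs[0] * raw**2 + coeffs[1] * raw + coeffs[2]

test = gain * (0.7 + 0.02 * 0.7**2) + offset            # new frame at level 0.7
print(np.std(test), np.std(correct(test)))              # residual nonuniformity shrinks
```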

  3. Progressing beyond SLMTA: Are internal audits and corrective action the key drivers of quality improvement?

    PubMed Central

    Mengo, Doris M.; Mohamud, Abdikher D.; Ochieng, Susan M.; Milgo, Sammy K.; Sexton, Connie J.; Moyo, Sikhulile; Luman, Elizabeth T.

    2014-01-01

    Background Kenya has implemented the Strengthening Laboratory Management Toward Accreditation (SLMTA) programme to facilitate quality improvement in medical laboratories and to support national accreditation goals. Continuous quality improvement after SLMTA completion is needed to ensure sustainability and continue progress toward accreditation. Methods Audits were conducted by qualified, independent auditors to assess the performance of five enrolled laboratories using the Stepwise Laboratory Quality Improvement Process Towards Accreditation (SLIPTA) checklist. End-of-programme (exit) and one year post-programme (surveillance) audits were compared for overall score, star level (from zero to five, based on scores) and scores for each of the 12 Quality System Essential (QSE) areas that make up the SLIPTA checklist. Results All laboratories improved from exit to surveillance audit (median improvement 38 percentage points, range 5–45 percentage points). Two laboratories improved from zero to one star, two improved from zero to three stars and one laboratory improved from three to four stars. The lowest median QSE scores at exit were: internal audit; corrective action; and occurrence management and process improvement (< 20%). Each of the 12 QSEs improved substantially at surveillance audit, with the greatest improvement in client management and customer service, internal audit and information management (≥ 50 percentage points). The two laboratories with the greatest overall improvement focused heavily on the internal audit and corrective action QSEs. Conclusion Whilst all laboratories improved from exit to surveillance audit, those that focused on the internal audit and corrective action QSEs improved substantially more than those that did not; internal audits and corrective actions may have acted as catalysts, leading to improvements in other QSEs. Systematic identification of core areas and best practices to address them is a critical step toward strengthening public medical laboratories. PMID:29043193

  4. Efficacy and Safety of a Hyaluronic Acid Filler to Correct Aesthetically Detracting or Deficient Features of the Asian Nose: A Prospective, Open-Label, Long-Term Study

    PubMed Central

    Liew, Steven; Scamp, Terrence; de Maio, Mauricio; Halstead, Michael; Johnston, Nicole; Silberberg, Michael; Rogers, John D.

    2016-01-01

    Background There is increasing interest among patients and plastic surgeons for alternatives to rhinoplasty, a common surgical procedure performed in Asia. Objectives To evaluate the safety, efficacy, and longevity of a hyaluronic acid filler in the correction of aesthetically detracting or deficient features of the Asian nose. Methods Twenty-nine carefully screened Asian patients had their noses corrected with the study filler (Juvéderm VOLUMA [Allergan plc, Dublin, Ireland] with lidocaine injectable gel), reflecting individualized treatment goals and utilizing a standardized injection procedure, and were followed for over 12 months. Results A clinically meaningful correction (≥1 grade improvement on the Assessment of Aesthetic Improvement Scale) was achieved in 27 (93.1%) patients at the first follow-up visit. This was maintained in 28 (96.6%) patients at the final visit, based on the independent assessments of a central non-injecting physician and the patients. At this final visit, 23 (79.3%) patients were satisfied or very satisfied with the study filler and 25 (86.2%) would recommend it to others. In this small series of patients, there were no serious adverse events (AEs), with all treatment-related AEs being mild to moderate, transient injection site reactions, unrelated to the study filler. Conclusions Using specific eligibility criteria, individualized treatment goals, and a standardized injection procedure, the study filler corrected aesthetically detracting or deficient features of the Asian nose, with the therapeutic effects lasting for over 12 months, consistent with a high degree of patient satisfaction. This study supports the safety and efficacy of this HA filler for specific nose augmentation procedures in selected Asian patients. Level of Evidence: 3 Therapeutic PMID:27301371

  5. Gaussian decomposition of high-resolution melt curve derivatives for measuring genome-editing efficiency

    PubMed Central

    Zaboikin, Michail; Freter, Carl

    2018-01-01

    We describe a method for measuring genome editing efficiency from in silico analysis of high-resolution melt curve data. The melt curve data derived from amplicons of genome-edited or unmodified target sites were processed to remove the background fluorescent signal emanating from free fluorophore and then corrected for temperature-dependent quenching of fluorescence of double-stranded DNA-bound fluorophore. Corrected data were normalized and numerically differentiated to obtain the first derivatives of the melt curves. These were then mathematically modeled as a sum or superposition of a minimal number of Gaussian components. Using Gaussian parameters determined by modeling of melt curve derivatives of unedited samples, we were able to model melt curve derivatives from genetically altered target sites where the mutant population could be accommodated using an additional Gaussian component. From this, the proportion contributed by the mutant component in the target region amplicon could be accurately determined. Mutant component computations compared well with the mutant frequency determination from next generation sequencing data. The results were also consistent with our earlier studies that used difference curve areas from high-resolution melt curves for determining the efficiency of genome-editing reagents. The advantage of the described method is that it does not require calibration curves to estimate the proportion of mutants in amplicons of genome-edited target sites. PMID:29300734
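
    The central computation — modelling the melt-curve derivative as a sum of Gaussians and reading the edited fraction from the relative area of the extra component — can be sketched with a standard nonlinear fit. This is a schematic example on synthetic data rather than the authors' pipeline; the peak positions, widths, and two-component model are assumptions.

```python
# Sketch: fit a melt-curve derivative as a sum of Gaussians and estimate
# the mutant fraction from the relative areas of the components.
import numpy as np
from scipy.optimize import curve_fit

def gauss(T, a, mu, sigma):
    return a * np.exp(-0.5 * ((T - mu) / sigma) ** 2)

def two_gauss(T, a1, mu1, s1, a2, mu2, s2):
    return gauss(T, a1, mu1, s1) + gauss(T, a2, mu2, s2)

T = np.linspace(70, 90, 400)
# Synthetic -dF/dT: wild-type peak at 82 C, edited (mutant) peak at 79 C.
data = gauss(T, 1.0, 82, 0.8) + gauss(T, 0.3, 79, 0.9)
data += np.random.normal(0, 0.01, T.size)

p0 = [1.0, 82, 1.0, 0.2, 79, 1.0]                  # initial guesses
popt, _ = curve_fit(two_gauss, T, data, p0=p0)

areas = [popt[0] * popt[2], popt[3] * popt[5]]     # Gaussian area is proportional to a * sigma
mutant_fraction = areas[1] / sum(areas)
print(f"estimated mutant fraction: {mutant_fraction:.2f}")
```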

  6. An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT

    NASA Astrophysics Data System (ADS)

    Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.

    2009-06-01

    A new method is proposed for scatter-correction of cone-beam CT images. A coarse reconstruction is used in initial iteration steps. Modelling of the x-ray tube spectra and detector response are included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code: BEAMnrc/EGSnrc (NRCC, Canada). Agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved as was CT-number accuracy in reconstructions. This latter improvement was less marked where the expected CT-number of a material was very different to the background material in which it was embedded.
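
    The "forced detection to a fixed set of node points followed by linear interpolation" step amounts to estimating the slowly varying scatter signal on a coarse detector grid and interpolating it to full resolution before subtraction. The sketch below shows only that interpolation-and-subtraction step on synthetic numbers; it is not the CRFD Monte Carlo itself, and the grid spacing and scatter shape are assumptions.

```python
# Sketch: interpolate a coarse Monte Carlo scatter estimate to the full
# detector grid and subtract it from the measured projection.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

det_u = np.arange(512)
det_v = np.arange(512)

# Coarse node grid where scatter was scored (e.g. every 64 pixels).
node_u = np.linspace(0, 511, 9)
node_v = np.linspace(0, 511, 9)
uu, vv = np.meshgrid(node_u, node_v, indexing="ij")
scatter_nodes = 50 + 10 * np.exp(-((uu - 256) ** 2 + (vv - 256) ** 2) / (2 * 150 ** 2))

interp = RegularGridInterpolator((node_u, node_v), scatter_nodes, method="linear")
U, V = np.meshgrid(det_u, det_v, indexing="ij")
scatter_full = interp(np.stack([U.ravel(), V.ravel()], axis=1)).reshape(512, 512)

projection = np.full((512, 512), 400.0)            # stand-in measured projection
corrected = projection - scatter_full              # scatter-corrected projection
print(corrected.mean())
```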

  7. Computational AeroAcoustics for Fan Noise Prediction

    NASA Technical Reports Server (NTRS)

    Envia, Ed; Hixon, Ray; Dyson, Rodger; Huff, Dennis (Technical Monitor)

    2002-01-01

    An overview of the current state-of-the-art in computational aeroacoustics as applied to fan noise prediction at NASA Glenn is presented. Results from recent modeling efforts using three dimensional inviscid formulations in both frequency and time domains are summarized. In particular, the application of a frequency domain method, called LINFLUX, to the computation of rotor-stator interaction tone noise is reviewed and the influence of the background inviscid flow on the acoustic results is analyzed. It has been shown that the noise levels are very sensitive to the gradients of the mean flow near the surface and that the correct computation of these gradients for highly loaded airfoils is especially problematic using an inviscid formulation. The ongoing development of a finite difference time marching code that is based on a sixth order compact scheme is also reviewed. Preliminary results from the nonlinear computation of a gust-airfoil interaction model problem demonstrate the fidelity and accuracy of this approach. Spatial and temporal features of the code as well as its multi-block nature are discussed. Finally, the latest results from an ongoing effort in the area of arbitrarily high order methods are reviewed, and technical challenges associated with implementing correct high order boundary conditions are discussed and possible strategies for addressing these challenges are outlined.

  8. Rapid Detection and Subtyping of Human Influenza A Viruses and Reassortants by Pyrosequencing

    PubMed Central

    Deng, Yi-Mo; Caldwell, Natalie; Barr, Ian G.

    2011-01-01

    Background Given the continuing co-circulation of the 2009 H1N1 pandemic influenza A viruses with seasonal H3N2 viruses, rapid and reliable detection of newly emerging influenza reassortant viruses is important to enhance our influenza surveillance. Methodology/Principal Findings A novel pyrosequencing assay was developed for the rapid identification and subtyping of potential human influenza A virus reassortants based on all eight gene segments of the virus. Except for HA and NA genes, one universal set of primers was used to amplify and subtype each of the six internal genes. With this method, all eight gene segments of 57 laboratory isolates and 17 original specimens of seasonal H1N1, H3N2 and 2009 H1N1 pandemic viruses were correctly matched with their corresponding subtypes. In addition, this method was shown to be capable of detecting reassortant viruses by correctly identifying the source of all 8 gene segments from three vaccine production reassortant viruses and three H1N2 viruses. Conclusions/Significance In summary, this pyrosequencing assay is a sensitive and specific procedure for screening large numbers of viruses for reassortment events amongst the commonly circulating human influenza A viruses, which is more rapid and cheaper than using conventional sequencing approaches. PMID:21886790

  9. RSA and its Correctness through Modular Arithmetic

    NASA Astrophysics Data System (ADS)

    Meelu, Punita; Malik, Sitender

    2010-11-01

    To ensure the security of business applications, the business sectors use Public Key Cryptographic Systems (PKCS). An RSA system generally belongs to the category of PKCS, used for both encryption and authentication. This paper gives an introduction to RSA through its encryption and decryption schemes, the mathematical background, which includes theorems for combining modular equations, and the correctness of RSA. In short, this paper explains some of the mathematical concepts that RSA is based on and then provides a complete proof that RSA works correctly. We can prove the correctness of RSA through the combined process of encryption and decryption based on the Chinese Remainder Theorem (CRT) and Euler's theorem. However, there is no mathematical proof that RSA is secure; everyone takes that on trust.
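
    A tiny worked example makes the encrypt/decrypt round trip and the role of modular arithmetic concrete. The primes below are deliberately small and insecure; this illustrates the mechanics whose correctness the paper proves, and is in no sense a usable cryptosystem.

```python
# Toy RSA with small primes (for illustration only - never use such sizes in practice).
from math import gcd

p, q = 61, 53
n = p * q                      # modulus
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17
assert gcd(e, phi) == 1        # e must be invertible modulo phi
d = pow(e, -1, phi)            # private exponent: d*e = 1 (mod phi)

m = 65                         # message, 0 <= m < n
c = pow(m, e, n)               # encryption: c = m^e mod n
m_recovered = pow(c, d, n)     # decryption: m = c^d mod n

assert m_recovered == m        # correctness follows from Euler's theorem / CRT
print(n, c, m_recovered)
```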

  10. Delegation in Correctional Nursing Practice.

    PubMed

    Tompkins, Frances

    2016-07-01

    Correctional nurses face daily challenges as a result of their work environment. Common challenges include availability of resources for appropriate care delivery, negotiating with custody staff for access to patients, adherence to scope of practice standards, and working with a varied staffing mix. Professional correctional nurses must consider the educational backgrounds and competency of other nurses and assistive personnel in planning for care delivery. Budgetary constraints and varied staff preparation can be a challenge for the professional nurse. Adequate care planning requires understanding the educational level and competency of licensed and unlicensed staff. Delegation is the process of assessing patient needs and transferring responsibility for care to appropriately educated and competent staff. Correctional nurses can benefit from increased knowledge about delegation. © The Author(s) 2016.

  11. Class III correction using an inter-arch spring-loaded module

    PubMed Central

    2014-01-01

    Background A retrospective study was conducted to determine the cephalometric changes in a group of Class III patients treated with the inter-arch spring-loaded module (CS2000®, Dynaflex, St. Ann, MO, USA). Methods Thirty Caucasian patients (15 males, 15 females) with an average pre-treatment age of 9.6 years were treated consecutively with this appliance and compared with a control group of subjects from the Bolton-Brush Study who were matched in age, gender, and craniofacial morphology to the treatment group. Lateral cephalograms were taken before treatment and after removal of the CS2000® appliance. The treatment effects of the CS2000® appliance were calculated by subtracting the changes due to growth (control group) from the treatment changes. Results All patients were improved to a Class I dental arch relationship with a positive overjet. Significant sagittal, vertical, and angular changes were found between the pre- and post-treatment radiographs. With an average treatment time of 1.3 years, the maxillary base moved forward by 0.8 mm, while the mandibular base moved backward by 2.8 mm together with improvements in the ANB and Wits measurements. The maxillary incisor moved forward by 1.3 mm and the mandibular incisor moved forward by 1.0 mm. The maxillary molar moved forward by 1.0 mm while the mandibular molar moved backward by 0.6 mm. The average overjet correction was 3.9 mm and 92% of the correction was due to skeletal contribution and 8% was due to dental contribution. The average molar correction was 5.2 mm and 69% of the correction was due to skeletal contribution and 31% was due to dental contribution. Conclusions Mild to moderate Class III malocclusion can be corrected using the inter-arch spring-loaded appliance with minimal patient compliance. The overjet correction was contributed by forward movement of the maxilla, backward and downward movement of the mandible, and proclination of the maxillary incisors. The molar relationship was corrected by mesialization of the maxillary molars, distalization of the mandibular molars together with a rotation of the occlusal plane. PMID:24934153

  12. A diffusion-based truncated projection artifact reduction method for iterative digital breast tomosynthesis reconstruction

    PubMed Central

    Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir M

    2014-01-01

    Digital breast tomosynthesis (DBT) has strong promise to improve sensitivity for detecting breast cancer. DBT reconstruction estimates the breast tissue attenuation using projection views (PVs) acquired in a limited angular range. Because of the limited field of view (FOV) of the detector, the PVs may not completely cover the breast in the x-ray source motion direction at large projection angles. The voxels in the imaged volume cannot be updated when they are outside the FOV, thus causing a discontinuity in intensity across the FOV boundaries in the reconstructed slices, which we refer to as the truncated projection artifact (TPA). Most existing TPA reduction methods were developed for the filtered backprojection method in the context of computed tomography. In this study, we developed a new diffusion-based method to reduce TPAs during DBT reconstruction using the simultaneous algebraic reconstruction technique (SART). Our TPA reduction method compensates for the discontinuity in background intensity outside the FOV of the current PV after each PV updating in SART. The difference in voxel values across the FOV boundary is smoothly diffused to the region beyond the FOV of the current PV. Diffusion-based background intensity estimation is performed iteratively to avoid structured artifacts. The method is applicable to TPA in both the forward and backward directions of the PVs and for any number of iterations during reconstruction. The effectiveness of the new method was evaluated by comparing the visual quality of the reconstructed slices and the measured discontinuities across the TPA with and without artifact correction at various iterations. The results demonstrated that the diffusion-based intensity compensation method reduced the TPA while preserving the detailed tissue structures. The visibility of breast lesions obscured by the TPA was improved after artifact reduction. PMID:23318346

  13. Flight Calibration of the LROC Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Humm, D. C.; Tschimmel, M.; Brylow, S. M.; Mahanti, P.; Tran, T. N.; Braden, S. E.; Wiseman, S.; Danton, J.; Eliason, E. M.; Robinson, M. S.

    2016-04-01

    Characterization and calibration are vital for instrument commanding and image interpretation in remote sensing. The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) takes 500 Mpixel greyscale images of lunar scenes at 0.5 meters/pixel. It uses two nominally identical line scan cameras for a larger crosstrack field of view. Stray light, spatial crosstalk, and nonlinearity were characterized using flight images of the Earth and the lunar limb. These are important for imaging shadowed craters, studying ~1 meter size objects, and photometry, respectively. Background, nonlinearity, and flatfield corrections have been implemented in the calibration pipeline. An eight-column pattern in the background is corrected. The detector is linear for DN = 600-2000, but a signal-dependent additive correction is required and applied for DN < 600. A predictive model of detector temperature and dark level was developed to command the dark level offset. This avoids images with a cutoff at DN = 0 and minimizes quantization error in companding. Absolute radiometric calibration is derived from comparison of NAC images with ground-based images taken with the Robotic Lunar Observatory (ROLO) at much lower spatial resolution but with the same photometric angles.

  14. ForCent model development and testing using the Enriched Background Isotope Study experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parton, W.J.; Hanson, P. J.; Swanston, C.

    The ForCent forest ecosystem model was developed by making major revisions to the DayCent model including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool 14C signature (Δ14C) data from the Enriched Background Isotope Study 14C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the 14C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ14C data, and with soil respiration Δ14C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study 14C experimental treatments on soil respiration Δ14C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.

  15. ForCent Model Development and Testing using the Enriched Background Isotope Study (EBIS) Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parton, William; Hanson, Paul J; Swanston, Chris

    The ForCent forest ecosystem model was developed by making major revisions to the DayCent model including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool 14C signature (Δ14C) data from the Enriched Background Isotope Study 14C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the 14C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ14C data, and with soil respiration Δ14C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study 14C experimental treatments on soil respiration Δ14C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.

  16. Multimodal intraoperative neuromonitoring in corrective surgery for adolescent idiopathic scoliosis: Evaluation of 354 consecutive cases

    PubMed Central

    Kundnani, Vishal K; Zhu, Lisa; Tak, HH; Wong, HK

    2010-01-01

    Background: Multimodal intraoperative neuromonitoring is recommended during corrective spinal surgery and has been widely used in surgery for spinal deformity with successful outcomes. Despite the successful outcomes of corrective surgery and the increased safety of patients with the use of spinal cord monitoring in many large spine centers, this modality has not yet achieved widespread popularity. We report the analysis of prospectively collected intraoperative neurophysiological monitoring data of 354 consecutive patients undergoing corrective surgery for adolescent idiopathic scoliosis (AIS) to establish the efficacy of multimodal neuromonitoring and to evaluate its comparative sensitivity and specificity. Materials and Methods: The study group consisted of 354 (female = 309; male = 45) patients undergoing spinal deformity corrective surgery between 2004 and 2008. Patients were monitored using electrophysiological methods, including somatosensory-evoked potentials and motor-evoked potentials, simultaneously. Results: Mean age of patients was 13.6 years (±2.3 years). The operative procedures involved instrumented fusion of the thoracic, lumbar, or both curves. Baseline somatosensory-evoked potentials (SSEP) and neurogenic motor-evoked potentials (NMEP) were recorded successfully in all cases. Thirteen cases showed significant alerts that prompted reversal of the intervention. All 13 cases with significant alerts had detectable NMEP alerts, whereas significant SSEP alerts were detected in 8 cases. Two patients awoke with a new neurological deficit (0.56%) and had significant intraoperative SSEP + NMEP alerts. There were no false positives with SSEP (high specificity), but 5 false negatives with SSEP (38%) reduced its sensitivity. There were no false negatives with NMEP, but 2 of 13 cases were false positives with NMEP (15%). The specificity of SSEP (100%) is higher than that of NMEP (96%); however, the sensitivity of NMEP (100%) is far better than that of SSEP (51%). Overall, the sensitivity, specificity and positive predictive value of combined multimodality neuromonitoring in this series were 100%, 98.5% and 85%, respectively. Conclusion: Neurogenic motor-evoked potential (NMEP) monitoring appears to be superior to conventional SSEP monitoring for identifying evolving spinal cord injury. Used in conjunction, the sensitivity and specificity of combined neuromonitoring may reach up to 100%. Multimodality monitoring with SSEP + NMEP should be the standard of care. PMID:20165679

  17. A UMLS-based spell checker for natural language processing in vaccine safety

    PubMed Central

    Tolentino, Herman D; Matters, Michael D; Walop, Wikke; Law, Barbara; Tong, Wesley; Liu, Fang; Fontelo, Paul; Kohl, Katrin; Payne, Daniel C

    2007-01-01

    Background The Institute of Medicine has identified patient safety as a key goal for health care in the United States. Detecting vaccine adverse events is an important public health activity that contributes to patient safety. Reports about adverse events following immunization (AEFI) from surveillance systems contain free-text components that can be analyzed using natural language processing. To extract Unified Medical Language System (UMLS) concepts from free text and classify AEFI reports based on the concepts they contain, we first needed to clean the text by expanding abbreviations and shortcuts and correcting spelling errors. Our objective in this paper was to create a UMLS-based spelling error correction tool as a first step in the natural language processing (NLP) pipeline for AEFI reports. Methods We developed spell checking algorithms using open source tools. We used de-identified AEFI surveillance reports to create free-text data sets for analysis. After expansion of abbreviated clinical terms and shortcuts, we performed spelling correction in four steps: (1) error detection, (2) word list generation, (3) word list disambiguation, and (4) error correction. We then measured the performance of the resulting spell checker by comparing it to manual correction. Results We used 12,056 words to train the spell checker and tested its performance on 8,131 words. During testing, sensitivity, specificity, and positive predictive value (PPV) for the spell checker were 74% (95% CI: 74–75%), 100% (95% CI: 100–100%), and 47% (95% CI: 46–48%), respectively. Conclusion We created a prototype spell checker that can be used to process AEFI reports. We used the UMLS Specialist Lexicon as the primary source of dictionary terms and the WordNet lexicon as a secondary source. We used the UMLS as a domain-specific source of dictionary terms against which potentially misspelled words in the corpus were compared. The prototype sensitivity was comparable to currently available tools, but the specificity was far superior. The slow processing speed may be improved by trimming the pipeline down to its most useful component algorithms. Other investigators may find the methods we developed useful for cleaning text using lexicons specific to their area of interest. PMID:17295907
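
    A minimal sketch of the four-step scheme described above (detect, generate candidates, disambiguate, correct) is shown below. The tiny lexicon and frequency table stand in for the UMLS Specialist Lexicon and a real training corpus; they and the function name are illustrative assumptions, not the authors' implementation:

    ```python
    # Dictionary-based spelling corrector sketch: (1) flag words missing from the
    # lexicon, (2) generate similar candidates, (3) rank by corpus frequency,
    # (4) substitute the best candidate. Lexicon and frequencies are toy values.
    from difflib import get_close_matches

    LEXICON = {"fever", "rash", "injection", "swelling", "seizure", "vaccine"}
    CORPUS_FREQ = {"fever": 120, "rash": 95, "injection": 60, "swelling": 40,
                   "seizure": 15, "vaccine": 200}

    def correct_token(token: str) -> str:
        word = token.lower()
        if word in LEXICON:                                  # step 1: error detection
            return token
        candidates = get_close_matches(word, LEXICON, n=5, cutoff=0.75)  # step 2
        if not candidates:
            return token                                     # leave unknown words alone
        # step 3: disambiguate by corpus frequency; step 4: substitute
        return max(candidates, key=lambda w: CORPUS_FREQ.get(w, 0))

    print([correct_token(t) for t in "pt developed fevr and rsh after vacine".split()])
    ```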

  18. General properties of the Foldy-Wouthuysen transformation and applicability of the corrected original Foldy-Wouthuysen method

    NASA Astrophysics Data System (ADS)

    Silenko, Alexander J.

    2016-02-01

    General properties of the Foldy-Wouthuysen transformation, which is widely used in quantum mechanics and quantum chemistry, are considered. Merits and demerits of the original Foldy-Wouthuysen transformation method are analyzed. Although this method does not satisfy the Eriksen condition of the Foldy-Wouthuysen transformation, it can be corrected with the use of the Baker-Campbell-Hausdorff formula. We show that such a correction is possible and propose an appropriate calculation algorithm. The applicability of the corrected Foldy-Wouthuysen method is restricted by the condition that the series of relativistic corrections converges.
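
    For reference, the lowest-order terms of the Baker-Campbell-Hausdorff formula invoked here are the standard ones; the abstract does not specify the particular operators X and Y to which it is applied in the corrected Foldy-Wouthuysen scheme:

    ```latex
    % Lowest-order terms of the Baker-Campbell-Hausdorff expansion for operators X, Y
    \[
      \ln\!\left(e^{X}e^{Y}\right)
      = X + Y + \tfrac{1}{2}[X,Y]
      + \tfrac{1}{12}\bigl([X,[X,Y]] + [Y,[Y,X]]\bigr) + \cdots
    \]
    ```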

  19. Fast conjugate phase image reconstruction based on a Chebyshev approximation to correct for B0 field inhomogeneity and concomitant gradients

    PubMed Central

    Chen, Weitian; Sica, Christopher T.; Meyer, Craig H.

    2008-01-01

    Off-resonance effects can cause image blurring in spiral scanning and various forms of image degradation in other MRI methods. Off-resonance effects can be caused by both B0 inhomogeneity and concomitant gradient fields. Previously developed off-resonance correction methods focus on the correction of a single source of off-resonance. This work introduces a computationally efficient method of correcting for B0 inhomogeneity and concomitant gradients simultaneously. The method is a fast alternative to conjugate phase reconstruction, with the off-resonance phase term approximated by Chebyshev polynomials. The proposed algorithm is well suited for semiautomatic off-resonance correction, which works well even with an inaccurate or low-resolution field map. The proposed algorithm is demonstrated using phantom and in vivo data sets acquired by spiral scanning. Semiautomatic off-resonance correction alone is shown to provide a moderate amount of correction for concomitant gradient field effects, in addition to B0 inhomogeneity effects. However, better correction is provided by the proposed combined method. The best results were produced using the semiautomatic version of the proposed combined method. PMID:18956462
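
    As a rough illustration of the underlying idea (a sketch with assumed parameters, not the authors' reconstruction code), the off-resonance phase factor at one readout time can be approximated by a low-order Chebyshev expansion in the field-map value; such a separable approximation lets the spatially varying part and the time-dependent part of a conjugate-phase reconstruction be applied in separate, fast steps:

    ```python
    # Sketch: approximate exp(i*2*pi*f*t) by a Chebyshev polynomial in the
    # field-map value f. Range, readout time, and order are assumed values.
    import numpy as np
    from numpy.polynomial import chebyshev as C

    f_max = 150.0    # Hz, assumed field-map range [-f_max, f_max]
    t = 5e-3         # s, one representative readout time point
    order = 10       # expansion order (accuracy vs. number of base terms)

    # Fit real and imaginary parts separately over the normalized range [-1, 1].
    x = np.linspace(-1.0, 1.0, 513)
    phase = np.exp(1j * 2 * np.pi * (x * f_max) * t)
    coeffs = C.chebfit(x, phase.real, order) + 1j * C.chebfit(x, phase.imag, order)

    # Evaluate the approximation at arbitrary field-map values and check the error.
    f = np.linspace(-f_max, f_max, 257)
    approx = C.chebval(f / f_max, coeffs)
    exact = np.exp(1j * 2 * np.pi * f * t)
    print("max abs error:", np.max(np.abs(approx - exact)))
    ```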

  20. Knowledge about tooth avulsion and its management among dental assistants in Riyadh, Saudi Arabia

    PubMed Central

    2014-01-01

    Background Studies evaluating dental assistants’ knowledge about tooth avulsion and its management are rare. The purpose of this study was to evaluate the level of knowledge about tooth avulsion and its management among dental assistants in Riyadh, Saudi Arabia and to assess its relationship with their educational background. Methods A convenience sampling methodology was employed for sample selection. Over a period of four months starting in February, 2013, 691 pretested 17-item questionnaires were distributed. A total of 498 questionnaires were returned for an overall response rate of 72.1%. Six questions were related to knowledge about permanent tooth avulsion and one question was related to knowledge about primary tooth avulsion. Correct answers to these questions were assigned one point each, and based on this scoring system, an overall knowledge score was calculated. An analysis of covariance was used to test the association between the level of knowledge (total score) and the educational qualifications of the respondents (dental degree and others). A P-value of 0.05 was considered the threshold for statistical significance. Results The majority of the respondents (n = 387; 77.7%) were non-Saudis (377 were from the Philippines), and 79.1% (n = 306) of the Filipinos had a dental degree. The question about recommendations for an avulsed tooth that is dirty elicited the highest number of correct responses (n = 444; 89.2%), whereas the question about the best storage media elicited the lowest number of correct responses (n = 192; 38.6%). The overall mean score for knowledge about tooth avulsion was 6.27 ± 1.74. The mean knowledge score among the respondents with a dental degree was 6.63 ± 1.37, whereas that among the respondents with other qualifications was 5.71 ± 2.08. Conclusions The educational qualifications of the surveyed dental assistants were strongly correlated with the level of knowledge about tooth avulsion and its management. PMID:24885584
