SU-E-I-38: Improved Metal Artifact Correction Using Adaptive Dual Energy Calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Elder, E; Roper, J
2015-06-15
Purpose: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Results: Highly attenuating copper rods cause severe streaking artifacts on standard CT images. EDEC improves the image quality but cannot eliminate the streaking artifacts. Compared to EDEC, the proposed ADEC method further reduces the streaking resulting from metallic inserts and beam-hardening effects and obtains material decomposition images with significantly improved accuracy. Conclusion: We propose an adaptive dual energy calibration method to correct for metal artifacts. ADEC is evaluated with the Shepp-Logan phantom and shows superior metal artifact correction performance. In the future, we will further evaluate the performance of the proposed method with phantom and patient data.
Chen, Weitian; Sica, Christopher T; Meyer, Craig H
2008-11-01
Off-resonance effects can cause image blurring in spiral scanning and various forms of image degradation in other MRI methods. Off-resonance effects can be caused by both B0 inhomogeneity and concomitant gradient fields. Previously developed off-resonance correction methods focus on the correction of a single source of off-resonance. This work introduces a computationally efficient method of correcting for B0 inhomogeneity and concomitant gradients simultaneously. The method is a fast alternative to conjugate phase reconstruction, with the off-resonance phase term approximated by Chebyshev polynomials. The proposed algorithm is well suited for semiautomatic off-resonance correction, which works well even with an inaccurate or low-resolution field map. The proposed algorithm is demonstrated using phantom and in vivo data sets acquired by spiral scanning. Semiautomatic off-resonance correction alone is shown to provide a moderate amount of correction for concomitant gradient field effects, in addition to B0 inhomogeneity effects. However, better correction is provided by the proposed combined method. The best results were produced using the semiautomatic version of the proposed combined method.
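The core computational trick can be illustrated compactly. Below is a minimal sketch (all parameters are assumed for illustration) of how the off-resonance phase term exp(i·2π·f·t) at one readout time is approximated by a short Chebyshev expansion over the field-map range, which is what lets conjugate-phase reconstruction be replaced by a small weighted sum of base reconstructions:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sketch: approximate exp(i*2*pi*f*t) by a low-order Chebyshev
# expansion in f, so conjugate-phase reconstruction becomes a short
# weighted sum of base reconstructions instead of a per-pixel sum.
fmax = 150.0                      # assumed field-map range, +/- Hz
t = 10e-3                         # one readout time point (s)
L = 8                             # number of Chebyshev terms (assumed)

f_nodes = np.linspace(-fmax, fmax, 512)
target = np.exp(2j * np.pi * f_nodes * t)

# Fit real and imaginary parts on the normalized interval [-1, 1].
x = f_nodes / fmax
coef = C.chebfit(x, target.real, L - 1) + 1j * C.chebfit(x, target.imag, L - 1)

approx = C.chebval(x, coef)
print("max approximation error:", np.abs(approx - target).max())
```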
Preferred color correction for digital LCD TVs
NASA Astrophysics Data System (ADS)
Kim, Kyoung Tae; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho
2009-01-01
Instead of colorimetric color reproduction, preferred color correction is applied in digital TVs to improve subjective image quality. The first step of preferred color correction is to survey the preferred color coordinates of memory colors, which can be achieved by off-line human visual tests. The next step is to extract pixels of memory colors representing skin, grass, and sky. For the detected pixels, colors are shifted towards the desired coordinates identified in advance. This correction process may result in undesirable contours on the boundaries between corrected and uncorrected areas. For digital TV applications, the extraction and correction must be applied to every frame of the moving images. This paper presents a preferred color correction method in LCH color space. Values of chroma and hue are corrected independently, and undesirable contours on the boundaries of correction are minimized. The proposed method changes the coordinates of memory-color pixels towards the target color coordinates, with the amount of correction determined by the averaged coordinate of the extracted pixels. The proposed method thus maintains the relative color differences within memory-color areas. Performance is evaluated using paired comparison, and the results indicate that the proposed method reproduces perceptually pleasing images for viewers.
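As a rough illustration of this strategy, the sketch below (all targets and the strength parameter are assumed, not taken from the paper) applies one frame-global chroma/hue shift derived from the averaged coordinate of the detected memory-color pixels, so relative color differences inside the region are preserved:

```python
import numpy as np

# Minimal sketch of preferred color correction in LCH space (assumed
# parameters): pixels classified as a memory color are pulled toward a
# target (chroma, hue) coordinate, with the shift derived from the
# average coordinate of the detected region.
def correct_lch(C, h, mask, C_target, h_target, strength=0.5):
    """C, h: chroma/hue images; mask: detected memory-color pixels."""
    C_mean = C[mask].mean()
    h_mean = h[mask].mean()
    # One global shift per frame, applied only inside the mask, keeps
    # relative color differences within the memory-color area:
    dC = strength * (C_target - C_mean)
    dh = strength * (h_target - h_mean)
    C_out, h_out = C.copy(), h.copy()
    C_out[mask] = np.clip(C[mask] + dC, 0, None)
    h_out[mask] = (h[mask] + dh) % 360.0
    return C_out, h_out
```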
A velocity-correction projection method based immersed boundary method for incompressible flows
NASA Astrophysics Data System (ADS)
Cai, Shanggui
2014-11-01
In the present work we propose a novel direct forcing immersed boundary method based on the velocity-correction projection method of [J.L. Guermond, J. Shen, Velocity-correction projection methods for incompressible flows, SIAM J. Numer. Anal., 41(1) (2003) 112]. The principal idea of the immersed boundary method is to correct the velocity in the vicinity of the immersed object using an artificial force that mimics the presence of the physical boundaries. Therefore, the velocity-correction projection method is preferred to its pressure-correction counterpart in the present work. Since the velocity-correction projection method can be considered a dual of the pressure-correction method, the proposed method can also be interpreted as follows: first, the pressure is predicted by treating the viscous term explicitly without consideration of the immersed boundary, and the solenoidal velocity is used to determine the volume force on the Lagrangian points; then, the no-slip boundary condition is enforced by correcting the velocity with the implicit viscous term. To demonstrate the efficiency and accuracy of the proposed method, several numerical simulations are performed and compared with results in the literature. This work was supported by the China Scholarship Council.
Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B
2018-06-01
To propose a simple method to correct the vascular input function (VIF) for inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate the improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve the accuracy of VIF estimation and pharmacokinetic fitting. In the animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived from uncorrected VIFs showed no significant changes. The proposed correction method improves the accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
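The correction rests on the spoiled gradient echo (SPGR) signal model. The following sketch shows a generic SPGR-based chain from signal enhancement to contrast agent concentration (sequence parameters and relaxivity are assumed; this is not the authors' exact inflow-correction procedure):

```python
import numpy as np
from scipy.optimize import brentq

# Generic SPGR-based concentration estimation (a sketch under assumed
# parameters). Given the pre-contrast T1 (T10) and the signal
# enhancement ratio, solve the SPGR equation for post-contrast T1,
# then convert the T1 change to concentration via the relaxivity r1.
def spgr(T1, TR, alpha):
    E1 = np.exp(-TR / T1)
    return np.sin(alpha) * (1 - E1) / (1 - np.cos(alpha) * E1)

def concentration(S_ratio, T10, TR=0.005, alpha=np.deg2rad(12), r1=4.5):
    """S_ratio = S_post / S_pre; r1 in 1/(mM*s); TR, T1 in seconds."""
    f = lambda T1: spgr(T1, TR, alpha) / spgr(T10, TR, alpha) - S_ratio
    T1_post = brentq(f, 1e-3, 10.0)           # root-find post-contrast T1
    return (1.0 / T1_post - 1.0 / T10) / r1   # concentration in mM

print(concentration(S_ratio=2.5, T10=1.4))
```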
An Automated Baseline Correction Method Based on Iterative Morphological Operations.
Chen, Yunliang; Dai, Liankui
2018-05-01
Raman spectra usually suffer from baseline drift caused by fluorescence or other factors. Therefore, baseline correction is a necessary and crucial step before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method first determines the structuring element adaptively and then gradually removes the spectral peaks during iteration to obtain an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible in handling different kinds of baselines in various practical situations. Comparison with several state-of-the-art baseline correction methods demonstrates its advantages in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can potentially be applied to baseline correction of other analytical signals, such as IR spectra and chromatograms.
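A minimal sketch of baseline estimation by iterated grey-scale opening is given below; the fixed window size and iteration count are assumptions, whereas the paper determines the structuring element adaptively:

```python
import numpy as np
from scipy.ndimage import grey_opening, uniform_filter1d

# Sketch of baseline estimation by iterative morphological opening
# (window size and iteration rule are assumptions).
def morph_baseline(y, width=101, n_iter=20):
    baseline = y.copy()
    for _ in range(n_iter):
        opened = grey_opening(baseline, size=width)   # erode peaks
        smoothed = uniform_filter1d(opened, width)    # avoid staircase edges
        # Keep the lower envelope so peaks are progressively removed:
        baseline = np.minimum(baseline, smoothed)
    return baseline

# Usage: corrected = spectrum - morph_baseline(spectrum)
```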
Stripe nonuniformity correction for infrared imaging system based on single image optimization
NASA Astrophysics Data System (ADS)
Hua, Weiping; Zhao, Jufeng; Cui, Guangmang; Gong, Xiaoli; Ge, Peng; Zhang, Jiang; Xu, Zhihai
2018-06-01
Infrared imaging is often disturbed by stripe nonuniformity noise, and scene-based correction methods can effectively reduce the impact of stripe noise. In this paper, a stripe nonuniformity correction method based on a differential constraint is proposed. First, the gray-level distribution of stripe nonuniformity is analyzed and a penalty function is constructed from the difference between the horizontal and vertical gradients. With a weight function, the penalty function is optimized to obtain the corrected image. Experiments show that, compared with other single-frame approaches, the proposed method performs better in both subjective and objective analysis, does less damage to edges and details, and runs faster. We also discuss the differences between the proposed idea and multi-frame methods. Finally, the method is successfully applied in a hardware system.
Wu, Wenchuan; Fang, Sheng; Guo, Hua
2014-06-01
Aiming at motion artifacts and off-resonance artifacts in multi-shot diffusion magnetic resonance imaging (MRI), we propose a joint correction method that corrects both kinds of artifacts simultaneously without additional acquisition of navigator data or a field map. The method acquires MRI data with a multi-shot variable-density spiral sequence and uses an autofocusing technique for image deblurring, with motion-induced phase errors corrected during deblurring by either a direct or an iterative method. In vivo MRI experiments demonstrated that the proposed method effectively suppresses motion artifacts and off-resonance artifacts and achieves images with fine structures. In addition, applying the proposed method does not increase the scan time.
Staircase-scene-based nonuniformity correction in aerial point target detection systems.
Huo, Lijun; Zhou, Dabiao; Wang, Dejiang; Liu, Rang; He, Bin
2016-09-01
Focal-plane arrays (FPAs) often suffer from heavy fixed-pattern noise, which severely degrades the detection rate and increases false alarms in airborne point target detection systems. High-precision nonuniformity correction is therefore an essential preprocessing step. In this paper, a new nonuniformity correction method based on a staircase scene is proposed. This correction method can compensate for the nonlinear response of the detector and calibrate the entire optical system with computational efficiency and implementation simplicity. A proof-of-concept point target detection system is then established with a long-wave Sofradir FPA. Finally, the local standard deviation of the corrected image and the signal-to-clutter ratio of the Airy disk of a Boeing B738 are measured to evaluate the performance of the proposed nonuniformity correction method. Our experimental results demonstrate that the proposed correction method achieves high-quality corrections.
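A staircase-scene calibration of this kind can be sketched as a per-pixel polynomial fit from raw response to known radiance levels; the code below is a simplified illustration (polynomial order and data layout are assumed), not the paper's exact procedure:

```python
import numpy as np

# Sketch of staircase-scene nonuniformity correction (assumed form):
# each pixel's raw responses to K known uniform radiance levels are
# fitted with a low-order polynomial mapping raw value -> radiance,
# which compensates a nonlinear detector response.
def fit_nuc(frames, levels, order=2):
    """frames: (K, H, W) raw responses to the K staircase levels;
    levels: (K,) known radiances. Returns (H, W, order+1) coefficients."""
    K, H, W = frames.shape
    X = frames.reshape(K, -1)
    coeffs = np.array([np.polyfit(X[:, i], levels, order)  # per-pixel fit
                       for i in range(H * W)])
    return coeffs.reshape(H, W, order + 1)

def apply_nuc(frame, coeffs):
    out = np.zeros_like(frame, dtype=float)
    for k in range(coeffs.shape[-1]):      # Horner evaluation, highest first
        out = out * frame + coeffs[..., k]
    return out
```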
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). Diagnostic biomarkers are usually measured with error, and ignoring measurement error can bias the estimation of AUC, resulting in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which require the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
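For intuition, the worked example below shows the attenuation of AUC by additive measurement error and the classical normal-theory correction; note that the paper's contribution is precisely a method that does not require this normality assumption:

```python
import numpy as np
from scipy.stats import norm, rankdata

# Worked example of AUC attenuation by measurement error in the
# classical normal-theory case (for intuition only). With equal
# variances s2, the true AUC is Phi(delta / sqrt(2*s2)); additive
# error of variance se2 inflates the denominator and shrinks AUC.
rng = np.random.default_rng(0)
mu0, mu1, s, se, n = 0.0, 1.0, 1.0, 0.8, 5000

cases = rng.normal(mu1, s, n) + rng.normal(0.0, se, n)
ctrls = rng.normal(mu0, s, n) + rng.normal(0.0, se, n)

# Mann-Whitney estimate of the observed (error-prone) AUC:
ranks = rankdata(np.concatenate([cases, ctrls]))
auc_obs = (ranks[:n].sum() - n * (n + 1) / 2) / n**2

# Normal-theory correction: divide the probit of the observed AUC by
# sqrt(reliability), where reliability = s2 / (s2 + se2).
rel = s**2 / (s**2 + se**2)
auc_corr = norm.cdf(norm.ppf(auc_obs) / np.sqrt(rel))

print(f"true {norm.cdf((mu1 - mu0) / np.sqrt(2 * s**2)):.3f} "
      f"observed {auc_obs:.3f} corrected {auc_corr:.3f}")
```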
A novel method for correcting scanline-observational bias of discontinuity orientation
Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong
2016-01-01
Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249
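For orientation, the classical Terzaghi weighting below illustrates the geometric origin of the scanline bias that the paper corrects at the distribution level (the per-observation weighting shown here is the traditional approach, not the authors' numerical solution):

```python
import numpy as np

# A discontinuity is less likely to intersect a scanline the smaller
# the angle between the scanline and the plane, so classical Terzaghi
# weighting scales each observation by 1/|cos(theta)|, theta being the
# angle between the scanline and the plane's pole (normal).
def pole_vector(dip_dir_deg, dip_deg):
    """Unit normal (pole) of a plane given dip direction/dip angle;
    axes: x = East, y = North, z = up."""
    dd, d = np.deg2rad(dip_dir_deg), np.deg2rad(dip_deg)
    return np.array([np.sin(d) * np.sin(dd), np.sin(d) * np.cos(dd), np.cos(d)])

def scanline_vector(trend_deg, plunge_deg):
    t, p = np.deg2rad(trend_deg), np.deg2rad(plunge_deg)
    return np.array([np.cos(p) * np.sin(t), np.cos(p) * np.cos(t), -np.sin(p)])

def terzaghi_weight(dip_dir, dip, trend, plunge, max_w=10.0):
    cos_t = abs(pole_vector(dip_dir, dip) @ scanline_vector(trend, plunge))
    return min(1.0 / max(cos_t, 1e-6), max_w)   # cap the blind-zone blow-up

print(terzaghi_weight(dip_dir=120, dip=60, trend=120, plunge=0))
```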
NASA Astrophysics Data System (ADS)
In, Hai-Jung; Kwon, Oh-Kyong
2010-03-01
A simple pixel structure using a video data correction method is proposed to compensate for electrical characteristic variations of driving thin-film transistors (TFTs) and the degradation of organic light-emitting diodes (OLEDs) in active-matrix OLED (AMOLED) displays. The proposed method senses the electrical characteristic variations of TFTs and OLEDs and stores them in external memory. The nonuniform emission current of TFTs and the aging of OLEDs are corrected by modulating video data using the stored data. Experimental results show that the emission current error due to electrical characteristic variation of driving TFTs is in the range from -63.1 to 61.4% without compensation, but is decreased to the range from -1.9 to 1.9% with the proposed correction method. The luminance error due to the degradation of an OLED is less than 1.8% when the proposed correction method is used for a 50% degraded OLED.
Yun, Sungdae; Kyriakos, Walid E; Chung, Jun-Young; Han, Yeji; Yoo, Seung-Schik; Park, Hyunwook
2007-03-01
To develop a novel approach for calculating the accurate sensitivity profiles of phased-array coils, resulting in correction of nonuniform intensity in parallel MRI. The proposed intensity-correction method estimates the accurate sensitivity profile of each channel of the phased-array coil. The sensitivity profile is estimated by fitting a nonlinear curve to every projection view through the imaged object. The nonlinear curve-fitting efficiently obtains the low-frequency sensitivity profile by eliminating the high-frequency image contents. Filtered back-projection (FBP) is then used to compute the estimates of the sensitivity profile of each channel. The method was applied to both phantom and brain images acquired from the phased-array coil. Intensity-corrected images from the proposed method had more uniform intensity than those obtained by the commonly used sum-of-squares (SOS) approach. With the use of the proposed correction method, the intensity variation was reduced to 6.1% from 13.1% of the SOS. When the proposed approach was applied to the computation of the sensitivity maps during sensitivity encoding (SENSE) reconstruction, it outperformed the SOS approach in terms of the reconstructed image uniformity. The proposed method is more effective at correcting the intensity nonuniformity of phased-array surface-coil images than the conventional SOS method. In addition, the method was shown to be resilient to noise and was successfully applied for image reconstruction in parallel imaging.
A study on scattering correction for γ-photon 3D imaging test method
NASA Astrophysics Data System (ADS)
Xiao, Hui; Zhao, Min; Liu, Jiantang; Chen, Hao
2018-03-01
A pair of 511 keV γ-photons is generated during a positron annihilation, and their directions differ by 180°. The path and energy information can be utilized to form a 3D imaging test method in the industrial domain. However, scattered γ-photons are a major factor influencing the imaging precision of the test method. This study proposes a γ-photon single-scattering correction method from the perspective of spatial geometry. The method first determines the possible scattering points when a scattered γ-photon pair hits the detector pair. The range of scattering angles is then calculated according to the energy window. Finally, the number of scattered γ-photons is estimated from the attenuation of the total scattered γ-photons along their paths. The corrected γ-photons are obtained by deducting the scattered γ-photons from the original ones. Two experiments were conducted to verify the effectiveness of the proposed scattering correction method. The results show that the proposed method can efficiently correct scattered γ-photons and improve the test accuracy.
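The energy-window-to-angle step has a simple closed form: a 511 keV photon Compton-scattered through angle θ retains E' = 511/(2 − cos θ) keV, so the acceptance window bounds the admissible scattering angles. A small worked example:

```python
import numpy as np

# For E0 = 511 keV (= electron rest energy), the Compton relation
# E' = E0 / (1 + (1 - cos(theta))) = 511 / (2 - cos(theta)) keV
# lets an energy window [E_lo, E_hi] bound the scattering angle.
def max_scatter_angle(E_lo_keV, E0=511.0):
    """Largest angle whose scattered energy still falls above E_lo."""
    cos_theta = 2.0 - E0 / E_lo_keV
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# e.g. a 450-511 keV acceptance window:
print(f"angles up to {max_scatter_angle(450.0):.1f} deg pass the window")
```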
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
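A toy instance of the prediction-correction template (with an illustrative scalar objective, not the paper's algorithm) looks like this:

```python
import numpy as np

# Sketch of a projected prediction-correction tracker for a
# time-varying constrained problem (illustrative objective):
# minimize f(x; t) = 0.5*(x - a(t))^2 subject to x in [0, 1].
a = lambda t: 0.75 * np.sin(t)            # drifting optimum
proj = lambda x: np.clip(x, 0.0, 1.0)     # projection onto feasible set

dt, alpha, x = 0.1, 0.5, 0.0
for k in range(100):
    t = k * dt
    # Prediction: first-order step anticipating the optimizer's motion.
    # Here grad_xt f = -a'(t) and the Hessian is 1, so no inverse is needed:
    x = proj(x + dt * 0.75 * np.cos(t))
    # Correction: a few projected gradient steps on f(.; t + dt).
    for _ in range(3):
        x = proj(x - alpha * (x - a(t + dt)))
print("tracking error:", abs(x - proj(a(100 * dt))))
```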
Extraction of memory colors for preferred color correction in digital TVs
NASA Astrophysics Data System (ADS)
Ryu, Byong Tae; Yeom, Jee Young; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho
2009-01-01
Subjective image quality is one of the most important performance indicators for digital TVs. In order to improve subjective image quality, preferred color correction is often employed: areas of memory colors such as skin, grass, and sky are modified to generate a pleasing impression for viewers. Before applying preferred color correction, the tendency of preference for memory colors should be identified, which is often accomplished by off-line human visual tests. Areas containing the memory colors must then be extracted and color correction applied to the extracted areas; these processes must be performed on-line. This paper presents a new method for area extraction of three types of memory colors. Performance of the proposed method is evaluated by calculating the correct and false detection ratios. Experimental results indicate that the proposed method outperforms previous methods for memory color extraction.
Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.
Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua
2018-02-01
Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and used to calculate X-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments designed to emulate image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Peng; Hutton, Brian F.; Holstensson, Maria
2015-12-15
Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low-energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method were used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for 99mTc/123I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low-energy tail effects of the CZT detector. The parameters of the model were obtained using 99mTc and 123I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, a line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and the line source experiment demonstrated that the TEW method overestimated scatter, while the proposed method provided more accurate scatter estimation by considering the low-energy tail effect. In the phantom study, improved defect contrasts were observed with both correction methods compared to no correction, especially for the images of 99mTc in dual-radionuclide imaging where there is heavy contamination from 123I. In this case, the nontransmural defect contrast was improved from 0.39 to 0.47 with the TEW method and to 0.51 with the proposed method, and the transmural defect contrast was improved from 0.62 to 0.74 with the TEW method and to 0.73 with the proposed method. In the patient study, the proposed method provided higher myocardium-to-blood pool contrast than the TEW method. Similar to the phantom experiment, the improvement was most substantial for the images of 99mTc in dual-radionuclide imaging. In this case, the myocardium-to-blood pool ratio was improved from 7.0 to 38.3 with the TEW method and to 63.6 with the proposed method. Compared to the TEW method, the proposed method also provided higher count levels in the reconstructed images in both phantom and patient studies, indicating reduced overestimation of scatter. Using the proposed method, consistent reconstruction results were obtained for both single-radionuclide data with scatter correction and dual-radionuclide data with scatter and crosstalk corrections, in both phantom and human studies.
Conclusions: The authors demonstrate that the TEW method leads to overestimation of scatter and crosstalk for the CZT-based imaging system, while the proposed scatter and crosstalk correction method provides more accurate self-scatter and down-scatter estimations for quantitative single-radionuclide and dual-radionuclide imaging.
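For reference, the conventional TEW estimate that the proposed method improves upon approximates the scatter under the photopeak window by a trapezoid spanned by two narrow side windows. A sketch with assumed window settings:

```python
# The conventional triple-energy window (TEW) scatter estimate used as
# the comparison baseline above: counts in two narrow side windows are
# scaled to the photopeak window width.
def tew_scatter(c_left, c_right, w_left, w_right, w_peak):
    """c_left/c_right: counts in the side windows of widths
    w_left/w_right (keV); w_peak: photopeak window width (keV)."""
    return (c_left / w_left + c_right / w_right) * w_peak / 2.0

# e.g. for 99mTc: 126-130 and 150-154 keV side windows, 130-150 keV peak
print(tew_scatter(c_left=80.0, c_right=20.0, w_left=4, w_right=4, w_peak=20))
```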
Highly efficient nonrigid motion‐corrected 3D whole‐heart coronary vessel wall imaging
Atkinson, David; Henningsson, Markus; Botnar, Rene M.; Prieto, Claudia
2016-01-01
Purpose To develop a respiratory motion correction framework to accelerate free‐breathing three‐dimensional (3D) whole‐heart coronary lumen and coronary vessel wall MRI. Methods We developed a 3D flow‐independent approach for vessel wall imaging based on the subtraction of data with and without T2‐preparation prepulses acquired interleaved with image navigators. The proposed method corrects both datasets to the same respiratory position using beat‐to‐beat translation and bin‐to‐bin nonrigid corrections, producing coregistered, motion‐corrected coronary lumen and coronary vessel wall images. The proposed method was studied in 10 healthy subjects and was compared with beat‐to‐beat translational correction (TC) and no motion correction for the left and right coronary arteries. Additionally, the coronary lumen images were compared with a 6‐mm diaphragmatic navigator gated and tracked scan. Results No significant differences (P > 0.01) were found between the proposed method and the gated and tracked scan for coronary lumen, despite an average improvement in scan efficiency to 96% from 59%. Significant differences (P < 0.01) were found in right coronary artery vessel wall thickness, right coronary artery vessel wall sharpness, and vessel wall visual score between the proposed method and TC. Conclusion The feasibility of a highly efficient motion correction framework for simultaneous whole‐heart coronary lumen and vessel wall has been demonstrated. Magn Reson Med 77:1894–1908, 2017. © 2016 International Society for Magnetic Resonance in Medicine PMID:27221073
NASA Astrophysics Data System (ADS)
Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.
2018-01-01
Regional climate models (RCMs) are used to downscale coarse-resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from observed climatological data and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods and is tested on six watersheds in different climatic zones of India to assess the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data were used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and Nash-Sutcliffe efficiency (NSE) were employed to evaluate the different bias correction methods. The analysis suggests that the proposed method effectively corrects the daily bias in rainfall compared with using monthly factors. Methods such as local intensity scaling, modified power transformation, and distribution mapping, which adjust the wet-day frequencies, performed better than methods that do not adjust wet-day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of observed data with NSE values above 0.81 over most parts of India. Hydrological simulations forced with the bias-corrected rainfall (distribution mapping and modified power transformation methods using the proposed daily correction factors) were similar to those forced with the IMD rainfall. The results demonstrate that the methods and the time scales used for bias correction of RCM rainfall data have a large impact on the accuracy of the daily rainfall and consequently on the simulated streamflow. The analysis suggests that distribution mapping with daily correction factors is preferable for adjusting RCM rainfall data, irrespective of season or climate zone, for realistic simulation of streamflow.
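A minimal sketch of the sliding-window idea follows; the ±15-day half-width and the quantile-mapping variant shown are assumptions for illustration:

```python
import numpy as np

# Sketch: correction factors for calendar day d are derived from all
# modeled/observed values within +/- `half` days of d, pooled across
# years, instead of from one calendar-month pool.
def window_quantile_map(rcm_day, doy, rcm_hist, obs_hist, doy_hist, half=15):
    """Map one simulated value via quantile mapping built from a
    +/-`half`-day day-of-year window around `doy`."""
    dist = np.minimum(np.abs(doy_hist - doy), 365 - np.abs(doy_hist - doy))
    sel = dist <= half                               # circular DOY window
    q = (rcm_hist[sel] <= rcm_day).mean()            # empirical quantile
    return np.quantile(obs_hist[sel], np.clip(q, 0.0, 1.0))
```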
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment, and this background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting, and model-free methods, but few of them have been applied in the field of LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. In a background correction simulation, the spline interpolation method acquired the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting, and the model-free method; all of these methods yield larger SBR values than before background correction (the SBR before correction is 10.0992, whereas the SBR values after correction by spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquires large SBR values, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods improve the quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient before correction is 0.9776, whereas the values after correction using spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting, and the model-free method. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
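A compact sketch of a spline-interpolation background estimate is shown below; the anchor-point rule (sparse local minima) is an assumption for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelmin

# Sketch of a spline-interpolation background estimate: interpolate a
# smooth curve through sparse local minima of the spectrum and treat
# it as the continuous background (anchor rule is an assumption).
def spline_background(wavelength, intensity, order=25):
    idx = argrelmin(intensity, order=order)[0]        # sparse local minima
    idx = np.r_[0, idx, len(intensity) - 1]           # pin the endpoints
    cs = CubicSpline(wavelength[idx], intensity[idx])
    return cs(wavelength)

# corrected = spectrum - spline_background(wl, spectrum); the SBR is
# then peak signal over the residual background level.
```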
Real-Time Single Frequency Precise Point Positioning Using SBAS Corrections
Li, Liang; Jia, Chun; Zhao, Lin; Cheng, Jianhua; Liu, Jianxu; Ding, Jicheng
2016-01-01
Real-time single frequency precise point positioning (PPP) is a promising technique for high-precision navigation with sub-meter or even centimeter-level accuracy because of its convenience and low cost. The navigation performance of single frequency PPP heavily depends on the real-time availability and quality of correction products for satellite orbits and satellite clocks. Satellite-based augmentation systems (SBAS) provide correction products in real time, but they are intended for wide-area differential positioning at the 1 meter level. By imposing constraints on the ionosphere error, we have developed a real-time single frequency PPP method that fully utilizes SBAS correction products. The proposed PPP method is tested with static and kinematic data, respectively. The static experimental results show that the position accuracy of the proposed PPP method can reach the decimeter level, an improvement of at least 30% compared with the traditional SBAS method, with positioning convergence achieved within at most 636 epochs in static mode. In the kinematic experiment, the position accuracy of the proposed PPP method improved by at least 20 cm relative to the SBAS method, and the method achieved decimeter-level convergence within 500 s. PMID:27517930
WE-AB-207A-07: A Planning CT-Guided Scatter Artifact Correction Method for CBCT Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, X; Liu, T; Dong, X
Purpose: Cone beam computed tomography (CBCT) imaging is in increasing demand for high-performance image-guided radiotherapy, such as online tumor delineation and dose calculation. However, current CBCT imaging has severe scatter artifacts, and its clinical application is therefore limited to patient setup based mainly on bony structures. This study's purpose is to develop a CBCT artifact correction method. Methods: The proposed scatter correction method utilizes the planning CT to improve CBCT image quality. First, image registration is used to match the planning CT with the CBCT to reduce the geometric difference between the two images. Then, the planning CT-based prior information is entered into a Bayesian deconvolution framework to iteratively perform scatter artifact correction for the CBCT images. This technique was evaluated using Catphan phantoms with multiple inserts. Contrast-to-noise ratios (CNR), signal-to-noise ratios (SNR), and the image spatial nonuniformity (ISN) in selected volumes of interest (VOIs) were calculated to assess the proposed correction method. Results: After scatter correction, the CNR increased by factors of 1.96, 3.22, 3.20, 3.46, 3.44, 1.97, and 1.65, and the SNR increased by factors of 1.05, 2.09, 1.71, 3.95, 2.52, 1.54, and 1.84 for the Air, PMP, LDPE, Polystyrene, Acrylic, Delrin, and Teflon inserts, respectively. The ISN decreased from 21.1% to 4.7% in the corrected images. All CNR, SNR, and ISN values in the corrected CBCT images were much closer to those in the planning CT images. The results demonstrate that the proposed method reduces the relevant artifacts and recovers CT numbers. Conclusion: We have developed a novel CBCT artifact correction method based on the planning CT image and demonstrated that the proposed CT-guided correction method can significantly reduce scatter artifacts and improve image quality. This method has great potential to correct CBCT images for use in adaptive radiotherapy.
Tan, Kok Chooi; Lim, Hwee San; Matjafri, Mohd Zubir; Abdullah, Khiruddin
2012-06-01
Atmospheric corrections for multi-temporal optical satellite images are necessary, especially in change detection analyses such as normalized difference vegetation index (NDVI) rationing. Abrupt change detection analysis using remote-sensing techniques requires radiometric congruity and atmospheric correction to monitor terrestrial surfaces over time. Two atmospheric correction methods were used for this study: relative radiometric normalization and the simplified method for atmospheric correction (SMAC) in the solar spectrum. A multi-temporal data set consisting of two sets of Landsat images from the period between 1991 and 2002 of Penang Island, Malaysia, was used to compare NDVI maps generated using the proposed atmospheric correction methods. Land surface temperature (LST) was retrieved using ATCOR3_T in PCI Geomatica 10.1 image processing software. Linear regression analysis was utilized to analyze the relationship between NDVI and LST. This study reveals that both of the proposed atmospheric correction methods yielded high accuracy, as shown by the linear correlation coefficients. To check the accuracy of the equation obtained through linear regression analysis for each satellite image, 20 points were randomly chosen. The results showed that the SMAC method yielded a consistent error when predicting the NDVI value from the regression-derived equation. The average errors from both proposed atmospheric correction methods were less than 10%.
Kim, Yoon-Chul; Nielsen, Jon-Fredrik; Nayak, Krishna S
2008-01-01
To develop a method that automatically corrects ghosting artifacts due to echo-misalignment in interleaved gradient-echo echo-planar imaging (EPI) in arbitrary oblique or double-oblique scan planes. An automatic ghosting correction technique was developed based on an alternating EPI acquisition and the phased-array ghost elimination (PAGE) reconstruction method. The direction of k-space traversal is alternated at every temporal frame, enabling lower temporal-resolution ghost-free coil sensitivity maps to be dynamically estimated. The proposed method was compared with conventional one-dimensional (1D) phase correction in axial, oblique, and double-oblique scan planes in phantom and cardiac in vivo studies. The proposed method was also used in conjunction with two-fold acceleration. The proposed method with nonaccelerated acquisition provided excellent suppression of ghosting artifacts in all scan planes, and was substantially more effective than conventional 1D phase correction in oblique and double-oblique scan planes. The feasibility of real-time reconstruction using the proposed technique was demonstrated in a scan protocol with 3.1-mm spatial and 60-msec temporal resolution. The proposed technique with nonaccelerated acquisition provides excellent ghost suppression in arbitrary scan orientations without a calibration scan, and can be useful for real-time interactive imaging, in which scan planes are frequently changed with arbitrary oblique orientations.
Scene-based nonuniformity correction and enhancement: pixel statistics and subpixel motion.
Zhao, Wenyi; Zhang, Chao
2008-07-01
We propose a framework for scene-based nonuniformity correction (NUC) and nonuniformity correction and enhancement (NUCE), which focal-plane-array-like sensors require to obtain clean, enhanced-quality images. The core of the proposed framework is a novel registration-based nonuniformity correction super-resolution (NUCSR) method that is bootstrapped by statistical scene-based NUC methods. Based on a comprehensive imaging model and accurate parametric motion estimation, we are able to remove severe/structured nonuniformity and, in the presence of subpixel motion, simultaneously improve image resolution. One important feature of our NUCSR method is the adoption of a parametric motion model that allows us to (1) handle many practical scenarios where parametric motions are present and (2) carry out perfect super-resolution in principle by exploiting available subpixel motions. Experiments with real data demonstrate the efficiency of the proposed NUCE framework and the effectiveness of the NUCSR method.
Single image non-uniformity correction using compressive sensing
NASA Astrophysics Data System (ADS)
Jian, Xian-zhong; Lu, Rui-zhi; Guo, Qiang; Wang, Gui-pu
2016-05-01
A non-uniformity correction (NUC) method for an infrared focal plane array imaging system is proposed. The algorithm, based on compressive sensing (CS) of a single image, overcomes the disadvantages of "ghost artifacts" and heavy computational cost in traditional NUC algorithms. A point-sampling matrix was designed to validate the CS measurements in the time domain. The measurements were corrected using the midway infrared equalization algorithm, and the missing pixels were solved with the regularized orthogonal matching pursuit algorithm. Experimental results showed that the proposed method can reconstruct the entire image with only 25% of the pixels. A small difference was found between the correction results using 100% of the pixels and the reconstruction results using 40% of the pixels. Evaluation of the proposed method on the basis of the root-mean-square error, peak signal-to-noise ratio, and roughness index (ρ) proved the method to be robust and highly applicable.
NASA Astrophysics Data System (ADS)
Wang, Han; Yan, Jie; Liu, Yongqian; Han, Shuang; Li, Li; Zhao, Jing
2017-11-01
Increasing the accuracy of wind speed prediction lays a solid foundation for reliable wind power forecasting. Most traditional correction methods for wind speed prediction establish a mapping relationship between the wind speed of the numerical weather prediction (NWP) and the historical measurement data (HMD) at the corresponding time slot, ignoring the time-dependent behavior of the wind speed series. In this paper, a multi-step-ahead wind speed prediction correction method is proposed that considers the carry-over effect of wind speed at the previous time slot. To this end, the proposed method employs both NWP and HMD as model inputs and training labels. First, a probabilistic analysis of the NWP deviation for different wind speed bins is performed to illustrate the inadequacy of the traditional time-independent mapping strategy. Then, a support vector machine (SVM) is used to implement the proposed mapping strategy and to establish the correction model for all wind speed bins. A wind farm in northern China is taken as an example to validate the proposed method, with three benchmark wind speed prediction methods used for comparison. The results show that the proposed model has the best performance across different time horizons.
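A minimal sketch of this time-dependent mapping strategy using an SVM regressor follows (the feature layout follows the abstract; hyperparameters are assumptions):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Sketch of the time-dependent mapping: the regressor sees both the
# NWP forecast for time t and the measured speed at t-1, instead of
# the NWP value alone (hyperparameters are assumptions).
def train_corrector(nwp, hmd):
    """nwp, hmd: aligned 1-D arrays of forecast and measured speeds."""
    X = np.column_stack([nwp[1:], hmd[:-1]])   # [NWP_t, HMD_{t-1}]
    y = hmd[1:]                                # label: measured speed at t
    model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
    return model.fit(X, y)

# Prediction at time t: model.predict([[nwp_t, measured_speed_t_minus_1]])
```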
A novel scene-based non-uniformity correction method for SWIR push-broom hyperspectral sensors
NASA Astrophysics Data System (ADS)
Hu, Bin-Lin; Hao, Shi-Jing; Sun, De-Xin; Liu, Yin-Nian
2017-09-01
A novel scene-based non-uniformity correction (NUC) method for short-wavelength infrared (SWIR) push-broom hyperspectral sensors is proposed and evaluated. This method relies on the assumption that, for each band, ground objects with similar reflectance will form uniform regions once a sufficient number of scanning lines are acquired. The uniform regions are extracted automatically through a sorting algorithm and are used to compute the corresponding NUC coefficients. SWIR hyperspectral data from an airborne experiment are used to verify and evaluate the proposed method; results show that stripes in the scenes are well corrected without significant information loss, and the non-uniformity is less than 0.5%. In addition, the proposed method is compared with two other common methods in terms of adaptability to various scenes, non-uniformity, roughness, and spectral fidelity. The proposed method shows strong adaptability, high accuracy, and efficiency.
Bandwidth correction for LED chromaticity based on Levenberg-Marquardt algorithm
NASA Astrophysics Data System (ADS)
Huang, Chan; Jin, Shiqun; Xia, Guo
2017-10-01
Light emitting diodes (LEDs) are widely employed in industrial applications and scientific research. With a spectrometer, the chromaticity of an LED can be measured; however, chromaticity shift will occur due to the broadening effects of the spectrometer. In this paper, an approach to bandwidth correction of LED chromaticity based on the Levenberg-Marquardt algorithm is put forward. We compare the chromaticity of simulated LED spectra after bandwidth correction using the proposed method and the differential operator method. The experimental results show that the proposed approach achieves excellent bandwidth correction performance, which proves its effectiveness. The method has also been tested on real blue LED spectra.
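The fitting step can be sketched by treating the measured spectrum as the true LED line shape convolved with a triangular slit function and recovering the line-shape parameters with Levenberg-Marquardt; the Gaussian LED model and slit shape below are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of bandwidth correction by model fitting: the measured
# spectrum is modeled as a Gaussian LED line shape convolved with the
# spectrometer's triangular bandpass, and Levenberg-Marquardt recovers
# the true line-shape parameters (models are assumptions).
wl = np.arange(400.0, 520.0, 0.5)             # wavelength grid, nm
bw = 10.0                                     # spectrometer bandwidth (FWHM, nm)

def slit(width_nm):
    x = np.arange(-width_nm, width_nm + 0.5, 0.5)
    k = np.maximum(0.0, 1.0 - np.abs(x) / width_nm)   # triangular bandpass
    return k / k.sum()

def model(p):                                 # Gaussian LED spectrum
    peak, fwhm, amp = p
    g = amp * np.exp(-4 * np.log(2) * (wl - peak) ** 2 / fwhm**2)
    return np.convolve(g, slit(bw), mode="same")

measured = model([465.0, 20.0, 1.0]) \
    + 0.002 * np.random.default_rng(1).standard_normal(wl.size)
fit = least_squares(lambda p: model(p) - measured,
                    x0=[460.0, 25.0, 0.8], method="lm")
print("recovered peak / FWHM:", fit.x[:2])    # chromaticity then follows
                                              # from the corrected spectrum
```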
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonetto, Andrea; Dall'Anese, Emiliano
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do notmore » require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.« less
Local blur analysis and phase error correction method for fringe projection profilometry systems.
Rao, Li; Da, Feipeng
2018-05-20
We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.
An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction
Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo
2018-01-01
The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods. PMID:29342857
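A compact sketch of a gated neural-network NUC update in this spirit is given below; the local-mean desired image follows the classical retina-like scheme, while the specific gating thresholds are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel

# Sketch of a retina-like NUC update with a gated learning rate
# (thresholds are assumptions): the desired image is the local spatial
# mean, and the LMS update is switched off on edges and on large
# frame-to-frame changes to suppress ghosting.
def nuc_step(frame, gain, offset, prev, eta0=0.05, edge_thr=10.0, temp_thr=5.0):
    corrected = gain * frame + offset
    desired = uniform_filter(corrected, size=5)        # local-mean target
    err = corrected - desired
    edges = np.hypot(sobel(corrected, 0), sobel(corrected, 1))
    eta = eta0 * (edges < edge_thr)                    # spatial (edge) gate
    eta = eta * (np.abs(corrected - prev) < temp_thr)  # temporal gate
    gain -= eta * err * frame                          # LMS updates
    offset -= eta * err
    return corrected, gain, offset
```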
NASA Astrophysics Data System (ADS)
Yang, Kai; Burkett, George, Jr.; Boone, John M.
2014-11-01
The purpose of this research was to develop a method to correct the cupping artifact caused by x-ray scattering and to achieve consistent Hounsfield unit (HU) values of breast tissues for a dedicated breast CT (bCT) system. The use of a beam passing array (BPA) composed of parallel holes has previously been proposed for scatter correction in various imaging applications. In this study, we first verified the efficacy and accuracy of using a BPA to measure the scatter signal on a cone-beam bCT system. A systematic scatter correction approach was then developed by modeling the scatter-to-primary ratio (SPR) in projection images acquired with and without the BPA. To quantitatively evaluate the improved accuracy of HU values, different breast tissue-equivalent phantoms were scanned and radially averaged HU profiles through reconstructed planes were evaluated. The dependence of the correction method on object size and number of projections was studied, and a simplified application of the proposed method to five clinical patient scans was performed to demonstrate efficacy. For the 10-18 cm breast diameters typical of bCT, the proposed method effectively corrects the cupping artifact and reduces the variation in HU values of breast-equivalent material from 150 to 40 HU. The measured HU values of 100% glandular tissue, 50/50 glandular/adipose tissue, and 100% adipose tissue were approximately 46, -35, and -94, respectively. Only six BPA projections were necessary to implement this method accurately, and the additional dose requirement is less than 1% of the exam dose. The proposed method can effectively correct the cupping artifact caused by x-ray scattering and retain consistent HU values of breast tissues.
NASA Astrophysics Data System (ADS)
Hwang, Ui-Jung; Shin, Dongho; Lee, Se Byeong; Lim, Young Kyung; Jeong, Jong Hwi; Kim, Hak Soo; Kim, Ki Hwan
2018-05-01
To apply a scintillating fiber dosimetry system to measure the range of a proton therapy beam, a new method is proposed to correct for the quenching effect when measuring a spread-out Bragg peak (SOBP) proton beam whose range is modulated by a range modulator wheel. The scintillating fiber dosimetry system was composed of a plastic scintillating fiber (BCF-12), optical fiber (SH 2001), photomultiplier tube (H7546), and data acquisition system (PXI6221 and SCC68). The proton beam was generated by a cyclotron (Proteus-235) at the National Cancer Center in Korea, operated in double-scattering mode, with the spread-out Bragg peak produced by a spinning range modulator wheel. Bragg peak beams and SOBP beams of various ranges were measured, corrected, and compared to ion chamber data. For the Bragg peak beam, a quenching equation was used to correct the quenching effect. In the proposed process for correcting SOBP beams, the data measured with the scintillating fiber were decomposed into the constituent Bragg peaks of the SOBP, each Bragg peak was corrected, and the peaks were then recombined to reconstruct the SOBP. The measured depth-dose curve for the single Bragg peak beam was well corrected using a simple quenching equation, and the corrected SOBP signal was in accordance with the results measured with an ion chamber. We propose a new method to correct the SOBP beam for the quenching effect in a scintillating fiber dosimetry system; this method can be applied to other scintillator dosimetry of radiation beams in which the quenching effect appears in the scintillator.
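The single-peak correction can be sketched with Birks' law: scintillator light per unit dose scales as (dE/dx)/(1 + kB·dE/dx), so the measured signal is multiplied back by the Birks denominator. The kB value and the stopping-power input below are assumptions:

```python
import numpy as np

# Sketch of a Birks-law quenching correction for a single Bragg peak
# (kB and the dE/dx input are assumptions): light output per unit dose
# is attenuated by 1/(1 + kB*dE/dx), so the measured signal is scaled
# back by that denominator.
def correct_quenching(signal, dEdx, kB=0.01):
    """signal: measured light vs depth; dEdx: mass stopping power
    along the same depth axis (MeV cm^2/g); kB in g/(MeV cm^2)."""
    return signal * (1.0 + kB * np.asarray(dEdx))

# For an SOBP, the approach above decomposes the measurement into its
# constituent pristine Bragg peaks, corrects each with this single-peak
# relation, and recombines them.
```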
A method of bias correction for maximal reliability with dichotomous measures.
Penev, Spiridon; Raykov, Tenko
2010-02-01
This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.
Improved calibration-based non-uniformity correction method for uncooled infrared camera
NASA Astrophysics Data System (ADS)
Liu, Chengwei; Sui, Xiubao
2017-08-01
With the latest improvements in microbolometer focal plane arrays (FPA), uncooled infrared (IR) cameras are becoming the most widely used devices in thermography, especially in handheld devices. However, the influence of changing ambient conditions and the non-uniform response of the sensors make it difficult to correct the non-uniformity of an uncooled infrared camera. In this paper, based on the infrared radiation characteristics of a TEC-less uncooled infrared camera, a novel model is proposed for calibration-based non-uniformity correction (NUC). In this model, we introduce the FPA temperature, together with the responses of the microbolometers under different ambient temperatures, to calculate the correction parameters. Based on the proposed model, the correction parameters can be worked out from calibration measurements under controlled ambient conditions with a uniform blackbody. All correction parameters are determined after the calibration process and then used to correct the non-uniformity of the infrared camera in real time. This paper presents the details of the compensation procedure and the performance of the proposed calibration-based non-uniformity correction method. The method was evaluated on realistic IR images obtained by a 384x288-pixel uncooled long-wave infrared (LWIR) camera operated under changing ambient conditions. The results show that the method can exclude the influence of changing ambient conditions and ensure that the infrared camera has stable performance.
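The abstract does not give the model equations; as a rough sketch of the calibration idea, the classic two-point NUC core is shown below, with the understanding that the proposed model additionally parameterizes the coefficients by FPA temperature. All names and the linear response assumption are hypothetical.

```python
import numpy as np

def calibrate_nuc(raw_low, raw_high, t_low, t_high):
    """Per-pixel gain/offset from two uniform-blackbody frames.

    raw_low, raw_high: frames viewing blackbodies at temperatures
    t_low and t_high. Assumes a linear per-pixel response and that
    raw_high != raw_low everywhere (dead pixels need masking).
    """
    gain = (t_high - t_low) / (raw_high - raw_low)
    offset = t_low - gain * raw_low
    return gain, offset

def correct(raw, gain, offset):
    """Map each pixel's raw response onto the common radiometric scale."""
    return gain * raw + offset
```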
In, Myung-Ho; Posnansky, Oleg; Speck, Oliver
2016-05-01
To accurately correct diffusion-encoding direction-dependent eddy-current-induced geometric distortions in diffusion-weighted echo-planar imaging (DW-EPI) and to minimize the calibration time at 7 Tesla (T). A point spread function (PSF) mapping based eddy-current calibration method is presented to determine eddy-current-induced geometric distortions, including nonlinear eddy-current effects within the readout acquisition window. To evaluate the temporal stability of eddy-current maps, calibration was performed four times within 3 months. Furthermore, spatial variations of measured eddy-current maps versus their linear superposition were investigated to enable correction in DW-EPIs with arbitrary diffusion directions without direct calibration. For comparison, an image-based eddy-current correction method was additionally applied. Finally, this method was combined with a PSF-based susceptibility-induced distortion correction approach proposed previously to correct both susceptibility- and eddy-current-induced distortions in DW-EPIs. Very fast eddy-current calibration in a three-dimensional volume is possible with the proposed method. The measured eddy-current maps are very stable over time, and very similar maps can be obtained by linear superposition of principal-axes eddy-current maps. High-resolution in vivo brain results demonstrate that the proposed method allows more efficient eddy-current correction than the image-based method. The combination of both PSF-based approaches allows distortion-free images, which permit reliable analysis in diffusion tensor imaging applications at 7T. © 2015 Wiley Periodicals, Inc.
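A minimal sketch of the superposition step, assuming, as the abstract describes, that the three calibrated principal-axes shift maps combine linearly with the diffusion direction cosines; names are illustrative.

```python
import numpy as np

def eddy_map_for_direction(g, principal_maps):
    """Approximate the eddy-current shift map for an arbitrary
    diffusion direction g = (gx, gy, gz) as a linear superposition
    of the three calibrated principal-axes maps."""
    gx, gy, gz = np.asarray(g, dtype=float) / np.linalg.norm(g)
    return (gx * principal_maps[0]
            + gy * principal_maps[1]
            + gz * principal_maps[2])
```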
A Horizontal Tilt Correction Method for Ship License Numbers Recognition
NASA Astrophysics Data System (ADS)
Liu, Baolong; Zhang, Sanyuan; Hong, Zhenjie; Ye, Xiuzi
2018-02-01
An automatic ship license numbers (SLNs) recognition system plays a significant role in intelligent waterway transportation systems, since it can be used to identify ships by recognizing the characters in SLNs. Tilt occurs frequently in many SLNs because the monitors and the ships usually have large vertical or horizontal angles, which significantly decreases the accuracy and robustness of an SLNs recognition system. In this paper, we present a horizontal tilt correction method for SLNs. For an input tilted SLN image, the proposed method accomplishes the correction task in three main steps. First, an MSER-based character center-point computation algorithm is designed to compute accurate center-points of the characters contained in the input SLN image. Second, an L1-L2 distance-based straight line is fitted to the computed center-points using an M-estimator algorithm, and the tilt angle is estimated at this stage. Finally, based on the computed tilt angle, an affine rotation is applied to correct the input SLN to horizontal. The proposed method is tested on 200 tilted SLN images and proves effective, with a tilt correction rate of 80.5%.
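A compact sketch of steps two and three, assuming OpenCV's cv2.fitLine with the DIST_L12 (L1-L2) metric is an acceptable stand-in for the paper's M-estimator; the sign of the rotation may need flipping depending on the image coordinate convention.

```python
import cv2
import numpy as np

def correct_tilt(image, center_points):
    """Fit a line to character center-points with an L1-L2
    M-estimator, then rotate the image to horizontal."""
    pts = np.asarray(center_points, dtype=np.float32)
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L12, 0, 0.01, 0.01).ravel()
    angle = np.degrees(np.arctan2(vy, vx))    # estimated tilt of the text line
    h, w = image.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(image, rot, (w, h), flags=cv2.INTER_LINEAR)
```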
A Class of Prediction-Correction Methods for Time-Varying Convex Optimization
NASA Astrophysics Data System (ADS)
Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro
2016-09-01
This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of one or multiple gradient or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
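A toy instance of the gradient trajectory tracking idea on a scalar time-varying quadratic, assuming f(x; t) = 0.5 (x - cos t)^2; the step sizes and number of correction steps are illustrative, not the paper's.

```python
import numpy as np

h, gamma, n_corr = 0.05, 0.5, 3      # sampling interval, step size, correction steps
r = lambda t: np.cos(t)              # moving optimizer of f(x; t) = 0.5 (x - cos t)^2

x, errs = 0.0, []
for k in range(400):
    t = k * h
    # Prediction: x <- x - h * (d2f/dx2)^-1 * d2f/dtdx;
    # here d2f/dx2 = 1 and d2f/dtdx = sin t.
    x = x - h * np.sin(t)
    # Correction: a few gradient steps on the newly sampled objective f(. ; t+h)
    for _ in range(n_corr):
        x = x - gamma * (x - r(t + h))
    errs.append(abs(x - r(t + h)))

print("asymptotic tracking error ~", max(errs[-100:]))
```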
Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.
Van, Anh T; Hernando, Diego; Sutton, Bradley P
2011-11-01
A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring for high-resolution diffusion-weighted magnetic resonance imaging experiments. However, different object motions that take place during different shots cause phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramer-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and gives comparable performance to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo data are shown to demonstrate the performance of the method.
Quezada, Amado D; García-Guerra, Armando; Escobar, Leticia
2016-06-01
To assess the performance of a simple correction method for nutritional status estimates in children under five years of age when exact age is not available from the data. The proposed method was based on the assumption of symmetry of age distributions within a given month of age and validated in a large population-based survey sample of Mexican preschool children. The main distributional assumption was consistent with the data. All prevalence estimates derived from the correction method showed no statistically significant bias. In contrast, failing to correct attained age resulted in an underestimation of stunting in general and an overestimation of overweight or obesity among the youngest. The proposed method performed remarkably well in terms of bias correction of estimates and could be easily applied in situations in which either birth or interview dates are not available from the data.
Simple automatic strategy for background drift correction in chromatographic data analysis.
Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin
2016-06-03
Chromatographic background drift correction, which influences peak detection and time-shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers in this vector, which belong to the chromatographic peaks, and to update the outliers in the baseline until convergence. The optimized baseline vector was finally expanded into the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight data from a metabolic study of Escherichia coli samples. The proposed method was comparable with three classical techniques: morphological weighted penalized least squares, the moving window minimum value strategy, and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use. Copyright © 2016 Elsevier B.V. All rights reserved.
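A simplified sketch of the strategy: collect local minima, iteratively reject minima that sit on peaks (outliers against a smooth trend), and interpolate the survivors back to the full chromatogram. The cubic trend and the rejection threshold are assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy.signal import argrelmin

def baseline_correct(chrom, order=5, k=2.0, max_iter=50):
    """Iterative background-drift estimation from local minima."""
    x = np.arange(len(chrom), dtype=float)
    idx = np.concatenate(([0], argrelmin(chrom, order=order)[0],
                          [len(chrom) - 1]))
    for _ in range(max_iter):
        coef = np.polyfit(x[idx], chrom[idx], 3)     # smooth trend of baseline vector
        resid = chrom[idx] - np.polyval(coef, x[idx])
        keep = resid < k * resid.std() + 1e-12       # minima far above trend = peaks
        keep[0] = keep[-1] = True                    # always keep the endpoints
        if keep.all():
            break
        idx = idx[keep]
    baseline = np.interp(x, x[idx], chrom[idx])      # expand back by interpolation
    return chrom - baseline, baseline
```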
Han, Buhm; Kang, Hyun Min; Eskin, Eleazar
2009-01-01
With the development of high-throughput sequencing and genotyping technologies, the number of markers collected in genetic association studies is growing rapidly, increasing the importance of methods for correcting for multiple hypothesis testing. The permutation test is widely considered the gold standard for accurate multiple testing correction, but it is often computationally impractical for these large datasets. Recently, several studies proposed efficient alternative approaches to the permutation test based on the multivariate normal distribution (MVN). However, they cannot accurately correct for multiple testing in genome-wide association studies for two reasons. First, these methods require partitioning of the genome into many disjoint blocks and ignore all correlations between markers from different blocks. Second, the true null distribution of the test statistic often fails to follow the asymptotic distribution at the tails of the distribution. We propose an accurate and efficient method for multiple testing correction in genome-wide association studies—SLIDE. Our method accounts for all correlation within a sliding window and corrects for the departure of the true null distribution of the statistic from the asymptotic distribution. In simulations using the Wellcome Trust Case Control Consortium data, the error rate of SLIDE's corrected p-values is more than 20 times smaller than the error rate of the previous MVN-based methods' corrected p-values, while SLIDE is orders of magnitude faster than the permutation test and other competing methods. We also extend the MVN framework to the problem of estimating the statistical power of an association study with correlated markers and propose an efficient and accurate power estimation method SLIP. SLIP and SLIDE are available at http://slide.cs.ucla.edu. PMID:19381255
NASA Astrophysics Data System (ADS)
Li, Zhaokun; Zhao, Xiaohui
2017-02-01
The sensor-less adaptive optics (AO) technique is one of the most promising methods to compensate for strong wavefront disturbance in free-space optical communication (FSO). In this study, a back-propagation (BP) artificial neural network is applied to the sensor-less AO system to design a distortion correction scheme. This method needs only one or a few online measurements to correct the wavefront distortion compared with other model-based approaches, by which the real-time capacity of the system is enhanced and the Strehl ratio (SR) is largely improved. Comparisons in numerical simulation with other model-based and model-free correction methods proposed in Refs. [6,8,9,10] are given to show the validity and advantage of the proposed method.
NASA Astrophysics Data System (ADS)
Kurata, Tomohiro; Oda, Shigeto; Kawahira, Hiroshi; Haneishi, Hideaki
2016-12-01
We have previously proposed a method for estimating intravascular oxygen saturation (SO_2) from images obtained by sidestream dark-field (SDF) imaging (we call it SDF oximetry) and investigated its fundamental characteristics by Monte Carlo simulation. In this paper, we propose a correction method for tissue scattering and perform experiments with turbid phantoms, as well as Monte Carlo simulation experiments, to investigate the influence of tissue scattering in SDF imaging. In the estimation method, we used modified extinction coefficients of hemoglobin, called average extinction coefficients (AECs), to correct for the influence of the bandwidth of the illumination sources, the imaging camera characteristics, and tissue scattering. We estimate the scattering coefficient of the tissue from the maximum slope of the pixel-value profile along a line perpendicular to the blood vessel running direction in an SDF image and correct the AECs using the scattering coefficient. To evaluate the proposed method, we developed a trial SDF probe that obtains three-band images by switching multicolor light-emitting diodes, and we imaged turbid phantoms composed of agar powder, fat emulsion, and bovine blood-filled glass tubes. As a result, we found that increased scattering by the phantom body decreases the AECs. The experimental results showed that using suitable values for the AECs leads to more accurate SO_2 estimation. We also confirmed the validity of the proposed correction method to improve the accuracy of the SO_2 estimation.
[Lateral chromatic aberrations correction for AOTF imaging spectrometer based on doublet prism].
Zhao, Hui-Jie; Zhou, Peng-Wei; Zhang, Ying; Li, Chong-Chong
2013-10-01
A user-defined surface function method is proposed to model the acousto-optic interaction of an AOTF based on the wave-vector match principle. Assessment experiments show that this model can achieve accurate ray tracing of the AOTF diffracted beam. In addition, an AOTF imaging spectrometer presents large residual lateral color when the traditional chromatic aberration correction method is adopted. In order to reduce lateral chromatic aberrations, a method based on a doublet prism is proposed. The optical material and angle of the prism are optimized automatically using global optimization with the help of the user-defined AOTF surface. Simulation results show that the proposed method provides great convenience for AOTF imaging spectrometer design, reducing the lateral chromatic aberration to less than 0.0003 degrees, an improvement of one order of magnitude, with spectral image shift effectively corrected.
Method of wavefront tilt correction for optical heterodyne detection systems under strong turbulence
NASA Astrophysics Data System (ADS)
Xiang, Jing-song; Tian, Xin; Pan, Le-chun
2014-07-01
Atmospheric turbulence decreases the heterodyne mixing efficiency of optical heterodyne detection systems. Wavefront tilt correction is often used to improve the optical heterodyne mixing efficiency, but the performance of traditional centroid-tracking tilt correction is poor under strong turbulence conditions. In this paper, a tilt correction method that tracks the peak value of the laser spot on the focal plane is proposed. Simulation results show that, under strong turbulence conditions, the performance of peak-value-tracking tilt correction is distinctly better than that of the traditional centroid-tracking method. Moreover, the phenomenon in which a large antenna performs worse than a small antenna, which can occur with centroid tracking, is avoided with peak-value tracking.
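A minimal sketch contrasting the two tracking rules on a focal-plane intensity frame; the pixel pitch and focal length are assumed known, and the small-angle approximation is used.

```python
import numpy as np

def tilt_from_peak(intensity, pitch, focal_length):
    """Peak-value tracking: tilt from the brightest pixel's offset."""
    py, px = np.unravel_index(np.argmax(intensity), intensity.shape)
    cy, cx = np.array(intensity.shape) / 2.0
    # Small-angle tilt estimates from the spot displacement
    return (px - cx) * pitch / focal_length, (py - cy) * pitch / focal_length

def tilt_from_centroid(intensity, pitch, focal_length):
    """Centroid tracking, shown for comparison; under strong
    turbulence the centroid is biased by scattered speckle energy."""
    y, x = np.indices(intensity.shape)
    s = intensity.sum()
    cx0, cy0 = np.array(intensity.shape[::-1]) / 2.0
    return (((intensity * x).sum() / s - cx0) * pitch / focal_length,
            ((intensity * y).sum() / s - cy0) * pitch / focal_length)
```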
Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altube, Patricia; Bech, Joan; Argemí, Oriol
2017-07-18
In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual-pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.
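A heavily simplified sketch of the circular-statistics idea (velocities mapped to phase angles, each gate compared against the circular mean of its neighborhood); the published estimator and the set of admissible correction multiples differ in detail.

```python
import numpy as np

def correct_dual_prf(v, v_ny, win=3, thresh=np.pi / 2):
    """Detect and correct dual-PRF outliers with circular statistics.

    v    : 2-D radial velocity field (m/s)
    v_ny : extended Nyquist velocity; working in phase space means
           genuine aliasing does not trigger spurious corrections.
    """
    phi = np.pi * v / v_ny                    # velocity -> phase angle
    z = np.exp(1j * phi)
    pad = win // 2
    zp = np.pad(z, pad, mode="edge")
    out = v.copy()
    for i in range(v.shape[0]):
        for j in range(v.shape[1]):
            nb = zp[i:i + win, j:j + win]
            ref = np.angle(nb.mean())         # circular mean of the neighborhood
            d = np.angle(np.exp(1j * (phi[i, j] - ref)))
            if abs(d) > thresh:               # outlier: shift back toward reference
                out[i, j] = v[i, j] - v_ny * np.round(d / np.pi)
    return out
```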
High-efficiency non-uniformity correction for wide dynamic linear infrared radiometry system
NASA Astrophysics Data System (ADS)
Li, Zhou; Yu, Yi; Tian, Qi-Jie; Chang, Song-Tao; He, Feng-Yun; Yin, Yan-He; Qiao, Yan-Feng
2017-09-01
For a wide-dynamic-range, linear, continuously variable integration time infrared radiometry system, several different integration times are typically set; traditional calibration-based non-uniformity correction (NUC) must therefore be conducted for each integration time in turn and requires several calibration sources, which makes the calibration and NUC process time-consuming. In this paper, the difference in NUC coefficients between different integration times is discussed, and a novel NUC method called high-efficiency NUC, which builds on traditional calibration-based non-uniformity correction, is proposed. It obtains the correction coefficients for all integration times in the whole linear dynamic range only by recording three different images of a standard blackbody. Firstly, the mathematical procedure of the proposed non-uniformity correction method is validated, and then its performance is demonstrated on a 400 mm diameter ground-based infrared radiometry system. Experimental results show that the mean value of the Normalized Root Mean Square (NRMS) is reduced from 3.78% to 0.24% by the proposed method. In addition, the results at 4 ms and 70 °C prove that this method has higher accuracy than traditional calibration-based NUC, while a good correction effect is retained at other integration times and temperatures. Moreover, it greatly reduces the number of correction and temperature sampling points, offers good real-time performance, and is suitable for field measurement.
A vibration correction method for free-fall absolute gravimeters
NASA Astrophysics Data System (ADS)
Qian, J.; Wang, G.; Wu, K.; Wang, L. J.
2018-02-01
An accurate determination of the gravitational acceleration, usually approximated as 9.8 m s⁻², has been playing an important role in the areas of metrology, geophysics, and geodesy. Absolute gravimetry has been experiencing rapid development in recent years. Most absolute gravimeters today employ the free-fall method to measure the gravitational acceleration. Noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated. A two-dimensional golden section search algorithm is used to search for the best parameters of the hypothetical transfer function. Experiments using a T-1 absolute gravimeter are performed. It is verified that, for an identical group of drop data, the modified method proposed in this paper achieves better correction with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.
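The paper does not spell out the search; one common reading of a two-dimensional golden section search, alternating 1-D golden sections over the two transfer-function parameters, is sketched below. Names and limits are illustrative.

```python
import numpy as np

def golden_1d(f, a, b, tol=1e-6):
    """Standard golden-section minimization of f on [a, b]."""
    g = (np.sqrt(5) - 1) / 2
    c, d = b - g * (b - a), a + g * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

def golden_2d(f, xlim, ylim, n_sweeps=20, tol=1e-6):
    """2-D search by alternating 1-D golden sections, e.g. over the
    two parameters of a hypothetical vibration transfer function."""
    x, y = sum(xlim) / 2, sum(ylim) / 2
    for _ in range(n_sweeps):
        x = golden_1d(lambda u: f(u, y), *xlim, tol)
        y = golden_1d(lambda v: f(x, v), *ylim, tol)
    return x, y
```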
A Fixed-Pattern Noise Correction Method Based on Gray Value Compensation for TDI CMOS Image Sensor.
Liu, Zhenwang; Xu, Jiangtao; Wang, Xinlei; Nie, Kaiming; Jin, Weimin
2015-09-16
In order to eliminate the fixed-pattern noise (FPN) in the output image of a time-delay-integration CMOS image sensor (TDI-CIS), an FPN correction method based on gray value compensation is proposed. One hundred images are first captured under uniform illumination. Then, row FPN (RFPN) and column FPN (CFPN) are estimated from the row-mean vector and column-mean vector of all collected images, respectively. Finally, RFPN is corrected by adding the estimated RFPN gray value to the original gray values of pixels in the corresponding row, and CFPN is corrected by subtracting the estimated CFPN gray value from the original gray values of pixels in the corresponding column. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in an image captured under uniform illumination with the proposed method, the standard deviation of the row-mean vector decreases from 5.6798 to 0.4214 LSB, and the standard deviation of the column-mean vector decreases from 15.2080 to 13.4623 LSB. Both kinds of FPN in real images captured by the TDI-CIS are eliminated effectively with the proposed method.
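The estimation and compensation steps map directly to array operations; a minimal NumPy sketch under the abstract's definitions (add the row compensation, subtract the column estimate), with names chosen for illustration:

```python
import numpy as np

def estimate_fpn(frames):
    """Estimate row and column FPN from N flat-field frames.

    frames: array of shape (N, H, W) captured under uniform illumination.
    """
    mean_img = frames.mean(axis=0)
    row_mean = mean_img.mean(axis=1)        # row-mean vector
    col_mean = mean_img.mean(axis=0)        # column-mean vector
    rfpn = row_mean.mean() - row_mean       # compensation to ADD per row
    cfpn = col_mean - col_mean.mean()       # estimate to SUBTRACT per column
    return rfpn, cfpn

def correct_fpn(img, rfpn, cfpn):
    """Apply the per-row addition and per-column subtraction."""
    return img + rfpn[:, None] - cfpn[None, :]
```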
Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun
2017-08-01
The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors occur in the fringe order, leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct wrong fringe orders. Two constraints are introduced during the fringe order determination of the two selected spatial frequency phase unwrapping methods, and a strategy to detect and correct wrong fringe orders is described. Compared with existing methods, we do not need to estimate a threshold associated with absolute phase values to determine fringe order errors, which makes the approach more reliable and avoids the search procedure in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by experimental results.
Iterative CT shading correction with no prior information
NASA Astrophysics Data System (ADS)
Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye
2015-11-01
Shading artifacts in CT images are caused by scatter contamination, the beam-hardening effect, and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge of the relatively uniform CT number distribution within one tissue component. The CT image is first segmented to construct a template image where each structure is filled with the same CT number for a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line-integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is assisted only by general anatomical information, without relying on prior knowledge. The proposed method is thus practical and attractive as a general solution to CT shading correction.
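A deliberately simplified 2-D sketch of the iteration: the paper low-pass filters the line integrals of the residual and reconstructs the compensation with FDK, whereas this sketch low-pass filters the residual directly in image space for brevity. Tissue values, iteration count, and filter width are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shading_correct(img, tissue_values, n_iter=10, sigma=30):
    """Iterate: segment to a template, low-pass the residual to
    estimate the shading field, compensate, and repeat."""
    corrected = img.astype(float).copy()
    tv = np.asarray(tissue_values, dtype=float)
    for _ in range(n_iter):
        # Segment: snap every pixel to the nearest ideal tissue CT number
        template = tv[np.argmin(np.abs(corrected[..., None] - tv), axis=-1)]
        residual = corrected - template
        shading = gaussian_filter(residual, sigma)   # keep only low frequencies
        corrected -= shading
    return corrected
```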
Oh, Se-Hong; Chung, Jun-Young; In, Myung-Ho; Zaitsev, Maxim; Kim, Young-Bo; Speck, Oliver; Cho, Zang-Hee
2012-10-01
Despite its wide use, echo-planar imaging (EPI) suffers from geometric distortions due to off-resonance effects, i.e., strong magnetic field inhomogeneity and susceptibility. This article reports a novel method for correcting the distortions observed in EPI acquired at ultra-high field, such as 7 T. Point spread function (PSF) mapping methods have been proposed for correcting the distortions in EPI. The PSF shift map can be derived along either the nondistorted or the distorted coordinates. Along the nondistorted coordinates, more information about compressed areas is present, but the map is prone to PSF-ghosting artifacts induced by large k-space shifts in the PSF encoding direction. In contrast, shift maps along the distorted coordinates contain more information in stretched areas and are more robust against PSF-ghosting. In ultra-high-field MRI, an EPI image contains both compressed and stretched regions, depending on the B0 field inhomogeneity and local susceptibility. In this study, we present a new geometric distortion correction scheme, which selectively applies the shift map with the greater information content. We propose a PSF-ghost elimination method to generate an artifact-free pixel shift map along the nondistorted coordinates. The proposed method can correct the effects of the local magnetic field inhomogeneity induced by susceptibility effects along with PSF-ghost artifact cancellation. We have experimentally demonstrated the advantages of the proposed method in EPI data acquisitions in phantom and human brain using 7-T MRI. Copyright © 2011 Wiley Periodicals, Inc.
Salmingo, Remel A; Tadano, Shigeru; Fujisaki, Kazuhiro; Abe, Yuichiro; Ito, Manabu
2012-05-01
Scoliosis is a spinal pathology characterized by a three-dimensional deformity of the spine combined with vertebral rotation. Treatment for severe scoliosis is achieved when the scoliotic spine is surgically corrected and fixed using implanted rods and screws. Several studies have performed biomechanical modeling and corrective-force measurements of scoliosis correction. These studies were able to predict the clinical outcome and measure the corrective forces acting on screws; however, they were not able to measure the intraoperative three-dimensional geometry of the spinal rod. As a result, biomechanical models may be unrealistic, and the corrective forces during the surgical correction procedure are difficult to measure intra-operatively. Projective geometry has been shown to be successful in the reconstruction of a three-dimensional structure using a series of images obtained from different views. In this study, we propose a new method to measure the three-dimensional geometry of an implant rod using two cameras. The reconstruction method requires only a few parameters: the included angle θ between the two cameras, the actual length of the rod in mm, and the location of points for curve fitting. An implant rod utilized in spine surgery was used to evaluate the accuracy of the method. The three-dimensional geometry of the rod was measured from an image obtained by a scanner and compared to the result of the proposed two-camera method. The mean error in the reconstruction measurements ranged from 0.32 to 0.45 mm. The method presented here demonstrates the possibility of intra-operatively measuring the three-dimensional geometry of a spinal rod. The proposed method could be used in surgical procedures to better understand the biomechanics of scoliosis correction through real-time measurement of three-dimensional implant rod geometry in vivo.
Passive quantum error correction of linear optics networks through error averaging
NASA Astrophysics Data System (ADS)
Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.
2018-02-01
We propose and investigate a method of error detection and noise correction for bosonic linear networks using unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof-of-principle examples, including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and to probe the related error thresholds. Finally, we discuss some of the potential uses of this scheme.
A maintenance time prediction method considering ergonomics through virtual reality simulation.
Zhou, Dong; Zhou, Xin-Xin; Guo, Zi-Yue; Lv, Chuan
2016-01-01
Maintenance time is a critical quantitative index in maintainability prediction, and an efficient maintenance time measurement methodology plays an important role in the early stage of maintainability design. However, traditional ways of measuring maintenance time ignore the differences between line production and maintenance actions. This paper proposes a corrective MOD method that considers several important ergonomics factors to predict maintenance time. With the help of the DELMIA analysis tools, the influence coefficients of several factors are discussed to correct the MOD value, and designers can measure maintenance time by summing the corrective MOD times of the individual maintenance therbligs. Finally, a case study is introduced: by maintaining the virtual prototype of an APU motor starter in DELMIA, the designer obtains the actual maintenance time with the proposed method, and the result verifies the effectiveness and accuracy of the method.
Zhang, Geng; Wang, Shuang; Li, Libo; Hu, Xiuqing; Hu, Bingliang
2016-11-01
The lunar spectrum has been used in radiometric calibration and sensor stability monitoring for spaceborne optical sensors. A ground-based large-aperture static image spectrometer (LASIS) can be used to acquire lunar spectral images for lunar radiance model improvement when the moon orbits through its viewing field. However, the lunar orbiting motion is not consistent with the desired scanning speed and direction of LASIS. To correctly extract interferograms from the acquired data, a translation correction method based on image correlation is proposed. This method registers the frames to a reference frame to reduce cumulative errors. Furthermore, we propose a circle-matching-based approach to achieve even higher accuracy during observation of the full moon. To demonstrate the effectiveness of our approaches, experiments are run on real lunar observation data. The results show that the proposed approaches outperform state-of-the-art methods.
A multilevel correction adaptive finite element method for Kohn-Sham equation
NASA Astrophysics Data System (ADS)
Hu, Guanghui; Xie, Hehu; Xu, Fei
2018-02-01
In this paper, an adaptive finite element method is proposed for solving the Kohn-Sham equation with a multilevel correction technique. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, in which the finite element space is kept improving by solving derived boundary value problems on a series of adaptively and successively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system is effectively avoided, and the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration can be obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.
NASA Astrophysics Data System (ADS)
Silenko, Alexander J.
2016-02-01
General properties of the Foldy-Wouthuysen transformation, which is widely used in quantum mechanics and quantum chemistry, are considered. Merits and demerits of the original Foldy-Wouthuysen transformation method are analyzed. While this method does not satisfy the Eriksen condition of the Foldy-Wouthuysen transformation, it can be corrected with the use of the Baker-Campbell-Hausdorff formula. We show the possibility of such a correction and propose an appropriate calculation algorithm. The applicability of the corrected Foldy-Wouthuysen method is restricted by the convergence condition of the series of relativistic corrections.
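For reference, the Baker-Campbell-Hausdorff formula invoked here reads, to the first commutator orders:

```latex
e^{A} e^{B} = \exp\!\Big( A + B + \tfrac{1}{2}[A,B]
  + \tfrac{1}{12}\big([A,[A,B]] + [B,[B,A]]\big) + \cdots \Big)
```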
Proposed Revisions to Method 202
EPA is proposing the following revisions to Method 202: Revisions to the procedures for determining the systematic error of the method, which is used to correct the results of the measurements made using this method; Removes some procedural options to
The combination of the error correction methods of GAFCHROMIC EBT3 film
Li, Yinghui; Chen, Lixin; Zhu, Jinhan; Liu, Xiaowei
2017-01-01
Purpose: The aim of this study was to combine a set of methods for radiochromic film dosimetry, including calibration, correction for lateral effects, and a proposed triple-channel analysis. These methods can be applied to GAFCHROMIC EBT3 film dosimetry for radiation field analysis and verification of IMRT plans. Methods: A single-film exposure was used to achieve dose calibration, and its accuracy was verified by comparison with the square-field calibration method. Before performing the dose analysis, the lateral effects on pixel values were corrected. The position dependence of the lateral effect was fitted by a parabolic function, and the curvature factors of different dose levels were obtained using a quadratic formula. After lateral effect correction, a triple-channel analysis was used to reduce disturbances and convert scanned images from films into dose maps. The dose profiles of open fields were measured using EBT3 films and compared with the data obtained using an ionization chamber. Eighteen IMRT plans with different field sizes were measured and verified with EBT3 films, applying our methods, and compared to TPS dose maps to check the correct implementation of the film dosimetry proposed here. Results: The uncertainty of lateral effects can be reduced to ±1 cGy. Compared with the results of Micke A et al., the residual disturbances of the proposed triple-channel method at 48, 176 and 415 cGy are 5.3%, 20.9% and 31.4% smaller, respectively. Compared with the ionization chamber results, the differences in the off-axis ratio and percentage depth dose are within 1% and 2%, respectively. For the application of IMRT verification, there was no difference between the two triple-channel methods. Compared with correction by the triple-channel method alone, the IMRT results of the combined method (including lateral effect correction and the present triple-channel method) show a 2% improvement for large IMRT fields with the 3%/3 mm criterion. PMID:28750023
Image-based spectral distortion correction for photon-counting x-ray detectors
Ding, Huanjun; Molloi, Sabee
2012-01-01
Purpose: To investigate the feasibility of using an image-based method to correct for distortions induced by various artifacts in the x-ray spectrum recorded with photon-counting detectors for their application in breast computed tomography (CT). Methods: The polyenergetic incident spectrum was simulated with the tungsten anode spectral model using the interpolating polynomials (TASMIP) code and carefully calibrated to match the x-ray tube in this study. Experiments were performed on a Cadmium-Zinc-Telluride (CZT) photon-counting detector with five energy thresholds. Energy bins were adjusted to evenly distribute the recorded counts above the noise floor. BR12 phantoms of various thicknesses were used for calibration. A nonlinear function was selected to fit the count correlation between the simulated and the measured spectra in the calibration process. To evaluate the proposed spectral distortion correction method, an empirical fitting derived from the calibration process was applied on the raw images recorded for polymethyl methacrylate (PMMA) phantoms of 8.7, 48.8, and 100.0 mm. Both the corrected counts and the effective attenuation coefficient were compared to the simulated values for each of the five energy bins. The feasibility of applying the proposed method to quantitative material decomposition was tested using a dual-energy imaging technique with a three-material phantom that consisted of water, lipid, and protein. The performance of the spectral distortion correction method was quantified using the relative root-mean-square (RMS) error with respect to the expected values from simulations or areal analysis of the decomposition phantom. Results: The implementation of the proposed method reduced the relative RMS error of the output counts in the five energy bins with respect to the simulated incident counts from 23.0%, 33.0%, and 54.0% to 1.2%, 1.8%, and 7.7% for 8.7, 48.8, and 100.0 mm PMMA phantoms, respectively. The accuracy of the effective attenuation coefficient of PMMA estimate was also improved with the proposed spectral distortion correction. Finally, the relative RMS error of water, lipid, and protein decompositions in dual-energy imaging was significantly reduced from 53.4% to 6.8% after correction was applied. Conclusions: The study demonstrated that dramatic distortions in the recorded raw image yielded from a photon-counting detector could be expected, which presents great challenges for applying the quantitative material decomposition method in spectral CT. The proposed semi-empirical correction method can effectively reduce these errors caused by various artifacts, including pulse pileup and charge sharing effects. Furthermore, rather than detector-specific simulation packages, the method requires a relatively simple calibration process and knowledge about the incident spectrum. Therefore, it may be used as a generalized procedure for the spectral distortion correction of different photon-counting detectors in clinical breast CT systems. PMID:22482608
Application of side-oblique image-motion blur correction to Kuaizhou-1 agile optical images.
Sun, Tao; Long, Hui; Liu, Bao-Cheng; Li, Ying
2016-03-21
Given the recent development of agile optical satellites for rapid-response land observation, side-oblique image-motion (SOIM) detection and blur correction have become increasingly essential for improving the radiometric quality of side-oblique images. The Chinese small-scale agile mapping satellite Kuaizhou-1 (KZ-1) was developed by the Harbin Institute of Technology and launched for multiple emergency applications. Like other agile satellites, KZ-1 suffers from SOIM blur, particularly in captured images with large side-oblique angles. SOIM detection and blur correction are critical for improving the image radiometric accuracy. This study proposes a SOIM restoration method based on segmental point spread function detection. The segment region width is determined by satellite parameters such as speed, height, integration time, and side-oblique angle. The corresponding algorithms and a matrix form are proposed for SOIM blur correction. Radiometric objective evaluation indices are used to assess the restoration quality. Beijing regional images from KZ-1 are used as experimental data. The radiometric quality is found to increase greatly after SOIM correction. Thus, the proposed method effectively corrects image motion for KZ-1 agile optical satellites.
NASA Astrophysics Data System (ADS)
Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen
2014-08-01
Interference such as baseline drift and light scattering can degrade model predictability in multivariate analysis of near-infrared (NIR) spectra. Usually, such interference can be represented by an additive and a multiplicative factor. In order to eliminate these interferences, correction parameters need to be estimated from the spectra. However, the spectra are often a mixture of physical light-scattering effects and chemical light-absorbance effects, making parameter estimation difficult. Herein, a novel algorithm is proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are both low, that is, an interference-dominant region (IDR). Based on the definition of the IDR, a two-step method is proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction is applied to the full spectral range using the previously obtained parameters for the calibration set and test set, respectively. The method can be applied to multi-target systems, with one IDR suitable for all targeted analytes. Tested on two benchmark data sets of near-infrared spectra, the proposed method provided considerable improvement compared with full-spectrum estimation methods and was comparable with other state-of-the-art methods.
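A minimal sketch of the correction step once an IDR has been chosen, assuming the additive/multiplicative model x ≈ a + b·ref holds on the IDR; the IDR search itself (the two-step selection) is omitted, and all names are illustrative.

```python
import numpy as np

def correct_spectrum(x, ref, idr):
    """Estimate additive (a) and multiplicative (b) interference on
    an interference-dominant region and correct the full spectrum.

    x, ref : sample and reference spectra (1-D arrays)
    idr    : boolean mask or index array selecting the IDR, where the
             chemical absorbance of interest is assumed negligible
    """
    b, a = np.polyfit(ref[idr], x[idr], 1)   # slope b, intercept a on the IDR
    return (x - a) / b                       # remove both factors everywhere
```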
Robust distortion correction of endoscope
NASA Astrophysics Data System (ADS)
Li, Wenjing; Nie, Sixiang; Soto-Thompson, Marcelo; Chen, Chao-I.; A-Rahim, Yousif I.
2008-03-01
Endoscopic images suffer from a fundamental spatial distortion due to the wide-angle design of the endoscope lens. This barrel-type distortion is an obstacle for subsequent Computer Aided Diagnosis (CAD) algorithms and should be corrected. Various methods and research models for barrel-type distortion correction have been proposed and studied. For industrial applications, a stable, robust method with high accuracy is required to calibrate the different types of endoscopes in an easy-to-use way, and the correction area must be large enough to cover all the regions that physicians need to see. In this paper, we present our endoscope distortion correction procedure, which includes data acquisition, distortion center estimation, distortion coefficient calculation, and look-up table (LUT) generation. We investigate different polynomial models for modeling the distortion and propose a new one which provides correction results with better visual quality. The method has been verified with four types of colonoscopes. The correction procedure is currently being applied to human subject data, and the coefficients are being utilized in a subsequent 3D reconstruction project of the colon.
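A sketch of LUT generation for a polynomial radial model, assuming calibrated coefficients k1, k2 and a known distortion center; the paper's preferred polynomial form may differ. For each corrected pixel, the LUT gives the sampling position in the distorted frame.

```python
import numpy as np
import cv2

def build_undistort_lut(h, w, center, k):
    """Remap LUT for the radial model r_d = r_u * (1 + k1*r_u^2 + k2*r_u^4)
    about the estimated distortion center; k = (k1, k2) from calibration."""
    cy, cx = center
    y, x = np.indices((h, w), dtype=np.float32)
    xd, yd = x - cx, y - cy
    r2 = xd * xd + yd * yd
    scale = 1 + k[0] * r2 + k[1] * r2 * r2
    map_x = cx + xd * scale
    map_y = cy + yd * scale
    return map_x.astype(np.float32), map_y.astype(np.float32)

# Usage: corrected = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
```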
Impact of Self-Correction on Extrovert and Introvert Students in EFL Writing Progress
ERIC Educational Resources Information Center
Hajimohammadi, Reza; Makundan, Jayakaran
2011-01-01
To investigate the impact of the self-correction method as an alternative to the traditional teacher-correction method, on the one side, and to evaluate the impact of the personality traits of Extroversion/Introversion, on the other, on the writing progress of pre-intermediate learners, three null-hypotheses were proposed. In spite of students…
SU-D-206-04: Iterative CBCT Scatter Shading Correction Without Prior Information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Y; Wu, P; Mao, T
2016-06-15
Purpose: To estimate and remove the scatter contamination in the acquired projections of cone-beam CT (CBCT), to suppress the shading artifacts and improve the image quality without prior information. Methods: The uncorrected CBCT images containing shading artifacts are reconstructed by applying the standard FDK algorithm on CBCT raw projections. The uncorrected image is then segmented to generate an initial template image. To estimate the scatter signal, the differences are calculated by subtracting the simulated projections of the template image from the raw projections. Since scatter signals are dominantly continuous and low-frequency in the projection domain, they are estimated by low-pass filtering the difference signals and subtracted from the raw CBCT projections to achieve the scatter correction. Finally, the corrected CBCT image is reconstructed from the corrected projection data. Since an accurate template image is not readily segmented from the uncorrected CBCT image, the proposed scheme is iterated until the produced template is no longer altered. Results: The proposed scheme is evaluated on Catphan©600 phantom data and CBCT images acquired from a pelvis patient. The results show that shading artifacts are effectively suppressed by the proposed method. Using multi-detector CT (MDCT) images as reference, quantitative analysis is performed to measure the quality of the corrected images. Compared to images without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 50 HU and increases the spatial uniformity. Conclusion: An iterative strategy without relying on prior information is proposed in this work to remove the shading artifacts due to scatter contamination in the projection domain. The method is evaluated in phantom and patient studies, and the results show that the image quality is remarkably improved. The proposed method is efficient and practical to address the poor image quality of CBCT images. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
Shading correction assisted iterative cone-beam CT reconstruction
NASA Astrophysics Data System (ADS)
Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye
2017-11-01
Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone-beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and deteriorate the piecewise-constant property assumed of reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method, referred to as shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts, while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and increases the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from existing algorithms, this algorithm incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for applications of low-dose CBCT imaging in the clinic.
Image Reconstruction for a Partially Collimated Whole Body PET Scanner
Alessio, Adam M.; Schmitz, Ruth E.; MacDonald, Lawrence R.; Wollenweber, Scott D.; Stearns, Charles W.; Ross, Steven G.; Ganin, Alex; Lewellen, Thomas K.; Kinahan, Paul E.
2008-01-01
Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary. PMID:19096731
Correction of the heat loss method for calculating clothing real evaporative resistance.
Wang, Faming; Zhang, Chengjiao; Lu, Yehu
2015-08-01
In the so-called isothermal condition (i.e., Tair [air temperature]=Tmanikin [manikin temperature]=Tr [radiant temperature]), the actual energy used for moisture evaporation detected by most sweating manikins was underestimated due to the uncontrolled fabric 'skin' temperature Tsk,f (i.e., Tsk,f
Hiroyasu, Tomoyuki; Hayashinuma, Katsutoshi; Ichikawa, Hiroshi; Yagi, Nobuaki
2015-08-01
A preprocessing method for endoscopy image analysis using texture analysis is proposed. In a previous study, we proposed a feature value that combines a co-occurrence matrix and a run-length matrix to analyze the extent of early gastric cancer from images taken with narrow-band imaging endoscopy. However, the obtained feature value does not identify lesion zones correctly due to the influence of noise and halation. Therefore, we propose a new preprocessing method with a non-local means filter for de-noising and contrast limited adaptive histogram equalization. We have confirmed that the pattern of gastric mucosa in images can be improved by the proposed method. Furthermore, the lesion zone is shown more correctly by the obtained color map.
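Both building blocks have standard OpenCV implementations; a minimal sketch of the described chain (non-local means de-noising followed by CLAHE on the luminance channel), with filter parameters chosen for illustration rather than taken from the paper:

```python
import cv2

def preprocess(bgr):
    """Non-local means de-noising, then contrast limited adaptive
    histogram equalization (CLAHE) applied to the L channel only,
    so chromatic information of the mucosa is preserved."""
    den = cv2.fastNlMeansDenoisingColored(bgr, None, 10, 10, 7, 21)
    lab = cv2.cvtColor(den, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```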
Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai
2016-04-01
We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model, and a residual compensation model. First, the method of image distortion correction is proposed. The image data required for image distortion correction come from stereo images of a calibration sample. The geometric features of image distortions can be predicted through the shape deformation of lines constructed by grid points in stereo images, and linear and polynomial fitting methods are applied to correct the image distortions. Second, the shape deformation features of the disparity distribution are discussed and the method of disparity distortion correction is proposed, with a polynomial fitting method applied to correct the disparity distortion. Third, a microscopic vision model is derived, which consists of two parts, the initial vision model and the residual compensation model. We derive the initial vision model from the analysis of the direct mapping relationship between object and image points. The residual compensation model is derived from the residual analysis of the initial vision model. The results show that, with maximum reconstruction distances of 4.1 mm in the X direction, 2.9 mm in the Y direction, and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison of our model with the traditional pinhole camera model shows that the two kinds of models have similar reconstruction precision for X coordinates; however, the traditional pinhole camera model has lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sun, Weifang; Yao, Bin; He, Yuchao; Chen, Binqiang; Zeng, Nianyin; He, Wangpeng
2017-08-09
Power generation using waste gas is an effective and green way to reduce the emission of harmful blast furnace gas (BFG) in the pig-iron producing industry. Condition monitoring of mechanical structures in the BFG power plant is of vital importance to guarantee their safe and efficient operation. In this paper, we describe the detection of crack growth of bladed machinery in the BFG power plant via vibration measurement combined with an enhanced spectral correction technique. This technique enables high-precision identification of the amplitude, frequency, and phase information (the harmonic information) belonging to deterministic harmonic components within the vibration signals. Rather than deriving all harmonic information from neighboring spectral bins in the fast Fourier transform spectrum, the proposed active frequency shift spectral correction method makes use of interpolated Fourier spectral bins and has a better noise-resisting capacity. We demonstrate that the harmonic information identified via the proposed method exhibits smaller numerical error when the same level of noise is present in the vibration signal, even in comparison with a Hanning-window-based correction method. With the proposed method, we investigated vibration signals collected from a centrifugal compressor. Spectral information of harmonic tones, related to the fundamental working frequency of the centrifugal compressor, is corrected. The extracted spectral information indicates the ongoing development of an impeller blade crack in the centrifugal compressor. This method proves to be a promising alternative for identifying blade cracks at early stages.
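For comparison, the Hanning-window two-bin correction mentioned above can be sketched as follows; the paper's active frequency shift method differs, and the windowing, normalization, and bin-selection details here are illustrative.

```python
import numpy as np

def corrected_tone(x, fs):
    """Hann-window two-bin spectral correction: recover the frequency
    and amplitude of a dominant tone lying between FFT bins."""
    n = len(x)
    w = np.hanning(n)
    X = np.fft.rfft(x * w)
    k = np.argmax(np.abs(X[1:-1])) + 1            # dominant interior bin
    a1, a2, sign = np.abs(X[k]), np.abs(X[k + 1]), 1.0
    if np.abs(X[k - 1]) > a2:                     # interpolate toward larger neighbor
        a2, sign = np.abs(X[k - 1]), -1.0
    alpha = a2 / a1
    delta = sign * (2 * alpha - 1) / (alpha + 1)  # fractional bin offset (Hann)
    freq = (k + delta) * fs / n
    # Amplitude: divide by the Hann mainlobe evaluated at the offset
    mainlobe = np.abs(np.sum(w * np.exp(-2j * np.pi * delta * np.arange(n) / n)))
    amp = 2 * a1 / mainlobe
    return freq, amp
```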
NASA Astrophysics Data System (ADS)
Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong
2017-10-01
Focusing and imaging through scattering media has been proved possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor to eventually reach a plateau. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor within the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proved particularly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demand on the dynamic range of detection devices. The proposed method holds potential for applications such as high-resolution imaging in deep tissue.
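A minimal numerical sketch of the interleaved grouping idea follows. The scattering medium is modeled by a random complex transmission vector and the focus intensity by the squared magnitude of the summed field; for brevity a simple per-segment phase sweep stands in for the Genetic Algorithm, so only the interleaved-group bookkeeping should be read as faithful to the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n_seg, n_groups = 256, 4
# Random complex transmission of the medium to the target focus spot.
t = (rng.normal(size=n_seg) + 1j * rng.normal(size=n_seg)) / np.sqrt(2)
phi = np.zeros(n_seg)                       # SLM phase per segment

def focus_intensity(phases):
    return np.abs(np.sum(t * np.exp(1j * phases))) ** 2

candidates = np.linspace(0, 2 * np.pi, 16, endpoint=False)
for g in range(n_groups):                   # groups optimized sequentially
    members = np.arange(g, n_seg, n_groups) # interleaved group indices
    for s in members:                       # stand-in for GA within the group
        best = max(candidates,
                   key=lambda c: focus_intensity(
                       np.where(np.arange(n_seg) == s, c, phi)))
        phi[s] = best

# Correction phases of all groups are applied together, as in the ISC method.
print("improvement factor:",
      focus_intensity(phi) / focus_intensity(np.zeros(n_seg)))
```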
Optimization-based scatter estimation using primary modulation for computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao
Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth, by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. The simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy of scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and the fourth-generation CT.
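The separation problem can be made concrete with a 1-D toy model: measured data d = m·p + s with a known modulation pattern m. The regularized least-squares sketch below encodes only the priors "scatter is smooth" and "primary is locally smooth"; positivity and penumbra handling from the full OSE objective are omitted, and all weights are invented.

```python
import numpy as np

n = 200
x = np.linspace(0, 1, n)
p_true = 1.0 + 0.5 * np.sin(6 * x)              # locally smooth primary
s_true = 0.8 * np.exp(-(x - 0.5) ** 2 / 0.1)    # smooth scatter
m = np.where(np.arange(n) % 2 == 0, 1.0, 0.7)   # known modulation pattern
d = m * p_true + s_true + 0.01 * np.random.randn(n)

D2 = np.diff(np.eye(n), 2, axis=0)              # second-difference operator
lam_p, lam_s = 0.5, 50.0                        # smoothness weights, invented
A = np.block([[np.diag(m), np.eye(n)],
              [lam_p * D2, np.zeros_like(D2)],
              [np.zeros_like(D2), lam_s * D2]])
b = np.concatenate([d, np.zeros(2 * (n - 2))])
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
p_est, s_est = sol[:n], sol[n:]
print("scatter RMSE:", np.sqrt(np.mean((s_est - s_true) ** 2)))
```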
Eye-motion-corrected optical coherence tomography angiography using Lissajous scanning.
Chen, Yiwei; Hong, Young-Joo; Makita, Shuichi; Yasuno, Yoshiaki
2018-03-01
To correct eye motion artifacts in en face optical coherence tomography angiography (OCT-A) images, a Lissajous scanning method with subsequent software-based motion correction is proposed. The standard Lissajous scanning pattern is modified to be compatible with OCT-A, and a corresponding motion correction algorithm is designed. The effectiveness of our method was demonstrated by comparing en face OCT-A images with and without motion correction. The method was further validated by comparing motion-corrected images with scanning laser ophthalmoscopy images, and the repeatability of the method was evaluated using a checkerboard image. A motion-corrected en face OCT-A image from a blinking case is presented to demonstrate the ability of the method to deal with eye blinking. Results show that the method can produce accurate motion-free en face OCT-A images of the posterior segment of the eye in vivo.
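For readers unfamiliar with the scan pattern, the following sketch generates a Lissajous trajectory of the kind the paper modifies. The scanner frequencies and sample rate are illustrative, not the paper's values; the point is that every region of the field is revisited at many time points, which is what makes software motion correction possible.

```python
import numpy as np

fs = 100_000             # scanner sample rate (Hz), assumed
fx, fy = 383.0, 379.0    # x/y scanner frequencies (Hz), illustrative
t = np.arange(fs) / fs   # one second of scanning
x = np.cos(2 * np.pi * fx * t)   # normalized beam position
y = np.cos(2 * np.pi * fy * t)
# (x, y) traces a dense Lissajous figure over [-1, 1]^2; overlapping
# revisits of each region are what the motion-correction software registers.
print(x.size, "samples; x range", x.min(), x.max())
```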
Liu, L; Kan, A; Leckie, C; Hodgkin, P D
2017-04-01
Time-lapse fluorescence microscopy is a valuable technology in cell biology, but it suffers from the inherent problem of intensity inhomogeneity due to uneven illumination or camera nonlinearity, known as shading artefacts. This will lead to inaccurate estimates of single-cell features such as average and total intensity. Numerous shading correction methods have been proposed to remove this effect. In order to compare the performance of different methods, many quantitative performance measures have been developed. However, there is little discussion about which performance measure should be generally applied for evaluation on real data, where the ground truth is absent. In this paper, the state-of-the-art shading correction methods and performance evaluation methods are reviewed. We implement 10 popular shading correction methods on two artificial datasets and four real ones. In order to make an objective comparison between those methods, we employ a number of quantitative performance measures. Extensive validation demonstrates that the coefficient of joint variation (CJV) is the most applicable measure in time-lapse fluorescence images. Based on this measure, we have proposed a novel shading correction method that performs better compared to well-established methods for a range of real data tested. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
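The coefficient of joint variation (CJV), the measure this study recommends, is straightforward to compute once two intensity classes are defined. A minimal sketch with a synthetic shading field:

```python
import numpy as np

def cjv(image, mask_a, mask_b):
    """Coefficient of joint variation: (std_A + std_B) / |mean_A - mean_B|."""
    a, b = image[mask_a], image[mask_b]
    return (a.std() + b.std()) / abs(a.mean() - b.mean())

rng = np.random.default_rng(1)
img = rng.normal(100.0, 5.0, (256, 256))
img[64:192, 64:192] += 50.0                    # synthetic "cell" foreground
fg = np.zeros(img.shape, bool)
fg[64:192, 64:192] = True
shade = np.linspace(0.7, 1.3, 256)[None, :]    # synthetic shading field
# Shading widens each class's spread, so the CJV rises; a good correction
# should bring it back down toward the flat-field value.
print("flat:", cjv(img, fg, ~fg), "shaded:", cjv(img * shade, fg, ~fg))
```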
NASA Astrophysics Data System (ADS)
Chen, Siyu; Zhang, Hanming; Li, Lei; Xi, Xiaoqi; Han, Yu; Yan, Bin
2016-10-01
X-ray computed tomography (CT) has been extensively applied in industrial non-destructive testing (NDT). In practical applications, however, the polychromaticity of the X-ray beam often causes beam hardening problems in image reconstruction. Beam hardening artifacts, which manifest as cupping, streaks and flares, not only degrade the image quality but also disturb subsequent analyses. Unfortunately, conventional CT scanning requires that the scanned object be completely covered by the field of view (FOV); state-of-the-art beam hardening correction methods only consider the ideal scanning configuration and often suffer problems in interior tomography due to projection truncation. To address this problem, this paper proposes a beam hardening correction method based on the radon inversion transform for interior tomography. Experimental results show that, compared to conventional correction algorithms, the proposed approach achieves excellent performance in both beam hardening artifact reduction and truncation artifact suppression. The presented method is therefore of both theoretical and practical significance for artifact correction in industrial CT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva-Rodríguez, Jesús, E-mail: jesus.silva.rodriguez@sergas.es; Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es; Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela
Purpose: Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) values for this effect in clinical routine. Methods: One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI based method. In addition, the 50 patients with higher extravasated doses were also evaluated using a threshold-based method. Results: Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%) with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify this fraction of patients that might be corrected for paravenous injection effect. Conclusions: The authors propose the use of a manual ROI based method for estimating the effectively administered FDG dose and then correct SUV quantification in those patients fulfilling the proposed criterion.
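The correction implied by the conclusions is simple arithmetic once the extravasated dose has been estimated from the ROI: subtract it from the injected dose and rescale the SUV, which is inversely proportional to the administered dose. A sketch with invented numbers (decay and units ignored):

```python
injected_dose_MBq = 300.0      # nominal administered dose (invented)
extravasated_MBq = 15.0        # activity left at the injection site,
                               # integrated over a manual ROI (invented)

effective_dose_MBq = injected_dose_MBq - extravasated_MBq
suv_uncorrected = 4.2          # lesion SUV computed with the nominal dose

# SUV divides tissue concentration by (dose / body weight), so using the
# smaller effective dose rescales the SUV upward.
suv_corrected = suv_uncorrected * injected_dose_MBq / effective_dose_MBq
print(f"corrected SUV: {suv_corrected:.2f}")   # -> 4.42
```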
Image steganography based on 2^k correction and coherent bit length
NASA Astrophysics Data System (ADS)
Sun, Shuliang; Guo, Yongning
2014-10-01
In this paper, a novel image steganography algorithm is proposed. First, the edges of the cover image are detected with the Canny operator and the secret data are embedded in edge pixels. A sorting method is used to randomize the order of the edge pixels in order to enhance security. The coherent bit length L is determined by the relevant edge pixels. Finally, the method of 2^k correction is applied to achieve better imperceptibility in the stego image. Experiments show that the proposed method outperforms LSB-3 and Jae-Gil Yu's method in terms of PSNR and capacity.
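The 2^k correction step can be shown in isolation: after the k least significant bits of a pixel are forced to carry the message, adding or subtracting 2^k leaves those bits unchanged but may reduce the embedding distortion. A sketch (edge detection and pixel ordering omitted):

```python
def embed_2k(pixel, bits, k):
    """Embed `bits` (0 <= bits < 2**k) into a pixel with 2^k correction."""
    base = ((pixel >> k) << k) | bits        # naive k-LSB replacement
    best = base
    for cand in (base - (1 << k), base + (1 << k)):
        # Adding/subtracting 2^k preserves the k embedded LSBs; keep the
        # candidate closest to the original pixel value.
        if 0 <= cand <= 255 and abs(cand - pixel) < abs(best - pixel):
            best = cand
    return best

p, k, msg = 130, 3, 0b101
stego = embed_2k(p, msg, k)
assert stego & ((1 << k) - 1) == msg         # message bits intact
print(p, "->", stego, "| distortion:", abs(stego - p))  # 130 -> 133, error 3
```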
An improved non-uniformity correction algorithm and its GPU parallel implementation
NASA Astrophysics Data System (ADS)
Cheng, Kuanhong; Zhou, Huixin; Qin, Hanlin; Zhao, Dong; Qian, Kun; Rong, Shenghui
2018-05-01
The performance of the SLP-THP based non-uniformity correction algorithm is seriously affected by the result of the SLP filter, which always leads to image blurring and ghosting artifacts. To address this problem, an improved SLP-THP based non-uniformity correction method with a curvature constraint is proposed. We put forward a new way to estimate the spatial low-frequency component. First, the details and contours of the input image are obtained, respectively, by minimizing the local Gaussian curvature and the mean curvature of the image surface. Then, a guided filter is utilized to combine these two parts to obtain the estimate of the spatial low-frequency component. Finally, this SLP component is brought into the SLP-THP method to achieve non-uniformity correction. The performance of the proposed algorithm was verified on several real and simulated infrared image sequences. The experimental results indicate that the proposed algorithm can reduce the non-uniformity without losing detail. In addition, a GPU-based parallel implementation that runs 150 times faster than the CPU version is presented, which shows that the proposed algorithm has great potential for real-time application.
NASA Astrophysics Data System (ADS)
Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha
2012-09-01
Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation generates an under-exposed image with a low-budget complementary metal-oxide-semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory if they are implemented in hardware. The authors propose a real-time look-up-table-based color correction method that corrects under-exposed images in hardware without using frame memory. The method utilizes histogram matching of two preview images, exposed for a long and a short time, respectively, to construct an improved look-up table (ILUT) and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, the method does not require frame memory to buffer image data and can therefore greatly reduce the cost of the CIS. The method supports not only single image capture but also bracketing, capturing three images at a time. The proposed method was implemented in a hardware description language and verified on a field-programmable gate array with a 5 M CIS. Simulations show that the system performs in real time at low cost and corrects the color of under-exposed images well.
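The histogram-matching step that builds the look-up table can be sketched as follows; the per-channel handling and hardware pipeline are omitted, 8-bit data are assumed, and the test images are synthetic.

```python
import numpy as np

def match_lut(short_prev, long_prev):
    """256-entry LUT mapping short-exposure levels to long-exposure levels."""
    h_s, _ = np.histogram(short_prev, bins=256, range=(0, 256))
    h_l, _ = np.histogram(long_prev, bins=256, range=(0, 256))
    cdf_s = np.cumsum(h_s) / h_s.sum()
    cdf_l = np.cumsum(h_l) / h_l.sum()
    # Map each short-exposure level to the long-exposure level with the
    # closest cumulative probability.
    return np.searchsorted(cdf_l, cdf_s).clip(0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
long_img = rng.normal(120, 40, (64, 64)).clip(0, 255).astype(np.uint8)
short_img = (long_img * 0.4).astype(np.uint8)   # under-exposed preview
lut = match_lut(short_img, long_img)
corrected = lut[short_img]                      # per-pixel table lookup
print("means:", short_img.mean(), corrected.mean(), long_img.mean())
```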
Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method
NASA Astrophysics Data System (ADS)
Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin
2015-01-01
The powerful nondestructive characteristics of computed tomography (CT) are attracting increasing research interest in its use for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty severely limit the further utilization of CT for dimensional metrology; among the many contributing factors, the beam hardening (BH) effect plays a vital role. This paper focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained by a simple global threshold method. The proposed method is efficient, and is especially suited to cases where there is a large difference in gray value between material and background. Spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is generally feasible.
An algorithm to track laboratory zebrafish shoals.
Feijó, Gregory de Oliveira; Sangalli, Vicenzo Abichequer; da Silva, Isaac Newton Lima; Pinho, Márcio Sarroglia
2018-05-01
In this paper, a semi-automatic multi-object tracking method to track a group of unmarked zebrafish is proposed. The method can handle partial occlusion cases while maintaining the correct identity of each individual. For every object, we extract a set of geometric features to be used in the two main stages of the algorithm. The first stage selects the best candidate, based both on the blobs identified in the image and on the estimate generated by a Kalman filter instance. In the second stage, if the same candidate blob is selected by two or more instances, a blob-partitioning algorithm splits the blob and reestablishes the instances' identities. If the algorithm cannot determine the identity of a blob, manual intervention is required. The procedure was compared against a manually labeled ground truth on four video sequences with different numbers of fish and spatial resolutions. The performance of the proposed method was then compared against two well-known zebrafish tracking methods from the literature: one that handles occlusion scenarios and one that only tracks fish that are not in occlusion. On the data set used, the proposed method outperforms the first method in correctly separating fish in occlusion, improving on it in at least 8.15% of the cases. The proposed method also outperformed the second method on some of the tested videos, especially those with lower image quality, because the second method requires high-spatial-resolution images, which the proposed method does not. Overall, the proposed method was able to separate fish involved in occlusion and correctly assign their identities in up to 87.85% of the cases, without accounting for user intervention. Copyright © 2018 Elsevier Ltd. All rights reserved.
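The first stage, Kalman prediction followed by nearest-blob candidate selection, can be sketched with a constant-velocity filter written in plain numpy; the blob-partitioning stage for occlusions is not shown, and the toy detections are invented.

```python
import numpy as np

class CVKalman:
    """Constant-velocity Kalman filter on (x, y) blob centroids."""
    def __init__(self, x, y, dt=1.0, q=1.0, r=4.0):
        self.s = np.array([x, y, 0.0, 0.0])    # state: x, y, vx, vy
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q, self.R = np.eye(4) * q, np.eye(2) * r

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, z):
        innov = z - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ innov
        self.P = (np.eye(4) - K @ self.H) @ self.P

# One frame of greedy nearest-blob assignment (toy detections).
tracks = [CVKalman(10, 10), CVKalman(50, 50)]
blobs = np.array([[12.0, 11.0], [48.0, 52.0]])
for trk in tracks:
    pred = trk.predict()
    j = int(np.argmin(np.linalg.norm(blobs - pred, axis=1)))
    trk.update(blobs[j])
print([trk.s[:2].round(2) for trk in tracks])
```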
Scatter measurement and correction method for cone-beam CT based on single grating scan
NASA Astrophysics Data System (ADS)
Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua
2017-06-01
In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the scan method using a single grating and the design requirements of the grating are analyzed and worked out. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid additional scans, an angle interpolation method for scatter images is proposed to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of slices can still be controlled within 3%.
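The angle-interpolation step is plain linear interpolation between the two nearest measured scatter views. A sketch with mocked scatter measurements every 30 degrees, as in the abstract's experiment:

```python
import numpy as np

n_views, h, w = 360, 64, 64                    # one view per degree, assumed
meas_angles = np.arange(0, 360, 30)            # grating scans every 30 deg
scatter_meas = np.random.rand(len(meas_angles), h, w)  # mocked measurements

scatter_all = np.empty((n_views, h, w))
for a in range(n_views):
    i = np.searchsorted(meas_angles, a, side="right") - 1
    i_next = (i + 1) % len(meas_angles)
    a0 = meas_angles[i]
    span = (meas_angles[i_next] - a0) % 360    # always 30 deg here
    frac = ((a - a0) % 360) / span
    scatter_all[a] = (1 - frac) * scatter_meas[i] + frac * scatter_meas[i_next]

# scatter_all[a] is then subtracted from the contaminated projection at
# angle a before reconstruction.
print(scatter_all.shape)
```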
NASA Astrophysics Data System (ADS)
Zhang, Ji; Li, Tao; Zheng, Shiqiang; Li, Yiyong
2015-03-01
This study aims to reduce the effects of respiratory motion in quantitative analysis based on single-mode liver contrast-enhanced ultrasound (CEUS) image sequences. An image gating method and an iterative registration method using a model image were adopted to register single-mode liver CEUS image sequences. The feasibility of the proposed respiratory motion correction method was explored preliminarily using 10 hepatocellular carcinoma CEUS cases. The positions of the lesions in the time series of 2D ultrasound images after correction were visually evaluated. Before and after correction, the quality of the weighted sum of transit time (WSTT) parametric images was also compared in terms of accuracy and spatial resolution. For the corrected and uncorrected sequences, the mean deviation values (mDVs) of time-intensity curve (TIC) fitting derived from the CEUS sequences were measured. After the correction, the positions of the lesions in the time series of 2D ultrasound images were almost invariant; in contrast, the lesions in the uncorrected images all shifted noticeably. The quality of the WSTT parametric maps derived from liver CEUS image sequences was greatly improved. Moreover, the mDVs of TIC fitting derived from the CEUS sequences after correction decreased by an average of 48.48 ± 42.15. The proposed correction method can improve the accuracy of quantitative analysis based on single-mode liver CEUS image sequences, which would help enhance the efficiency of differential diagnosis of liver tumors.
Image Processing of Porous Silicon Microarray in Refractive Index Change Detection.
Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi
2017-06-08
In this paper, a new method is proposed for extracting the dots from the reflected light image of a porous silicon (PSi) microarray. The method consists of three parts: pretreatment, tilt correction and spot segmentation. First, based on the characteristics of the different components in HSV (Hue, Saturation, Value) space, a special pretreatment is proposed for the reflected light image to obtain the contour edges of the array cells in the image. Second, using the geometric relationship of the target object between the initial external rectangle and the minimum bounding rectangle (MBR), a new tilt correction algorithm based on the MBR is proposed to adjust the image. Third, based on the specific requirements of reflected light image segmentation, the array cells in the corrected image are segmented into dots that are as large as possible, with equal distances between the dots. Experimental results show that the pretreatment part of this method can effectively avoid the influence of complex backgrounds and complete the binarization of the image. The tilt correction algorithm has a short computation time, which makes it highly suitable for tilt correction of reflected light images. The segmentation algorithm arranges the dots in a regular pattern and excludes the edges and bright spots. This method can be utilized for fast, accurate and automatic dot extraction from PSi microarray reflected light images.
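The MBR-based tilt correction can be sketched with OpenCV: estimate the tilt angle of the foreground from cv2.minAreaRect and rotate the image back. Otsu thresholding below is a crude stand-in for the paper's HSV pretreatment, and the file name is hypothetical.

```python
import cv2
import numpy as np

img = cv2.imread("psi_microarray.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

pts = cv2.findNonZero(binary)                # foreground coordinates (Nx1x2)
(cx, cy), (rw, rh), angle = cv2.minAreaRect(pts)
if rw < rh:                                  # normalize OpenCV's angle convention
    angle += 90.0

h, w = img.shape
M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
corrected = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_LINEAR)
cv2.imwrite("corrected.png", corrected)
```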
Trainable multiscript orientation detection
NASA Astrophysics Data System (ADS)
Van Beusekom, Joost; Rangoni, Yves; Breuel, Thomas M.
2010-01-01
Detecting the correct orientation of document images is an important step in large-scale digitization processes, as most subsequent document analysis and optical character recognition methods assume an upright position of the document page. Many methods have been proposed to solve this problem, most of which are based on computing the ascender-to-descender ratio. Unfortunately, this cannot be used for scripts that have neither ascenders nor descenders. Therefore, we present a trainable method that uses character similarity to compute the correct orientation. A connected-component-based distance measure is computed to compare the characters of the document image to characters whose orientation is known. The orientation for which the distance is lowest is then detected as the correct orientation. Training is easily achieved by exchanging the reference characters for characters of the script to be analyzed. Evaluation of the proposed approach showed an accuracy above 99% for Latin and Japanese script on the public UW-III and UW-II datasets. An accuracy of 98.9% was obtained for Fraktur on a non-public dataset. Comparison of the proposed method with two methods using ascender/descender-ratio-based orientation detection shows a significant improvement.
Li, Na; Li, Xiu-Ying; Zou, Zhe-Xiang; Lin, Li-Rong; Li, Yao-Qun
2011-07-07
In the present work, a baseline-correction method based on peak-to-derivative baseline measurement was proposed for the elimination of complex matrix interference that was mainly caused by unknown components and/or background in the analysis of derivative spectra. This novel method was applicable particularly when the matrix interfering components showed a broad spectral band, which was common in practical analysis. The derivative baseline was established by connecting two crossing points of the spectral curves obtained with a standard addition method (SAM). The applicability and reliability of the proposed method was demonstrated through both theoretical simulation and practical application. Firstly, Gaussian bands were used to simulate 'interfering' and 'analyte' bands to investigate the effect of different parameters of interfering band on the derivative baseline. This simulation analysis verified that the accuracy of the proposed method was remarkably better than other conventional methods such as peak-to-zero, tangent, and peak-to-peak measurements. Then the above proposed baseline-correction method was applied to the determination of benzo(a)pyrene (BaP) in vegetable oil samples by second-derivative synchronous fluorescence spectroscopy. The satisfactory results were obtained by using this new method to analyze a certified reference material (coconut oil, BCR(®)-458) with a relative error of -3.2% from the certified BaP concentration. Potentially, the proposed method can be applied to various types of derivative spectra in different fields such as UV-visible absorption spectroscopy, fluorescence spectroscopy and infrared spectroscopy.
NASA Astrophysics Data System (ADS)
Zhu, Xinjian; Wu, Ruoyu; Li, Tao; Zhao, Dawei; Shan, Xin; Wang, Puling; Peng, Song; Li, Faqi; Wu, Baoming
2016-12-01
The time-intensity curve (TIC) from contrast-enhanced ultrasound (CEUS) image sequences of uterine fibroids provides important parametric information for qualitative and quantitative evaluation of the efficacy of treatments such as high-intensity focused ultrasound surgery. However, respiration and other physiological movements inevitably affect the CEUS imaging process, and this reduces the accuracy of TIC calculation. In this study, a method of TIC calculation for vascular perfusion of uterine fibroids based on subtraction imaging with motion correction is proposed. First, the fibroid CEUS video was decoded into frames at the recording frame rate. Next, the Brox optical flow algorithm was used to estimate the displacement field and correct the motion between frames using a warping technique. Then, subtraction imaging was performed to extract the positional distribution of vascular perfusion (PDOVP). Finally, the average gray level of all pixels in the PDOVP of each image was determined, and this was taken as the TIC of the CEUS image sequence. Both the correlation coefficient and the mutual information of the results with the proposed method were larger than those obtained with the original method. PDOVP extraction results improved significantly after motion correction. The variance reduction rates were all positive, indicating that the fluctuations of the TIC became less pronounced and the calculation accuracy improved after motion correction. The proposed method can effectively overcome the influence of motion, mainly caused by respiration, and allows precise calculation of the TIC.
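A condensed version of this pipeline is sketched below. Farneback dense optical flow is used as a readily available stand-in for the Brox algorithm named in the abstract; the video file name and flow parameters are hypothetical.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("ceus.mp4")            # hypothetical recording
frames = []
while True:
    ok, f = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY).astype(np.float32))
cap.release()

ref = frames[0]                               # baseline (pre-contrast) frame
h, w = ref.shape
gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                     np.arange(h, dtype=np.float32))
tic = []
for f in frames:
    flow = cv2.calcOpticalFlowFarneback(ref, f, None,
                                        0.5, 3, 25, 3, 5, 1.2, 0)
    # Warp the frame back onto the reference grid using the estimated flow.
    warped = cv2.remap(f, gx + flow[..., 0], gy + flow[..., 1],
                       cv2.INTER_LINEAR)
    pdovp = np.clip(warped - ref, 0, None)    # subtraction image (perfusion)
    tic.append(pdovp.mean())                  # one TIC sample per frame

print("TIC samples:", len(tic))
```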
Improved remote gaze estimation using corneal reflection-adaptive geometric transforms
NASA Astrophysics Data System (ADS)
Ma, Chunfei; Baek, Seung-Jin; Choi, Kang-A.; Ko, Sung-Jea
2014-05-01
Recently, the remote gaze estimation (RGE) technique has been widely applied to consumer devices as a more natural interface. In general, the conventional RGE method estimates a user's point of gaze using a geometric transform, which represents the relationship between several infrared (IR) light sources and their corresponding corneal reflections (CRs) in the eye image. Among various methods, the homography normalization (HN) method achieves state-of-the-art performance. However, the geometric transform of the HN method requiring four CRs is infeasible for the case when fewer than four CRs are available. To solve this problem, this paper proposes a new RGE method based on three alternative geometric transforms, which are adaptive to the number of CRs. Unlike the HN method, the proposed method not only can operate with two or three CRs, but can also provide superior accuracy. To further enhance the performance, an effective error correction method is also proposed. By combining the introduced transforms with the error-correction method, the proposed method not only provides high accuracy and robustness for gaze estimation, but also allows for a more flexible system setup with a different number of IR light sources. Experimental results demonstrate the effectiveness of the proposed method.
Xu, Dan; Maier, Joseph K; King, Kevin F; Collick, Bruce D; Wu, Gaohong; Peters, Robert D; Hinks, R Scott
2013-11-01
The proposed method is aimed at reducing eddy current (EC) induced distortion in diffusion weighted echo planar imaging, without the need to perform further image coregistration between diffusion weighted and T2 images. These ECs typically have significant high order spatial components that cannot be compensated by preemphasis. High order ECs are first calibrated at the system level in a protocol independent fashion. The resulting amplitudes and time constants of high order ECs can then be used to calculate imaging protocol specific corrections. A combined prospective and retrospective approach is proposed to apply correction during data acquisition and image reconstruction. Various phantom, brain, body, and whole body diffusion weighted images with and without the proposed method are acquired. Significantly reduced image distortion and misregistration are consistently seen in images with the proposed method compared with images without. The proposed method is a powerful (e.g., effective at 48 cm field of view and 30 cm slice coverage) and flexible (e.g., compatible with other image enhancements and arbitrary scan plane) technique to correct high order ECs induced distortion and misregistration for various diffusion weighted echo planar imaging applications, without the need for further image post processing, protocol dependent prescan, or sacrifice in signal-to-noise ratio. Copyright © 2013 Wiley Periodicals, Inc.
Prior-based artifact correction (PBAC) in computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heußer, Thorsten, E-mail: thorsten.heusser@dkfz-heidelberg.de; Brehm, Marcus; Ritschl, Ludwig
2014-02-15
Purpose: Image quality in computed tomography (CT) often suffers from artifacts which may reduce the diagnostic value of the image. In many cases, these artifacts result from missing or corrupt regions in the projection data, e.g., in the case of metal, truncation, and limited angle artifacts. The authors propose a generalized correction method for different kinds of artifacts resulting from missing or corrupt data by making use of available prior knowledge to perform data completion. Methods: The proposed prior-based artifact correction (PBAC) method requires prior knowledge in the form of a planning CT of the same patient or of a CT scan of a different patient showing the same body region. In both cases, the prior image is registered to the patient image using a deformable transformation. The registered prior is forward projected and data completion of the patient projections is performed using smooth sinogram inpainting. The obtained projection data are used to reconstruct the corrected image. Results: The authors investigate metal and truncation artifacts in patient data sets acquired with a clinical CT and limited angle artifacts in an anthropomorphic head phantom data set acquired with a gantry-based flat detector CT device. In all cases, the corrected images obtained by PBAC are nearly artifact-free. Compared to conventional correction methods, PBAC achieves better artifact suppression while preserving the patient-specific anatomy at the same time. Further, the authors show that prominent anatomical details in the prior image seem to have only minor impact on the correction result. Conclusions: The results show that PBAC has the potential to effectively correct for metal, truncation, and limited angle artifacts if adequate prior data are available. Since the proposed method makes use of a generalized algorithm, PBAC may also be applicable to other artifacts resulting from missing or corrupt data.
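The data-completion core of PBAC can be sketched with a parallel-beam toy: forward project a (here crudely) registered prior, substitute its values into the corrupt sinogram entries, and reconstruct. Deformable registration and smooth inpainting are omitted; skimage's radon/iradon stand in for the clinical geometry.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

patient = rescale(shepp_logan_phantom(), 0.5)
prior = np.roll(patient, 2, axis=0)    # crude stand-in for a registered prior

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino_patient = radon(patient, theta=theta)
sino_prior = radon(prior, theta=theta)

# Pretend a band of detector channels is corrupt in every view.
corrupt = np.zeros(sino_patient.shape, dtype=bool)
corrupt[90:110, :] = True
sino_measured = sino_patient.copy()
sino_measured[corrupt] = 0.0           # missing/corrupt data

sino_completed = sino_measured.copy()
sino_completed[corrupt] = sino_prior[corrupt]   # fill from the prior

recon = iradon(sino_completed, theta=theta)
print("reconstruction shape:", recon.shape)
```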
Robust Approach for Nonuniformity Correction in Infrared Focal Plane Array.
Boutemedjet, Ayoub; Deng, Chenwei; Zhao, Baojun
2016-11-10
In this paper, we propose a new scene-based nonuniformity correction technique for infrared focal plane arrays. Our work builds on two well-known scene-based methods: an adaptive method and an interframe-registration-based method exploiting a pure-translation motion model between frames. The two approaches have their benefits and drawbacks, which make them extremely effective under certain conditions and poorly adapted to others. Accordingly, we developed a method that is robust to the various conditions that may slow or degrade the correction process, by elaborating a decision criterion that switches the process to the most effective technique so as to ensure fast and reliable correction. In addition, problems such as bad pixels and ghosting artifacts are also dealt with to enhance the overall quality of the correction. The performance of the proposed technique is investigated and compared to the two state-of-the-art techniques cited above.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.
Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered a promising technique for improving the detectability of calcifications since it can be used to suppress the contrast between the adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with a breast-tissue-equivalent phantom and a calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: the image without scatter correction, the image with scatter correction using the pinhole-array interpolation method, and the image with scatter correction using the authors' algorithmic method. Results: The results show that the background DE calcification signal can be reduced. The root-mean-square background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of background DE calcification signals with scatter-uncorrected data was reduced by 58% with scatter-corrected data using the algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When applying the proposed algorithmic scatter correction to images, the background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method for scatter correction in DEDM; moreover, it is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it has been validated with a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge its effectiveness.
Huang, Ai-Mei; Nguyen, Truong
2009-04-01
In this paper, we address the problem of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas, such as occlusions and deformed structures, where no reliable motion can be used. We also propose an adaptive frame interpolation scheme for occlusion areas based on an analysis of the surrounding motion distribution. As a result, frames interpolated using the proposed scheme have clearer structure edges, and ghost artifacts are greatly reduced. Experimental results show that our interpolated results have better visual quality than those of other methods. In addition, the proposed scheme is robust even for video sequences that contain multiple and fast motions.
WE-G-207-07: Iterative CT Shading Correction Method with No Prior Information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, P; Mao, T; Niu, T
2015-06-15
Purpose: Shading artifacts are caused by scatter contamination, beam hardening effects and other non-ideal imaging conditions. Our purpose is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT imaging (e.g., cone-beam CT, low-kVp CT) without relying on prior information. Methods: Our method applies the general knowledge that the CT number distribution within one tissue component is relatively uniform. Image segmentation is applied to construct a template image in which each structure is filled with the CT number of that specific tissue. By subtracting the ideal template from the CT image, the residual from various error sources is generated. Since forward projection is an integration process, the non-continuous low-frequency shading artifacts in the image become continuous low-frequency signals in the line integral. The residual image is thus forward projected and its line integral is filtered using a Savitzky-Golay filter to estimate the error. A compensation map is reconstructed from the error using the standard FDK algorithm and added to the original image to obtain the shading-corrected one. Since the segmentation is not accurate on a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. Results: The proposed method is evaluated on a Catphan600 phantom, a pelvic patient scan and a CT angiography scan for carotid artery assessment. Compared to the uncorrected images, our method reduces the overall CT number error from >200 HU to <35 HU and increases the spatial uniformity by a factor of 1.4. Conclusion: We propose an effective iterative algorithm for shading correction in CT imaging. Unlike existing algorithms, our method is assisted only by general anatomical and physical information in CT imaging, without relying on prior knowledge. Our method is thus practical and attractive as a general solution to CT shading correction. This work is supported by the National Science Foundation of China (NSFC Grant No. 81201091), National High Technology Research and Development Program of China (863 program, Grant No. 2015AA020917), and the Fund Project for Excellent Abroad Scholar Personnel in Science and Technology.
Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies
Theis, Fabian J.
2017-01-01
Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers to nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits only from the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and the methods perform uniformly. We discuss the consequences of inappropriate distribution assumptions and the reasons for the different behaviors of the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if the distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
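A rough Python analogue of parametric inverse-probability bagging: invert the known stratum selection probabilities into sampling weights and draw each bag with those weights before fitting a tree. This follows the abstract's idea only loosely; the authors' reference implementation is the R package sambia, and all probabilities below are invented.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1).astype(int)

# Known two-phase selection probabilities per stratum (invented): cases
# (y=1) were oversampled at selection, so inverse-probability weights
# down-weight them when resampling toward the population.
p_select = np.where(y == 1, 0.9, 0.3)
w = 1.0 / p_select
w /= w.sum()

trees = []
for seed in range(100):                     # bagging with weighted draws
    idx = rng.choice(n, size=n, replace=True, p=w)
    trees.append(DecisionTreeClassifier(random_state=seed).fit(X[idx], y[idx]))

def predict_proba(X_new):
    """Average the ensemble's class-1 probabilities."""
    return np.mean([t.predict_proba(X_new)[:, 1] for t in trees], axis=0)

print(predict_proba(X[:5]).round(3))
```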
Image-guided regularization level set evolution for MR image segmentation and bias field correction.
Wang, Lingfeng; Pan, Chunhong
2014-01-01
Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with the intensity inhomogeneity problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. Maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be well handled. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracy compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
Distortion Correction of OCT Images of the Crystalline Lens: GRIN Approach
Siedlecki, Damian; de Castro, Alberto; Gambra, Enrique; Ortiz, Sergio; Borja, David; Uhlhorn, Stephen; Manns, Fabrice; Marcos, Susana; Parel, Jean-Marie
2012-01-01
Purpose: To propose a method to correct Optical Coherence Tomography (OCT) images of the posterior surface of the crystalline lens incorporating its gradient index (GRIN) distribution, and to explore its possibilities for posterior surface shape reconstruction in comparison with existing correction methods. Methods: 2-D images of 9 human lenses were obtained with a time-domain OCT system. The shape of the posterior lens surface was corrected using the proposed iterative correction method. The parameters defining the GRIN distribution used for the correction were taken from a previous publication. The results of the correction were evaluated relative to the nominal surface shape (accessible in vitro) and compared with the performance of two other existing methods (simple division and refraction correction, assuming a homogeneous index). Comparisons were made in terms of posterior surface radius, conic constant, root mean square, peak-to-valley and lens thickness shifts from the nominal data. Results: Differences in the retrieved radius and conic constant were not statistically significant across methods. However, GRIN distortion correction with optimal shape GRIN parameters provided more accurate estimates of the posterior lens surface, in terms of RMS and peak values, with errors less than 6 μm and 13 μm, respectively, on average. Thickness was also more accurately estimated with the new method, with a mean discrepancy of 8 μm. Conclusions: The posterior surface of the crystalline lens and lens thickness can be accurately reconstructed from OCT images, with the accuracy improving with an accurate model of the GRIN distribution. The algorithm can be used to improve quantitative knowledge of the crystalline lens from OCT imaging in vivo. Although the improvements over other methods are modest in 2-D, it is expected that 3-D imaging will fully exploit the potential of the technique. The method will also benefit from increasing experimental data on the GRIN distribution in lenses of larger populations. PMID:22466105
Lai, Rui; Yang, Yin-tang; Zhou, Duan; Li, Yue-jin
2008-08-20
An improved scene-adaptive nonuniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPAs) is proposed. The method simultaneously estimates the infrared detectors' parameters and eliminates the fixed pattern noise (FPN) caused by nonuniformity using a neural network (NN) approach. In the learning process of neuron parameter estimation, the traditional LMS algorithm is replaced with a newly presented variable step size (VSS) normalized least-mean-square (NLMS) adaptive filtering algorithm, which yields faster convergence, smaller misadjustment, and lower computational cost. In addition, a new NN structure is designed to estimate the desired target value, which considerably improves the calibration precision. The proposed NUC method achieves high correction performance, which is validated quantitatively by experimental results on a simulated test sequence and a real infrared image sequence.
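The neural-network NUC update with normalized LMS can be sketched per pixel as y = g·x + o, with the 4-neighborhood average of the corrected frame as the desired value (the classic NN-NUC target). The variable-step-size schedule below is a simple error-driven heuristic, not the authors' exact VSS rule, and the test data are synthetic.

```python
import numpy as np

def nuc_nlms(frames, mu0=0.05, eps=1e-6):
    """Scene-based NUC: per-pixel gain g and offset o adapted by NLMS."""
    h, w = frames[0].shape
    g, o = np.ones((h, w)), np.zeros((h, w))
    for x in frames:
        y = g * x + o                                   # corrected frame
        desired = (np.roll(y, 1, 0) + np.roll(y, -1, 0) +
                   np.roll(y, 1, 1) + np.roll(y, -1, 1)) / 4.0
        e = desired - y
        mu = mu0 * np.abs(e) / (np.abs(e).max() + eps)  # crude variable step
        norm = x * x + 1.0                              # NLMS normalization
        g += mu * e * x / norm
        o += mu * e / norm
    return g, o

rng = np.random.default_rng(0)
scene = [rng.normal(100, 20, (64, 64)) for _ in range(300)]   # moving scene
gain_fpn = 1 + 0.1 * rng.normal(size=(64, 64))                # fixed pattern
raw = [f * gain_fpn for f in scene]
g, o = nuc_nlms(raw)
# If the correction works, g compensates gain_fpn and their product flattens.
print("gain spread before:", gain_fpn.std(), "after:", (g * gain_fpn).std())
```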
Parto Dezfouli, Mohammad Ali; Dezfouli, Mohsen Parto; Rad, Hamidreza Saligheh
2014-01-01
Proton magnetic resonance spectroscopy ((1)H-MRS) is a non-invasive diagnostic tool for measuring biochemical changes in the human body. Acquired (1)H-MRS signals may be corrupted by a wideband baseline signal generated by macromolecules. Recently, several methods have been developed for the correction of such baseline signals; however, most of them are not able to estimate the baseline in complex overlapped signals. In this study, a novel automatic baseline correction method is proposed for (1)H-MRS spectra based on ensemble empirical mode decomposition (EEMD). The investigation was carried out on both simulated data and in vivo (1)H-MRS signals of the human brain. The results confirm the efficiency of the proposed method in removing the baseline from (1)H-MRS signals.
An Argument Against Augmenting the Lagrangean for Nonholonomic Systems
NASA Technical Reports Server (NTRS)
Roithmayr, Carlos M.; Hodges, Dewey H.
2009-01-01
Although it is known that correct dynamical equations of motion for a nonholonomic system cannot be obtained from a Lagrangean that has been augmented with a sum of the nonholonomic constraint equations weighted with multipliers, previous publications suggest otherwise. An example has been proposed in support of augmentation and purportedly demonstrates that an accepted method fails to produce correct equations of motion whereas augmentation leads to correct equations; this paper shows that in fact the opposite is true. The correct equations, previously discounted on the basis of a flawed application of the Newton-Euler method, are verified by using Kane's method and a new approach to determining the directions of constraint forces. A correct application of the Newton-Euler method reproduces valid equations.
Angular spectral framework to test full corrections of paraxial solutions.
Mahillo-Isla, R; González-Morales, M J
2015-07-01
Different correction methods for paraxial solutions have been used when such solutions extend out of the paraxial regime, with authors choosing correction methods guided either by experience or by educated hypotheses pertinent to the particular problem being tackled. This article provides a framework for classifying full-wave correction schemes, so that for a given solution of the paraxial wave equation the best available correction scheme can be selected. Some common correction methods are considered and evaluated under the proposed scope. A further contribution is the derivation of the necessary conditions that two solutions of the Helmholtz equation must satisfy to admit a common solution of the parabolic wave equation as a paraxial approximation of both.
Structural reliability analysis under evidence theory using the active learning kriging model
NASA Astrophysics Data System (ADS)
Yang, Xufeng; Liu, Yongshou; Ma, Panke
2017-11-01
Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
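The paper's central observation, that only the sign of the performance function must be predicted correctly, is exactly what the classic U learning function exploits. The sketch below runs such an active learning loop with a scikit-learn Gaussian process on a toy limit state; the evidence-theory bookkeeping and the KKT-based bound optimization are omitted, and all thresholds are conventional choices, not the paper's.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):                                    # toy performance function
    return x[:, 0] ** 3 + x[:, 1] + 4.0      # failure where g < 0

rng = np.random.default_rng(0)
pool = rng.uniform(-3, 3, size=(2000, 2))    # Monte Carlo candidate points
train = list(rng.choice(len(pool), 12, replace=False))

for _ in range(40):                          # active learning loop
    Xt = pool[train]
    gp = GaussianProcessRegressor(kernel=RBF(1.0),
                                  normalize_y=True).fit(Xt, g(Xt))
    mu, sigma = gp.predict(pool, return_std=True)
    U = np.abs(mu) / (sigma + 1e-12)         # sign-uncertainty learning function
    if U.min() > 2.0:                        # all signs known with high confidence
        break
    train.append(int(np.argmin(U)))          # evaluate the most ambiguous point

print("model evaluations:", len(train), "| Pf estimate:", float(np.mean(mu < 0)))
```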
Weighted divergence correction scheme and its fast implementation
NASA Astrophysics Data System (ADS)
Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun
2017-05-01
Forcing experimental volumetric velocity fields to satisfy mass conservation principles has been proved beneficial for improving the quality of measured data. A number of correction methods, including the divergence correction scheme (DCS), have been proposed to remove divergence errors from measured velocity fields. For tomographic particle image velocimetry (TPIV) data, the measurement uncertainty of the velocity component along the light-thickness direction is typically much larger than that of the other two components. Such biased measurement errors weaken the performance of traditional correction methods. This paper proposes a variant of the existing DCS that adds weighting coefficients to the three velocity components, named the weighted DCS (WDCS). The generalized cross validation (GCV) method is employed to choose suitable weighting coefficients. A fast algorithm for DCS and WDCS is developed, making the correction process significantly less costly to implement. WDCS has strong advantages when correcting velocity components with biased noise levels. Numerical tests validate the accuracy and efficiency of the fast algorithm, the effectiveness of the GCV method, and the advantages of WDCS. Lastly, DCS and WDCS are employed to process experimental velocity fields from a TPIV measurement of a turbulent boundary layer, showing that WDCS achieves better performance than DCS in improving some flow statistics.
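For constant per-component weights on a periodic grid, a weighted divergence correction has a closed-form FFT solution, which also illustrates why a fast implementation is possible. The sketch below is not the paper's algorithm (which handles measurement grids and GCV-chosen weights); the weights are fixed by hand, and a larger weight means that component is trusted more and changed less.

```python
import numpy as np

def weighted_div_correction(u, v, w, wts=(1.0, 1.0, 4.0)):
    """Minimal weighted-L2 change making (u, v, w) divergence-free."""
    n = u.shape[0]                          # cubic grid, unit spacing assumed
    k = 2j * np.pi * np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    U, V, W = (np.fft.fftn(f) for f in (u, v, w))
    wx, wy, wz = wts
    denom = kx**2 / wx + ky**2 / wy + kz**2 / wz
    denom[0, 0, 0] = 1.0                    # mean mode carries no divergence
    phi = (kx * U + ky * V + kz * W) / denom
    return (np.fft.ifftn(U - kx * phi / wx).real,
            np.fft.ifftn(V - ky * phi / wy).real,
            np.fft.ifftn(W - kz * phi / wz).real)

def spectral_div(u, v, w):
    n = u.shape[0]
    k = 2j * np.pi * np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    return np.fft.ifftn(kx * np.fft.fftn(u) + ky * np.fft.fftn(v)
                        + kz * np.fft.fftn(w)).real

rng = np.random.default_rng(0)
u, v, w = (rng.normal(size=(32, 32, 32)) for _ in range(3))
uc, vc, wc = weighted_div_correction(u, v, w)
print("divergence RMS before:", spectral_div(u, v, w).std(),
      "after:", spectral_div(uc, vc, wc).std())
```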
Scene-based nonuniformity corrections for optical and SWIR pushbroom sensors.
Leathers, Robert; Downes, Trijntje; Priest, Richard
2005-06-27
We propose and evaluate several scene-based methods for computing nonuniformity corrections for visible or near-infrared pushbroom sensors. These methods can be used to compute new nonuniformity correction values or to repair or refine existing radiometric calibrations. For a given data set, the preferred method depends on the quality of the data, the type of scenes being imaged, and the existence and quality of a laboratory calibration. We demonstrate our methods with data from several different sensor systems and provide a generalized approach to be taken for any new data set.
Banić, Nikola; Lončarić, Sven
2015-11-01
Removing the influence of illumination on image colors and adjusting the brightness across the scene are important image enhancement problems. This is achieved by applying adequate color constancy and brightness adjustment methods. One of the earliest models to deal with both of these problems was the Retinex theory. Some of the Retinex implementations tend to give high-quality results by performing local operations, but they are computationally relatively slow. One of the recent Retinex implementations is light random sprays Retinex (LRSR). In this paper, a new method is proposed for brightness adjustment and color correction that overcomes the main disadvantages of LRSR. There are three main contributions of this paper. First, a concept of memory sprays is proposed to reduce the number of LRSR's per-pixel operations to a constant regardless of the parameter values, thereby enabling a fast Retinex-based local image enhancement. Second, an effective remapping of image intensities is proposed that results in significantly higher quality. Third, the problem of LRSR's halo effect is significantly reduced by using an alternative illumination processing method. The proposed method enables a fast Retinex-based image enhancement by processing Retinex paths in a constant number of steps regardless of the path size. Due to the halo effect removal and remapping of the resulting intensities, the method outperforms many of the well-known image enhancement methods in terms of resulting image quality. The results are presented and discussed. It is shown that the proposed method outperforms most of the tested methods in terms of image brightness adjustment, color correction, and computational speed.
Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.
Kervrann, C; Legland, D; Pardini, L
2004-06-01
Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method: layers lying deeper within the specimen appear relatively dark due to absorption and scattering of both the excitation and the fluorescent light, photobleaching, and/or other factors. Because of these effects, quantitative analysis of the images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used to calculate correction factors for each section, and a new compensated series of sections is computed. However, these methods do not perfectly correct the images. The algorithm we propose for the automatic correction of intensities therefore relies on robust estimation, which automatically ignores pixels whose measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
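For reference, the parametric baseline that the robust method improves upon can be sketched in a few lines; the exponential decay model, the soft-L1 loss (a rough stand-in for the paper's robust estimator), and the stack layout (axis 0 = depth) are assumptions:

```python
# Fit a decaying model to per-section mean intensities and rescale sections.
import numpy as np
from scipy.optimize import least_squares

def depth_correct(stack):
    z = np.arange(stack.shape[0], dtype=float)
    mean_i = stack.reshape(stack.shape[0], -1).mean(axis=1)

    def residuals(p):                 # model: I(z) = a * exp(-b z)
        a, b = p
        return a * np.exp(-b * z) - mean_i

    fit = least_squares(residuals, x0=[mean_i[0], 0.01], loss="soft_l1")
    a, b = fit.x
    factors = np.exp(b * z)           # per-section correction factors
    return stack * factors[:, None, None]
```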
Adaptive correction procedure for TVL1 image deblurring under impulse noise
NASA Astrophysics Data System (ADS)
Bai, Minru; Zhang, Xiongjun; Shao, Qianqian
2016-08-01
For the problem of image restoration of observed images corrupted by blur and impulse noise, the widely used TVL1 model may deviate from both the data-acquisition model and the prior model, especially for high noise levels. In order to seek a solution of high recovery quality beyond the reach of the TVL1 model, we propose an adaptive correction procedure for TVL1 image deblurring under impulse noise. Then, a proximal alternating direction method of multipliers (ADMM) is presented to solve the corrected TVL1 model and its convergence is also established under very mild conditions. It is verified by numerical experiments that our proposed approach outperforms the TVL1 model in terms of signal-to-noise ratio (SNR) values and visual quality, especially for high noise levels: it can handle salt-and-pepper noise as high as 90% and random-valued noise as high as 70%. In addition, a comparison with a state-of-the-art method, the two-phase method, demonstrates the superiority of the proposed approach.
NASA Astrophysics Data System (ADS)
Slot Thing, Rune; Bernchou, Uffe; Mainegra-Hing, Ernesto; Hansen, Olfred; Brink, Carsten
2016-08-01
A comprehensive artefact correction method for clinical cone beam CT (CBCT) images acquired for image guided radiation therapy (IGRT) on a commercial system is presented. The method is demonstrated to reduce artefacts and recover CT-like Hounsfield units (HU) in reconstructed CBCT images of five lung cancer patients. Projection-image-based artefact corrections of image lag, detector scatter, body scatter and beam hardening are described and applied to CBCT images of five lung cancer patients. Image quality is evaluated through the visual appearance of the reconstructed images, HU correspondence with the planning CT images, and total volume HU error. Artefacts are reduced and CT-like HUs are recovered in the artefact-corrected CBCT images. Visual inspection confirms that artefacts are indeed suppressed by the proposed method, and the HU root mean square difference between the reconstructed CBCTs and the reference CT images is reduced by 31% when using the artefact corrections compared to the standard clinical CBCT reconstruction. A versatile artefact correction method for clinical CBCT images acquired for IGRT has been developed. HU values are recovered in the corrected CBCT images. The proposed method relies on post processing of clinical projection images and does not require patient-specific optimisation. It is thus a powerful tool for image quality improvement of large numbers of CBCT images.
Zhang, Jiarui; Zhang, Yingjie; Chen, Bo
2017-12-20
The three-dimensional measurement system with a binary defocusing technique is widely applied in diverse fields, and its measurement accuracy is mainly determined by the accuracy of the out-of-focus projector calibration. In this paper, a high-precision out-of-focus projector calibration method based on distortion correction on the projection plane and a nonlinear optimization algorithm is proposed. The paper first demonstrates experimentally that the projector exhibits noticeable distortions outside its focal plane. Based on this observation, the proposed method uses a high-order radial and tangential lens distortion representation on the projection plane to correct the calibration residuals caused by projection distortion. The final parameters of the out-of-focus projector were obtained using a nonlinear optimization algorithm with good initial values, provided by coarsely calibrating the parameters of the out-of-focus projector on the focal and projection planes. Finally, the experimental results demonstrated that the proposed method can accurately calibrate an out-of-focus projector, regardless of the amount of defocusing.
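The high-order radial plus tangential model referred to above is the standard Brown lens distortion model; a minimal sketch of applying it to normalized projector coordinates follows, with the coefficients k1..k3, p1, p2 assumed to come from the nonlinear calibration stage:

```python
# Brown (radial + tangential) distortion of normalized coordinates (xn, yn).
import numpy as np

def distort(xn, yn, k1, k2, k3, p1, p2):
    r2 = xn ** 2 + yn ** 2
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn ** 2)
    yd = yn * radial + p1 * (r2 + 2 * yn ** 2) + 2 * p2 * xn * yn
    return xd, yd
```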
Distortion correction of OCT images of the crystalline lens: gradient index approach.
Siedlecki, Damian; de Castro, Alberto; Gambra, Enrique; Ortiz, Sergio; Borja, David; Uhlhorn, Stephen; Manns, Fabrice; Marcos, Susana; Parel, Jean-Marie
2012-05-01
To propose a method to correct optical coherence tomography (OCT) images of the posterior surface of the crystalline lens incorporating its gradient index (GRIN) distribution, and to explore its possibilities for posterior surface shape reconstruction in comparison to existing correction methods. Two-dimensional images of nine human lenses were obtained with a time-domain OCT system. The shape of the posterior lens surface was corrected using the proposed iterative correction method. The parameters defining the GRIN distribution used for the correction were taken from a previous publication. The results of the correction were evaluated relative to the nominal surface shape (accessible in vitro) and compared with the performance of two other existing methods (simple division and refraction correction assuming a homogeneous index). Comparisons were made in terms of posterior surface radius, conic constant, root mean square, peak-to-valley, and lens thickness shifts from the nominal data. Differences in the retrieved radius and conic constant were not statistically significant across methods. However, GRIN distortion correction with optimal shape GRIN parameters provided more accurate estimates of the posterior lens surface in terms of root mean square and peak values, with errors <6 and 13 μm, respectively, on average. Thickness was also more accurately estimated with the new method, with a mean discrepancy of 8 μm. The posterior surface of the crystalline lens and the lens thickness can be accurately reconstructed from OCT images, with the accuracy improving with an accurate model of the GRIN distribution. The algorithm can be used to improve quantitative knowledge of the crystalline lens from OCT imaging in vivo. Although the improvements over other methods are modest in two dimensions, it is expected that three-dimensional imaging will fully exploit the potential of the technique. The method will also benefit from increasing experimental data on the GRIN distribution in lenses of larger populations.
Orżanowski, Tomasz
2016-01-01
This paper presents an infrared focal plane array (IRFPA) response nonuniformity correction (NUC) algorithm that is easy to implement in hardware. The proposed NUC algorithm is based on a linear correction scheme with a practical method for updating the pixel offset correction coefficients. The new approach to IRFPA response nonuniformity correction compensates the temporal drift of the pixel offsets by using the pixel response change, measured with a shutter, between the actual operating conditions and the reference ones. Moreover, it also removes optics shading effects from the output image. Test results for a microbolometer IRFPA are presented to show the efficiency of the proposed NUC algorithm.
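A minimal sketch of the linear scheme with a shutter-driven offset update is given below; the fixed two-point gains and the exact form of the update are assumptions rather than the paper's specification:

```python
# Linear NUC: corrected = gain * raw + offset. The offsets are updated so the
# corrected shutter (uniform) frame matches its reference-condition value,
# compensating the temporal drift of pixel offsets.
import numpy as np

def shutter_update(offset_ref, shutter_ref, shutter_now, gain):
    return offset_ref + gain * (shutter_ref - shutter_now)

def apply_nuc(raw, gain, offset):
    return gain * raw + offset
```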
DOE Office of Scientific and Technical Information (OSTI.GOV)
Min, Jonghwan; Pua, Rizza; Cho, Seungryong, E-mail: scho@kaist.ac.kr
Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or for dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker would yield cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of the work was to develop an analytic image reconstruction method for CBCT that can be directly applied to partially blocked cone-beam data in conjunction with the scatter correction. Methods: The authors developed a rebinned backprojection-filteration (BPF) algorithm for reconstructing images from the partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry considering data redundancy such that an efficient scatter estimate can be acquired and sufficient data for BPF image reconstruction can be secured at the same time from a single scan without using any blocker motion. Additionally, a scatter correction method and a noise reduction scheme have been developed. The authors have performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs the images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. Reduction of cupping artifacts and an enhancement of the image contrast have been demonstrated. The image contrast has increased by a factor of about 2, and the image accuracy in terms of root-mean-square-error with respect to the fan-beam CT image has increased by more than 30%. Conclusions: The authors have successfully demonstrated that the proposed scanning method and image reconstruction algorithm can effectively estimate the scatter in cone-beam projections and produce tomographic images of nearly scatter-free quality. The authors believe that the proposed method would provide a fast and efficient CBCT scanning option for various applications, particularly including head-and-neck scans.
Zhou, Yongxin; Bai, Jing
2007-01-01
A framework that combines atlas registration, fuzzy connectedness (FC) segmentation, and parametric bias field correction (PABIC) is proposed for the automatic segmentation of brain magnetic resonance imaging (MRI). First, the atlas is registered onto the MRI to initialize the subsequent FC segmentation, and original techniques are proposed to estimate the necessary initial parameters of the FC segmentation. The result of the FC segmentation is then used to initialize the following PABIC algorithm. Finally, we re-apply the FC technique to the PABIC-corrected MRI to obtain the final segmentation. Thus, we avoid expert human intervention and provide a fully automatic method for brain MRI segmentation. Experiments on both simulated and real MRI images demonstrate the validity of the method, as well as its limitations. Being fully automatic, the method is expected to find wide application, such as in three-dimensional visualization, radiation therapy planning, and medical database construction.
NASA Astrophysics Data System (ADS)
Liu, Xingchen; Hu, Zhiyong; He, Qingbo; Zhang, Shangbin; Zhu, Jun
2017-10-01
Doppler distortion and background noise can reduce the effectiveness of wayside acoustic train bearing monitoring and fault diagnosis. This paper proposes a method combining a microphone array with a matching pursuit algorithm to overcome these difficulties. First, a dictionary is constructed based on the characteristics and mechanism of a far-field assumption. Then, the angle of arrival of the train bearing is acquired by applying matching pursuit to the acoustic array signals. Finally, after obtaining the resampling time series, the Doppler distortion can be corrected, which is convenient for further diagnostic work. Compared with traditional single-microphone Doppler correction methods, the presented array method is robust to background noise and requires almost no pre-measured parameters. Simulation and experimental studies show that the proposed method is effective for wayside acoustic bearing fault diagnosis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebato, Yuki; Miyata, Tatsuhiko, E-mail: miyata.tatsuhiko.mf@ehime-u.ac.jp
Ornstein-Zernike (OZ) integral equation theory is known to overestimate the excess internal energy, U^ex, the pressure through the virial route, P_v, and the excess chemical potential, μ^ex, for one-component Lennard-Jones (LJ) fluids under the hypernetted chain (HNC) and Kovalenko-Hirata (KH) approximations. As one of the bridge correction methods to improve the precision of these thermodynamic quantities, our previous paper showed that apparently adjusting the σ parameter in the LJ potential is effective [T. Miyata and Y. Ebato, J. Molec. Liquids 217, 75 (2016)]. There, we evaluated the actual variation in the σ parameter by fitting to molecular dynamics (MD) results. In this article, we propose an alternative method to determine the actual variation in the σ parameter, which utilizes the condition that the virial and compressibility pressures coincide with each other. This method can correct OZ theory without fitting to MD results, and it retains the form of the HNC and/or KH closure. We calculate the radial distribution function, pressure, excess internal energy, and excess chemical potential for one-component LJ fluids to check the performance of our proposed bridge function, and we discuss the precision of these thermodynamic quantities by comparing with MD results. In addition, we also calculate a corrected gas-liquid coexistence curve based on a corrected KH-type closure and compare it with MD results.
An alternative empirical likelihood method in missing response problems and causal inference.
Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao
2016-11-30
Missing responses are common problems in medical, social, and economic studies. When responses are missing at random, a complete-case data analysis may result in biases. A popular debiasing method is the inverse probability weighting proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and the propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied to the estimation of average treatment effects in observational causal inference. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial.
Robust finger vein ROI localization based on flexible segmentation.
Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun
2013-10-24
Finger veins have been proved to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by factors such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition and so degrade the performance of a finger vein identification system. To address this problem, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method shows a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms per acquired image, which satisfies the criterion of a real-time finger vein identification system.
NASA Astrophysics Data System (ADS)
Moghim, S.; Hsu, K.; Bras, R. L.
2013-12-01
General Circulation Models (GCMs) are used to predict circulation and energy transfers between the atmosphere and the land. These models are known to produce biased results, which affects their use. This work proposes a new method for bias correction: the equidistant cumulative distribution function-artificial neural network (EDCDFANN) procedure. The method uses artificial neural networks (ANNs) as a surrogate model to estimate bias-corrected temperature, given GCM output variables as predictors. A two-layer feedforward neural network is trained on observations over a historical period, and the trained network is then used to predict bias-corrected temperature for future periods. To capture extreme values, this method is combined with the equidistant CDF matching method (EDCDF; Li et al., 2010). The proposed method is tested with Community Climate System Model (CCSM3) outputs, using air and skin temperature, specific humidity, and shortwave and longwave radiation as inputs to the ANN. The method decreases the mean square error and increases the spatial correlation between the modeled temperature and the observed one. The results indicate that EDCDFANN has the potential to remove biases from the model outputs.
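A minimal sketch of the CDF-matching component (the equidistant variant of quantile mapping) is shown below; the ANN surrogate that EDCDFANN adds is omitted, and the 1-D sample arrays are illustrative:

```python
# EDCDF bias correction: map future model values onto observed quantiles while
# preserving the modeled change between historical and future distributions.
import numpy as np

def edcdf_correct(obs_hist, mod_hist, mod_fut):
    q = (np.argsort(np.argsort(mod_fut)) + 0.5) / len(mod_fut)  # quantiles
    obs_q = np.quantile(obs_hist, q)
    mod_q = np.quantile(mod_hist, q)
    return mod_fut + (obs_q - mod_q)   # equidistant CDF matching
```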
Slant correction for handwritten English documents
NASA Astrophysics Data System (ADS)
Shridhar, Malayappan; Kimura, Fumitaka; Ding, Yimei; Miller, John W. V.
2004-12-01
Optical character recognition of machine-printed documents is an effective means of extracting textual material. While the level of effectiveness for handwritten documents is much poorer, progress is being made in more constrained applications such as personal checks and postal addresses. In these applications a series of steps is performed for recognition, beginning with removal of skew and slant. Slant, the amount by which characters are tilted from vertical, is a characteristic of the individual writer and varies from writer to writer. Skew, the second attribute, arises from the writer's inability to write along a horizontal line. Several methods for average slant estimation and correction have been proposed and discussed in earlier papers. However, analysis of many handwritten documents reveals that slant is a local property that varies even within a word. The use of an average slant for the entire word often results in overestimation or underestimation of the local slant. This paper describes three methods for local slant estimation, namely a simple iterative method, a high-speed iterative method, and an 8-directional chain code method. The experimental results show that the proposed methods can estimate and correct local slant more effectively than average slant correction.
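A minimal sketch of the idea behind shear-based slant correction follows; the projection-profile criterion and the angle grid are illustrative simplifications of the iterative estimators described above:

```python
# Deslant a word image (values in [0, 1], text bright on dark) by trying shear
# angles and keeping the one that maximizes the variance of the vertical
# projection profile, which peaks when strokes are upright.
import numpy as np
from scipy.ndimage import affine_transform

def deslant(word):
    best, best_score = word, -np.inf
    for angle in np.deg2rad(np.arange(-45, 46, 3)):
        shear = np.array([[1.0, 0.0], [np.tan(angle), 1.0]])  # x' = x + y*tan
        sheared = affine_transform(word, shear)
        score = sheared.sum(axis=0).var()  # vertical projection profile
        if score > best_score:
            best, best_score = sheared, score
    return best
```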
Meng, Bowen; Lee, Ho; Xing, Lei; Fahimian, Benjamin P.
2013-01-01
Purpose: X-ray scatter results in a significant degradation of image quality in computed tomography (CT), representing a major limitation in cone-beam CT (CBCT) and large field-of-view diagnostic scanners. In this work, a novel scatter estimation and correction technique is proposed that utilizes peripheral detection of scatter during the patient scan to simultaneously acquire image and patient-specific scatter information in a single scan, in conjunction with a proposed compressed sensing scatter recovery technique to reconstruct and correct for the patient-specific scatter in the projection space. Methods: The method consists of the detection of patient scatter at the edges of the field of view (FOV) followed by measurement-based compressed sensing recovery of the scatter throughout the projection space. In the prototype implementation, the kV x-ray source of the Varian TrueBeam OBI system was blocked at the edges of the projection FOV, and the image detector in the corresponding blocked region was used for scatter detection. The design enables acquisition of the projection data on the unblocked central region and of scatter data at the blocked boundary regions. For the initial scatter estimation on the central FOV, a prior consisting of a hybrid scatter model that combines the scatter interpolation method and a scatter convolution model is estimated using the acquired scatter distribution on the boundary region. With the hybrid scatter estimation model, compressed sensing optimization is performed to generate the scatter map by penalizing the L1 norm of the discrete cosine transform of the scatter signal. The estimated scatter is subtracted from the projection data by soft-tuning, and the scatter-corrected CBCT volume is obtained by the conventional Feldkamp-Davis-Kress algorithm. Experimental studies using image quality and anthropomorphic phantoms on a Varian TrueBeam system were carried out to evaluate the performance of the proposed scheme. Results: The scatter shading artifacts were markedly suppressed in the reconstructed images using the proposed method. On the Catphan 504 phantom, the proposed method reduced the CT number error to 13 Hounsfield units, 10% of that without scatter correction, and increased the image contrast by a factor of 2 in high-contrast regions. On the anthropomorphic phantom, the spatial nonuniformity decreased from 10.8% to 6.8% after correction. Conclusions: A novel scatter correction method, enabling unobstructed acquisition of the high-frequency image data and concurrent detection of the patient-specific low-frequency scatter data at the edges of the FOV, is proposed and validated in this work. Relative to blocker-based techniques that obstruct the central portion of the FOV and thereby degrade and limit the image reconstruction, compressed sensing is used to solve for the scatter from its detection at the periphery of the FOV, enabling the highest quality reconstruction in the central region and robust patient-specific scatter correction. PMID:23298098
Refraction traveltime tomography based on damped wave equation for irregular topographic model
NASA Astrophysics Data System (ADS)
Park, Yunhui; Pyun, Sukjoon
2018-03-01
Land seismic data generally have time-static issues due to irregular topography and weathered layers at shallow depths. Unless the time static is handled appropriately, interpretation of the subsurface structures can be easily distorted. Therefore, static corrections are commonly applied to land seismic data. The near-surface velocity, which is required for static corrections, can be inferred from first-arrival traveltime tomography, which must consider the irregular topography, as the land seismic data are generally obtained in irregular topography. This paper proposes a refraction traveltime tomography technique that is applicable to an irregular topographic model. This technique uses unstructured meshes to express an irregular topography, and traveltimes calculated from the frequency-domain damped wavefields using the finite element method. The diagonal elements of the approximate Hessian matrix were adopted for preconditioning, and the principle of reciprocity was introduced to efficiently calculate the Fréchet derivative. We also included regularization to resolve the ill-posed inverse problem, and used the nonlinear conjugate gradient method to solve the inverse problem. As the damped wavefields were used, there were no issues associated with artificial reflections caused by unstructured meshes. In addition, the shadow zone problem could be circumvented because this method is based on the exact wave equation, which does not require a high-frequency assumption. Furthermore, the proposed method was both robust to an initial velocity model and efficient compared to full wavefield inversions. Through synthetic and field data examples, our method was shown to successfully reconstruct shallow velocity structures. To verify our method, static corrections were roughly applied to the field data using the estimated near-surface velocity. By comparing common shot gathers and stack sections with and without static corrections, we confirmed that the proposed tomography algorithm can be used to correct the statics of land seismic data.
77 FR 1129 - Revisions to Test Methods and Testing Regulations
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-09
This action proposes editorial and technical corrections necessary for source testing of emissions and operations. The revisions include the addition of alternative equipment and methods as well as corrections to technical and typographical errors. We also solicit public comment on potential changes to the current procedures for determining emission stratification.
Magnetic field shift due to mechanical vibration in functional magnetic resonance imaging.
Foerster, Bernd U; Tomasi, Dardo; Caparelli, Elisabeth C
2005-11-01
Mechanical vibrations of the gradient coil system during readout in echo-planar imaging (EPI) can increase the temperature of the gradient system and alter the magnetic field distribution during functional magnetic resonance imaging (fMRI). This effect is enhanced by resonant modes of vibration and results in apparent motion along the phase-encoding direction in fMRI studies. The magnetic field drift was quantified during EPI by monitoring the resonance frequency interleaved with the EPI acquisition, and a novel method is proposed to correct the apparent motion. Knowledge of the frequency drift over time was used to correct the phase of the k-space EPI dataset. Since the resonance frequency changes very slowly over time, two measurements of the resonance frequency, immediately before and after the EPI acquisition, are sufficient to remove the field drift effects from fMRI time series. The frequency drift correction method was tested in vivo and compared to the standard image realignment method. The proposed method efficiently corrects spurious motion due to magnetic field drifts during fMRI.
NASA Astrophysics Data System (ADS)
Gao, Lingli; Pan, Yudi
2018-05-01
The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way to estimate the source signature. However, when encountering multimode surface waves, which are common in shallow seismic surveys, strong spurious events appear in the seismic interferometric results. These spurious events introduce errors into the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method, in which the multimode surface waves are mode-separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Artefacts caused by cross-mode correlation are therefore excluded from the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, whereas strong spurious oscillations occur in the estimated source signature if mode separation is not applied first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating the seismic source signature from shallow seismic shot gathers containing multimode surface waves.
NASA Astrophysics Data System (ADS)
Li, Shuo; Wang, Hui; Wang, Liyong; Yu, Xiangzhou; Yang, Le
2018-01-01
The uneven illumination phenomenon reduces the quality of remote sensing images and interferes with subsequent processing and applications. A variational method based on Retinex with double-norm hybrid constraints for uneven illumination correction is proposed. The L1 norm and the L2 norm are adopted to constrain the textures and details of the reflectance image and the smoothness of the illumination image, respectively. The problem of separating the illumination image from the reflectance image is transformed into the optimal solution of the variational model. In order to accelerate the solution, the split Bregman method is used to decompose the variational model into three subproblems, which are calculated by alternating iteration. Two groups of experiments were carried out on two synthetic images and three real remote sensing images. Compared with the variational Retinex method with a single-norm constraint and the Mask method, the proposed method performs better in both visual evaluation and quantitative measurements. The proposed method can effectively eliminate uneven illumination while maintaining the textures and details of the remote sensing image. Moreover, the split Bregman implementation of the proposed method is more than 10 times faster than an implementation based on the steepest descent method.
Automatic building extraction from LiDAR data fusion of point and grid-based features
NASA Astrophysics Data System (ADS)
Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang
2017-08-01
This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the used features and to account for neighbourhood context. As grid feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retained DSM interpolation method is also proposed. The proposed method is validated on the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method obtains promising results at both the area level and the object level. The method is further applied to the entire ISPRS dataset and to a real dataset of Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset, and a completeness of 94.4% and a correctness of 95.8% for the latter. The proposed method shows good potential for large LiDAR datasets.
Numerical correction of distorted images in full-field optical coherence tomography
NASA Astrophysics Data System (ADS)
Min, Gihyeon; Kim, Ju Wan; Choi, Woo June; Lee, Byeong Ha
2012-03-01
We propose a numerical method to correct the distorted en face images obtained with a full-field optical coherence tomography (FF-OCT) system. It is shown that the FF-OCT image of the deep region of a biological sample is easily blurred or degraded because the sample generally has a refractive index (RI) much higher than that of its surrounding medium. Analysis shows that the RI mismatch separates the focal plane of the imaging system from the imaging plane of the coherence-gated system. This image-blurring phenomenon is experimentally confirmed by imaging the chrome pattern of a resolution test target through its glass substrate in water. Moreover, we demonstrate that the blurred image can be appreciably corrected by using a numerical correction process based on the Fresnel-Kirchhoff diffraction theory. The proposed correction method is applied to enhance the image of a human hair, permitting the distinct identification of the melanin granules inside the cortex layer of the hair shaft.
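A minimal sketch of numerical refocusing by angular-spectrum propagation, in the spirit of the Fresnel-Kirchhoff correction above, is given below; it assumes the complex en face field is available and that the grid spacing dx, the wavelength lam in the medium, and the defocus distance dz are known:

```python
# Propagate a complex field by dz using the angular spectrum method.
import numpy as np

def refocus(field, dx, lam, dz):
    n0, n1 = field.shape
    fx = np.fft.fftfreq(n0, d=dx)
    fy = np.fft.fftfreq(n1, d=dx)
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    arg = (1.0 / lam) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)
```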
A new approach for beam hardening correction based on the local spectrum distributions
NASA Astrophysics Data System (ADS)
Rasoulpour, Naser; Kamali-Asl, Alireza; Hemmati, Hamidreza
2015-09-01
The energy dependence of material absorption and the polychromatic nature of x-ray beams in computed tomography (CT) cause a phenomenon called "beam hardening". The purpose of this study is to provide a novel approach for beam hardening (BH) correction, based on the linear attenuation coefficients of Local Spectrum Distributions (LSDs) at various depths of a phantom. The proposed method includes two steps. First, the hardened spectra at various depths of the phantom (the LSDs) are estimated based on the Expectation Maximization (EM) algorithm for arbitrary thickness intervals of known materials in the phantom. The performance of the LSD estimation technique is evaluated by applying random Gaussian noise to the transmission data. The linear attenuation coefficients at the mean energies of the LSDs are then obtained. Second, a correction function based on the calculated attenuation coefficients is derived in order to correct the polychromatic raw data. Since the correction function converts the polychromatic data to monochromatic data, the effect of BH in the proposed reconstruction is reduced in comparison with the polychromatic reconstruction. The proposed approach has been assessed in phantoms involving at most two materials, but the correction function has been extended for use in phantoms constructed of more than two materials. The relative mean energy difference in the LSD estimates based on noise-free transmission data was less than 1.5%, and it remains acceptable when random Gaussian noise is applied to the transmission data. Cupping artifacts are effectively reduced by the proposed reconstruction method, and the proposed reconstruction profile is more uniform than the polychromatic reconstruction profile.
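The final linearization step can be sketched as fitting a correction function on a reference material of known thicknesses; the two-bin spectrum, attenuation values, and polynomial order below are illustrative placeholders, not values from the paper:

```python
# Map polychromatic projection values onto the monochromatic values implied by
# the attenuation coefficient at the mean LSD energy.
import numpy as np

t = np.linspace(0.0, 5.0, 50)                  # known thicknesses (cm)
w, mu_lo, mu_hi = 0.6, 0.35, 0.15              # toy two-bin hardened spectrum
p_poly = -np.log(w * np.exp(-mu_lo * t) + (1 - w) * np.exp(-mu_hi * t))
mu_mono = 0.2                                  # 1/cm at the mean LSD energy
coeff = np.polyfit(p_poly, mu_mono * t, deg=3)

def correct(projection):
    return np.polyval(coeff, projection)       # monochromatized raw data
```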
Lock-in amplifier error prediction and correction in frequency sweep measurements.
Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose
2007-01-01
This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with a time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
Correction of aeroheating-induced intensity nonuniformity in infrared images
NASA Astrophysics Data System (ADS)
Liu, Li; Yan, Luxin; Zhao, Hui; Dai, Xiaobing; Zhang, Tianxu
2016-05-01
Aeroheating-induced intensity nonuniformity severely degrades the performance of an infrared (IR) imaging system in high-speed flight. In this paper, we propose a new approach to the correction of intensity nonuniformity in IR images. The basic assumption is that the low-frequency intensity bias is additive and smoothly varying, so that it can be modeled as a bivariate polynomial and estimated using an isotropic total variation (TV) model. A half-quadratic penalty method is applied to the isotropic form of the TV discretization, and an alternating minimization algorithm is adopted to solve the optimization model. Experimental results on simulated and real aerothermal images show that the proposed correction method can effectively improve IR image quality.
Online phase measuring profilometry for rectilinear moving object by image correction
NASA Astrophysics Data System (ADS)
Yuan, Han; Cao, Yi-Ping; Chen, Chen; Wang, Ya-Pin
2015-11-01
In phase measuring profilometry (PMP), the object must be static for point-to-point reconstruction with the captured deformed patterns. When the object moves rectilinearly online, differences in the size and pixel position of the object across the captured deformed patterns violate the point-to-point requirement. We propose an online PMP based on image correction to measure the three-dimensional shape of a rectilinearly moving object. In the proposed method, the deformed patterns captured by a charge-coupled device (CCD) camera are first reprojected from the oblique view to an aerial view and then translated based on feature points of the object. This makes the object appear stationary in the deformed patterns. Experimental results show the feasibility and efficiency of the proposed method.
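A minimal sketch of the oblique-to-aerial reprojection step using OpenCV follows; the four reference point coordinates and the output size are illustrative:

```python
# Warp each captured deformed pattern from the oblique camera view to an
# aerial (fronto-parallel) view with a plane homography.
import cv2
import numpy as np

src = np.float32([[102, 80], [530, 95], [560, 410], [85, 388]])  # oblique
dst = np.float32([[0, 0], [480, 0], [480, 360], [0, 360]])       # aerial
H = cv2.getPerspectiveTransform(src, dst)

def to_aerial(pattern):
    return cv2.warpPerspective(pattern, H, (480, 360))
```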
Intensity non-uniformity correction in MRI: existing methods and their validation.
Belaroussi, Boubakeur; Milles, Julien; Carme, Sabin; Zhu, Yue Min; Benoit-Cattin, Hugues
2006-04-01
Magnetic resonance imaging is a popular and powerful non-invasive imaging technique. Automated analysis has become mandatory to efficiently cope with the large amount of data generated using this modality. However, several artifacts, such as intensity non-uniformity, can degrade the quality of acquired data. Intensity non-uniformity consists of anatomically irrelevant intensity variation throughout the data. It can be induced by the choice of the radio-frequency coil, the acquisition pulse sequence, and the nature and geometry of the sample itself. Numerous methods have been proposed to correct this artifact. In this paper, we propose an overview of existing methods. We first sort them according to their location in the acquisition/processing pipeline. Sorting is then refined based on the assumptions those methods rely on. Next, we present the validation protocols used to evaluate these different correction schemes from both a qualitative and a quantitative point of view. Finally, the availability and usability of the presented methods are discussed.
Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors
NASA Astrophysics Data System (ADS)
Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.
2018-04-01
The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used. A versatile multi-function laser interferometer serves as the observer to measure the machine's error functions. A systematic error map of the machine's workspace is produced from the error function measurements, and this error map yields an error correction strategy. The article proposes a new method of forming the error correction strategy, based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within a maximal zone of the workspace. The results are confirmed by error correction of precision CNC machine tools.
Scene-based nonuniformity correction with reduced ghosting using a gated LMS algorithm.
Hardie, Russell C; Baxley, Frank; Brys, Brandon; Hytla, Patrick
2009-08-17
In this paper, we present a scene-based nonuniformity correction (NUC) method using a modified adaptive least mean square (LMS) algorithm with a novel gating operation on the updates. The gating is designed to significantly reduce the ghosting artifacts produced by many scene-based NUC algorithms by halting updates when temporal variation is lacking. We define the algorithm and present a number of experimental results to demonstrate the efficacy of the proposed method in comparison to several previously published methods, including other LMS and constant-statistics based methods. The experimental results include simulated imagery and a real infrared image sequence. We show that the proposed method significantly reduces ghosting artifacts, at the cost of a slightly longer convergence time.
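A minimal sketch of an LMS NUC step with gated updates is given below; the smoothed-frame desired signal and the simple frame-difference gate are simplifications, not the published algorithm's exact statistics:

```python
# One gated-LMS update: correct the frame, form a desired (locally smoothed)
# image, and update gain/offset only at pixels with enough temporal variation,
# which suppresses ghosting (scene burn-in) in static regions.
import numpy as np
from scipy.ndimage import uniform_filter

def gated_lms_step(frame, prev_frame, gain, offset, mu=1e-3, gate=2.0):
    corrected = gain * frame + offset
    desired = uniform_filter(corrected, size=7)
    error = corrected - desired
    moving = np.abs(frame - prev_frame) > gate   # the gating operation
    gain = gain - mu * error * frame * moving
    offset = offset - mu * error * moving
    return corrected, gain, offset
```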
Automatic correction of dental artifacts in PET/MRI
Ladefoged, Claes N.; Andersen, Flemming L.; Keller, Sune. H.; Beyer, Thomas; Law, Ian; Højgaard, Liselotte; Darkner, Sune; Lauze, Francois
2015-01-01
A challenge when using current magnetic resonance (MR)-based attenuation correction in positron emission tomography/MR imaging (PET/MRI) is that the MRIs can have a signal void around the dental fillings that is segmented as artificial air-regions in the attenuation map. For artifacts connected to the background, we propose an extension to an existing active contour algorithm to delineate the outer contour using the nonattenuation corrected PET image and the original attenuation map. We propose a combination of two different methods for differentiating the artifacts within the body from the anatomical air-regions by first using a template of artifact regions, and second, representing the artifact regions with a combination of active shape models and k-nearest-neighbors. The accuracy of the combined method has been evaluated using 25 F18-fluorodeoxyglucose PET/MR patients. Results showed that the approach was able to correct an average of 97±3% of the artifact areas. PMID:26158104
Biometrics encryption combining palmprint with two-layer error correction codes
NASA Astrophysics Data System (ADS)
Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang
2017-07-01
To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometric encryption method is proposed that combines palmprints with two-layer error correction codes. First, randomly generated original keys are encoded by convolutional and cyclic two-layer coding: the first layer uses a convolutional code to correct burst errors, and the second layer uses a cyclic code to correct random errors. Palmprint features are then extracted from the palmprint images, and the encoded keys and the features are fused by an XOR operation; the resulting information is stored in a smart card. Finally, the original keys are recovered by XORing the information in the smart card with the user's palmprint features and decoding the result with the convolutional and cyclic two-layer code. The experimental results and security analysis show that the method can recover the original keys completely. The proposed method is more secure than a single password factor and has higher accuracy than a single biometric factor.
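The XOR binding at the heart of the scheme can be sketched as follows, with a simple repetition code standing in for the convolutional plus cyclic two-layer code and binary palmprint features assumed as a bit vector:

```python
# Fuzzy-commitment-style binding: the encoded key XORed with the features is
# stored; a close-enough query feature vector lets the code correct the
# residual errors and recover the key.
import numpy as np

def enroll(key_bits, features, rep=5):
    encoded = np.repeat(key_bits.astype(np.uint8), rep)  # redundancy layer
    return encoded ^ features                            # stored on the card

def recover(stored, features_query, rep=5):
    noisy = stored ^ features_query       # bit errors where features differ
    votes = noisy.reshape(-1, rep).sum(axis=1)
    return (votes > rep // 2).astype(np.uint8)           # majority decode
```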
Model-based sensor-less wavefront aberration correction in optical coherence tomography.
Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel
2015-12-15
Several sensor-less wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method takes advantage of the well-known NEWUOA optimization algorithm and utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method significantly outperforms the NEWUOA method. The DONE algorithm is tested on OCT images and shows a significantly improved image quality.
NASA Astrophysics Data System (ADS)
Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan
2017-10-01
An optimized method is proposed, based on a smoothing spectral function, to calculate the error correction capability of a tool influence function (TIF) under given polishing conditions. The basic mathematical model for the method is established in theory, and a set of polishing experimental data obtained with a rigid conformal tool is used to validate it. The calculated results quantitatively indicate the error correction capability of the TIF for errors of different spatial frequencies under the given polishing conditions. A comparative analysis with the previous method shows that the optimized method is simpler in form and achieves the same accuracy with less computation time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saha, K; Barbarits, J; Humenik, R
Purpose: Chang’s mathematical formulation is a common method of attenuation correction applied to reconstructed Jaszczak phantom images. Though Chang’s attenuation correction method has been used for 360° angle acquisition, its applicability to 180° angle acquisition remains in question, with one vendor’s camera software producing artifacts. The objective of this work is to ensure that Chang’s attenuation correction technique can be applied to reconstructed Jaszczak phantom images acquired in both 360° and 180° modes. Methods: The Jaszczak phantom, filled with 20 mCi of diluted Tc-99m, was placed on the patient table of Siemens e.cam™ (n = 2) and Siemens Symbia™ (n = 1) dual head gamma cameras, centered in both the lateral and axial directions. A total of 3 scans were done in the 180° orbit acquisition mode and 2 scans in the 360° mode, with thirty-two million counts acquired for both modes. Reconstruction of the projection data was performed using filtered back projection smoothed with a pre-reconstruction Butterworth filter (order: 6, cutoff: 0.55). Reconstructed transaxial slices were attenuation corrected by Chang’s attenuation correction technique as implemented in the camera software. Corrections were also done using a modified technique in which the photon path lengths for all possible attenuation paths through a pixel in the image space were added to estimate the corresponding attenuation factor; the inverse of the attenuation factor was used to correct the attenuated pixel counts. Results: Comparable uniformity and noise were observed for 360° acquired phantom images attenuation corrected by the vendor technique (28.3% and 7.9%) and the proposed technique (26.8% and 8.4%). The difference in uniformity for 180° acquisition between the proposed technique (22.6% and 6.8%) and the vendor technique (57.6% and 30.1%) was more substantial. Conclusion: Assessment of attenuation correction performance by phantom uniformity analysis showed improved uniformity with the proposed algorithm compared to the camera software.
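For orientation, a first-order Chang correction-factor map for a uniform circular object can be sketched as below; the analytic chord lengths, radius, and attenuation value are illustrative, and the modified technique described above would instead accumulate photon path lengths over all attenuation paths per pixel:

```python
# Chang factors: inverse of the mean attenuation over the acquisition angles;
# a 180-degree orbit simply uses a restricted angle set.
import numpy as np

def chang_factors(n, R, mu, angles):
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    inside = x ** 2 + y ** 2 <= R ** 2
    acc = np.zeros((n, n))
    for th in angles:
        cx, cy = np.cos(th), np.sin(th)
        b = x * cx + y * cy                       # along-ray coordinate
        perp2 = (x * cy - y * cx) ** 2            # squared distance to ray
        L = np.sqrt(np.maximum(R ** 2 - perp2, 0.0)) - b   # chord to boundary
        acc += np.exp(-mu * np.maximum(L, 0.0))
    factors = len(angles) / np.maximum(acc, 1e-12)
    return np.where(inside, factors, 1.0)

angles_360 = np.linspace(0, 2 * np.pi, 64, endpoint=False)
```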
Yao, Tao; Yin, Shi-Min; Xiangli, Bin; Lü, Qun-Bo
2010-06-01
Based on an in-depth analysis of the relative radiation scaling theorem and on scaling data acquired for pixel response nonuniformity correction of the CCD (charge-coupled device) in a spaceborne visible interferential imaging spectrometer, a CCD pixel response nonuniformity correction method adapted to visible and infrared interferential imaging spectrometer systems was developed, effectively resolving the engineering problem of nonuniformity correction in detector arrays for such systems. The quantitative impact of CCD nonuniformity on interferogram correction and recovered spectrum accuracy is also given. Furthermore, an improved method is proposed in which calibration and nonuniformity correction are performed after the instrument is assembled, saving time and manpower. It can correct nonuniformity arising from sources in the spectrometer system other than the CCD itself, can acquire recalibration data when the working environment changes, and can more effectively improve the nonuniformity calibration accuracy of interferential imaging spectrometer systems.
NASA Astrophysics Data System (ADS)
Tan, Xiangli; Yang, Jungang; Deng, Xinpu
2018-04-01
In the process of geometric correction of remote sensing images, a large number of redundant control points can occasionally result in low correction accuracy. In order to solve this problem, a control points filtering algorithm based on RANdom SAmple Consensus (RANSAC) was proposed. The basic idea of the RANSAC algorithm is to use the smallest possible data set to estimate the model parameters and then enlarge this set with consistent data points. In this paper, unlike traditional methods of geometric correction using Ground Control Points (GCPs), simulation experiments are carried out to correct remote sensing images using visible stars as control points. In addition, the accuracy of geometric correction without Star Control Point (SCP) optimization is also shown. The experimental results show that the SCP filtering method based on the RANSAC algorithm greatly improves the accuracy of remote sensing image correction.
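A minimal sketch of RANSAC-based control point filtering for an affine correction model follows; the residual threshold and iteration count are illustrative:

```python
# Fit an affine map src -> dst from minimal 3-point samples, keep the largest
# consensus set, and refit on the surviving (filtered) control points.
import numpy as np

def ransac_affine(src, dst, n_iter=500, tol=1.5, seed=0):
    rng = np.random.default_rng(seed)
    A = np.hstack([src, np.ones((len(src), 1))])
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)   # minimal sample
        M, *_ = np.linalg.lstsq(A[idx], dst[idx], rcond=None)
        resid = np.linalg.norm(A @ M - dst, axis=1)
        inliers = resid < tol                          # consensus set
        if inliers.sum() > best.sum():
            best = inliers
    M, *_ = np.linalg.lstsq(A[best], dst[best], rcond=None)
    return M, best
```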
An Ensemble Method for Spelling Correction in Consumer Health Questions
Kilicoglu, Halil; Fiszman, Marcelo; Roberts, Kirk; Demner-Fushman, Dina
2015-01-01
Orthographic and grammatical errors are a common feature of informal texts written by lay people. Health-related questions asked by consumers are a case in point. Automatic interpretation of consumer health questions is hampered by such errors. In this paper, we propose a method that combines techniques based on edit distance and frequency counts with a contextual similarity-based method for detecting and correcting orthographic errors, including misspellings, word breaks, and punctuation errors. We evaluate our method on a set of spell-corrected questions extracted from the NLM collection of consumer health questions. Our method achieves an F1 score of 0.61, compared to an informed baseline of 0.29 achieved using ESpell, a spelling correction system developed for biomedical queries. Our results show that orthographic similarity is most relevant for spelling error correction in consumer health questions and that frequency and contextual information are complementary to orthographic features. PMID:26958208
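The edit-distance and frequency-count components can be sketched along the following lines (the contextual similarity component is omitted, and `lexicon`, a word-to-frequency map, is an assumed input):

```python
# Generate edit-distance-1 candidates and rank known candidates by frequency.
def edits1(word):
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(word, lexicon):
    if word in lexicon:
        return word
    candidates = [w for w in edits1(word) if w in lexicon]
    return max(candidates, key=lexicon.get) if candidates else word
```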
A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging
Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.
2014-01-01
Purpose: Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher order modeling of the diffusion weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods: Weighted linear least squares (WLLS) fitting together with a voxel-wise, subtraction-based noise correction from multiple, independent acquisitions are employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain for DKI-derived parameter estimates. Results: As long as the signal-to-noise ratio (SNR) for the most heavily diffusion weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion: The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
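A simplified reading of the subtraction-based idea can be sketched as follows; it assumes two independent magnitude acquisitions per diffusion weighting and the high-SNR Rician power-bias model, and it is not the authors' exact estimator:

```python
# Estimate the voxel-wise noise power from the difference of two repeats and
# remove the Rician power bias (2*sigma^2) before WLLS fitting.
import numpy as np

def noise_corrected_signal(S1, S2):
    sigma2 = 0.5 * (S1 - S2) ** 2          # voxel-wise noise power estimate
    power = 0.5 * (S1 ** 2 + S2 ** 2)      # averaged signal power
    return np.sqrt(np.maximum(power - 2.0 * sigma2, 0.0))
```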
NASA Astrophysics Data System (ADS)
Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.
2009-10-01
Due to inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity nonuniformity, shading, or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on segmentation results. Segmentation and shading correction are coupled: we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes, the background and the cells, where the intensity variation within each class is close to zero if there is no shading. We make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm that minimizes the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It performs well not only under visual inspection but also in numerical evaluation. The proposed method should be useful for further quantitative analysis, especially for protein expression value comparison.
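The alternating segmentation/correction loop might be sketched as below; the Otsu two-class segmentation, the Gaussian smoothness assumption on the multiplicative field, and the iteration count are assumptions for illustration (requires numpy, scipy, and scikit-image).

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import threshold_otsu

def correct_shading(img, n_iter=5, sigma=50):
    """Alternate two-class segmentation and multiplicative shading estimation;
    sigma sets the assumed smoothness of the shading field (a sketch, not the
    authors' fast minimization algorithm)."""
    corrected = img.astype(float)
    for _ in range(n_iter):
        t = threshold_otsu(corrected)
        cells = corrected > t
        # piecewise-constant "ideal" image from the two class means
        ideal = np.where(cells, corrected[cells].mean(), corrected[~cells].mean())
        shading = gaussian_filter(corrected / ideal, sigma)  # smooth multiplicative field
        corrected = corrected / shading
    return corrected
```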
Accelerating EPI distortion correction by utilizing a modern GPU-based parallel computation.
Yang, Yao-Hao; Huang, Teng-Yi; Wang, Fu-Nien; Chuang, Tzu-Chao; Chen, Nan-Kuei
2013-04-01
The combination of phase demodulation and field mapping is a practical method to correct echo planar imaging (EPI) geometric distortion. However, since phase dispersion accumulates in each phase-encoding step, the calculation complexity of phase demodulation is Ny-fold higher than conventional image reconstructions. Thus, correcting EPI images via phase demodulation is generally a time-consuming task. Parallel computing employing general-purpose calculations on graphics processing units (GPU) can accelerate scientific computing if the algorithm is parallelized. This study proposes a method that incorporates the GPU-based technique into phase demodulation calculations to reduce computation time. The proposed parallel algorithm was applied to a PROPELLER-EPI diffusion tensor data set. The GPU-based phase demodulation method effectively reduced the EPI distortion and accelerated the computation. The total reconstruction time of the 16-slice PROPELLER-EPI diffusion tensor images with matrix size of 128 × 128 was reduced from 1,754 seconds to 101 seconds by utilizing the parallelized 4-GPU program. GPU computing is a promising method to accelerate EPI geometric correction. The resulting reduction in computation time should accelerate postprocessing for studies performed with EPI, and should make the PROPELLER-EPI technique practical for clinical use. Copyright © 2011 by the American Society of Neuroimaging.
NASA Astrophysics Data System (ADS)
Clements, Logan W.; Collins, Jarrod A.; Wu, Yifei; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.
2015-03-01
Soft tissue deformation represents a significant error source in current surgical navigation systems used for open hepatic procedures. While numerous algorithms have been proposed to rectify the tissue deformation that is encountered during open liver surgery, clinical validation of the proposed methods has been limited to surface based metrics and sub-surface validation has largely been performed via phantom experiments. Tracked intraoperative ultrasound (iUS) provides a means to digitize sub-surface anatomical landmarks during clinical procedures. The proposed method involves the validation of a deformation correction algorithm for open hepatic image-guided surgery systems via sub-surface targets digitized with tracked iUS. Intraoperative surface digitizations were acquired via a laser range scanner and an optically tracked stylus for the purposes of computing the physical-to-image space registration within the guidance system and for use in retrospective deformation correction. Upon completion of surface digitization, the organ was interrogated with a tracked iUS transducer where the iUS images and corresponding tracked locations were recorded. After the procedure, the clinician reviewed the iUS images to delineate contours of anatomical target features for use in the validation procedure. Mean closest point distances between the feature contours delineated in the iUS images and corresponding 3-D anatomical model generated from the preoperative tomograms were computed to quantify the extent to which the deformation correction algorithm improved registration accuracy. The preliminary results for two patients indicate that the deformation correction method resulted in a reduction in target error of approximately 50%.
NASA Astrophysics Data System (ADS)
Xu, Chunmei; Huang, Fu-yu; Yin, Jian-ling; Chen, Yu-dan; Mao, Shao-juan
2016-10-01
The influence of aberration on the misalignment of an optical system is fully considered, the deficiencies of the Gaussian optics correction method are pointed out, and a correction method for transmission-type misaligned optical systems is proposed based on aberration theory. The variation of single-lens aberration caused by axial displacement is analyzed, and the aberration effect is defined. On this basis, by calculating the lens adjustment induced by the image position error and the magnification error, the misalignment correction formula under aberration constraints is derived mathematically. Taking a three-lens collimation system as an example, a test is carried out to validate the method, and its superiority is demonstrated.
Ahlgren, André; Wirestam, Ronnie; Petersen, Esben Thade; Ståhlberg, Freddy; Knutsson, Linda
2014-09-01
Quantitative perfusion MRI based on arterial spin labeling (ASL) is hampered by partial volume effects (PVEs), arising due to voxel signal cross-contamination between different compartments. To address this issue, several partial volume correction (PVC) methods have been presented. Most previous methods rely on segmentation of a high-resolution T1-weighted morphological image volume that is coregistered to the low-resolution ASL data, making the result sensitive to errors in the segmentation and coregistration. In this work, we present a methodology for partial volume estimation and correction, using only low-resolution ASL data acquired with the QUASAR sequence. The methodology consists of a T1-based segmentation method, with no spatial priors, and a modified PVC method based on linear regression. The presented approach thus avoids prior assumptions about the spatial distribution of brain compartments, while also avoiding coregistration between different image volumes. Simulations based on a digital phantom as well as in vivo measurements in 10 volunteers were used to assess the performance of the proposed segmentation approach. The simulation results indicated that QUASAR data can be used for robust partial volume estimation, and this was confirmed by the in vivo experiments. The proposed PVC method yielded plausible perfusion maps, comparable to a reference method based on segmentation of a high-resolution morphological scan. Corrected gray matter (GM) perfusion was 47% higher than uncorrected values, suggesting a significant amount of PVEs in the data. Whereas the reference method failed to completely eliminate the dependence of perfusion estimates on the volume fraction, the novel approach produced GM perfusion values independent of GM volume fraction. The intra-subject coefficient of variation of corrected perfusion values was lowest for the proposed PVC method. As shown in this work, low-resolution partial volume estimation in connection with ASL perfusion estimation is feasible, and provides a promising tool for decoupling perfusion and tissue volume. Copyright © 2014 John Wiley & Sons, Ltd.
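A simplified sketch of linear-regression partial volume correction in the spirit described above (one perfusion map, square window; the window size and absence of a brain mask are illustrative simplifications):

```python
import numpy as np

def pv_correct(perf, p_gm, p_wm, win=5):
    """Within each win x win neighbourhood, solve perf ~ pGM*mGM + pWM*mWM
    for the pure gray- and white-matter components; return the GM map.
    In practice one would restrict to voxels with sufficient GM fraction."""
    h = win // 2
    m_gm = np.zeros_like(perf, dtype=float)
    ny, nx = perf.shape
    for y in range(ny):
        for x in range(nx):
            ys = slice(max(0, y - h), y + h + 1)
            xs = slice(max(0, x - h), x + h + 1)
            P = np.column_stack([p_gm[ys, xs].ravel(), p_wm[ys, xs].ravel()])
            b = perf[ys, xs].ravel()
            coef, *_ = np.linalg.lstsq(P, b, rcond=None)
            m_gm[y, x] = coef[0]          # GM perfusion estimate at this voxel
    return m_gm
```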
Bias correction for magnetic resonance images via joint entropy regularization.
Wang, Shanshan; Xia, Yong; Dong, Pei; Luo, Jianhua; Huang, Qiu; Feng, Dagan; Li, Yuanxiang
2014-01-01
Due to imperfections of the radio frequency (RF) coil or object-dependent electrodynamic interactions, magnetic resonance (MR) images often suffer from a smooth and biologically meaningless bias field, which causes severe problems for subsequent processing and quantitative analysis. To effectively restore the original signal, this paper simultaneously exploits the spatial and gradient features of the corrupted MR images for bias correction via joint entropy regularization. With both isotropic and anisotropic total variation (TV) considered, two nonparametric bias correction algorithms are proposed, namely IsoTVBiasC and AniTVBiasC. These two methods have been applied to simulated images under various noise levels and bias field corruption and also tested on real MR data. The test results show that the proposed methods can effectively remove the bias field and perform comparably to state-of-the-art methods.
B1- non-uniformity correction of phased-array coils without measuring coil sensitivity.
Damen, Frederick C; Cai, Kejia
2018-04-18
Parallel imaging can be used to increase SNR and shorten acquisition times, albeit at the cost of image non-uniformity. B1- non-uniformity correction techniques are confounded by signal that varies not only due to coil-induced B1- sensitivity variation but also due to the object's own intrinsic signal. Herein, we propose a method that makes minimal assumptions and uses only the coil images themselves to produce a single combined B1- non-uniformity-corrected complex image with the highest available SNR. A novel background noise classifier is used to select voxels of sufficient quality to avoid the need for regularization. Unique properties of the magnitude and phase were used to reduce the B1- sensitivity to two joint additive models for estimation of the B1- inhomogeneity. The complementary corruption of the imaged object across the coil images is used to abate individual coil correction imperfections. Results are presented from two anatomical cases: (a) an abdominal image that is challenging in both extreme B1- sensitivity and intrinsic tissue signal variation, and (b) a brain image with moderate B1- sensitivity and intrinsic tissue signal variation. A new relative Signal-to-Noise Ratio (rSNR) quality metric is proposed to evaluate the performance of the proposed method and the RF receiving coil array. The proposed method has been shown to be robust to imaged objects with widely inhomogeneous intrinsic signal, and resilient to poorly performing coil elements. Copyright © 2018. Published by Elsevier Inc.
Discussion on Boiler Efficiency Correction Method with Low Temperature Economizer-Air Heater System
NASA Astrophysics Data System (ADS)
Ke, Liu; Xing-sen, Yang; Fan-jun, Hou; Zhi-hong, Hu
2017-05-01
This paper points out that it is wrong to take the outlet flue gas temperature of the low temperature economizer as the exhaust gas temperature in boiler efficiency calculations based on GB10184-1988. Furthermore, it proposes a new correction method that decomposes the low temperature economizer-air heater system into two hypothetical parts, an air preheater and a pre-condensed water heater, and takes the equivalent outlet gas temperature of the air preheater as the exhaust gas temperature in the boiler efficiency calculation. This method makes the boiler efficiency calculation more concise, with no air heater correction, and provides a useful reference for handling this kind of problem correctly.
Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation
NASA Astrophysics Data System (ADS)
Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong
2018-04-01
The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.
Scene-based nonuniformity correction for airborne point target detection systems.
Zhou, Dabiao; Wang, Dejiang; Huo, Lijun; Liu, Rang; Jia, Ping
2017-06-26
Images acquired by airborne infrared search and track (IRST) systems are often characterized by nonuniform noise. In this paper, a scene-based nonuniformity correction method for infrared focal-plane arrays (FPAs) is proposed based on the constant statistics of the received radiation ratios of adjacent pixels. The gain of each pixel is computed recursively based on the ratios between adjacent pixels, which are estimated through a median operation. Then, an elaborate mathematical model describing the error propagation, derived from random noise and the recursive calculation procedure, is established. The proposed method maintains the characteristics of traditional methods in calibrating the whole electro-optics chain, in compensating for temporal drifts, and in not preserving the radiometric accuracy of the system. Moreover, the proposed method is robust since the frame number is the only variant, and is suitable for real-time applications owing to its low computational complexity and simplicity of implementation. The experimental results, on different scenes from a proof-of-concept point target detection system with a long-wave Sofradir FPA, demonstrate the compelling performance of the proposed method.
Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks
Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng
2017-01-01
High throughput, low latency and reliable communication has always been a hot topic for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the adverse effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receiver consists of three stages. First, a differential characteristics function is presented based on the optimal multiuser detection decision function; then, on the basis of the differential characteristics, a preliminary threshold detection is used to find potentially wrongly received bits; after that, an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection, and error bit correction described above are executed iteratively. Simulation results show that after only a few iterations the proposed multiuser detection method achieves satisfactory BER performance. In addition, its BER and near-far resistance performance are much better than those of traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328
Color transfer method preserving perceived lightness
NASA Astrophysics Data System (ADS)
Ueda, Chiaki; Azetsu, Tadahiro; Suetake, Noriaki; Uchino, Eiji
2016-06-01
Color transfer originally proposed by Reinhard et al. is a method to change the color appearance of an input image by using the color information of a reference image. The purpose of this study is to modify color transfer so that it works well even when the scenes of the input and reference images are not similar. Concretely, a color transfer method with lightness correction and color gamut adjustment is proposed. The lightness correction is applied to preserve the perceived lightness which is explained by the Helmholtz-Kohlrausch (H-K) effect. This effect is the phenomenon that vivid colors are perceived as brighter than dull colors with the same lightness. Hence, when the chroma is changed by image processing, the perceived lightness is also changed even if the physical lightness is preserved after the image processing. In the proposed method, by considering the H-K effect, color transfer that preserves the perceived lightness after processing is realized. Furthermore, color gamut adjustment is introduced to address the color gamut problem, which is caused by color space conversion. The effectiveness of the proposed method is verified by performing some experiments.
Harmonic source wavefront aberration correction for ultrasound imaging
Dianis, Scott W.; von Ramm, Olaf T.
2011-01-01
A method is proposed which uses a lower-frequency transmit to create a known harmonic acoustical source in tissue suitable for wavefront correction, without a priori assumptions about the target and without requiring a transponder. The measurement and imaging steps of this method were implemented on the Duke phased array system with a two-dimensional (2-D) array. The method was tested with multiple electronic aberrators (0.39π to 1.16π radians root-mean-square (rms) at 4.17 MHz) and with a physical aberrator (0.17π radians rms at 4.17 MHz) in a variety of imaging situations. Corrections were quantified in terms of peak beam amplitude compared to the unaberrated case, restoring between 0.6 and 36.6 dB of peak amplitude with a single correction. Standard phantom images before and after correction were obtained and showed both visible improvement and 14 dB contrast improvement after correction. This method, when combined with previous phase correction methods, may be an important step toward improved clinical images. PMID:21303031
Total variation approach for adaptive nonuniformity correction in focal-plane arrays.
Vera, Esteban; Meza, Pablo; Torres, Sergio
2011-01-15
In this Letter we propose an adaptive scene-based nonuniformity correction method for fixed-pattern noise removal in imaging arrays. It is based on minimizing the total variation of the estimated irradiance, and the resulting function is optimized by an isotropic total variation approach making use of an alternating minimization strategy. The proposed method provides enhanced results when applied to a diverse set of real IR imagery, accurately estimating the nonuniformity parameters of each detector in the focal-plane array at a fast convergence rate while forming fewer ghosting artifacts.
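The idea of estimating fixed-pattern parameters by minimizing the total variation of the corrected irradiance can be sketched with plain gradient descent on per-detector offsets; the smoothing constant, step size, and offset-only model are assumptions, and the paper's actual alternating minimization over gain and offset is more elaborate.

```python
import numpy as np

def tv_gradient(x, eps=1e-3):
    """Gradient of the smoothed isotropic TV functional at image x."""
    gx = np.zeros_like(x); gy = np.zeros_like(x)
    gx[:, :-1] = np.diff(x, axis=1)
    gy[:-1, :] = np.diff(x, axis=0)
    mag = np.sqrt(gx**2 + gy**2 + eps)
    px, py = gx / mag, gy / mag
    d = np.zeros_like(x)                  # adjoint (negative divergence)
    d[:, :-1] -= px[:, :-1]; d[:, 1:] += px[:, :-1]
    d[:-1, :] -= py[:-1, :]; d[1:, :] += py[:-1, :]
    return d                              # = dTV/dx

def estimate_offsets(frames, lr=0.1, n_iter=50):
    """Per-detector additive offsets minimizing the TV of the corrected frames."""
    o = np.zeros_like(frames[0])
    for _ in range(n_iter):
        g = np.zeros_like(o)
        for y in frames:
            g += -tv_gradient(y - o)      # dTV(y - o)/do = -dTV/dx
        o -= lr * g / len(frames)
    return o
```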
Wang, Qiong-Hua; Li, Xiao-Fang; Zhou, Lei; Wang, Ai-Hong; Li, Da-Hai
2011-03-01
A method is proposed to alleviate the cross talk in multiview autostereoscopic three-dimensional displays based on a lenticular sheet. We analyze the positional relationship between subpixels on the image panel and the lenticular sheet. According to this relationship, optimal synthetic images are synthesized to minimize cross talk by correcting the positions of subpixels on the image panel. Experimental results show that the proposed method significantly reduces the cross talk of view images and improves the quality of stereoscopic images. © 2010 Optical Society of America
Warped document image correction method based on heterogeneous registration strategies
NASA Astrophysics Data System (ADS)
Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan
2013-03-01
With the popularity of digital cameras and the growing need to digitize document images, using digital cameras for document digitization has become an irresistible trend. However, warping of the document surface seriously degrades the quality of Optical Character Recognition (OCR). To improve the visual quality and OCR rate of warped document images, this paper proposes a correction method based on heterogeneous registration strategies, which mosaics two warped images of the same document taken from different viewpoints. First, two feature points are selected from one image. Then the two feature points are registered in the other image based on heterogeneous registration strategies. Finally, the two images are mosaicked, and the best mosaicked image is selected according to the OCR results. As a result, distortions in the best mosaicked image are mostly removed and the OCR results are improved markedly. Experimental results show that the proposed method resolves the warped document image correction problem more effectively.
The power-proportion method for intracranial volume correction in volumetric imaging analysis.
Liu, Dawei; Johnson, Hans J; Long, Jeffrey D; Magnotta, Vincent A; Paulsen, Jane S
2014-01-01
In volumetric brain imaging analysis, volumes of brain structures are typically assumed to be proportional or linearly related to intracranial volume (ICV). However, evidence abounds that many brain structures have power law relationships with ICV. To take this relationship into account in volumetric imaging analysis, we propose a power law based method, the power-proportion method, for ICV correction. The performance of the new method is demonstrated using data from the PREDICT-HD study.
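The power-proportion idea reduces to a two-line fit; a hedged numpy sketch, with the geometric-mean reference ICV as an assumed convention:

```python
import numpy as np

def power_proportion_correct(volumes, icv):
    """Fit log(volume) = a + b*log(ICV) across subjects, then rescale each
    structure volume to a common reference ICV using the fitted power b."""
    b, a = np.polyfit(np.log(icv), np.log(volumes), 1)   # slope b, intercept a
    icv_ref = np.exp(np.mean(np.log(icv)))               # geometric-mean reference
    return volumes * (icv_ref / icv) ** b
```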
A simplified focusing and astigmatism correction method for a scanning electron microscope
NASA Astrophysics Data System (ADS)
Lu, Yihua; Zhang, Xianmin; Li, Hai
2018-01-01
Defocus and astigmatism can lead to blurred images and poor resolution. This paper presents a simplified method for focusing and astigmatism correction of a scanning electron microscope (SEM). The method consists of two steps. In the first step, the fast Fourier transform (FFT) of the SEM image is performed and the FFT is subsequently processed with a threshold to achieve a suitable result. In the second step, the threshold FFT is used for ellipse fitting to determine the presence of defocus and astigmatism. The proposed method clearly provides the relationships between the defocus, the astigmatism and the direction of stretching of the FFT, and it can determine the astigmatism in a single image. Experimental studies are conducted to demonstrate the validity of the proposed method.
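The two-step procedure might look roughly as follows in Python: threshold the log-magnitude FFT, then read the astigmatism off the second moments of the surviving points (the quantile threshold and moment-based ellipse fit are illustrative stand-ins for the paper's exact fitting).

```python
import numpy as np

def astigmatism_from_fft(img, frac=0.9):
    """Return (axis ratio, orientation in degrees) of an ellipse fitted to the
    thresholded FFT magnitude; a ratio near 1.0 suggests no astigmatism, and
    the orientation indicates the direction of stretching."""
    f = np.fft.fftshift(np.abs(np.fft.fft2(img)))
    logf = np.log1p(f)
    mask = logf > np.quantile(logf, frac)        # keep the brightest fraction
    ys, xs = np.nonzero(mask)
    ys = ys - ys.mean(); xs = xs - xs.mean()
    cov = np.cov(np.vstack([xs, ys]))            # second moments of the blob
    evals, evecs = np.linalg.eigh(cov)           # ascending eigenvalues
    axis_ratio = np.sqrt(evals[1] / evals[0])
    angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))
    return axis_ratio, angle
```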
Adaptive radial basis function mesh deformation using data reduction
NASA Astrophysics Data System (ADS)
Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.
2016-09-01
Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction results in an efficient method, as shown in the literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time, and 2) how to use/automate the explicit boundary correction while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method which ensures the set of control points always represents the geometry/displacement up to a certain (user-specified) criterion, by keeping track of the boundary error throughout the simulation and re-selecting when needed. Compared with the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient for the cases considered. Secondly, the analysis of a single high aspect ratio cell is used to formulate an equation for the required correction radius, depending on the characteristics of the correction function used, the maximum aspect ratio, the minimum first cell height and the boundary error. Based on this analysis, two new radial basis correction functions are derived and proposed. The automated procedure is verified while varying the correction function, the Reynolds number (and thus first cell height and aspect ratio) and the boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement, for both the CPU and the memory formulations, with a 2D oscillating and translating airfoil with oscillating flap, a 3D flexible locally deforming tube and a deforming wind turbine blade. Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBFs), but its parallel efficiency is reduced by the limited bandwidth available between CPU and memory. In terms of parallel efficiency/scaling the studied methods perform similarly, with the greedy algorithm being the bottleneck. In terms of absolute computational work, the adaptive methods are better for the cases studied due to their more efficient selection of control points. By automating most of the RBF mesh deformation, a robust, efficient and almost user-independent mesh deformation method is presented.
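A compact sketch of greedy control-point selection for RBF interpolation, for one displacement component; the Wendland C2 basis, tolerance, and seeding rule are assumptions, not the authors' exact choices.

```python
import numpy as np

def rbf_phi(r, radius=1.0):
    """Compactly supported Wendland C2 basis function."""
    x = np.clip(r / radius, 0.0, 1.0)
    return (1 - x) ** 4 * (4 * x + 1)

def greedy_select(pts, disp, tol=1e-4, radius=1.0):
    """Grow the control-point set until the largest boundary interpolation
    error drops below tol; returns selected indices and RBF weights."""
    sel = [int(np.argmax(np.abs(disp)))]         # seed with the largest motion
    while True:
        P = rbf_phi(np.linalg.norm(pts[sel][:, None] - pts[sel][None], axis=-1), radius)
        w = np.linalg.solve(P, disp[sel])        # interpolate at control points
        pred = rbf_phi(np.linalg.norm(pts[:, None] - pts[sel][None], axis=-1), radius) @ w
        err = np.abs(pred - disp)                # boundary error, tracked each step
        worst = int(np.argmax(err))
        if err[worst] < tol or worst in sel:
            return sel, w
        sel.append(worst)                        # re-select where the error is worst
```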
NASA Astrophysics Data System (ADS)
Wei, David Wei; Deegan, Anthony J.; Wang, Ruikang K.
2017-06-01
When using optical coherence tomography angiography (OCTA), the development of artifacts due to involuntary movements can severely compromise the visualization and subsequent quantitation of tissue microvasculatures. To correct such an occurrence, we propose a motion compensation method to eliminate artifacts from human skin OCTA by means of step-by-step rigid affine registration, rigid subpixel registration, and nonrigid B-spline registration. To accommodate this remedial process, OCTA is conducted using two matching all-depth volume scans. Affine transformation is first performed on the large vessels of the deep reticular dermis, and then the resulting affine parameters are applied to all-depth vasculatures with a further subpixel registration to refine the alignment between superficial smaller vessels. Finally, the coregistration of both volumes is carried out to result in the final artifact-free composite image via an algorithm based upon cubic B-spline free-form deformation. We demonstrate that the proposed method can provide a considerable improvement to the final en face OCTA images with substantial artifact removal. In addition, the correlation coefficients and peak signal-to-noise ratios of the corrected images are evaluated and compared with those of the original images, further validating the effectiveness of the proposed method. We expect that the proposed method can be useful in improving qualitative and quantitative assessment of the OCTA images of scanned tissue beds.
A Simulation Study on Methods of Correcting for the Effects of Extreme Response Style
ERIC Educational Resources Information Center
Wetzel, Eunike; Böhnke, Jan R.; Rose, Norman
2016-01-01
The impact of response styles such as extreme response style (ERS) on trait estimation has long been a matter of concern to researchers and practitioners. This simulation study investigated three methods that have been proposed for the correction of trait estimates for ERS effects: (a) mixed Rasch models, (b) multidimensional item response models,…
Wang, Bo; Bao, Jianwei; Wang, Shikui; Wang, Houjun; Sheng, Qinghong
2017-01-01
Remote sensing images can provide tremendous quantities of large-scale information. Noise artifacts (stripes), however, make the images unsuitable for visualization and batch processing. An effective restoration method makes images ready for further analysis. In this paper, a new method is proposed to correct the stripes and bad abnormal pixels in charge-coupled device (CCD) linear array images. The method uses line tracing to limit the location of noise to a rectangular region and corrects abnormal pixels with the Lagrange polynomial algorithm. The proposed detection and restoration method was applied to Gaofen-1 satellite (GF-1) images, and its performance was evaluated by the omission ratio and the false detection ratio, which reached 0.6% and 0%, respectively. The method saved 55.9% of the time compared with the traditional method. PMID:28441754
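Correcting a flagged pixel from its nearest good neighbours by Lagrange-type polynomial interpolation might look like this (the four-neighbour, degree-3 choice is an assumption):

```python
import numpy as np

def lagrange_fill(row, bad):
    """Replace flagged pixels in a CCD scan line by a degree-3 polynomial
    through the four nearest good neighbours (exact interpolation, i.e. the
    Lagrange polynomial, when four points fit a cubic); 'bad' is a bool mask."""
    row = row.astype(float).copy()
    good = np.flatnonzero(~bad)
    for i in np.flatnonzero(bad):
        nearest = np.sort(good[np.argsort(np.abs(good - i))[:4]])
        poly = np.polynomial.Polynomial.fit(nearest, row[nearest], deg=3)
        row[i] = poly(i)
    return row
```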
Fixed-pattern noise correction method based on improved moment matching for a TDI CMOS image sensor.
Xu, Jiangtao; Nie, Huafeng; Nie, Kaiming; Jin, Weimin
2017-09-01
In this paper, an improved moment matching method based on a spatial correlation filter (SCF) and bilateral filter (BF) is proposed to correct the fixed-pattern noise (FPN) of a time-delay-integration CMOS image sensor (TDI-CIS). First, the values of row FPN (RFPN) and column FPN (CFPN) are estimated and added to the original image through the SCF and BF, respectively. Then the filtered image is processed by an improved moment matching method with a moving window. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in an image captured under uniform illumination, the standard deviation of the row mean vector (SDRMV) decreases from 5.6761 LSB to 0.1948 LSB, while the standard deviation of the column mean vector (SDCMV) decreases from 15.2005 LSB to 13.1949 LSB. In addition, for different images captured by different TDI-CISs, the average decreases of SDRMV and SDCMV are 5.4922 LSB and 2.0357 LSB, respectively. Comparative experimental results indicate that the proposed method can effectively correct the FPN of different TDI-CISs while maintaining image details, without any auxiliary equipment.
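For reference, a minimal sketch of the column-wise moment matching baseline that the paper refines with SCF/BF pre-filtering; the moving-window reference is an assumed convention.

```python
import numpy as np

def moment_match_columns(img, window=31):
    """Map each column's mean/std onto a moving reference computed over
    neighbouring columns, suppressing column FPN in a linear-array image."""
    img = img.astype(float)
    mu, sd = img.mean(axis=0), img.std(axis=0) + 1e-6
    k = np.ones(window) / window
    mu_ref = np.convolve(mu, k, mode='same')   # moving-window reference moments
    sd_ref = np.convolve(sd, k, mode='same')
    return (img - mu) * (sd_ref / sd) + mu_ref
```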
Detection of defects on apple using B-spline lighting correction method
NASA Astrophysics Data System (ADS)
Li, Jiangbo; Huang, Wenqian; Guo, Zhiming
To effectively extract defective areas in fruits, the uneven intensity distribution produced by the lighting system or by parts of the vision system must be corrected. A methodology was used to convert the non-uniform intensity distribution on spherical objects into a uniform intensity distribution. Using the proposed algorithms, an essentially flat image was obtained in which defective areas have a lower gray level than the surrounding plane, so the defective areas can then be easily extracted by a global threshold value. Experimental results, with a 94.0% classification rate on 100 apple images, showed that the proposed algorithm is simple and effective. The proposed method can be applied to other spherical fruits.
Markerless attenuation correction for carotid MRI surface receiver coils in combined PET/MR imaging
NASA Astrophysics Data System (ADS)
Eldib, Mootaz; Bini, Jason; Robson, Philip M.; Calcagno, Claudia; Faul, David D.; Tsoumpas, Charalampos; Fayad, Zahi A.
2015-06-01
The purpose of the study was to evaluate the effect of attenuation of MR coils on quantitative carotid PET/MR exams. Additionally, an automated attenuation correction method for flexible carotid MR coils was developed and evaluated. The attenuation of the carotid coil was measured by imaging a uniform water phantom injected with 37 MBq of 18F-FDG in a combined PET/MR scanner for 24 min with and without the coil. In the same session, an ultra-short echo time (UTE) image of the coil on top of the phantom was acquired. Using a combination of rigid and non-rigid registration, a CT-based attenuation map was registered to the UTE image of the coil for attenuation and scatter correction. After phantom validation, the effect of the carotid coil attenuation and the attenuation correction method were evaluated in five subjects. Phantom studies indicated that the overall loss of PET counts due to the coil was 6.3% with local region-of-interest (ROI) errors reaching up to 18.8%. Our registration method to correct for attenuation from the coil decreased the global error and local error (ROI) to 0.8% and 3.8%, respectively. The proposed registration method accurately captured the location and shape of the coil with a maximum spatial error of 2.6 mm. Quantitative analysis in human studies correlated with the phantom findings, but was dependent on the size of the ROI used in the analysis. MR coils result in significant error in PET quantification and thus attenuation correction is needed. The proposed strategy provides an operator-free method for attenuation and scatter correction for a flexible MRI carotid surface coil for routine clinical use.
Joint channel/frequency offset estimation and correction for coherent optical FBMC/OQAM system
NASA Astrophysics Data System (ADS)
Wang, Daobin; Yuan, Lihua; Lei, Jingli; Wu, Gang; Li, Suoping; Ding, Runqi; Wang, Dongye
2017-12-01
In this paper, we focus on preamble-based joint estimation of the channel and the laser-frequency offset (LFO) in coherent optical filter bank multicarrier systems with offset quadrature amplitude modulation (CO-FBMC/OQAM). In order to reduce the impact of noise on estimation accuracy, we propose an estimation method based on inter-frame averaging, which averages the cross-correlation function of real-valued pilots over multiple FBMC frames. The laser-frequency offset is estimated from the phase of this average. After correcting the LFO, the final channel response is acquired by averaging the channel estimation results over multiple frames. The principle of the proposed method is analyzed theoretically, and the preamble structure is thoroughly designed and optimized to suppress the impact of inherent imaginary interference (IMI). The effectiveness of our method is demonstrated numerically using different fiber and LFO values. The results show that the proposed method can improve transmission performance significantly.
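The phase-of-averaged-correlation estimator can be sketched in a few lines; the pilot layout and parameter names below are illustrative, not the paper's preamble design.

```python
import numpy as np

def estimate_lfo(rx_frames, pilot_len, spacing, fs):
    """Estimate the frequency offset from the phase of the frame-averaged
    cross-correlation between two identical pilot blocks 'spacing' samples
    apart; rx_frames is an iterable of complex baseband frames, fs in Hz."""
    acc = 0j
    for r in rx_frames:                        # inter-frame averaging
        p1 = r[:pilot_len]
        p2 = r[spacing:spacing + pilot_len]
        acc += np.vdot(p1, p2)                 # sum of conj(p1) * p2
    # an offset df rotates p2 relative to p1 by exp(j*2*pi*df*spacing/fs)
    return np.angle(acc) * fs / (2 * np.pi * spacing)
```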
Comparison of four statistical and machine learning methods for crash severity prediction.
Iranitalab, Amirfarrokh; Khattak, Aemal
2017-11-01
Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes that may be expected to occur sometime in the future. This paper had three main objectives: comparison of the performance of four statistical and machine learning methods, including Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; development of a crash-costs-based approach for comparing crash severity prediction methods; and investigation of the effects of data clustering methods, comprising K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States was obtained, and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated on the training/estimation dataset, and the correct prediction rates for each crash severity level, the overall correct prediction rate, and a proposed crash-costs-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed that NNC had the best prediction performance overall and for more severe crashes. RF and SVM had the next best performances, and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC caused improvement in MNL and RF but weakened the performance of NNC. The overall correct prediction rate gave almost the exact opposite results compared to the proposed approach, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method. Copyright © 2017 Elsevier Ltd. All rights reserved.
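A crash-costs-based accuracy measure in the spirit described above might be implemented as follows; the unit costs are hypothetical placeholders, not the paper's figures.

```python
import numpy as np

def cost_weighted_accuracy(y_true, y_pred, cost):
    """Penalize each misclassification by the societal cost gap between the
    predicted and true severity levels; 1.0 means no cost-weighted error.
    'cost' maps severity label -> unit crash cost."""
    c_true = np.array([cost[s] for s in y_true], dtype=float)
    c_pred = np.array([cost[s] for s in y_pred], dtype=float)
    return 1.0 - np.abs(c_pred - c_true).sum() / c_true.sum()

costs = {"PDO": 4e3, "injury": 8e4, "fatal": 1.5e6}   # hypothetical unit costs
print(cost_weighted_accuracy(["PDO", "fatal"], ["PDO", "injury"], costs))
```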
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altunbas, Cem, E-mail: caltunbas@gmail.com; Lai, Chao-Jen; Zhong, Yuncheng
Purpose: In using flat panel detectors (FPD) for cone beam computed tomography (CBCT), pixel gain variations may lead to structured nonuniformities in projections and ring artifacts in CBCT images. Such gain variations can be caused by changes in detector entrance exposure levels or beam hardening, and they are not accounted for by conventional flat field correction methods. In this work, the authors presented a method to identify isolated pixel clusters that exhibit gain variations and proposed a pixel gain correction (PGC) method to suppress both beam hardening and exposure level dependent gain variations. Methods: To modulate both beam spectrum and entrance exposure, flood field FPD projections were acquired using beam filters with varying thicknesses. "Ideal" pixel values were estimated by performing polynomial fits in both raw and flat field corrected projections. Residuals were calculated by taking the difference between measured and ideal pixel values to identify clustered image and FPD artifacts in flat field corrected and raw images, respectively. To correct clustered image artifacts, the ratios of ideal to measured pixel values in filtered images were utilized as pixel-specific gain correction factors, referred to as the PGC method, and they were tabulated as a function of pixel value in a look-up table. Results: 0.035% of detector pixels led to clustered image artifacts in flat field corrected projections, and 80% of these pixels were traced back and linked to artifacts in the FPD. The performance of the PGC method was tested in a variety of imaging conditions and phantoms. The PGC method reduced clustered image artifacts and fixed pattern noise in projections, and ring artifacts in CBCT images. Conclusions: Clustered projection image artifacts that lead to ring artifacts in CBCT can be better identified with our artifact detection approach. Compared to the conventional flat field correction method, the proposed PGC method enables characterization of nonlinear pixel gain variations as a function of change in x-ray spectrum and intensity. Hence, it can better suppress image artifacts due to beam hardening as well as artifacts that arise from detector entrance exposure variation.
Stokes, Ashley M.; Semmineh, Natenael; Quarles, C. Chad
2015-01-01
Purpose A combined biophysical- and pharmacokinetic-based method is proposed to separate, quantify, and correct for both T1 and T2* leakage effects using dual-echo DSC acquisitions to provide more accurate hemodynamic measures, as validated by a reference intravascular contrast agent (CA). Methods Dual-echo DSC-MRI data were acquired in two rodent glioma models. The T1 leakage effects were removed and also quantified in order to subsequently correct for the remaining T2* leakage effects. Pharmacokinetic, biophysical, and combined biophysical and pharmacokinetic models were used to obtain corrected cerebral blood volume (CBV) and cerebral blood flow (CBF), and these were compared with CBV and CBF from an intravascular CA. Results T1-corrected CBV was significantly overestimated compared to MION CBV, while T1+T2*-correction yielded CBV values closer to the reference values. The pharmacokinetic and simplified biophysical methods showed similar results and underestimated CBV in tumors exhibiting strong T2* leakage effects. The combined method was effective for correcting T1 and T2* leakage effects across tumor types. Conclusions Correcting for both T1 and T2* leakage effects yielded more accurate measures of CBV. The combined correction method yields more reliable CBV measures than either correction method alone, but for certain brain tumor types (e.g., gliomas) the simplified biophysical method may provide a robust and computationally efficient alternative. PMID:26362714
Estimate of higher order ionospheric errors in GNSS positioning
NASA Astrophysics Data System (ADS)
Hoque, M. Mainul; Jakowski, N.
2008-10-01
Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher order ionospheric errors, such as the second and third order ionospheric terms in the refractive index formula and errors due to bending of the signal. The total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimation and correction of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas have been proposed to correct errors due to excess path length in addition to the free space path length, the TEC difference at the two GNSS frequencies, and the third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected to millimeter-level accuracy using the proposed correction formulas.
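For context, the standard first-order dual-frequency correction that the paper goes beyond can be written in two lines; the higher-order (1/f^3 and bending) terms are exactly what its additional formulas address.

```python
def iono_free_range(p1, p2, f1=1575.42e6, f2=1227.60e6):
    """Classic first-order dual-frequency correction from pseudoranges
    p1, p2 (metres) at GPS L1/L2. The first-order delay scales as 1/f^2,
    so the combination below cancels it but leaves higher-order residuals."""
    g = (f1 / f2) ** 2
    return (g * p1 - p2) / (g - 1.0)
```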
A new phase correction method in NMR imaging based on autocorrelation and histogram analysis.
Ahn, C B; Cho, Z H
1987-01-01
A new statistical approach to phase correction in NMR imaging is proposed. The proposed scheme consists of first- and zero-order phase corrections, each performed by inverse multiplication of the estimated phase error. The first-order error is estimated from the phase of the autocorrelation calculated from the complex-valued phase-distorted image, while the zero-order correction factor is extracted from the histogram of the phase distribution of the first-order-corrected image. Since all the correction procedures are performed in the spatial domain after data acquisition is complete, no prior adjustments or additional measurements are required. The algorithm is applicable to most phase-involved NMR imaging techniques, including inversion recovery imaging, quadrature modulated imaging, spectroscopic imaging, and flow imaging. Experimental results with inversion recovery imaging as well as quadrature spectroscopic imaging demonstrate the usefulness of the algorithm.
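A minimal numpy sketch of the two-step statistical correction (linear phase from the lag-1 autocorrelation, constant phase from the histogram peak); the bin count and the read-direction convention are assumptions.

```python
import numpy as np

def phase_correct(img):
    """First-order (linear) phase error from the angle of the lag-1 spatial
    autocorrelation; zero-order term from the peak of the residual phase
    histogram of the first-order-corrected image. img is complex 2-D."""
    eps1 = np.angle(np.sum(np.conj(img[:, :-1]) * img[:, 1:]))
    cols = np.arange(img.shape[1])
    img1 = img * np.exp(-1j * eps1 * cols)[None, :]   # inverse multiplication
    hist, edges = np.histogram(np.angle(img1), bins=180, range=(-np.pi, np.pi))
    eps0 = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    return img1 * np.exp(-1j * eps0)
```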
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue Qingsheng
A low-cost, broadband, astigmatism-corrected Czerny-Turner arrangement with a fixed plane grating is proposed. A wedge cylindrical lens is used to correct astigmatism over a broadband spectral range. The principle and method of astigmatism correction are described in detail. We compare the performance of this modified Czerny-Turner arrangement with that of the traditional Czerny-Turner arrangement by using a practical Czerny-Turner imaging spectrometer example.
Autocalibrating motion-corrected wave-encoding for highly accelerated free-breathing abdominal MRI.
Chen, Feiyu; Zhang, Tao; Cheng, Joseph Y; Shi, Xinwei; Pauly, John M; Vasanawala, Shreyas S
2017-11-01
To develop a motion-robust wave-encoding technique for highly accelerated free-breathing abdominal MRI. A comprehensive 3D wave-encoding-based method was developed to enable fast free-breathing abdominal imaging: (a) auto-calibration for wave-encoding was designed to avoid an extra scan for coil sensitivity measurement; (b) intrinsic butterfly navigators were used to track respiratory motion; (c) variable-density sampling was included to enable compressed sensing; (d) golden-angle radial-Cartesian hybrid view-ordering was incorporated to improve motion robustness; and (e) localized rigid motion correction was combined with parallel imaging compressed sensing reconstruction to reconstruct the highly accelerated wave-encoded datasets. The proposed method was tested on six subjects, and image quality was compared with standard accelerated Cartesian acquisition both with and without respiratory triggering. Inverse gradient entropy and normalized gradient squared metrics were calculated, testing whether image quality was improved using paired t-tests. For respiratory-triggered scans, wave-encoding significantly reduced residual aliasing and blurring compared with standard Cartesian acquisition (metrics suggesting P < 0.05). For non-respiratory-triggered scans, the proposed method yielded significantly better motion correction compared with standard motion-corrected Cartesian acquisition (metrics suggesting P < 0.01). The proposed methods can reduce motion artifacts and improve overall image quality of highly accelerated free-breathing abdominal MRI. Magn Reson Med 78:1757-1766, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Probabilistic dual heuristic programming-based adaptive critic
NASA Astrophysics Data System (ADS)
Herzallah, Randa
2010-02-01
Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. In contrast to current approaches, the proposed probabilistic DHP AC method takes uncertainties of the forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. A full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.
Wei, Yinsheng; Guo, Rujiang; Xu, Rongqing; Tang, Xiudong
2014-01-01
Large-amplitude ionospheric phase perturbations cause sea clutter's Bragg peaks to broaden and overlap, and traditional decontamination methods based on filtering the Bragg peaks perform poorly in this regime, which greatly limits the detection performance of HF skywave radars. In view of large-amplitude ionospheric phase perturbation, this paper proposes a cascaded approach based on an improved S-method to correct the ionospheric phase contamination. The approach consists of two correction steps. In the first step, a time-frequency distribution method based on the improved S-method is adopted, and an optimal detection method is designed to obtain a coarse estimate of the ionospheric modulation from the time-frequency distribution. In the second step, the phase gradient algorithm (PGA) is exploited to eliminate the residual contamination. Finally, measured data are used to verify the effectiveness of the method. Simulation results show that the time-frequency resolution of this method is high and is not affected by cross-term interference, that large-amplitude ionospheric phase perturbations can be corrected at low signal-to-noise ratio (SNR), and that such a cascaded correction method works well. PMID:24578656
Meng, Yuguang; Lei, Hao
2010-06-01
An efficient iterative gridding reconstruction method with correction of off-resonance artifacts was developed, which is especially tailored for multiple-shot non-Cartesian imaging. The novelty of the method lies in the construction of the transformation matrix for gridding (T) as the convolution of two sparse matrices, of which the former is determined by the sampling interval and the spatial distribution of the off-resonance frequencies, and the latter by the sampling trajectory and the target grid in Cartesian space. The resulting T matrix is also sparse and can be solved efficiently with the iterative conjugate gradient algorithm. It was shown that, with the proposed method, the reconstruction speed in multiple-shot non-Cartesian imaging can be improved significantly while retaining high reconstruction fidelity. More importantly, the proposed method allows a tradeoff between the accuracy and the computation time of reconstruction, making it possible to customize the use of such a method in different applications. The performance of the proposed method was demonstrated by numerical simulation and multiple-shot spiral imaging of rat brain at 4.7 T. (c) 2010 Wiley-Liss, Inc.
A new digitized reverse correction method for hypoid gears based on a one-dimensional probe
NASA Astrophysics Data System (ADS)
Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo
2017-12-01
In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.
Research and implementation of finger-vein recognition algorithm
NASA Astrophysics Data System (ADS)
Pang, Zengyao; Yang, Jie; Chen, Yilei; Liu, Yin
2017-06-01
In finger vein image preprocessing, finger angle correction and ROI extraction are important parts of the system. In this paper, we propose an angle correction algorithm based on the centroid of the vein image and extract the ROI region according to a bidirectional gray projection method. Inspired by the fact that features in vein areas appear similar to valleys, a novel method is proposed to extract the center and width of the palm vein based on multi-directional gradients, which is easy to compute, quick, and stable. On this basis, an encoding method is designed to determine the gray value distribution of the texture image. This algorithm effectively overcomes texture extraction errors at the edges. Finally, the system achieves high robustness and recognition accuracy by utilizing fuzzy threshold determination and a global gray value matching algorithm. Experimental results on pairs of matched palm images show that the proposed method has an EER of 3.21% and extracts features at a speed of 27 ms per image. It can be concluded that the proposed algorithm has obvious advantages in feature extraction efficiency, matching accuracy, and algorithm efficiency.
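The centroid/moment-based angle correction can be sketched as follows; binarization by the mean intensity is an illustrative stand-in for a proper finger segmentation.

```python
import numpy as np
from scipy.ndimage import rotate

def deskew_finger(img):
    """Binarize, take second moments about the centroid, and rotate so the
    finger's principal axis becomes horizontal."""
    mask = img > img.mean()
    ys, xs = np.nonzero(mask)
    yc, xc = ys.mean(), xs.mean()
    mu20 = np.mean((xs - xc) ** 2)
    mu02 = np.mean((ys - yc) ** 2)
    mu11 = np.mean((xs - xc) * (ys - yc))
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # principal-axis angle
    return rotate(img, np.degrees(theta), reshape=False, order=1)
```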
Automatic Correction Algorithm of Hydrology Feature Attribute in National Geographic Census
NASA Astrophysics Data System (ADS)
Li, C.; Guo, P.; Liu, X.
2017-09-01
A subset of the attributes of hydrologic feature data in the national geographic census are not clear; the current solution to this problem is manual filling, which is inefficient and error-prone. This paper therefore proposes an automatic correction algorithm for hydrologic feature attributes. Based on an analysis of the structural characteristics and topological relations, we put forward three basic correction principles: network proximity, structural robustness, and topological ductility. Based on the WJ-III map workstation, we realize the automatic correction of hydrologic features. Finally, practical data are used to validate the method. The results show that our method is highly reasonable and efficient.
NASA Astrophysics Data System (ADS)
Liu, Chengwei; Sui, Xiubao; Gu, Guohua; Chen, Qian
2018-02-01
For the uncooled long-wave infrared (LWIR) camera, the infrared (IR) irradiation the focal plane array (FPA) receives is a crucial factor that affects the image quality. Ambient temperature fluctuation as well as system power consumption can result in changes of FPA temperature and radiation characteristics inside the IR camera; these will further degrade the imaging performance. In this paper, we present a novel shutterless non-uniformity correction method to compensate for non-uniformity derived from the variation of ambient temperature. Our method combines a calibration-based method and the properties of a scene-based method to obtain correction parameters at different ambient temperature conditions, so that the IR camera performance is less influenced by ambient temperature fluctuation or system power consumption. The calibration process is carried out in a temperature chamber with slowly changing ambient temperature and a black body as a uniform radiation source. A sufficient number of uniform images are captured and the gain coefficients are calculated during this period. Then, in practical application, the offset parameters are calculated via the least squares method based on the gain coefficients, the captured uniform images and the actual scene, and a corrected output is obtained from the gain coefficients and offset parameters. The performance of the proposed method is evaluated on realistic IR images and compared with two existing methods. The images used in the experiments were obtained by a 384 × 288 pixel uncooled LWIR camera. Results show that the proposed method can adaptively update correction parameters as the actual target scene changes and is more stable under temperature fluctuations than the other two methods.
Kim, Byeong Hak; Kim, Min Young; Chae, You Seong
2017-01-01
Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC. PMID:29280970
Wang, Menghua; Shi, Wei; Jiang, Lide
2012-01-16
A regional near-infrared (NIR) ocean normalized water-leaving radiance (nLw(λ)) model is proposed for atmospheric correction for ocean color data processing in the western Pacific region, including the Bohai Sea, Yellow Sea, and East China Sea. Our motivation for this work is to derive ocean color products in the highly turbid western Pacific region using the Geostationary Ocean Color Imager (GOCI) onboard South Korean Communication, Ocean, and Meteorological Satellite (COMS). GOCI has eight spectral bands from 412 to 865 nm but does not have shortwave infrared (SWIR) bands that are needed for satellite ocean color remote sensing in the turbid ocean region. Based on a regional empirical relationship between the NIR nLw(λ) and the diffuse attenuation coefficient at 490 nm (Kd(490)), which is derived from the long-term measurements with the Moderate-resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite, an iterative scheme with the NIR-based atmospheric correction algorithm has been developed. Results from MODIS-Aqua measurements show that ocean color products in the region derived from the new proposed NIR-corrected atmospheric correction algorithm match well with those from the SWIR atmospheric correction algorithm. Thus, the proposed new atmospheric correction method provides an alternative for ocean color data processing for GOCI (and other ocean color satellite sensors without SWIR bands) in the turbid ocean regions of the Bohai Sea, Yellow Sea, and East China Sea, although the SWIR-based atmospheric correction approach is still much preferred. The proposed atmospheric correction methodology can also be applied to other turbid coastal regions.
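The iterative scheme reduces to a simple fixed-point loop. The sketch below is structural only: ac_step and kd490_step are hypothetical callables standing in for a full NIR atmospheric correction and a Kd(490) retrieval, and the coefficients in nir_nlw_from_kd490 are placeholders for the regional empirical relation derived from the MODIS-Aqua record.

    import numpy as np

    def nir_nlw_from_kd490(kd490):
        # placeholder power law for the regional nLw(NIR) vs. Kd(490) relation
        return 0.05 * np.power(kd490, 1.3)

    def iterative_nir_correction(rho_toa, ac_step, kd490_step, n_iter=5, tol=1e-4):
        nlw_nir = np.zeros(rho_toa.shape[:-1])        # first pass: black NIR ocean
        for _ in range(n_iter):
            products = ac_step(rho_toa, nlw_nir)      # NIR-based atmospheric correction
            kd490 = kd490_step(products)              # Kd(490) from retrieved nLw spectra
            new_nlw = nir_nlw_from_kd490(kd490)       # updated NIR water-leaving signal
            if np.nanmax(np.abs(new_nlw - nlw_nir)) < tol:
                break
            nlw_nir = new_nlw
        return products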
Computing correct truncated excited state wavefunctions
NASA Astrophysics Data System (ADS)
Bacalis, N. C.; Xiong, Z.; Zang, J.; Karaoulanis, D.
2016-12-01
We demonstrate that, if a wave function's truncated expansion is small, then the standard excited-state computational method of optimizing one "root" of a secular equation may lead to an incorrect wave function, despite yielding the correct energy according to the theorem of Hylleraas, Undheim and McDonald, whereas our proposed method [J. Comput. Meth. Sci. Eng. 8, 277 (2008)], which is independent of orthogonality to lower-lying approximants, leads to correct, reliable, small truncated wave functions. The demonstration is done for He excited states, using truncated series expansions in Hylleraas coordinates, as well as standard configuration-interaction truncated expansions.
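For reference, the "standard method" referred to here solves the secular (generalized eigenvalue) problem in the truncated basis; the relations below, in LaTeX, state the Hylleraas-Undheim-McDonald bound that makes a correct energy compatible with an incorrect wave function.

    % Truncated basis \{\phi_i\}_{i=1}^{N}:
    %   H_{ij} = \langle \phi_i | \hat{H} | \phi_j \rangle, \qquad
    %   S_{ij} = \langle \phi_i | \phi_j \rangle
    \mathbf{H}\,\mathbf{c}_k = E_k\,\mathbf{S}\,\mathbf{c}_k,
    \qquad E_1 \le E_2 \le \dots \le E_N .
    % Hylleraas-Undheim-McDonald: each root bounds the exact level from above,
    %   E_k \ge \mathcal{E}_k^{\mathrm{exact}},
    % so optimizing the k-th root guarantees a good energy but not a good
    % wave function when the expansion is short.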
NASA Astrophysics Data System (ADS)
Su, Yunquan; Yao, Xuefeng; Wang, Shen; Ma, Yinji
2017-03-01
An effective correction model is proposed to eliminate the refraction error effect caused by an optical window of a furnace in digital image correlation (DIC) deformation measurement under high-temperature environment. First, a theoretical correction model with the corresponding error correction factor is established to eliminate the refraction error induced by double-deck optical glass in DIC deformation measurement. Second, a high-temperature DIC experiment using a chromium-nickel austenite stainless steel specimen is performed to verify the effectiveness of the correction model by the correlation calculation results under two different conditions (with and without the optical glass). Finally, both the full-field and the divisional displacement results with refraction influence are corrected by the theoretical model and then compared to the displacement results extracted from the images without refraction influence. The experimental results demonstrate that the proposed theoretical correction model can effectively improve the measurement accuracy of DIC method by decreasing the refraction errors from measured full-field displacements under high-temperature environment.
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xi; Huang, Xiaobiao
2016-05-13
Here, we propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. The fitting results are used for lattice correction. Finally, the method has been successfully demonstrated on the NSLS-II storage ring.
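A minimal Python sketch of the mode-isolation step, assuming scikit-learn's FastICA as the ICA implementation (the abstract does not name one) and that a betatron normal mode appears as a quadrature pair of temporal components whose indices are identified beforehand, e.g. from the FFT of the recovered sources.

    import numpy as np
    from sklearn.decomposition import FastICA

    def betatron_amplitude_phase(X, pair=(0, 1), n_components=8):
        # X: (n_bpms, n_turns) turn-by-turn BPM readings for one plane
        ica = FastICA(n_components=n_components, whiten="unit-variance",
                      random_state=0)
        ica.fit(X.T)                            # sources live along the turn axis
        A = ica.mixing_                         # (n_bpms, n_components) spatial patterns
        p, q = pair                             # quadrature pair of one normal mode
        amplitude = np.hypot(A[:, p], A[:, q])  # betatron amplitude at each BPM
        phase = np.unwrap(np.arctan2(A[:, q], A[:, p]))  # phase advance along the ring
        return amplitude, phase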
NASA Astrophysics Data System (ADS)
Wang, Qianxin; Hu, Chao; Xu, Tianhe; Chang, Guobin; Hernández Moraleda, Alberto
2017-12-01
Analysis centers (ACs) for global navigation satellite systems (GNSSs) cannot accurately obtain real-time Earth rotation parameters (ERPs). Thus, the prediction of ultra-rapid orbits in the international terrestrial reference system (ITRS) has to utilize the predicted ERPs issued by the International Earth Rotation and Reference Systems Service (IERS) or the International GNSS Service (IGS). In this study, the accuracy of ERPs predicted by IERS and IGS is analyzed. The error of the ERPs predicted for one day can reach 0.15 mas in polar motion and 0.053 ms in UT1-UTC. Then, the impact of ERP errors on ultra-rapid orbit prediction by GNSS is studied. The orbit integration and frame transformation steps of orbit prediction, into which ERP errors are introduced, dominate the accuracy of the predicted orbit. Experimental results show that the transformation from the geocentric celestial reference system (GCRS) to ITRS exerts the strongest effect on the accuracy of the predicted ultra-rapid orbit. To obtain the most accurate predicted ultra-rapid orbit, a corresponding real-time orbit correction method is developed. First, orbits without ERP-related errors are predicted on the basis of the observed part of the ultra-rapid orbit in ITRS, for use as a reference. Then, the corresponding predicted orbit is transformed from GCRS to ITRS to adjust for the predicted ERPs. Finally, the corrected ERPs with error slopes are re-introduced to correct the predicted orbit in ITRS. To validate the proposed method, three experimental schemes are designed: function extrapolation, simulation experiments, and experiments with predicted ultra-rapid orbits and international GNSS Monitoring and Assessment System (iGMAS) products. Experimental results show that using the proposed correction method with IERS products considerably improves the accuracy of ultra-rapid orbit prediction (except for the geosynchronous BeiDou orbits). The accuracy of orbit prediction is enhanced by at least 50% (for the ERP-related error) when a highly accurate observed orbit is used with the correction method. For iGMAS-predicted orbits, the accuracy improvement ranges from 8.5% for the inclined BeiDou orbits to 17.99% for the GPS orbits. This demonstrates that the correction method proposed in this study can optimize ultra-rapid orbit prediction.
Robust scatter correction method for cone-beam CT using an interlacing-slit plate
NASA Astrophysics Data System (ADS)
Huang, Kui-Dong; Xu, Zhe; Zhang, Ding-Hua; Zhang, Hua; Shi, Wen-Long
2016-06-01
Cone-beam computed tomography (CBCT) has been widely used in medical imaging and industrial nondestructive testing, but the presence of scattered radiation causes significant reduction of image quality. In this article, a robust scatter correction method for CBCT using an interlacing-slit plate (ISP) is developed for convenient practical use. Firstly, a Gaussian filtering method is proposed to compensate for the missing data of the inner scatter image, and simultaneously to avoid excessively large values of the calculated inner scatter and to smooth the inner scatter field. Secondly, an interlacing-slit scan without detector gain correction is carried out to enhance the practicality and convenience of the scatter correction method. Finally, a denoising step for scatter-corrected projection images is added to the process flow to control the noise amplification. The experimental results show that the improved method not only makes the scatter correction more robust and convenient, but also achieves good quality of the scatter-corrected slice images. Supported by the National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), the Aeronautical Science Fund of China (2014ZE53059), and the Fundamental Research Funds for Central Universities of China (3102014KYJD022).
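The first step, filling the inner scatter sampled behind the slits, can be written as a mask-aware (normalized) Gaussian convolution; the formulation and the percentile cap below are assumptions standing in for the authors' exact compensation and clamping scheme.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fill_inner_scatter(sparse_scatter, mask, sigma=25.0, cap_pct=99.0):
        # sparse_scatter is valid only where mask == 1 (slit-shadow samples);
        # normalized Gaussian smoothing fills the gaps and smooths the field
        num = gaussian_filter(sparse_scatter * mask, sigma)
        den = gaussian_filter(mask.astype(float), sigma)
        scatter = num / np.maximum(den, 1e-6)
        cap = np.percentile(scatter[mask > 0], cap_pct)  # guard against
        return np.minimum(scatter, cap)                  # excessive values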
Guo, Ying; Little, Roderick J; McConnell, Daniel S
2012-01-01
Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.
Statistical testing of association between menstruation and migraine.
Barra, Mathias; Dahl, Fredrik A; Vetvik, Kjersti G
2015-02-01
To repair and refine a previously proposed method for statistical analysis of the association between migraine and menstruation. Menstrually related migraine (MRM) affects about 20% of female migraineurs in the general population. The exact pathophysiological link from menstruation to migraine is hypothesized to be through fluctuations in female reproductive hormones, but the exact mechanisms remain unknown. Therefore, the main diagnostic criterion today is concurrency of migraine attacks with menstruation. Methods aiming to exclude spurious associations are needed, so that further research into these mechanisms can be performed on a population with a true association. The statistical method is based on a simple two-parameter null model of MRM (which allows for simulation modeling), and Fisher's exact test (with mid-p correction) applied to standard 2 × 2 contingency tables derived from the patients' headache diaries. Our method is a corrected version of a previously published flawed framework. To the best of our knowledge, no other published methods for establishing a menstruation-migraine association by statistical means exist today. The probabilistic methodology shows good performance when subjected to receiver operating characteristic curve analysis. Quick-reference cutoff values for the clinical setting were tabulated for assessing association given a patient's headache history. In this paper, we correct a proposed method for establishing association between menstruation and migraine by statistical methods. We conclude that the proposed standard of 3-cycle observations prior to setting an MRM diagnosis should be extended with at least one perimenstrual window to obtain sufficient information for statistical processing. © 2014 American Headache Society.
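As a concrete illustration of the test ingredient, a one-sided Fisher's exact test with mid-p correction on a 2 × 2 diary-derived table follows directly from the hypergeometric distribution; the table orientation below is an assumption, not necessarily the paper's convention.

    from scipy.stats import hypergeom

    def fisher_midp_one_sided(table):
        # table = [[a, b], [c, d]], e.g. rows = perimenstrual window yes/no,
        # columns = migraine attack yes/no (orientation assumed)
        (a, b), (c, d) = table
        total, row1, col1 = a + b + c + d, a + b, a + c
        rv = hypergeom(total, row1, col1)       # X = count in the (1,1) cell
        # mid-p: P(X > a) + 0.5 * P(X = a) = P(X >= a) - 0.5 * P(X = a)
        return rv.sf(a - 1) - 0.5 * rv.pmf(a)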
Cement bond evaluation method in horizontal wells using segmented bond tool
NASA Astrophysics Data System (ADS)
Song, Ruolong; He, Li
2018-06-01
Most of the existing cement evaluation technologies suffer from tool eccentralization due to gravity in highly deviated wells and horizontal wells. This paper proposes a correction method to lessen the effects of tool eccentralization on evaluation results of cement bond using a segmented bond tool, which has an omnidirectional sonic transmitter and eight segmented receivers evenly arranged around the tool, 2 ft from the transmitter. Using a 3-D finite difference parallel numerical simulation method, we investigate the logging responses of the centred and eccentred segmented bond tool in a variety of bond conditions. From the numerical results, we find that the tool eccentricity and channel azimuth can be estimated from the measured sector amplitudes. The average of the sector amplitudes when the tool is eccentred can be corrected to the value obtained when the tool is centred. The corrected amplitude is then used to calculate the channel size. The proposed method is applied to both synthetic and field data. For synthetic data, it turns out that this method can estimate the tool eccentricity with small error, and the bond map is improved after correction. For field data, the tool eccentricity shows good agreement with the measured well deviation angle. Though this method still suffers from low accuracy in calculating the channel azimuth, the credibility of the corrected bond map is improved, especially in horizontal wells. It gives us a choice to evaluate the bond condition in horizontal wells using an existing logging tool. The numerical results in this paper can aid in understanding the measurements of the segmented tool in both vertical and horizontal wells.
Proof of concept of a simple computer-assisted technique for correcting bone deformities.
Ma, Burton; Simpson, Amber L; Ellis, Randy E
2007-01-01
We propose a computer-assisted technique for correcting bone deformities using the Ilizarov method. Our technique is an improvement over prior art in that it does not require a tracking system, navigation hardware and software, or intraoperative registration. Instead, we rely on a postoperative CT scan to obtain all of the information necessary to plan the correction and compute a correction schedule for the patient. Our laboratory experiments using plastic phantoms produced deformity corrections accurate to within 3.0 degrees of rotation and 1 mm of lengthening.
Bias correction for estimated QTL effects using the penalized maximum likelihood method.
Zhang, J; Yue, C; Zhang, Y-M
2012-04-01
A penalized maximum likelihood method has been proposed as an important approach to the detection of epistatic quantitative trait loci (QTL). However, this approach is not optimal in two special situations: (1) closely linked QTL with effects in opposite directions and (2) small-effect QTL, because the method produces downwardly biased estimates of QTL effects. The present study aims to correct the bias by using correction coefficients and shifting from the use of a uniform prior on the variance parameter of a QTL effect to that of a scaled inverse chi-square prior. The results of Monte Carlo simulation experiments show that the improved method increases the power from 25 to 88% in the detection of two closely linked QTL of equal size in opposite directions and from 60 to 80% in the identification of QTL with small effects (0.5% of the total phenotypic variance). We used the improved method to detect QTL responsible for the barley kernel weight trait using 145 doubled haploid lines developed in the North American Barley Genome Mapping Project. Application of the proposed method to other shrinkage estimation of QTL effects is discussed.
NASA Astrophysics Data System (ADS)
Park, Kwangwoo; Bak, Jino; Park, Sungho; Choi, Wonhoon; Park, Suk Won
2016-02-01
A semiempirical method based on the averaging effect of the sensitive volumes of different air-filled ionization chambers (ICs) was employed to approximate the correction factors for beam quality produced from the difference in the sizes of the reference field and small fields. We measured the output factors using several cylindrical ICs and calculated the correction factors using a mathematical method similar to deconvolution; in the method, we modeled the variable and inhomogeneous energy fluence function within the chamber cavity. The parameters of the modeled function and the correction factors were determined by solving a developed system of equations, on the basis of the measurement data and the geometry of the chambers. Further, Monte Carlo (MC) computations were performed using the Monaco® treatment planning system to validate the proposed method. The determined correction factors $k_{Q_{\text{msr}},Q}^{f_{\text{smf}},f_{\text{ref}}}$ were comparable to the values derived from the MC computations performed using Monaco®. For example, for a 6 MV photon beam and a field size of 1 × 1 cm², $k_{Q_{\text{msr}},Q}^{f_{\text{smf}},f_{\text{ref}}}$ was calculated to be 1.125 for a PTW 31010 chamber and 1.022 for a PTW 31016 chamber. On the other hand, the $k_{Q_{\text{msr}},Q}^{f_{\text{smf}},f_{\text{ref}}}$ values determined from the MC computations were 1.121 and 1.031, respectively; the difference between the proposed method and the MC computation is less than 2%. In addition, we determined the $k_{Q_{\text{msr}},Q}^{f_{\text{smf}},f_{\text{ref}}}$ values for PTW 30013, PTW 31010, PTW 31016, IBA FC23-C, and IBA CC13 chambers as well. We devised a method for determining $k_{Q_{\text{msr}},Q}^{f_{\text{smf}},f_{\text{ref}}}$ from both the measurement of the output factors and model-based mathematical computation. The proposed method can be useful in cases where MC simulation is not applicable in clinical settings.
Effective classification of the prevalence of Schistosoma mansoni.
Mitchell, Shira A; Pagano, Marcello
2012-12-01
To present an effective classification method based on the prevalence of Schistosoma mansoni in the community. We created decision rules (defined by cut-offs for the number of positive slides), which account for imperfect sensitivity, both with a simple adjustment for fixed sensitivity and with a more complex adjustment for sensitivity that changes with prevalence. To reduce screening costs while maintaining accuracy, we propose a pooled classification method. To estimate sensitivity, we use the De Vlas model for worm and egg distributions. We compare the proposed method with the standard method to investigate differences in efficiency, measured by the number of slides read, and accuracy, measured by the probability of correct classification. Modelling varying sensitivity lowers the lower cut-off more significantly than the upper cut-off, correctly classifying regions as moderate rather than low prevalence, so that they receive life-saving treatment. The pooled method classifies directly on the basis of positive pools, avoiding the need to know sensitivity to estimate prevalence. For model parameter values describing worm and egg distributions among children, the pooled method with 25 slides achieves an expected 89.9% probability of correct classification, whereas the standard method with 50 slides achieves 88.7%. Among children, it is more efficient and more accurate to use the pooled method for classification of S. mansoni prevalence than the current standard method. © 2012 Blackwell Publishing Ltd.
Narayanan, Balaji; Hardie, Russell C; Muse, Robert A
2005-06-10
Spatial fixed-pattern noise is a common and major problem in modern infrared imagers owing to the nonuniform response of the photodiodes in the focal plane array of the imaging system. In addition, the nonuniform response of the readout and digitization electronics, which are involved in multiplexing the signals from the photodiodes, causes further nonuniformity. We describe a novel scene-based nonuniformity correction algorithm that treats the aggregate nonuniformity in separate stages. First, the nonuniformity from the readout amplifiers is corrected by use of knowledge of the readout architecture of the imaging system. Second, the nonuniformity resulting from the individual detectors is corrected with a nonlinear filter-based method. We demonstrate the performance of the proposed algorithm by applying it to simulated imagery and real infrared data. Quantitative results in terms of the mean absolute error and the signal-to-noise ratio are also presented to demonstrate the efficacy of the proposed algorithm. One advantage of the proposed algorithm is that it requires only a few frames to obtain high-quality corrections.
Illias, Hazlee Azil; Chai, Xin Rui; Abu Bakar, Ab Halim; Mokhlis, Hazlie
2015-01-01
It is important to predict incipient faults in transformer oil accurately so that transformer maintenance can be performed correctly, reducing the cost of maintenance and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have never been used in the previously reported work, this work proposes a combination of ANN and various PSO techniques to predict transformer incipient faults. The advantages of PSO are simplicity and easy implementation. The effectiveness of various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method and ANN alone. Comparison of the results from the proposed methods with the previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct transformer fault type identification than the existing diagnosis method and previously reported works.
Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays.
Itoh, Yuta; Klinker, Gudrun
2015-04-01
A critical requirement for AR applications with Optical See-Through Head-Mounted Displays (OST-HMD) is to project 3D information correctly into the current viewpoint of the user - more particularly, according to the user's eye position. Recently proposed interaction-free calibration methods [16], [17] automatically estimate this projection by tracking the user's eye position, thereby freeing users from tedious manual calibrations. However, these methods are still prone to systematic calibration errors. Such errors stem from eye-/HMD-related factors and are not represented in the conventional eye-HMD model used for HMD calibration. This paper investigates one of these factors - the fact that optical elements of OST-HMDs distort incoming world-light rays before they reach the eye, just as corrective glasses do. Any OST-HMD requires an optical element to display a virtual screen. Each such optical element has different distortions. Since users see a distorted world through the element, ignoring this distortion degrades the projection quality. We propose a light-field correction method, based on a machine learning technique, which compensates for the world-scene distortion caused by OST-HMD optics. We demonstrate that our method reduces the systematic error and significantly increases the calibration accuracy of the interaction-free calibration.
Off-resonance artifacts correction with convolution in k-space (ORACLE).
Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne
2012-06-01
Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813
Potential of bias correction for downscaling passive microwave and soil moisture data
USDA-ARS?s Scientific Manuscript database
Passive microwave satellites such as SMOS (Soil Moisture and Ocean Salinity) or SMAP (Soil Moisture Active Passive) observe brightness temperature (TB) and retrieve soil moisture at a spatial resolution greater than most hydrological processes. Bias correction is proposed as a simple method to disag...
CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.
Saegusa, Jun
2008-01-01
The representative point method for the efficiency calibration of volume samples has been previously proposed. To facilitate implementation of the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of the representative point, which is intrinsic to each shape of volume sample. Self-absorption correction factors are also given to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.
A comparison of locally adaptive multigrid methods: LDC, FAC and FIC
NASA Technical Reports Server (NTRS)
Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul
1993-01-01
This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction)--which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.
The beam stop array method to measure object scatter in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook
2014-03-01
Scattered radiation is inevitably generated in the object. The distribution of the scattered radiation is influenced by object thickness, field size, object-to-detector distance, and primary energy. One way to measure scatter intensities involves measuring the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured by a BSA includes not only the radiation scattered within the object (object scatter), but also that from external scatter sources. The external scatter sources include the X-ray tube, detector, collimator, x-ray filter, and the BSA itself. The exclusion of background scattered radiation can be applied to different scanner geometries with simple parameter adjustments and without prior knowledge of the scanned object. In this study, a method using the BSA to differentiate scatter in the phantom (object scatter) from the external background was used. Furthermore, this method was applied to the BSA algorithm to correct for the object scatter. In order to confirm the background scattered radiation, we obtained the scatter profiles and scatter fraction (SF) profiles in the directions perpendicular to the chest wall edge (CWE) with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall. This result indicates that the scatter measured by the BSA includes background scatter. Moreover, the BSA algorithm with the proposed method could correct for the object scatter, because the total radiation profiles after object scatter correction corresponded to the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method to measure object scatter can be used to remove background scatter, and can be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective for correcting object scatter.
Feature-based pairwise retinal image registration by radial distortion correction
NASA Astrophysics Data System (ADS)
Lee, Sangyeol; Abràmoff, Michael D.; Reinhardt, Joseph M.
2007-03-01
Fundus camera imaging is widely used to document disorders such as diabetic retinopathy and macular degeneration. Multiple retinal images can be combined through a procedure known as mosaicing to form an image with a larger field of view. Mosaicing typically requires multiple pairwise registrations of partially overlapped images. We describe a new method for pairwise retinal image registration. The proposed method is unique in that the radial distortion due to image acquisition is corrected prior to the geometric transformation. Vessel lines are detected using the Hessian operator and are used as input features to the registration. Since the overlapping region is typically small in a retinal image pair, only a few correspondences are available, thus limiting the applicable model to an affine transform at best. To recover the distortion due to the curved surface of the retina and the lens optics, a combined approach of an affine model with a radial distortion correction is proposed. The parameters of the image acquisition and radial distortion models are estimated during an optimization step that uses Powell's method driven by the vessel line distance. Experimental results using 20 pairs of green channel images acquired from three subjects with a fundus camera confirmed that the affine model with distortion correction could register retinal image pairs to within 1.88 ± 0.35 pixels accuracy (mean ± standard deviation) assessed by vessel line error, which is 17% better than the affine-only approach. Because the proposed method needs only two correspondences, it can achieve good registration accuracy even in the case of small overlap between retinal image pairs.
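A sketch of the combined model in Python, under explicit assumptions: a single-coefficient radial distortion term, a distance-transform map of the fixed image's vessel centrelines as the cost surface (analogous to, but not necessarily identical with, the paper's vessel line distance), and Powell's method from SciPy as the optimizer.

    import numpy as np
    from scipy.optimize import minimize

    def undistort(pts, k1, center):
        # p_undistorted = c + (p - c) * (1 + k1 * r^2), single-coefficient model
        d = pts - center
        r2 = (d ** 2).sum(axis=1, keepdims=True)
        return center + d * (1.0 + k1 * r2)

    def cost(params, moving_pts, fixed_map, center):
        a, k1 = params[:6].reshape(2, 3), params[6]
        p = undistort(moving_pts, k1, center) @ a[:, :2].T + a[:, 2]
        ij = np.clip(np.round(p).astype(int), 0,
                     np.array(fixed_map.shape)[::-1] - 1)
        return fixed_map[ij[:, 1], ij[:, 0]].mean()   # mean vessel-line distance

    def register(moving_pts, fixed_map, center):
        x0 = np.array([1, 0, 0, 0, 1, 0, 0.0])        # identity affine, k1 = 0
        return minimize(cost, x0, args=(moving_pts, fixed_map, center),
                        method="Powell").x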
Data consistency-driven scatter kernel optimization for x-ray cone-beam CT
NASA Astrophysics Data System (ADS)
Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong
2015-08-01
Accurate and efficient scatter correction is essential for the acquisition of high-quality x-ray cone-beam CT (CBCT) images for various applications. This study was conducted to demonstrate the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in scatter deconvolution methods in CBCT. Since data consistency in the mid-plane of CBCT is primarily challenged by scatter, we utilized data consistency to confirm the degree of scatter correction and to steer the update in iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters, using a particle swarm optimization algorithm for its computational efficiency and excellent convergence. The proposed method was validated by a simulation study using the XCAT numerical phantom and also by experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction. Quantitative assessments of image quality parameters such as contrast and structural similarity (SSIM) revealed that the optimally selected scatter kernel improves the contrast of scatter-free images by up to 99.5%, 94.4%, and 84.4%, and the SSIM in an XCAT study, an ACS head phantom study, and a pelvis phantom study by up to 96.7%, 90.5%, and 87.8%, respectively. The proposed method can achieve accurate and efficient scatter correction from a single cone-beam scan without the need for any auxiliary hardware or additional experimentation.
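The optimizer itself is generic. A compact particle swarm loop such as the sketch below could drive the kernel update, with cost(theta) returning the DCC error of the scatter-corrected projections for kernel parameters theta; this is a textbook PSO, not necessarily the authors' exact variant.

    import numpy as np

    def pso(cost, bounds, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5):
        # bounds: (dim, 2) array of parameter ranges for the scatter kernel
        rng = np.random.default_rng(0)
        lo, hi = bounds[:, 0], bounds[:, 1]
        x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
        v = np.zeros_like(x)
        pbest, pcost = x.copy(), np.array([cost(p) for p in x])
        g = pbest[np.argmin(pcost)]
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            c = np.array([cost(p) for p in x])
            better = c < pcost
            pbest[better], pcost[better] = x[better], c[better]
            g = pbest[np.argmin(pcost)]               # global best kernel so far
        return g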
Optimized distortion correction technique for echo planar imaging.
Chen, N K; Wyrwicz, A M
2001-03-01
A new phase-shifted EPI pulse sequence is described that encodes EPI phase errors due to all off-resonance factors, including B0 field inhomogeneity, eddy current effects, and gradient waveform imperfections. Combined with the previously proposed multichannel modulation postprocessing algorithm (Chen and Wyrwicz, MRM 1999;41:1206-1213), the encoded phase error information can be used to effectively remove geometric distortions in subsequent EPI scans. The proposed EPI distortion correction technique has been shown to be effective in removing distortions due to gradient waveform imperfections and phase gradient-induced eddy current effects. In addition, this new method retains advantages of the earlier method, such as simultaneous correction of different off-resonance factors without use of a complicated phase unwrapping procedure. The effectiveness of this technique is illustrated with EPI studies on phantoms and animal subjects. Implementation to different versions of EPI sequences is also described. Magn Reson Med 45:525-528, 2001. Copyright 2001 Wiley-Liss, Inc.
Vibration signal correction of unbalanced rotor due to angular speed fluctuation
NASA Astrophysics Data System (ADS)
Cao, Hongrui; He, Dong; Xi, Songtao; Chen, Xuefeng
2018-07-01
The rotational speed of a rotor is rarely constant in practice due to angular speed fluctuation, which affects the balancing accuracy of the rotor. In this paper, the effect of angular speed fluctuation on vibration responses of the unbalanced rotor is analyzed quantitatively. Then, a vibration signal correction method based on the zoom synchrosqueezing transform (ZST) and tacholess order tracking is proposed. The instantaneous angular speed (IAS) of the rotor is first extracted by the ZST and then used to calculate the instantaneous phase. The vibration signal is further resampled in the angular domain to reduce the effect of angular speed fluctuation. The signal obtained in the angular domain is transformed into the order domain using the discrete Fourier transform (DFT) to estimate the amplitude and phase of the vibration signal. Simulated and experimental results show that the proposed method can successfully correct the amplitude and phase of the vibration signal affected by angular speed fluctuation.
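The resampling core of the method can be sketched in a few lines of Python; the IAS array is assumed to come from the ZST step, which is not reproduced here, and the order-1 bin lookup is a simplification of full order tracking.

    import numpy as np

    def angular_resample(signal, fs, ias, samples_per_rev=256):
        # ias: instantaneous angular speed (rad/s) at each sample
        phase = np.cumsum(ias) / fs                   # instantaneous phase (rad)
        n_rev = phase[-1] / (2 * np.pi)
        grid = np.linspace(0, phase[-1], int(n_rev * samples_per_rev))
        resampled = np.interp(grid, phase, signal)    # uniform spacing in angle
        spec = 2 * np.fft.rfft(resampled) / len(resampled)
        k = min(int(round(n_rev)), len(spec) - 1)     # order-1 bin: one cycle/rev
        return resampled, np.abs(spec[k]), np.angle(spec[k])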
Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng
2018-03-05
The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and the extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.
Grayscale inhomogeneity correction method for multiple mosaicked electron microscope images
NASA Astrophysics Data System (ADS)
Zhou, Fangxu; Chen, Xi; Sun, Rong; Han, Hua
2018-04-01
Electron microscope image stitching is highly desirable for acquiring microscopic-resolution images of large target scenes in neuroscience. However, the result of mosaicking multiple electron microscope images may exhibit severe grayscale inhomogeneity due to the instability of the electron microscope system and registration errors, which degrades the visual quality of the mosaicked EM images and complicates follow-up processing such as automatic object recognition. Consequently, a grayscale correction method for multiple mosaicked electron microscope images is indispensable in these areas. Different from most previous grayscale correction methods, this paper designs a grayscale correction process for multiple EM images that tackles the difficulty of correcting multiple images jointly and achieves grayscale consistency in the overlap regions. We adjust the overall grayscale of the mosaicked images with the location and grayscale information of manually selected seed images, and then fuse local overlap regions between adjacent images using Poisson image editing. Experimental results demonstrate the effectiveness of our proposed method.
How does bias correction of RCM precipitation affect modelled runoff?
NASA Astrophysics Data System (ADS)
Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Vaze, J.; Evans, J. P.
2014-09-01
Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the difference between the tested methods is small in the modelling experiments here (and as reported in the literature), mainly because of the substantial corrections required and inconsistent errors over time (non-stationarity). The errors remaining in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome the limitations of RCMs in simulating precipitation sequences, which affect runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.
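Of the tested approaches, empirical quantile mapping is the simplest to state; a minimal Python version is below (the two-state gamma distribution mapping, the treatment of drizzle days, and any handling of non-stationarity are deliberately omitted).

    import numpy as np

    def quantile_map(rcm_hist, obs_hist, rcm_new, n_q=100):
        # match the RCM precipitation CDF over the calibration period to the
        # observed CDF, then apply the same empirical mapping to new values
        q = np.linspace(0, 1, n_q)
        rcm_q = np.quantile(rcm_hist, q)
        obs_q = np.quantile(obs_hist, q)
        return np.interp(rcm_new, rcm_q, obs_q)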
Yue, Dan; Nie, Haitao; Li, Ye; Ying, Changsheng
2018-03-01
Wavefront sensorless (WFSless) adaptive optics (AO) systems have been widely studied in recent years. To reach optimum results, such systems require an efficient correction method. This paper presents a fast wavefront correction approach for a WFSless AO system based mainly on the linear phase diversity (PD) technique. The fast closed-loop control algorithm is set up based on the linear relationship between the drive voltage of the deformable mirror (DM) and the far-field images of the system, which is obtained through the linear PD algorithm combined with the influence function of the DM. A large number of phase screens under different turbulence strengths are simulated to test the performance of the proposed method. The numerical simulation results show that the method has a fast convergence rate and strong correction ability: a few correction iterations achieve good correction results and effectively improve the imaging quality of the system, while requiring fewer CCD measurements.
Efficient bias correction for magnetic resonance image denoising.
Mukherjee, Partha Sarathi; Qiu, Peihua
2013-05-30
Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. Copyright © 2012 John Wiley & Sons, Ltd.
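For context, the conventional closed-form correction that the proposed regression/Monte Carlo formula improves upon uses the Rician second moment, E[M²] = A² + 2σ²; a minimal Python version is below (this is the standard moment correction, not the method of the paper).

    import numpy as np

    def rician_moment_correction(m, sigma):
        # m: observed magnitude image; sigma: Gaussian noise level per channel
        # estimate of the true intensity A from E[M^2] = A^2 + 2*sigma^2
        return np.sqrt(np.maximum(m ** 2 - 2.0 * sigma ** 2, 0.0))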
Extension of the Haseman-Elston regression model to longitudinal data.
Won, Sungho; Elston, Robert C; Park, Taesung
2006-01-01
We propose an extension to longitudinal data of the Haseman and Elston regression method for linkage analysis. The proposed model is a mixed model having several random effects. As response variable, we investigate the sibship sample mean corrected cross-product (smHE) and the BLUP-mean corrected cross-product (pmHE), comparing them with the original squared difference (oHE), the overall mean corrected cross-product (rHE), and the weighted average of the squared difference and the squared mean-corrected sum (wHE). The proposed model allows for the correlation structure of longitudinal data. Also, the model can test for gene × time interaction to discover genetic variation over time. The model was applied in an analysis of the Genetic Analysis Workshop 13 (GAW13) simulated dataset for a quantitative trait simulating systolic blood pressure. Independence models did not preserve the test sizes, while the mixed models with both family and sibpair random effects tended to preserve size well. Copyright 2006 S. Karger AG, Basel.
Analysis and correction of gradient nonlinearity bias in ADC measurements
Malyarenko, Dariya I.; Ross, Brian D.; Chenevert, Thomas L.
2013-01-01
Purpose: Gradient nonlinearity of MRI systems leads to spatially dependent b-values and consequently high non-uniformity errors (10–20%) in ADC measurements over clinically relevant fields of view. This work seeks a practical correction procedure that effectively reduces the observed ADC bias for media of arbitrary anisotropy in the fewest measurements. Methods: An all-inclusive bias analysis considers spatial and time-domain cross-terms for diffusion and imaging gradients. The proposed correction is based on rotation of the gradient nonlinearity tensor into the diffusion gradient frame, where the spatial bias of the b-matrix can be approximated by its Euclidean norm. The correction efficiency of the proposed procedure is numerically evaluated for a range of model diffusion tensor anisotropies and orientations. Results: The spatial dependence of the nonlinearity correction terms accounts for the bulk (75–95%) of the ADC bias for FA = 0.3–0.9. Residual ADC non-uniformity errors are amplified for anisotropic diffusion. This approximation obviates the need for a full diffusion tensor measurement and diagonalization to derive a corrected ADC. Practical scenarios are outlined for implementation of the correction on clinical MRI systems. Conclusions: The proposed simplified correction algorithm appears sufficient to control ADC non-uniformity errors in clinical studies using three orthogonal diffusion measurements. The most efficient reduction of ADC bias for anisotropic media is achieved with non-lab-based diffusion gradients. PMID:23794533
Fringe-period selection for a multifrequency fringe-projection phase unwrapping method
NASA Astrophysics Data System (ADS)
Zhang, Chunwei; Zhao, Hong; Jiang, Kejian
2016-08-01
The multi-frequency fringe-projection phase unwrapping method (MFPPUM) is a typical phase unwrapping algorithm for fringe projection profilometry. It has the advantage of accomplishing phase unwrapping correctly even in the presence of surface discontinuities. If the fringe frequency ratio of the MFPPUM is too large, fringe order error (FOE) may be triggered. FOE will result in phase unwrapping error. It is preferable for the phase unwrapping to remain correct while the fewest sets of lower-frequency fringe patterns are used. To achieve this goal, in this paper a parameter called fringe order inaccuracy (FOI) is defined, the dominant factors which may induce FOE are theoretically analyzed, a method to optimally select the fringe periods for the MFPPUM is proposed with the aid of FOI, and experiments are conducted to investigate the impact of the dominant factors on phase unwrapping and to demonstrate the validity of the proposed method. Some novel phenomena are revealed by these experiments. The proposed method helps to optimally select the fringe periods and detect phase unwrapping error for the MFPPUM.
A level set method for cupping artifact correction in cone-beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Shipeng; Li, Haibo; Ge, Qi
2015-08-15
Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts.
A positivity-preserving, implicit defect-correction multigrid method for turbulent combustion
NASA Astrophysics Data System (ADS)
Wasserman, M.; Mor-Yossef, Y.; Greenberg, J. B.
2016-07-01
A novel, robust multigrid method for the simulation of turbulent and chemically reacting flows is developed. A survey of previous attempts at implementing multigrid for the problems at hand indicated extensive use of artificial stabilization to overcome numerical instability arising from non-linearity of turbulence and chemistry model source-terms, small-scale physics of combustion, and loss of positivity. These issues are addressed in the current work. The highly stiff Reynolds-averaged Navier-Stokes (RANS) equations, coupled with turbulence and finite-rate chemical kinetics models, are integrated in time using the unconditionally positive-convergent (UPC) implicit method. The scheme is successfully extended in this work for use with chemical kinetics models, in a fully-coupled multigrid (FC-MG) framework. To tackle the degraded performance of multigrid methods for chemically reacting flows, two major modifications are introduced with respect to the basic, Full Approximation Storage (FAS) approach. First, a novel prolongation operator that is based on logarithmic variables is proposed to prevent loss of positivity due to coarse-grid corrections. Together with the extended UPC implicit scheme, the positivity-preserving prolongation operator guarantees unconditional positivity of turbulence quantities and species mass fractions throughout the multigrid cycle. Second, to improve the coarse-grid-correction obtained in localized regions of high chemical activity, a modified defect correction procedure is devised, and successfully applied for the first time to simulate turbulent, combusting flows. The proposed modifications to the standard multigrid algorithm create a well-rounded and robust numerical method that provides accelerated convergence, while unconditionally preserving the positivity of model equation variables. Numerical simulations of various flows involving premixed combustion demonstrate that the proposed MG method increases the efficiency by a factor of up to eight times with respect to an equivalent single-grid method, and by two times with respect to an artificially-stabilized MG method.
Noise suppressed partial volume correction for cardiac SPECT/CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Chung; Liu, Chi, E-mail: chi.liu@yale.edu
Purpose: Partial volume correction (PVC) methods typically improve quantification at the expense of increased image noise and reduced reproducibility. In this study, the authors developed a novel voxel-based PVC method that incorporates anatomical knowledge to improve quantification while suppressing noise for cardiac SPECT/CT imaging. Methods: In the proposed method, the SPECT images were first reconstructed using anatomical-based maximum a posteriori (AMAP) with Bowsher's prior to penalize noise while preserving boundaries. A sequential voxel-by-voxel PVC approach (Yang's method) was then applied on the AMAP reconstruction using a template response. This template response was obtained by forward projecting a template derived from a contrast-enhanced CT image, and then reconstructed using AMAP to model the partial volume effects (PVEs) introduced by both the system resolution and the smoothing applied during reconstruction. To evaluate the proposed noise suppressed PVC (NS-PVC), the authors first simulated two types of cardiac SPECT studies: a 99mTc-tetrofosmin myocardial perfusion scan and a 99mTc-labeled red blood cell (RBC) scan on a dedicated cardiac multiple pinhole SPECT/CT at both high and low count levels. The authors then applied the proposed method on a canine equilibrium blood pool study following injection with 99mTc-RBCs at different count levels by rebinning the list-mode data into shorter acquisitions. The proposed method was compared to MLEM reconstruction without PVC, two conventional PVC methods, including Yang's method and multitarget correction (MTC) applied on the MLEM reconstruction, and AMAP reconstruction without PVC. Results: The results showed that Yang's method improved quantification but yielded increased noise and reduced reproducibility in the regions with higher activity. MTC corrected for PVE on high-count data at the cost of amplified noise, and yielded the worst performance among all the methods tested on low-count data. AMAP effectively suppressed noise and reduced the spill-in effect in the low activity regions. However, it was unable to reduce the spill-out effect in high activity regions. NS-PVC yielded superior performance in terms of both quantitative assessment and visual image quality while improving reproducibility. Conclusions: The results suggest that NS-PVC may be a promising PVC algorithm for application in low-dose protocols, and in gated and dynamic cardiac studies with low counts.
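Yang's voxel-by-voxel correction, used here as the second stage, admits a compact illustration. In the sketch below, a simple Gaussian blur stands in for the paper's template response (which is instead obtained by forward projection and AMAP reconstruction of the CT-derived template); `fwhm_vox` and the epsilon guard are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def yang_pvc(image, template, fwhm_vox=2.0, eps=1e-6):
    """Voxel-wise PVC: scale each voxel by sharp-template / blurred-template."""
    sigma = fwhm_vox / 2.355                          # FWHM -> Gaussian sigma
    template_response = gaussian_filter(template.astype(float), sigma)
    return image * template / np.maximum(template_response, eps)
```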
NASA Astrophysics Data System (ADS)
Cleve, Marianne; Krämer, Martin; Gussew, Alexander; Reichenbach, Jürgen R.
2017-06-01
Phase and frequency corrections of magnetic resonance spectroscopic data are of major importance to obtain reliable and unambiguous metabolite estimates, as validated in recent research for single-shot scans with the same spectral fingerprint. However, when using the J-difference editing technique 1H MEGA-PRESS, misalignment between the mean edited (ON) and non-edited (OFF) spectra that may remain even after correction of the corresponding individual single-shot scans results in subtraction artefacts compromising reliable GABA quantitation. We present a fully automatic routine that iteratively and simultaneously optimizes the relative frequencies and phases between the mean ON and OFF 1H MEGA-PRESS spectra while minimizing the sum of the magnitude of the difference spectrum (L1 norm). The proposed method was applied to simulated spectra at different SNR levels with deliberately preset frequency and phase errors. Difference optimization proved to be more sensitive to small signal fluctuations, e.g. those arising from subtraction artefacts, and outperformed the alternative spectral registration approach (which, in contrast to our proposed linear approach, uses a nonlinear least-squares minimization, i.e. the L2 norm) at all investigated levels of SNR. Moreover, the proposed method was applied to 47 MEGA-PRESS datasets acquired in vivo at 3 T. The results of the alignment between the mean OFF and ON spectra were compared by applying (a) no correction, (b) difference optimization or (c) spectral registration. Since the true frequency and phase errors are not known for in vivo data, manually corrected spectra were used as the gold standard reference (d). Automatically corrected data obtained with either method (b) or method (c) showed distinct improvements in spectral quality, as revealed by the mean Pearson correlation coefficient between corresponding real-part mean DIFF spectra of Rbd = 0.997 ± 0.003 (method (b) vs. (d)), compared to Rad = 0.764 ± 0.220 (method (a) vs. (d)) with no alignment between OFF and ON. Method (c) revealed a slightly lower correlation coefficient of Rcd = 0.972 ± 0.028 compared to Rbd, which can be ascribed to small remaining subtraction artefacts in the final DIFF spectrum. In conclusion, difference optimization performs robustly with no restrictions regarding the input data range or user intervention and represents a complementary tool to optimize the final DIFF spectrum following the mandatory frequency and phase corrections of single ON and OFF scans prior to averaging.
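The core of the difference-optimization routine is a two-parameter search that minimizes the L1 norm of the difference spectrum. A minimal sketch, assuming time-domain FIDs for the mean ON and OFF scans; the Nelder-Mead choice and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def align_on_to_off(fid_on, fid_off, dwell_time):
    """Find (df, phi) minimizing the L1 norm of the ON-OFF difference spectrum."""
    t = np.arange(fid_on.size) * dwell_time

    def l1_cost(params):
        df, phi = params                              # Hz and radians
        fid_adj = fid_on * np.exp(1j * (2 * np.pi * df * t + phi))
        diff = np.fft.fft(fid_adj) - np.fft.fft(fid_off)
        return np.sum(np.abs(diff))                   # L1 norm of the difference

    res = minimize(l1_cost, x0=[0.0, 0.0], method="Nelder-Mead")
    df, phi = res.x
    return fid_on * np.exp(1j * (2 * np.pi * df * t + phi)), df, phi
```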
Surface corrections for peridynamic models in elasticity and fracture
NASA Astrophysics Data System (ADS)
Le, Q. V.; Bobaru, F.
2018-04-01
Peridynamic models are derived by assuming that a material point is located in the bulk. Near a surface or boundary, material points do not have a full non-local neighborhood. This leads to effective material properties near the surface of a peridynamic model being slightly different from those in the bulk. A number of methods/algorithms have been proposed recently for correcting this peridynamic surface effect. In this study, we investigate the efficacy and computational cost of peridynamic surface correction methods for elasticity and fracture. We provide practical suggestions for reducing the peridynamic surface effect.
WE-G-18A-02: Calibration-Free Combined KV/MV Short Scan CBCT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, M; Loo, B; Bazalova, M
Purpose: To combine orthogonal kilovoltage (kV) and megavoltage (MV) projection data for short scan cone-beam CT to reduce imaging time on current radiation treatment systems, using a calibration-free gain correction method. Methods: Combining two orthogonal projection data sets for kV and MV imaging hardware can reduce the scan angle to as small as 110° (90° + fan) such that the total scan time is ∼18 seconds, or within a breath hold. To obtain an accurate reconstruction, the MV projection data are first corrected using linear regression on the redundant data from the start and end of the sinogram, and then the combined data are reconstructed using the FDK method. To correct for the different changes of attenuation coefficients in kV/MV between soft tissue and bone, the forward projections of the segmented bone and soft tissue from the first reconstruction in the redundant region are added to the linear regression model. The MV data are corrected again using the additional information from the segmented image, and combined with kV for a second FDK reconstruction. We simulated polychromatic 120 kVp (conventional a-Si EPID with CsI) and 2.5 MVp (prototype high-DQE MV detector) projection data with Poisson noise using the XCAT phantom. The gain correction and combined kV/MV short scan reconstructions were tested with head and thorax cases, and simple contrast-to-noise ratio measurements were made in a low-contrast pattern in the head. Results: The FDK reconstruction using the proposed gain correction method can effectively reduce artifacts caused by the differences of attenuation coefficients in the kV/MV data. The CNRs of the short scans for kV, MV, and kV/MV are 5.0, 2.6 and 3.4 respectively. The proposed gain correction method also works with truncated projections. Conclusion: A novel gain correction and reconstruction method was developed to generate short scan CBCT from orthogonal kV/MV projections. This work is supported by NIH Grant 5R01CA138426-05.
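The first-pass gain correction amounts to a linear fit on the redundant rays. A minimal sketch, assuming the matched redundant samples have already been extracted from the start and end of each sinogram (all names are illustrative):

```python
import numpy as np

def linear_gain_correction(mv_redundant, kv_redundant, mv_projections):
    """Fit kv ~ a*mv + b on redundant rays, then map all MV data to the kV scale."""
    a, b = np.polyfit(mv_redundant, kv_redundant, deg=1)
    return a * mv_projections + b
```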
Simultaneous digital super-resolution and nonuniformity correction for infrared imaging systems.
Meza, Pablo; Machuca, Guillermo; Torres, Sergio; Martin, Cesar San; Vera, Esteban
2015-07-20
In this article, we present a novel algorithm to achieve simultaneous digital super-resolution and nonuniformity correction from a sequence of infrared images. We propose to use spatial regularization terms that exploit nonlocal means and the absence of spatial correlation between the scene and the nonuniformity noise sources. We derive an iterative optimization algorithm based on a gradient descent minimization strategy. Results from infrared image sequences corrupted with simulated and real fixed-pattern noise show a competitive performance compared with state-of-the-art methods. A qualitative analysis on the experimental results obtained with images from a variety of infrared cameras indicates that the proposed method provides super-resolution images with significantly less fixed-pattern noise.
Matsuda, Atsushi; Schermelleh, Lothar; Hirano, Yasuhiro; Haraguchi, Tokuko; Hiraoka, Yasushi
2018-05-15
Correction of chromatic shift is necessary for precise registration of multicolor fluorescence images of biological specimens. New emerging technologies in fluorescence microscopy with increasing spatial resolution and penetration depth have prompted the need for more accurate methods to correct chromatic aberration. However, the amount of chromatic shift of the region of interest in biological samples often deviates from the theoretical prediction because of unknown dispersion in the biological samples. To measure and correct chromatic shift in biological samples, we developed a quadrisection phase correlation approach to computationally calculate translation, rotation, and magnification from reference images. Furthermore, to account for local chromatic shifts, images are split into smaller elements, for which the phase correlation between channels is measured individually and corrected accordingly. We implemented this method in an easy-to-use open-source software package, called Chromagnon, that is able to correct shifts with a 3D accuracy of approximately 15 nm. Applying this software, we quantified the level of uncertainty in chromatic shift correction, depending on the imaging modality used, and for different existing calibration methods, along with the proposed one. Finally, we provide guidelines to choose the optimal chromatic shift registration method for any given situation.
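The translational component of such a measurement can be illustrated with a plain phase-correlation step; Chromagnon's quadrisection extension, subpixel refinement, and rotation/magnification estimation are omitted from this hedged sketch.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (dy, dx) aligning `mov` to `ref` via the cross-power spectrum."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(mov)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.maximum(np.abs(cross_power), 1e-12)   # keep phase only
    corr = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the half-size wrap around to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```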
Sun, You-Wen; Liu, Wen-Qing; Wang, Shi-Mei; Huang, Shu-Hua; Yu, Xiao-Man
2011-10-01
A method of interference correction for nondispersive infrared (NDIR) multi-component gas analysis is described. Following the successive integral gas absorption models and methods, the influence of temperature and air pressure on the integral line strengths and line shapes was considered, and, based on Lorentz line shapes, the absorption cross sections and response coefficients of H2O, CO2, CO, and NO on each filter channel were obtained. Four-dimensional linear regression equations for interference correction were established from the response coefficients, and the absorption cross-interference was corrected by solving these multi-dimensional linear regression equations; after interference correction, the pure absorbance signal on each filter channel is controlled only by the corresponding target gas concentration. When the sample cell was filled with a gas mixture containing CO, NO and CO2 in a fixed concentration proportion, the pure absorbance after interference correction was used for concentration inversion; the inversion concentration error is 2.0% for CO2, 1.6% for CO, and 1.7% for NO. Both theory and experiment show that the proposed interference correction method for NDIR multi-component gas analysis is feasible.
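The interference correction reduces to solving a small linear system: measured channel absorbances equal a response matrix times the gas concentrations. The sketch below illustrates this with made-up numbers; in practice the response coefficients come from the temperature- and pressure-dependent cross sections described above.

```python
import numpy as np

# response[i, j]: absorbance response of channel i to gas j (H2O, CO2, CO, NO).
# All values here are illustrative placeholders, not calibration data.
response = np.array([
    [0.90, 0.05, 0.02, 0.01],   # H2O channel
    [0.03, 0.85, 0.04, 0.02],   # CO2 channel
    [0.02, 0.06, 0.80, 0.03],   # CO channel
    [0.01, 0.03, 0.05, 0.88],   # NO channel
])
measured = np.array([0.12, 0.45, 0.20, 0.15])   # raw absorbance per channel

# Solve the regression equations for the concentrations (arbitrary units).
concentrations, *_ = np.linalg.lstsq(response, measured, rcond=None)
# Pure per-channel signal: each channel's response to its own target gas only.
pure_absorbance = np.diag(response) * concentrations
```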
Infrared fixed-pattern noise reduction method based on Shearlet Transform
NASA Astrophysics Data System (ADS)
Rong, Shenghui; Zhou, Huixin; Zhao, Dong; Cheng, Kuanhong; Qian, Kun; Qin, Hanlin
2018-06-01
Non-uniformity correction (NUC) is an effective way to reduce fixed-pattern noise (FPN) and improve infrared image quality. Temporal high-pass NUC methods are practical because of their simple implementation. However, traditional temporal high-pass NUC methods rely heavily on scene motion and suffer from image ghosting and blurring. Thus, this paper proposes an improved NUC method based on the Shearlet Transform (ST). First, the raw infrared image is decomposed into multiscale and multi-orientation subbands by ST, and the FPN component mainly exists in certain high-frequency subbands. Then, the high-frequency subbands are processed by a temporal filter to extract the FPN, exploiting its low temporal-frequency characteristics. In addition, each subband has a confidence parameter that determines the degree of FPN, estimated adaptively from the subband variance. Finally, NUC is achieved by subtracting the estimated FPN component from the original subbands, and the corrected infrared image is obtained by the inverse ST. The performance of the proposed method is evaluated thoroughly with real and synthetic infrared image sequences. Experimental results indicate that the proposed method can substantially reduce FPN, yielding lower roughness and RMSE.
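The temporal-filtering step can be sketched independently of the shearlet machinery. Below, a recursive temporal low-pass estimates the (temporally static) FPN in one high-frequency subband and subtracts it; the fixed `alpha` is a simplification standing in for the paper's variance-based, per-subband confidence weighting, and the shearlet decomposition itself is assumed done elsewhere.

```python
import numpy as np

def nuc_temporal_subband(subband_frames, alpha=0.05):
    """subband_frames: (T, H, W) float array, one high-frequency subband over time."""
    fpn_estimate = np.zeros_like(subband_frames[0])
    corrected = np.empty_like(subband_frames)
    for t, frame in enumerate(subband_frames):
        # Recursive temporal low-pass: converges to the static (FPN) component,
        # since scene content in the subband varies over time while FPN does not.
        fpn_estimate = (1 - alpha) * fpn_estimate + alpha * frame
        corrected[t] = frame - fpn_estimate
    return corrected
```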
Wang, Ning; Chen, Jiajun; Zhang, Kun; Chen, Mingming; Jia, Hongzhi
2017-11-21
As thermoelectric coolers (TECs) have become highly integrated in high-heat-flux chips and high-power devices, the parasitic effect between component layers has become increasingly obvious. In this paper, a cyclic correction method for the TEC model is proposed using the equivalent parameters of the proposed simplified model, which were refined from the intrinsic parameters and parasitic thermal conductance. The results show that the simplified model agrees well with the data of a commercial TEC under different heat loads. Furthermore, the temperature difference of the simplified model is closer to the experimental data than the conventional model and the model containing parasitic thermal conductance at large heat loads. The average errors in the temperature difference between the proposed simplified model and the experimental data are no more than 1.6 K, and the error is only 0.13 K when the absorbed heat power Qc is equal to 80% of the maximum achievable absorbed heat power Qmax. The proposed method and model provide a more accurate solution for integrated TECs that are small in size.
Quantum corrections to holographic mutual information
Agon, Cesar A.; Faulkner, Thomas
2016-08-22
We compute the leading contribution to the mutual information (MI) of two disjoint spheres in the large distance regime for arbitrary conformal field theories (CFT) in any dimension. This is achieved by refining the operator product expansion method introduced by Cardy [1]. For CFTs with holographic duals the leading contribution to the MI at long distances comes from bulk quantum corrections to the Ryu-Takayanagi area formula. According to the FLM proposal [2] this equals the bulk MI between the two disjoint regions spanned by the boundary spheres and their corresponding minimal area surfaces. We compute this quantum correction and provide in this way a non-trivial check of the FLM proposal.
NASA Astrophysics Data System (ADS)
Abu Anas, Emran Mohammad; Kim, Jae Gon; Lee, Soo Yeol; Kamrul Hasan, Md
2011-10-01
The use of an x-ray flat panel detector is increasingly becoming popular in 3D cone beam volume CT machines. Due to imperfections in the semiconductor array manufacturing process, the cone beam projection data are often corrupted by different types of abnormalities, which cause severe ring and radiant artifacts in a cone beam reconstruction image, and as a result, the diagnostic image quality is degraded. In this paper, a novel technique is presented for the correction of error in the 2D cone beam projections due to abnormalities often observed in 2D x-ray flat panel detectors. Template images are derived from the responses of the detector pixels using their statistical properties, and then an effective non-causal derivative-based detection algorithm in 2D space is presented for the detection of defective and mis-calibrated detector elements separately. An image inpainting-based 3D correction scheme is proposed for the estimation of responses of defective detector elements, and the responses of the mis-calibrated detector elements are corrected using the normalization technique. For real-time implementation, a simplification of the proposed off-line method is also suggested. Finally, the proposed algorithms are tested using different real cone beam volume CT images, and the experimental results demonstrate that the proposed methods can effectively remove ring and radiant artifacts from cone beam volume CT images compared to other reported techniques in the literature.
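A much-simplified version of the detection-plus-correction idea can be written in a few lines: flag detector elements whose template response deviates strongly from a local median, then replace them. The robust threshold and median-based replacement below are stand-ins for the paper's non-causal 2D derivative detector and 3D inpainting scheme.

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_and_correct(template, k=6.0):
    """Flag pixels deviating strongly from a 5x5 median, replace them by the median."""
    template = template.astype(float)
    med = median_filter(template, size=5)
    resid = template - med
    sigma = np.median(np.abs(resid)) / 0.6745 + 1e-12   # robust scale estimate
    bad = np.abs(resid) > k * sigma
    fixed = np.where(bad, med, template)                # inpaint flagged elements
    return fixed, bad
```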
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2018-02-01
Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
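A hedged sketch of the two-step structure, using scikit-learn's FastICA and a bare empirical quantile mapping; the array shapes, component count, and the way observations are projected into the model's ICA basis are all assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from sklearn.decomposition import FastICA

def quantile_map(model, obs):
    """Map each model value to the obs value at the same empirical quantile."""
    ranks = np.searchsorted(np.sort(model), model, side="right") / model.size
    return np.quantile(obs, np.clip(ranks, 0.0, 1.0))

def ica_bias_correct(model_grid, obs_grid, n_components=10):
    """model_grid, obs_grid: (time, n_gridcells) arrays on a common grid."""
    ica = FastICA(n_components=n_components, random_state=0)
    s_model = ica.fit_transform(model_grid)     # statistically independent signals
    s_obs = ica.transform(obs_grid)             # observations in the same basis
    s_corr = np.column_stack([quantile_map(s_model[:, k], s_obs[:, k])
                              for k in range(s_model.shape[1])])
    # Back-transform to the grid; grid-scale bias correction follows as step two.
    return ica.inverse_transform(s_corr)
```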
Lin, Muqing; Chan, Siwa; Chen, Jeon-Hor; Chang, Daniel; Nie, Ke; Chen, Shih-Ting; Lin, Cheng-Ju; Shih, Tzu-Ching; Nalcioglu, Orhan; Su, Min-Ying
2011-01-01
Quantitative breast density is known as a strong risk factor associated with the development of breast cancer. Measurement of breast density based on three-dimensional breast MRI may provide very useful information. One important step for quantitative analysis of breast density on MRI is the correction of field inhomogeneity to allow an accurate segmentation of the fibroglandular tissue (dense tissue). A new bias field correction method combining the nonparametric nonuniformity normalization (N3) algorithm and a fuzzy-C-means (FCM)-based inhomogeneity correction algorithm is developed in this work. The analysis is performed on non-fat-sat T1-weighted images acquired using a 1.5 T MRI scanner. A total of 60 breasts from 30 healthy volunteers were analyzed. N3 is known as a robust correction method, but it cannot correct a strong bias field over a large area. The FCM-based algorithm can correct the bias field over a large area, but it may change the tissue contrast and affect the segmentation quality. The proposed algorithm applies N3 first, followed by FCM, and then the generated bias field is smoothed using a Gaussian kernel and B-spline surface fitting to minimize the problem of mistakenly changed tissue contrast. The segmentation results based on the N3+FCM corrected images were compared to the N3 and FCM alone corrected images and to images corrected with another method, coherent local intensity clustering (CLIC). The segmentation quality based on the different correction methods was evaluated by a radiologist and ranked. The authors demonstrated that the iterative N3+FCM correction method brightens the signal intensity of fatty tissues and separates the histogram peaks between the fibroglandular and fatty tissues to allow an accurate segmentation between them. In the first reading session, the radiologist found a (N3+FCM > N3 > FCM) ranking in 17 breasts, (N3+FCM > N3 = FCM) in 7 breasts, (N3+FCM = N3 > FCM) in 32 breasts, (N3+FCM = N3 = FCM) in 2 breasts, and (N3 > N3+FCM > FCM) in 2 breasts. The results of the second reading session were similar. The performance in each pairwise Wilcoxon signed-rank test is significant, showing N3+FCM superior to both N3 and FCM, and N3 superior to FCM. The performance of the new N3+FCM algorithm was comparable to that of CLIC, showing equivalent quality in 57/60 breasts. Choosing an appropriate bias field correction method is a very important preprocessing step to allow an accurate segmentation of fibroglandular tissues based on breast MRI for quantitative measurement of breast density. Both the proposed N3+FCM algorithm and CLIC yield satisfactory results.
A brain MRI bias field correction method created in the Gaussian multi-scale space
NASA Astrophysics Data System (ADS)
Chen, Mingsheng; Qin, Mingxin
2017-07-01
A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to downstream image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolving the inhomogeneous MR image with a two-dimensional Gaussian function. In this multi-scale space, the method retrieves the image details from the difference between the original image and the convolved image. It then obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias field-corrected MR image is obtained after a gamma correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
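A compact sketch of the pipeline described above: build detail layers as original-minus-Gaussian-convolution at several scales, recombine them with weights, and finish with a gamma correction. The equal weights, scale list, and gamma value are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_bias_correct(img, sigmas=(2, 4, 8, 16), weights=None, gamma=0.8):
    img = img.astype(float)
    # Detail layer per scale: original minus its Gaussian-smoothed version.
    details = [img - gaussian_filter(img, s) for s in sigmas]
    if weights is None:
        weights = np.ones(len(sigmas)) / len(sigmas)
    corrected = sum(w * d for w, d in zip(weights, details))
    corrected -= corrected.min()                  # shift to non-negative range
    corrected /= max(corrected.max(), 1e-12)      # normalize to [0, 1]
    return corrected ** gamma                     # contrast/brightness enhancement
```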
He, Hua; McDermott, Michael P.
2012-01-01
Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
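The following sketch illustrates the stratification idea: fit a verification-propensity model separately for test-positives and test-negatives, cut the propensity score into strata, and estimate disease prevalence within each stratum from its verified members. The logistic model and quintile cut points are assumptions, not the authors' exact estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def prevalence_given_test(test, covars, verified, disease, t):
    """Propensity-stratified estimate of P(D=1 | T=t) under MAR verification."""
    sel = test == t
    ps = (LogisticRegression()
          .fit(covars[sel], verified[sel])
          .predict_proba(covars[sel])[:, 1])              # P(verified | covars)
    strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
    prev = 0.0
    for s in np.unique(strata):
        in_s = strata == s
        ver = in_s & (verified[sel] == 1)
        if ver.any():
            # Within a homogeneous-propensity stratum, the verified members'
            # prevalence estimates the stratum prevalence.
            prev += in_s.mean() * disease[sel][ver].mean()
    return prev

# Sensitivity P(T=1 | D=1) then follows from Bayes' rule, combining these
# corrected prevalences with the observed P(T=1) and P(T=0).
```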
Good, Nicholas; Mölter, Anna; Peel, Jennifer L; Volckens, John
2017-07-01
The AE51 micro-Aethalometer (microAeth) is a popular and useful tool for assessing personal exposure to particulate black carbon (BC). However, few users of the AE51 are aware that its measurements are biased low (by up to 70%) due to the accumulation of BC on the filter substrate over time; previous studies of personal black carbon exposure are likely to have suffered from this bias. Although methods to correct for bias in micro-Aethalometer measurements of particulate black carbon have been proposed, these methods have not been verified in the context of personal exposure assessment. Here, five Aethalometer loading correction equations based on published methods were evaluated. Laboratory-generated aerosols of varying black carbon content (ammonium sulfate, Aquadag and NIST diesel particulate matter) were used to assess the performance of these methods. Filters from a personal exposure assessment study were also analyzed to determine how the correction methods performed for real-world samples. Standard correction equations produced correction factors with root mean square errors of 0.10 to 0.13 and mean bias within ±0.10. An optimized correction equation is also presented, along with sampling recommendations for minimizing bias when assessing personal exposure to BC using the AE51 micro-Aethalometer.
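Loading corrections of the kind evaluated here typically scale the uncorrected reading by a function of the filter attenuation. A one-line sketch of such a form (the functional form is Virkkula-style; the k value is purely illustrative, whereas the paper fits an optimized equation to calibration data):

```python
def loading_corrected_bc(bc_raw, atn, k=0.004):
    """bc_raw: uncorrected BC reading; atn: filter attenuation; k: loading parameter."""
    return (1.0 + k * atn) * bc_raw
```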
LCC demons with divergence term for liver MRI motion correction
NASA Astrophysics Data System (ADS)
Oh, Jihun; Martin, Diego; Skrinjar, Oskar
2010-03-01
Contrast-enhanced liver MR image sequences acquired at multiple times before and after contrast administration have been shown to be critically important for the diagnosis and monitoring of liver tumors and may be used for the quantification of liver inflammation and fibrosis. However, over multiple acquisitions, the liver moves and deforms due to patient and respiratory motion. In order to analyze contrast agent uptake, one first needs to correct for liver motion. In this paper we present a method for the motion correction of dynamic contrast-enhanced liver MR images. For this purpose we use a modified version of the Local Correlation Coefficient (LCC) Demons non-rigid registration method. Since the liver is nearly incompressible, its displacement field has small divergence. For this reason we add a divergence term to the energy that is minimized in the LCC Demons method. We applied the method to four sequences of contrast-enhanced liver MR images. Each sequence had a pre-contrast scan and seven post-contrast scans. For each post-contrast scan we corrected for the liver motion relative to the pre-contrast scan. Quantitative evaluation showed that the proposed method improved the liver alignment relative to the non-corrected and translation-corrected scans, and visual inspection showed no visible misalignment between the motion corrected contrast-enhanced scans and the pre-contrast scan.
Jahani, Sahar; Setarehdan, Seyed K; Boas, David A; Yücel, Meryem A
2018-01-01
Motion artifact contamination in near-infrared spectroscopy (NIRS) data has become an important challenge in realizing the full potential of NIRS for real-life applications. Various motion correction algorithms have been used to alleviate the effect of motion artifacts on the estimation of the hemodynamic response function. While smoothing methods, such as wavelet filtering, are excellent in removing motion-induced sharp spikes, baseline shifts in the signal remain after this type of filtering. Methods such as spline interpolation, on the other hand, can properly correct baseline shifts; however, they leave residual high-frequency spikes. We propose a hybrid method that takes advantage of different correction algorithms. This method first identifies the baseline shifts and corrects them using a spline interpolation method or targeted principal component analysis. The remaining spikes, on the other hand, are corrected by smoothing methods: Savitzky-Golay (SG) filtering or robust locally weighted regression and smoothing. We have compared our new approach with the existing correction algorithms in terms of hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error, Pearson's correlation, and the area under the receiver operating characteristic curve. We found that the spline-SG hybrid method provides reasonable improvements in all these metrics with a relatively short computational time. The dataset and the code used in this study are made available online for the use of all interested researchers.
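A toy version of the spline-SG hybrid: detect baseline jumps in the first difference, level each detected step, then smooth the residual spikes with a Savitzky-Golay filter. The jump detector and all thresholds are simplified assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.signal import savgol_filter

def hybrid_motion_correct(signal, jump_thresh=5.0, window=31, polyorder=3):
    d = np.diff(signal)
    sigma = np.median(np.abs(d)) / 0.6745 + 1e-12        # robust noise scale
    jumps = np.where(np.abs(d) > jump_thresh * sigma)[0]
    corrected = signal.astype(float).copy()
    # Remove each detected baseline shift by subtracting the step from all later samples.
    for j in jumps:
        corrected[j + 1:] -= d[j]
    # Smooth the remaining sharp spikes.
    return savgol_filter(corrected, window_length=window, polyorder=polyorder)
```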
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods based on the nonparametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. The limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed for the multi-pollutant setting.
Adaptive identification of vessel's added moments of inertia with program motion
NASA Astrophysics Data System (ADS)
Alyshev, A. S.; Melnikov, V. G.
2018-05-01
In this paper, we propose a new experimental method for determining the moments of inertia of a ship model. The paper gives a brief review of existing methods and describes the proposed method, the experimental stand, the test procedures, the calculation formulas, and the experimental results. The proposed method is based on an energy approach with special program motions. The ship model is fixed in a special rack consisting of a torsion element and a set of additional servo drives with flywheels (reaction wheels), which correct the motion. The servo drives with an adaptive controller provide the symmetry of the motion, which is necessary for the proposed identification procedure. The effectiveness of the proposed approach is confirmed by experimental results.
Proximity correction of high-dosed frame with PROXECCO
NASA Astrophysics Data System (ADS)
Eisenmann, Hans; Waas, Thomas; Hartmann, Hans
1994-05-01
The usefulness of electron beam lithography is strongly related to the efficiency and quality of the methods used for proximity correction. This paper addresses this issue by proposing an extension to the new proximity correction program PROXECCO. The combination of a framing step with PROXECCO produces a pattern with very high edge accuracy and still allows use of the fast correction procedure. Writing the frame with a higher dose imitates a fine-resolution correction in which the coarse part is disregarded. After handling the high-resolution effect by means of framing, an additional coarse correction is still needed. Higher doses make a higher contribution to the proximity effect. This additional proximity effect is taken into account with the help of the multi-dose input of PROXECCO. The dose of the frame is variable, depending on the deposited energy contributed by backscattering from the proximity effect. Simulation confirms the very high edge accuracy of the applied method.
Dual energy approach for cone beam artifacts correction
NASA Astrophysics Data System (ADS)
Han, Chulhee; Choi, Shinkook; Lee, Changwoo; Baek, Jongduk
2017-03-01
Cone beam computed tomography systems generate 3D volumetric images, which provide further morphological information compared to radiography and tomosynthesis systems. However, images reconstructed by the FDK algorithm contain cone beam artifacts when the cone angle is large. To reduce the cone beam artifacts, a two-pass algorithm has been proposed. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by high-density materials, and proposes an effective method to estimate the error images (i.e., cone beam artifact images) produced by the high-density materials. While this approach is simple and effective for a small cone angle (i.e., 5-7 degrees), the correction performance degrades as the cone angle increases. In this work, we propose a new method to reduce the cone beam artifacts using a dual energy technique. The basic idea of the proposed method is to estimate the error images generated by the high-density materials more reliably. To do this, projection data of the high-density materials are extracted from dual energy CT projection data using a material decomposition technique, and then reconstructed by iterative reconstruction with total-variation regularization. The reconstructed high-density materials are used to estimate the error images from the original FDK images. The performance of the proposed method is compared with the two-pass algorithm using root mean square errors. The results show that the proposed method reduces the cone beam artifacts more effectively, especially at large cone angles.
Aligning observed and modelled behaviour based on workflow decomposition
NASA Astrophysics Data System (ADS)
Wang, Lu; Du, YuYue; Liu, Wei
2017-09-01
When business processes are mostly supported by information systems, the availability of event logs generated from these systems, as well as the requirement for appropriate process models, is increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion in the volume of event logs. Therefore, a new process mining technique is proposed in this paper, based on a workflow decomposition method. Petri nets (PNs) are used to describe business processes, and conformance checking of event logs against process models is then investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on a state equation method from PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.
Hewavitharana, Amitha K; Abu Kassim, Nur Sofiah; Shaw, Paul Nicholas
2018-06-08
With mass spectrometric detection in liquid chromatography, co-eluting impurities affect the analyte response due to ion suppression/enhancement. The internal standard calibration method, using a co-eluting stable isotope labelled analogue of each analyte as the internal standard, is the most appropriate technique available to correct for these matrix effects. However, this technique is not without drawbacks: it proves expensive because a separate internal standard is required for each analyte, and the labelled compounds are costly or must be synthesised. Traditionally, the standard addition method has been used to overcome matrix effects in atomic spectroscopy and is a well-established method there. This paper proposes the same for mass spectrometric detection, and demonstrates that the results are comparable to those obtained with the internal standard method using labelled analogues, for a vitamin D assay. As the conventional standard addition procedure does not address procedural errors, we propose the inclusion of an additional internal standard (not co-eluting). Recoveries determined on human serum samples show that the proposed method of standard addition yields more accurate results than internal standardisation using stable isotope labelled analogues. The precision of the proposed method of standard addition is superior to that of the conventional standard addition method. Copyright © 2018 Elsevier B.V. All rights reserved.
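The standard addition calculation itself is a straight-line fit and an extrapolation to the x-intercept. A minimal numerical sketch with made-up spike levels and (internal-standard-normalized) responses:

```python
import numpy as np

added = np.array([0.0, 10.0, 20.0, 40.0])          # added analyte (ng/mL), illustrative
response = np.array([152.0, 248.0, 351.0, 549.0])  # e.g. IS-normalized peak areas

slope, intercept = np.polyfit(added, response, deg=1)
# The unspiked sample concentration is the magnitude of the negative x-intercept.
c_sample = intercept / slope
print(f"sample concentration ~ {c_sample:.1f} ng/mL")
```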
NASA Astrophysics Data System (ADS)
Mao, Cuili; Lu, Rongsheng; Liu, Zhijian
2018-07-01
In fringe projection profilometry, the phase errors caused by the nonlinear intensity response of digital projectors need to be correctly compensated. In this paper, a multi-frequency inverse-phase method is proposed. The theoretical model of the periodic phase errors is analyzed. The periodic phase errors can be adaptively compensated in the wrapped phase maps by using a set of fringe patterns. The compensated phase is then unwrapped with a multi-frequency method. Compared with conventional methods, the proposed method can greatly reduce the periodic phase error without calibrating the measurement system. Simulation and experimental results are presented to demonstrate the validity of the proposed approach.
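The inverse-phase idea can be sketched generically: compute the wrapped phase from a normal N-step pattern set and from a second set whose patterns carry an extra π offset, then average the two estimates on the unit circle so the dominant periodic error harmonic cancels. This is a generic rendering of the inverse-phase technique, not the paper's exact multi-frequency algorithm.

```python
import numpy as np

def n_step_phase(frames):
    """frames: (N, H, W) fringe images with phase shifts 2*pi*k/N."""
    n = frames.shape[0]
    k = np.arange(n).reshape(-1, 1, 1)
    num = np.sum(frames * np.sin(2 * np.pi * k / n), axis=0)
    den = np.sum(frames * np.cos(2 * np.pi * k / n), axis=0)
    return -np.arctan2(num, den)            # wrapped phase in (-pi, pi]

def inverse_phase_combine(phase_normal, phase_inverse):
    """Average the normal and pi-offset phase maps; periodic errors of opposite
    sign cancel. The deliberate pi offset is removed before circular averaging."""
    z = np.exp(1j * phase_normal) + np.exp(1j * (phase_inverse - np.pi))
    return np.angle(z)
```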
NASA Astrophysics Data System (ADS)
Tu, Jihui; Sui, Haigang; Feng, Wenqing; Song, Zhina
2016-06-01
In this paper, a novel approach for building damage detection is proposed using high resolution remote sensing images and 3D GIS-model data. Traditional building damage detection methods consider only whether a building was damaged by an earthquake, and little attention has been paid to distinguishing different damage types (e.g., slightly damaged, severely damaged and totally collapsed). We therefore detect the different building damage types using both 2D and 3D features of the scene, since the real world we live in is a 3D space. The proposed method first geometrically corrects the post-disaster remote sensing image using the 3D GIS model or RPC parameters, and then detects the different building damage types using the changes in height and area between pre- and post-disaster data together with the texture features of the post-disaster image. The results, evaluated on a selected study site of the Beichuan earthquake ruins, Sichuan, show that this method is feasible and effective for building damage detection. The proposed method is also easily applicable and well suited for rapid damage assessment after natural disasters.
Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT
NASA Astrophysics Data System (ADS)
Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi
2017-05-01
Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
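For reference, the conventional baseline that the proposed fusion improves on is plain per-voxel majority voting across registered atlases, sketched below; the paper's contribution is a fusion rule that additionally removes the overlap and gap voxels that independent per-organ voting can leave, which this toy version does not do.

```python
import numpy as np

def majority_vote_fusion(atlas_labels):
    """atlas_labels: (n_atlases, ...) integer label maps registered to the target."""
    labels = np.asarray(atlas_labels)
    max_label = int(labels.max())
    # Count votes per candidate label at every voxel, then pick the winner.
    votes = np.stack([(labels == lab).sum(axis=0) for lab in range(max_label + 1)])
    return np.argmax(votes, axis=0)
```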
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, P; Schreibmann, E; Fox, T
2014-06-15
Purpose: Severe CT artifacts can impair our ability to accurately calculate proton range, thereby resulting in a clinically unacceptable treatment plan. In this work, we investigated a novel CT artifact correction method based on a coregistered MRI and investigated its ability to estimate CT HU and proton range in the presence of severe CT artifacts. Methods: The proposed method corrects corrupted CT data using a coregistered MRI to guide the mapping of CT values from a nearby artifact-free region. First, patient MRI and CT images were registered using 3D deformable image registration software based on B-splines and mutual information. The CT slice with severe artifacts was selected, as well as a nearby slice free of artifacts (e.g., 1 cm away from the artifact). The two sets of paired MRI and CT images at different slice locations were further registered by applying 2D deformable image registration. Based on the artifact-free paired MRI and CT images, a comprehensive geospatial analysis was performed to predict the correct CT HU of the CT image with severe artifacts. As a proof of concept, a known artifact was introduced that changed the ground truth CT HU values by up to 30% and produced up to 5 cm of error in proton range. The ability of the proposed method to recover the ground truth was quantified using a selected head and neck case. Results: A significant improvement in image quality was observed visually. Our proof of concept study showed that 90% of the area that had 30% errors in CT HU was corrected to within 3% of its ground truth value. Furthermore, the maximum proton range error of up to 5 cm was reduced to a 4 mm error. Conclusion: The MRI-based CT artifact correction method can improve CT image quality and proton range calculation for patients with severe CT artifacts.
A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield
Ringard, Justine; Seyler, Frederique; Linguet, Laurent
2017-01-01
Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two set approach, without taking into account the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale. PMID:28621723
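Within each hydroclimatic area, the Quantile Mapping step maps a satellite value to the gauge value at the same empirical quantile. A bare-bones sketch of that mapping (wet-day frequency handling and tail treatment are omitted; 101 interpolation nodes are an arbitrary choice):

```python
import numpy as np

def quantile_mapping(spp, gauge_ref, spp_ref, n_nodes=101):
    """Map satellite values onto the gauge distribution of their hydroclimatic area."""
    probs = np.linspace(0.0, 1.0, n_nodes)
    spp_q = np.quantile(spp_ref, probs)      # satellite reference quantiles
    gauge_q = np.quantile(gauge_ref, probs)  # gauge reference quantiles
    return np.interp(spp, spp_q, gauge_q)
```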
A New Online Calibration Method Based on Lord's Bias-Correction.
He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei
2017-09-01
Online calibration techniques are widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂ (obtained by maximum likelihood estimation [MLE]) as their true values θ; the deviation of the estimates θ̂ from their true values can therefore yield inaccurate item calibration when that deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂ which may adversely affect the item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.
NASA Astrophysics Data System (ADS)
Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan
2015-10-01
The propagation simulation method and the choice of mesh grid are both very important for obtaining correct propagation results in wave-optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer constrained by the propagation method but is freely alterable. However, the choice of mesh grid on the target plane directly influences the validity of the simulation results. An adaptive mesh-choosing method based on wave characteristics is therefore proposed together with the introduced propagation method, with which appropriate mesh grids on the target plane can be calculated to obtain satisfying results. For a complex initial wave field or propagation through inhomogeneous media, the mesh grid can likewise be calculated and set rationally according to the above method. Finally, comparison with theoretical results shows that simulations using the proposed method coincide with theory, and comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method adapts to a wider range of Fresnel-number conditions; that is, it can simulate propagation efficiently and correctly for propagation distances from almost zero to infinity. It can therefore provide better support for wave propagation applications such as atmospheric optics and laser propagation.
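For orientation, the fixed-grid angular spectrum transfer-function propagation that the proposed alterable-grid method generalizes looks like this in a few lines (wavelength and grid values would be set by the user; evanescent components are simply suppressed here):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z (same grid in and out)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)          # transfer function, no evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)
```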
75 FR 48743 - Mandatory Reporting of Greenhouse Gases
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-11
EPA is proposing to amend specific provisions in the GHG reporting rule to clarify certain provisions, to correct technical and editorial errors, and to address certain questions and issues that have arisen since promulgation. These proposed changes include providing additional information and clarity on existing requirements, allowing greater flexibility or simplified calculation methods for certain sources in a facility, amending data reporting requirements to provide additional clarity on when different types of GHG emissions need to be calculated and reported, clarifying terms and definitions in certain equations, and technical corrections.
NASA Astrophysics Data System (ADS)
Kazakis, Nikolaos A.
2018-01-01
The present comment concerns the correct presentation of an algorithm proposed in the above paper for glow-curve deconvolution in the case of a continuous distribution of trapping states. Since most researchers would use the proposed algorithm directly as published, they should be notified of its correct formulation for fitting TL glow curves of materials with a continuous trap distribution using this equation.
Reconstruction of sound source signal by analytical passive TR in the environment with airflow
NASA Astrophysics Data System (ADS)
Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu
2017-03-01
In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct the time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, yielding a corrected acoustic propagation time delay and path. The corrected time delay and path, together with the microphone array signals, are then submitted to the AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way in 3D space to reconstruct the sound source signal in an environment with airflow, instead of the numerical TR. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between AP-TR and time-domain beamforming for reconstructing the sound source signal is also discussed.
NASA Astrophysics Data System (ADS)
Yoshioka, Masahiro; Sato, Sojun; Kikuchi, Tsuneo; Matsuda, Yoichi
2006-05-01
In this study, the influence of ultrasonic nonlinear propagation on hydrophone calibration by the two-transducer reciprocity method is investigated quantitatively using the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation. It is proposed that the correction for the diffraction and attenuation of ultrasonic waves used in two-transducer reciprocity calibration can be derived using the KZK equation to remove the influence of nonlinear propagation. The validity of the correction is confirmed by comparing the sensitivities calibrated by the two-transducer reciprocity method and laser interferometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messud, J.; Dinh, P. M.; Suraud, Eric
2009-10-15
We propose a simplification of the time-dependent self-interaction correction (TD-SIC) method using two sets of orbitals, applying the optimized effective potential (OEP) method. The resulting scheme is called time-dependent 'generalized SIC-OEP'. A straightforward approximation, using the spatial localization of one set of orbitals, leads to the 'generalized SIC-Slater' formalism. We show that it represents a great improvement compared to the traditional SIC-Slater and Krieger-Li-Iafrate formalisms.
Method for measuring multiple scattering corrections between liquid scintillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.
2016-04-11
In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source at different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons that scatter multiple times. With the help of a correction to Feynman's point model theory that accounts for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
How does bias correction of regional climate model precipitation affect modelled runoff?
NASA Astrophysics Data System (ADS)
Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Wang, B.; Vaze, J.; Evans, J. P.
2015-02-01
Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the differences between the methods are small in the modelling experiments here (and as reported in the literature), mainly due to the substantial corrections required and inconsistent errors over time (non-stationarity). The errors in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome limitations of the RCM in simulating precipitation sequence, which affects runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.
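The core of the quantile-mapping approach assessed above can be sketched in a few lines. This is a generic empirical-CDF mapping on synthetic gamma-distributed daily precipitation, assumed here purely for illustration; it is not the paper's WRF data or its two-state gamma variant.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping bias correction.

    Each value in `model_fut` is mapped to the observed value that
    sits at the same quantile in the historical distributions.
    """
    q = np.linspace(0, 1, 101)
    model_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs_hist, q)
    # Locate each value's quantile in the model climate, then read
    # off the corresponding observed quantile.
    ranks = np.interp(model_fut, model_q, q)
    return np.interp(ranks, q, obs_q)

rng = np.random.default_rng(0)
obs = rng.gamma(0.8, 6.0, 3650)   # "observed" daily precipitation
mod = rng.gamma(0.6, 9.0, 3650)   # biased model precipitation
corrected = quantile_map(mod, obs, mod)
print(np.mean(obs), np.mean(mod), np.mean(corrected))  # means realigned
```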
NASA Technical Reports Server (NTRS)
Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.
1984-01-01
A technique for the radiometric correction of LANDSAT-4 Thematic Mapper data was proposed by the Canada Centre for Remote Sensing. Subsequent detailed observations of raw image data, raw radiometric calibration data, and background measurements extracted from the raw data stream on High Density Tape highlighted major shortcomings in the proposed method which, if left uncorrected, can cause severe radiometric striping in the output product. Results are presented which correlate measurements of the DC background with variations in both the image data background and the calibration samples. The effect on both raw data and on data corrected using the earlier proposed technique is explained, and the correction required for these factors as a function of individual scan line number for each detector is described. It is shown how the revised technique can be incorporated into an operational environment.
A symmetric multivariate leakage correction for MEG connectomes
Colclough, G.L.; Brookes, M.J.; Smith, S.M.; Woolrich, M.W.
2015-01-01
Ambiguities in the source reconstruction of magnetoencephalographic (MEG) measurements can cause spurious correlations between estimated source time-courses. In this paper, we propose a symmetric orthogonalisation method to correct for these artificial correlations between a set of multiple regions of interest (ROIs). This process enables the straightforward application of network modelling methods, including partial correlation or multivariate autoregressive modelling, to infer connectomes, or functional networks, from the corrected ROIs. Here, we apply the correction to simulated MEG recordings of simple networks and to a resting-state dataset collected from eight subjects, before computing the partial correlations between power envelopes of the corrected ROI time-courses. We show accurate reconstruction of our simulated networks, and in the analysis of real MEG resting-state connectivity, we find dense bilateral connections within the motor and visual networks, together with longer-range direct fronto-parietal connections. PMID:25862259
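The central step of symmetric orthogonalisation, finding the closest set of mutually orthogonal time-courses to the estimated ones, has a closed-form SVD solution. A minimal numpy sketch under the assumption that time-courses are arranged as columns; the paper's rank-reduction and weighting refinements are omitted.

```python
import numpy as np

def symmetric_orthogonalise(ts):
    """Closest orthogonal time-courses to `ts` (time x ROIs).

    Uses the SVD-based (Loewdin/Procrustes) solution: the nearest
    matrix with orthonormal columns is U @ Vt; columns are then
    rescaled to preserve the original time-course magnitudes.
    """
    U, s, Vt = np.linalg.svd(ts, full_matrices=False)
    ortho = U @ Vt
    scale = np.linalg.norm(ts, axis=0)  # keep original scaling
    return ortho * scale

rng = np.random.default_rng(1)
source = rng.standard_normal((1000, 3))
mixing = np.eye(3) + 0.3 * rng.standard_normal((3, 3))  # leakage
leaky = source @ mixing
clean = symmetric_orthogonalise(leaky)
print(np.corrcoef(clean, rowvar=False).round(3))  # off-diagonals ~ 0
```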
Barrenechea, Gabriel R; Burman, Erik; Karakatsani, Fotini
2017-01-01
For the approximation of convection-diffusion equations using piecewise affine continuous finite elements, a new edge-based nonlinear diffusion operator is proposed that makes the scheme satisfy a discrete maximum principle. The diffusion operator is shown to be Lipschitz continuous and linearity preserving. Using these properties we provide a full stability and error analysis, which, in the diffusion-dominated regime, shows existence, uniqueness and optimal convergence. Then the algebraic flux correction method is recalled, and we show that the present method can be interpreted as an algebraic flux correction method for a particular definition of the flux limiters. The performance of the method is illustrated on some numerical test cases in two space dimensions.
Adaptive correction of ensemble forecasts
NASA Astrophysics Data System (ADS)
Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane
2017-04-01
Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS) and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members; however, the parameters of the regression equations are retrieved by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread, based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
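As an illustration of the sequential Kalman-filter idea, a heavily simplified univariate sketch is given below: the state is a single slowly varying bias following a random walk. This is far simpler than the ensemble-aware formulation proposed in the paper; function names and the noise variances q and r are illustrative assumptions.

```python
import numpy as np

def kalman_bias_correct(forecasts, observations, q=0.05, r=1.0):
    """Sequentially estimate a slowly varying forecast bias.

    State: bias b_t following a random walk with variance q;
    measurement: obs - forecast = b_t + noise with variance r.
    Returns bias-corrected forecasts, updated as data arrive.
    """
    b, p = 0.0, 1.0  # initial bias estimate and its variance
    corrected = np.empty_like(forecasts, dtype=float)
    for t, (f, y) in enumerate(zip(forecasts, observations)):
        corrected[t] = f + b          # correct with current estimate
        p += q                        # predict step
        k = p / (p + r)               # Kalman gain
        b += k * ((y - f) - b)        # update with the newest error
        p *= 1 - k
    return corrected

rng = np.random.default_rng(2)
truth = 15 + 5 * np.sin(np.linspace(0, 20, 300))
fcst = truth - 2.0 + rng.normal(0, 1.0, 300)  # 2-degree cold bias
print(np.mean(truth - fcst),
      np.mean(truth - kalman_bias_correct(fcst, truth)))  # ~2.0 vs ~0
```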
NASA Astrophysics Data System (ADS)
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2017-07-01
Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recover correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation; hence, this part of the simulation algorithm is described in detail.
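The standard rejection treatment this line of work builds on can be stated in one step: a scattering event into a final state with occupation f is accepted with probability 1 − f, so fully occupied states are blocked. A minimal sketch, illustrative only; the paper's improved e-e scheme goes beyond this single step.

```python
import numpy as np

rng = np.random.default_rng(3)

def accept_scattering(f_final):
    """Pauli-blocking rejection step used in ensemble Monte Carlo:
    a transition into a final state with occupation f is accepted
    with probability 1 - f, so full states (f -> 1) are blocked."""
    return rng.random() < 1.0 - f_final

# Toy check: acceptance rate into states that are 70% occupied.
trials = np.array([accept_scattering(0.7) for _ in range(100_000)])
print(trials.mean())  # ~ 0.3
```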
Multi-frame knowledge based text enhancement for mobile phone captured videos
NASA Astrophysics Data System (ADS)
Ozarslan, Suleyman; Eren, P. Erhan
2014-02-01
In this study, we explore automated text recognition and enhancement using mobile phone captured videos of store receipts. We propose a method that combines Optical Character Recognition (OCR) with our proposed Row-Based Multiple Frame Integration (RB-MFI) and Knowledge-Based Correction (KBC) algorithms. In this method, the trained OCR engine is first used for recognition; then, the RB-MFI algorithm is applied to the OCR output. The RB-MFI algorithm determines and combines the most accurate rows of the text outputs extracted by OCR from multiple frames of the video. After RB-MFI, the KBC algorithm is applied to these rows to correct erroneous characters. Results of the experiments show that the proposed video-based approach, which includes the RB-MFI and KBC algorithms, increases the word recognition rate to 95% and the character recognition rate to 98%.
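A toy sketch of the row-integration idea, assuming the OCR rows of each frame are already aligned to the same receipt lines and using simple majority voting as a stand-in for the paper's row-accuracy selection (the real RB-MFI criterion is not reproduced here).

```python
from collections import Counter

def row_based_integration(frames):
    """Combine OCR output from several video frames row by row.

    `frames` is a list of OCR results, each a list of row strings
    aligned to the same receipt lines. For every row, the string
    that appears most often across frames is kept -- a simplified
    stand-in for picking the 'most accurate' row.
    """
    n_rows = min(len(f) for f in frames)
    merged = []
    for i in range(n_rows):
        votes = Counter(f[i] for f in frames)
        merged.append(votes.most_common(1)[0][0])
    return merged

frames = [
    ["MILK 2.49", "BREAD 1.99", "T0TAL 4.48"],  # OCR noise: T0TAL
    ["MILK 2.49", "BRE4D 1.99", "TOTAL 4.48"],
    ["MILK 2.49", "BREAD 1.99", "TOTAL 4.48"],
]
print(row_based_integration(frames))  # per-row noise is out-voted
```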
NASA Astrophysics Data System (ADS)
Hakim, Lukmanul; Kubokawa, Junji; Yorino, Naoto; Zoka, Yoshifumi; Sasaki, Yutaka
Advancements have been made towards the inclusion of both static and dynamic security in transfer capability calculation. However, to the authors' knowledge, work considering corrective controls in the calculation has not yet been reported. We therefore propose a Total Transfer Capability (TTC) assessment considering transient stability corrective controls. The method is based on the Newton interior point method for nonlinear programming; transfer capability is treated as a maximization of power transfer, with both static and transient stability constraints incorporated into our Transient Stability Constrained Optimal Power Flow (TSCOPF) formulation. An interconnected power system is simulated under a severe unbalanced 3-phase, 4-line-to-ground fault, and following the fault, generator and load are shed in a pre-defined sequence to mimic actual corrective controls. In a deregulated electricity market, both generator companies and large load customers are encouraged to actively participate in maintaining power system stability as corrective controls, upon agreement of compensation for being shed following a disturbance. Implementation of this proposal in actual power system operation should be carried out by combining it with the existing transient stabilization controller system. Utilization of these corrective controls increases TTC, as suggested by our numerical simulation. Since Lagrange multipliers describe the sensitivity of both inequality and equality constraints to the objective function, the selection of which generator or load to shed can be made on the basis of the Lagrange multipliers of the respective generator's rotor angle stability and active power balance equations. Hence, the proposal in this paper can be utilized by system operators to assess the maximum TTC for specific load and network conditions.
Optimal energy-splitting method for an open-loop liquid crystal adaptive optics system.
Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Liu, Yonggang; Peng, Zenghui; Yang, Qingyun; Meng, Haoran; Yao, Lishuang; Xuan, Li
2012-08-13
A waveband-splitting method is proposed for open-loop liquid crystal adaptive optics systems (LC AOSs). The proposed method extends the working waveband, splits energy flexibly, and improves detection capability. A simulation analysis is performed for a waveband in the range of 350 nm to 950 nm. The results show that the optimal energy split is 7:3 between the wavefront sensor (WFS) and the imaging camera, with the waveband split into 350 nm to 700 nm and 700 nm to 950 nm, respectively. A validation experiment is conducted by measuring the signal-to-noise ratio (SNR) of the WFS and the imaging camera. The results indicate that for the waveband-splitting method, the SNR of the WFS is approximately equal to that of the imaging camera as the intensity varies. On the other hand, the SNR of the WFS differs significantly from that of the imaging camera for the polarized-beam-splitter energy-splitting scheme. Therefore, the waveband-splitting method is more suitable for an open-loop LC AOS. An adaptive correction experiment is also performed on a 1.2-meter telescope. A star with a visual magnitude of 4.45 is observed and corrected, and an angular resolution of 0.31″ is achieved. A double star with a combined visual magnitude of 4.3 is observed as well, and its two components are resolved after correction. The results indicate that the proposed method can significantly improve the detection capability of an open-loop LC AOS.
Connectivity equation and shaly-sand correction for electrical resistivity
Lee, Myung W.
2011-01-01
Estimating the amounts of conductive and nonconductive constituents in the pore space of sediments from electrical resistivity logs generally loses accuracy where clays are present in the reservoir. Many different methods and clay models have been proposed to account for the conductivity of clay (termed the shaly-sand correction). In this study, the connectivity equation (CE), a new approach to modeling non-Archie rocks, is used to correct for the clay effect and is compared with results from the Waxman and Smits method. The CE presented here requires no parameters other than an adjustable constant, which can be derived from the resistivity of water-saturated sediments. The new approach was applied to estimate water saturation from laboratory data and gas hydrate saturations at the Mount Elbert well on the Alaska North Slope. Although not as accurate as the Waxman and Smits method for estimating water saturations from the laboratory measurements, gas hydrate saturations estimated at the Mount Elbert well using the proposed CE are comparable to estimates from the Waxman and Smits method. Considering its simplicity, the CE has high potential to account for the clay effect on electrical resistivity measurements in other systems.
NASA Astrophysics Data System (ADS)
Yi, Cancan; Lv, Yong; Xiao, Han; Ke, Ke; Yu, Xun
2017-12-01
For the laser-induced breakdown spectroscopy (LIBS) quantitative analysis technique, baseline correction is an essential part of LIBS data preprocessing. Baseline drift, a widespread phenomenon generated by fluctuations in laser energy, inhomogeneity of sample surfaces and background noise, has aroused the interest of many researchers. Most prevalent algorithms require presetting key parameters, such as a suitable spline function and the fitting order, and thus lack adaptability. Based on characteristics of LIBS signals, namely the sparsity of spectral peaks and the low-pass-filtered nature of the baseline, a novel baseline correction and spectral data denoising method is studied in this paper. The proposed technique uses a convex optimization scheme to form a non-parametric baseline correction model. Meanwhile, an asymmetric penalty function is employed to enhance the signal-to-noise ratio (SNR) of the LIBS signal and improve reconstruction precision. Furthermore, an efficient iterative algorithm is applied to the optimization process so as to ensure its convergence. To validate the proposed method, the concentrations of chromium (Cr), manganese (Mn) and nickel (Ni) in 23 certified high-alloy steel samples are assessed using quantitative models with Partial Least Squares (PLS) and Support Vector Machine (SVM). Because no prior knowledge of sample composition or mathematical hypothesis is required, the method proposed in this paper achieves better accuracy in quantitative analysis than other methods and fully reflects its adaptive ability.
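The paper's exact convex model is not spelled out above, but a closely related and widely used asymmetric-penalty baseline estimator (Eilers-style asymmetric least squares) conveys the idea: points below the current baseline estimate are weighted heavily, sparse peaks above it only lightly. A sketch, with lam and p as illustrative tuning values.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asymmetric_baseline(y, lam=1e6, p=0.01, n_iter=10):
    """Asymmetric least-squares baseline estimation.

    Minimises sum_i w_i (y_i - z_i)^2 + lam * ||D2 z||^2, where the
    weights w penalise points below the baseline (noise) far more
    than points above it (sparse emission peaks)."""
    n = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    w = np.ones(n)
    for _ in range(n_iter):
        W = sparse.diags(w)
        z = spsolve((W + lam * D.T @ D).tocsc(), w * y)
        w = np.where(y > z, p, 1.0 - p)  # asymmetric penalty weights
    return z

x = np.linspace(0, 1, 1000)
spectrum = np.exp(-((x - 0.4) / 0.01) ** 2) + 0.5 * x  # peak + drift
corrected = spectrum - asymmetric_baseline(spectrum)
print(corrected.max(), corrected[:100].mean())  # peak kept, drift ~ 0
```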
Correction of phase-shifting error in wavelength scanning digital holographic microscopy
NASA Astrophysics Data System (ADS)
Zhang, Xiaolei; Wang, Jie; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian
2018-05-01
Digital holographic microscopy is a promising method for measuring complex micro-structures with high slopes. A quasi-common path interferometric apparatus is adopted to overcome environmental disturbances, and an acousto-optic tunable filter is used to obtain multi-wavelength holograms. However, the phase shifting error caused by the acousto-optic tunable filter reduces the measurement accuracy and, in turn, the reconstructed topographies are erroneous. In this paper, an accurate reconstruction approach is proposed. It corrects the phase-shifting errors by minimizing the difference between the ideal interferograms and the recorded ones. The restriction on the step number and uniformity of the phase shifting is relaxed in the interferometry, and the measurement accuracy for complex surfaces can also be improved. The universality and superiority of the proposed method are demonstrated by practical experiments and comparison to other measurement methods.
Boonyasit, Yuwadee; Laiwattanapaisal, Wanida
2015-01-01
A method for acquiring albumin-corrected fructosamine values from whole blood using a microfluidic paper-based analytical system that offers substantial improvement over previous methods is proposed. The time required to quantify both serum albumin and fructosamine is shortened to 10 min with detection limits of 0.50 g dl(-1) and 0.58 mM, respectively (S/N = 3). The proposed system also exhibited good within-run and run-to-run reproducibility. The results of the interference study revealed that the acceptable recoveries ranged from 95.1 to 106.2%. The system was compared with currently used large-scale methods (n = 15), and the results demonstrated good agreement among the techniques. The microfluidic paper-based system has the potential to continuously monitor glycemic levels in low resource settings.
Communication: Finite size correction in periodic coupled cluster theory calculations of solids.
Liao, Ke; Grüneis, Andreas
2016-10-14
We present a method to correct for finite size errors in coupled cluster theory calculations of solids. The outlined technique shares similarities with electronic structure factor interpolation methods used in quantum Monte Carlo calculations. However, our approach does not require the calculation of density matrices. Furthermore, we show that the proposed finite size corrections achieve chemical accuracy in the convergence of second-order Møller-Plesset perturbation and coupled cluster singles and doubles correlation energies per atom for insulating solids with two-atom unit cells using 2 × 2 × 2 and 3 × 3 × 3 k-point meshes only.
A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy.
Boswell, Sarah A; Jeraj, Robert; Ruchala, Kenneth J; Olivera, Gustavo H; Jaradat, Hazim A; James, Joshua A; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T Rock
2005-06-01
An accurate means of determining and correcting for daily patient setup errors is important to treatment outcomes in radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant, thus rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle.
a New Color Correction Method for Underwater Imaging
NASA Astrophysics Data System (ADS)
Bianco, G.; Muzzupappa, M.; Bruno, F.; Garcia, R.; Neumann, L.
2015-04-01
Recovering correct, or at least realistic, colors of underwater scenes is a very challenging issue for imaging techniques, since illumination conditions in a refractive and turbid medium such as the sea are seriously altered. The need to correct the colors of underwater images or videos is an important task required in all image-based applications such as 3D imaging, navigation, and documentation. Many image enhancement methods have been proposed in the literature for these purposes. The advantage of these methods is that they do not require knowledge of the medium's physical parameters, while some image adjustments can be performed manually (such as histogram stretching) or automatically, by algorithms based on criteria suggested by computational color constancy methods. One of the most popular criteria is the gray-world hypothesis, which assumes that the average of the captured image should be gray. An interesting application of this assumption was performed in the Ruderman opponent color space lαβ, used in a previous work for hue correction of images captured under colored light sources, which allows the luminance component of the scene to be separated from its chromatic components. In this work, we present the first proposal for color correction of underwater images using the lαβ color space. In particular, the chromatic components are changed by moving their distributions around the white point (white balancing), and histogram cutoff and stretching of the luminance component are performed to improve image contrast. The experimental results demonstrate the effectiveness of this method under the gray-world assumption and supposing uniform illumination of the scene. Moreover, due to its low computational cost, it is suitable for real-time implementation.
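A minimal sketch of the gray-world chroma shift in lαβ, using the Ruderman/Reinhard RGB-LMS-lαβ matrices. The paper's luminance histogram cutoff and stretching step is omitted, the test image is synthetic, and all names are illustrative.

```python
import numpy as np

# RGB <-> LMS <-> l-alpha-beta matrices (Reinhard et al. / Ruderman).
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])
LMS2LAB = np.array([[1, 1, 1], [1, 1, -2], [1, -1, 0]]) * \
          np.array([[1 / np.sqrt(3)], [1 / np.sqrt(6)], [1 / np.sqrt(2)]])

def gray_world_lab(img):
    """Shift the chromatic (alpha, beta) channels so the image mean
    becomes achromatic, leaving the luminance channel l untouched."""
    h, w, _ = img.shape
    lms = np.log10(img.reshape(-1, 3) @ RGB2LMS.T + 1e-6)
    lab = lms @ LMS2LAB.T
    lab[:, 1] -= lab[:, 1].mean()  # alpha (yellow-blue) to gray mean
    lab[:, 2] -= lab[:, 2].mean()  # beta (red-green) to gray mean
    lms = 10 ** (lab @ np.linalg.inv(LMS2LAB).T) - 1e-6
    rgb = lms @ np.linalg.inv(RGB2LMS).T
    return np.clip(rgb, 0, 1).reshape(h, w, 3)

rng = np.random.default_rng(4)
underwater = rng.random((8, 8, 3)) * np.array([0.2, 0.7, 0.9])  # blue cast
print(gray_world_lab(underwater).mean(axis=(0, 1)))  # cast reduced
```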
Summary of Comments on Test Methods Amendments Proposed in the Federal Register on August 27, 1997
The Environmental Protection Agency (EPA) proposed amendments to 40 CFR Parts 60, 61, and 63 to reflect miscellaneous editorial changes and technical corrections throughout the parts in sections pertaining to source testing or monitoring of emissions and operations, and added Performance Spec
Secondary iris recognition method based on local energy-orientation feature
NASA Astrophysics Data System (ADS)
Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing
2015-01-01
This paper proposes a secondary iris recognition method based on local features. First, the energy-orientation feature (EOF), extracted with a two-dimensional Gabor filter, is used for an initial recognition pass against a similarity threshold, which divides the whole iris database into two categories: a correctly recognized class and a class still to be recognized. The former are accepted, while the latter are transformed by histogram into an energy-orientation histogram feature (EOHF), followed by a second recognition using the chi-square distance. Experiments show that the proposed method, owing to its higher correct recognition rate, is among the most efficient and effective of comparable iris recognition algorithms.
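The second-stage matcher, the chi-square histogram distance, is short enough to state directly; the EOHF construction itself is not reproduced here, and the histograms below are arbitrary stand-ins.

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-12):
    """Chi-square distance between two normalised histograms, the
    matcher used in the second recognition stage."""
    h1 = h1 / (h1.sum() + eps)
    h2 = h2 / (h2.sum() + eps)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

a = np.array([10, 30, 25, 35], float)
b = np.array([12, 28, 24, 36], float)
print(chi_square_distance(a, b))  # small value -> likely same iris
```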
Sensitivity estimation in time-of-flight list-mode positron emission tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herraiz, J. L.; Sitek, A., E-mail: sarkadiu@gmail.com
Purpose: An accurate quantification of the images in positron emission tomography (PET) requires knowing the actual sensitivity at each voxel, which represents the probability that a positron emitted in that voxel is finally detected as a coincidence of two gamma rays in a pair of detectors in the PET scanner. This sensitivity depends on the characteristics of the acquisition, as it is affected by the attenuation of the annihilation gamma rays in the body and by possible variations of the sensitivity of the scanner detectors. In this work, the authors propose a new approach to handle time-of-flight (TOF) list-mode PET data, which allows performing either or both a self-attenuation correction and a self-normalization correction based on emission data only. Methods: The authors derive the theory using a fully Bayesian statistical model of complete data. The authors perform an initial evaluation of algorithms derived from that theory and proposed in this work using numerical 2D list-mode simulations with different TOF resolutions and total numbers of detected coincidences. Effects of randoms and scatter are not simulated. Results: The authors found that the proposed algorithms successfully correct for unknown attenuation and scanner normalization for simulated 2D list-mode TOF-PET data. Conclusions: A new method is presented that can be used for corrections for attenuation and normalization (sensitivity) using TOF list-mode data.
NASA Astrophysics Data System (ADS)
Muneyasu, Mitsuji; Odani, Shuhei; Kitaura, Yoshihiro; Namba, Hitoshi
When surveillance cameras are used, there are cases where privacy protection must be considered. This paper proposes a new privacy protection method that automatically degrades the face region in surveillance images. The proposed method combines ROI coding of JPEG2000 with a face detection method based on template matching. The experimental results show that the face region can be detected and hidden correctly.
Concave 1-norm group selection
Jiang, Dingfeng; Huang, Jian
2015-01-01
Grouping structures arise naturally in many high-dimensional problems. Incorporation of such information can improve model fitting and variable selection. Existing group selection methods, such as the group Lasso, require correct membership. However, in practice it can be difficult to correctly specify group membership of all variables. Thus, it is important to develop group selection methods that are robust against group mis-specification. Also, it is desirable to select groups as well as individual variables in many applications. We propose a class of concave 1-norm group penalties that is robust to grouping structure and can perform bi-level selection. A coordinate descent algorithm is developed to calculate solutions of the proposed group selection method. Theoretical convergence of the algorithm is proved under certain regularity conditions. Comparison with other methods suggests the proposed method is the most robust approach under membership mis-specification. Simulation studies and real data application indicate that the 1-norm concave group selection approach achieves better control of false discovery rates. An R package grppenalty implementing the proposed method is available at CRAN. PMID:25417206
Color correction with blind image restoration based on multiple images using a low-rank model
NASA Astrophysics Data System (ADS)
Li, Dong; Xie, Xudong; Lam, Kin-Man
2014-03-01
We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Because the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorization, can be performed simultaneously. Experiments have verified that our method achieves consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.
Blind multirigid retrospective motion correction of MR images.
Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard
2015-04-01
Physiological nonrigid motion is inevitable when imaging, e.g., abdominal viscera, and can lead to serious deterioration of image quality. Prospective techniques for motion correction can handle only special types of nonrigid motion, as they only allow global correction. Retrospective methods developed so far need guidance from navigator sequences or external sensors. We propose a fully retrospective nonrigid motion correction scheme that only needs raw data as input. Our method is based on a forward model that describes the effects of nonrigid motion by partitioning the image into patches with locally rigid motion. Using this forward model, we construct an objective function that we can optimize with respect to both the unknown motion parameters per patch and the underlying sharp image. We evaluate our method on both synthetic and real data in 2D and 3D. In vivo data were acquired using standard imaging sequences. The correction algorithm significantly improves image quality. Our compute unified device architecture (CUDA)-enabled graphics processing unit implementation ensures feasible computation times. The presented technique is the first computationally feasible retrospective method that uses the raw data of standard imaging sequences and allows correction of nonrigid motion without guidance from external motion sensors. © 2014 Wiley Periodicals, Inc.
Anatomical-based partial volume correction for low-dose dedicated cardiac SPECT/CT
NASA Astrophysics Data System (ADS)
Liu, Hui; Chan, Chung; Grobshtein, Yariv; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Stacy, Mitchel R.; Sinusas, Albert J.; Liu, Chi
2015-09-01
Due to the limited spatial resolution, the partial volume effect has been a major factor degrading quantitative accuracy in emission tomography systems. This study aims to investigate the performance of several anatomical-based partial volume correction (PVC) methods for a dedicated cardiac SPECT/CT system (GE Discovery NM/CT 570c) with a focused field-of-view over a clinically relevant range of high and low count levels for two different radiotracer distributions. These PVC methods include perturbation geometry transfer matrix (pGTM), pGTM followed by multi-target correction (MTC), pGTM with known concentration in the blood pool, the latter followed by MTC, and our newly proposed methods, which perform the MTC method iteratively, with the mean values in all regions estimated and updated from the MTC-corrected images at each step of the iterative process. The NCAT phantom was simulated for cardiovascular imaging with 99mTc-tetrofosmin, a myocardial perfusion agent, and 99mTc-red blood cells (RBC), a pure intravascular imaging agent. Images were acquired at six different count levels to investigate the performance of the PVC methods at both high and low count levels for low-dose applications. We performed two large-animal in vivo cardiac imaging experiments following injection of 99mTc-RBC for evaluation of intramyocardial blood volume (IMBV). The simulation results showed that our proposed iterative methods provide superior performance to other existing PVC methods in terms of image quality, quantitative accuracy, and reproducibility (standard deviation), particularly for low-count data. The iterative approaches are robust for both 99mTc-tetrofosmin perfusion imaging and 99mTc-RBC imaging of IMBV and blood pool activity, even at low count levels. The animal study results indicated the effectiveness of PVC in correcting the overestimation of IMBV due to blood pool contamination. In conclusion, the iterative PVC methods can achieve more accurate quantification, particularly for low-count cardiac SPECT studies, typically obtained from low-dose protocols, gated studies, and dynamic applications.
A modified adjoint-based grid adaptation and error correction method for unstructured grid
NASA Astrophysics Data System (ADS)
Cui, Pengcheng; Li, Bin; Tang, Jing; Chen, Jiangtao; Deng, Youqi
2018-05-01
Grid adaptation is an important strategy to improve the accuracy of output functions (e.g. drag, lift, etc.) in computational fluid dynamics (CFD) analysis and design applications. This paper presents a modified robust grid adaptation and error correction method for reducing simulation errors in integral outputs. The procedure is based on discrete adjoint optimization theory, in which the estimated global error of output functions can be directly related to the local residual error. According to this relationship, the local residual error contribution can be used as an indicator in a grid adaptation strategy designed to generate refined grids for accurately estimating the output functions. This grid adaptation and error correction method is applied to subsonic and supersonic simulations around three-dimensional configurations. Numerical results demonstrate that the grid regions to which the output functions are sensitive are detected and refined after grid adaptation, and the accuracy of the output functions is clearly improved after error correction. The proposed grid adaptation and error correction method compares very favorably, in terms of output accuracy and computational efficiency, with traditional feature-based grid adaptation.
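The relationship referred to, in its generic adjoint-based form (hedged as the standard estimate rather than the authors' exact discretization), relates a coarse-space output J(u_H) to its fine-space value via the discrete adjoint solution ψ_H and the fine-space residual R_h of the coarse solution injected into the finer space:

```latex
J(u_h) \;\approx\; J(u_H) \;-\; \psi_H^{\top}\, R_h\!\left(u_h^{H}\right)
```

The pointwise magnitude of the products on the right-hand side then serves as the local indicator driving refinement.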
Wang, Li; Li, Gang; Adeli, Ehsan; Liu, Mingxia; Wu, Zhengwang; Meng, Yu; Lin, Weili; Shen, Dinggang
2018-06-01
Tissue segmentation of infant brain MRIs of subjects at risk of autism is critically important for characterizing early brain development and identifying biomarkers. However, it is challenging due to low tissue contrast caused by inherent ongoing myelination and maturation. In particular, at around 6 months of age, the voxel intensities in both gray matter and white matter are within similar ranges, leading to the lowest image contrast in the first postnatal year. Previous studies typically employed intensity images and tentatively estimated tissue probabilities to train a sequence of classifiers for tissue segmentation. However, the important prior knowledge of brain anatomy is largely ignored during the segmentation. Consequently, the segmentation accuracy is still limited and topological errors frequently exist, which significantly degrade the performance of subsequent analyses. Although topological errors can be partially handled by retrospective topological correction methods, their results may still be anatomically incorrect. To address these challenges, in this article, we propose an anatomy-guided joint tissue segmentation and topological correction framework for isointense infant MRI. Particularly, we adopt a signed distance map with respect to the outer cortical surface as anatomical prior knowledge, and incorporate such prior information into the proposed framework to guide segmentation in ambiguous regions. Experimental results on subjects acquired from the National Database for Autism Research demonstrate the framework's effectiveness against topological errors, as well as some robustness to motion. Comparisons with the state-of-the-art methods further demonstrate the advantages of the proposed method in terms of both segmentation accuracy and topological correctness. © 2018 Wiley Periodicals, Inc.
Identification of spilled oils by NIR spectroscopy technology based on KPCA and LSSVM
NASA Astrophysics Data System (ADS)
Tan, Ailing; Bi, Weihong
2011-08-01
Oil spills on the sea surface are seen relatively often with the growth of petroleum exploitation and marine transportation. Oil spills are a great threat to the marine environment and the ecosystem, making ocean oil pollution an urgent topic in environmental protection. To develop oil spill accident treatment programs and track the sources of spilled oils, a novel qualitative identification method combining Kernel Principal Component Analysis (KPCA) and Least Squares Support Vector Machine (LSSVM) is proposed. A Fourier-transform NIR spectrophotometer was used to collect the NIR spectra of simulated gasoline, diesel fuel and kerosene oil spill samples, and pretreatments were applied to the original spectra. The KPCA algorithm, an extension of Principal Component Analysis (PCA) using kernel methods, was used to extract nonlinear features of the preprocessed spectra. Support Vector Machines (SVM) are a powerful methodology for solving spectral classification tasks in chemometrics; LSSVM is a reformulation of the standard SVM that leads to solving a system of linear equations. An LSSVM multiclass classification model was therefore designed using the Error Correcting Output Code (ECOC) method, borrowing the idea of error-correcting codes used for correcting bit errors in transmission channels. The most common and reliable approach to parameter selection is to decide on parameter ranges and then perform a grid search over the parameter space to find the optimal model parameters. To test the proposed method, 375 spilled oil samples of unknown type were studied. The optimal model showed the best identification capability, with an accuracy of 97.8%. Experimental results show that the proposed KPCA plus LSSVM qualitative analysis method for near infrared spectroscopy gives good recognition results and could serve as a new method for the rapid identification of spilled oils.
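A hedged scikit-learn sketch of the KPCA-plus-classifier pipeline on synthetic stand-in spectra: scikit-learn provides no LSSVM, so an ECOC-wrapped standard SVM substitutes for the paper's ECOC LSSVM, and the grid search mirrors the parameter-selection strategy described. All data and parameter values are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for preprocessed NIR spectra of three oil classes.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(mu, 0.5, (125, 200)) for mu in (0.0, 0.7, 1.4)])
y = np.repeat([0, 1, 2], 125)  # gasoline / diesel / kerosene labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# KPCA for nonlinear feature extraction, then an ECOC-wrapped SVM
# (a stand-in for LSSVM, which scikit-learn does not provide).
model = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=10, kernel="rbf", gamma=1e-3),
    OutputCodeClassifier(SVC(kernel="rbf"), code_size=2, random_state=0),
)
search = GridSearchCV(model, {"kernelpca__gamma": [1e-4, 1e-3, 1e-2]}, cv=3)
search.fit(X_tr, y_tr)
print(search.score(X_te, y_te))  # held-out classification accuracy
```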
Leung, Chung-Chu
2006-03-01
Digital subtraction radiography requires close matching of the contrast in each pair of X-ray images to be subtracted. Previous studies have shown that nonparametric contrast/brightness correction methods using the cumulative density function (CDF) and its improvements, which are based on gray-level transformation associated with the pixel histogram, perform well under uniform contrast/brightness difference conditions. However, for radiographs with nonuniform contrast/brightness, the CDF produces unsatisfactory results. In this paper, we propose a new approach to contrast correction based on the generalized fuzzy operator (GFO) with a least squares method. The results show that 50% of the contrast/brightness error can be corrected using this approach when the contrast/brightness difference between a radiographic pair is 10 U. A comparison of our approach with the CDF is presented, and the modified GFO method produces better contrast normalization results than the CDF approach.
Real-Time Phase Correction Based on FPGA in the Beam Position and Phase Measurement System
NASA Astrophysics Data System (ADS)
Gao, Xingshun; Zhao, Lei; Liu, Jinxin; Jiang, Zouyi; Hu, Xiaofang; Liu, Shubin; An, Qi
2016-12-01
A fully digital beam position and phase measurement (BPPM) system was designed for the linear accelerator (LINAC) in Accelerator Driven Sub-critical System (ADS) in China. Phase information is obtained from the summed signals from four pick-ups of the Beam Position Monitor (BPM). Considering that the delay variations of different analog circuit channels would introduce phase measurement errors, we propose a new method to tune the digital waveforms of four channels before summation and achieve real-time error correction. The process is based on the vector rotation method and implemented within one single Field Programmable Gate Array (FPGA) device. Tests were conducted to evaluate this correction method and the results indicate that a phase correction precision better than ± 0.3° over the dynamic range from -60 dBm to 0 dBm is achieved.
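The vector-rotation correction amounts to multiplying each channel's digitized I/Q samples by a calibrated unit phasor before summation. A floating-point numpy sketch; the FPGA implementation would use fixed-point rotation, and the function names and offset values below are illustrative assumptions.

```python
import numpy as np

def align_and_sum(iq_channels, phase_offsets_deg):
    """Rotate each channel's I/Q samples by its calibrated phase
    offset (vector rotation), then sum the four pick-up signals.

    iq_channels: complex array (4, N); offsets from calibration."""
    phases = np.deg2rad(np.asarray(phase_offsets_deg))
    rotated = iq_channels * np.exp(-1j * phases)[:, None]
    return rotated.sum(axis=0)

t = np.arange(1024)
base = np.exp(1j * 2 * np.pi * 0.01 * t)        # beam-induced tone
offsets = [3.0, -7.5, 1.2, 5.8]                 # channel delays (deg)
chans = np.array([base * np.exp(1j * np.deg2rad(o)) for o in offsets])
summed = align_and_sum(chans, offsets)
print(np.angle(summed[0] / base[0], deg=True))  # ~ 0 after correction
```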
Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach
Tian, Yuan; Guan, Tao; Wang, Cheng
2010-01-01
To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar
2018-07-01
In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up due to the high interaction rate of the particles with the detector. Pile-up effects can lead to severe distortion of the energy and timing information. Pile-up events are conventionally prevented or rejected by analog and digital electronics. However, to decrease exposure times in medical imaging applications, it is important to retain the pulses and extract their true information with pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach that recovers the pulses one by one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration, which fits the bi-exponential model to experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The obtained results prove the effectiveness of the proposed method as a pile-up processing scheme for spectroscopic and medical radiation detection applications.
Forty-five degree cutting septoplasty.
Hsiao, Yen-Chang; Chang, Chun-Shin; Chuang, Shiow-Shuh; Kolios, Georgios; Abdelrahman, Mohamed
2016-01-01
The crooked nose represents a challenge for rhinoplasty surgeons, and many methods have been proposed for its management; however, there is no ideal method of treatment. Accordingly, the 45° cutting septoplasty technique, which involves a 45° cut at the junction of the L-shaped strut and repositioning of the strut to achieve a straight septum, is proposed. From October 2010 to September 2014, 43 patients underwent the 45° cutting septoplasty technique. There were 28 men and 15 women, with ages ranging from 20 to 58 years (mean, 33 years). Standardized photographs were obtained at every visit. Established photogrammetric parameters were used to describe the degree of correction, with the correction rate defined as (preoperative total deviation − postoperative residual deviation)/preoperative total deviation × 100%. The mean follow-up period for all patients was 12.3 months. The mean preoperative deviation was 64.3° and the mean postoperative deviation was 2.7°; the overall correction rate was 95.8%. One patient experienced composite implant deviation two weeks postoperatively and underwent revision rhinoplasty. There were no infections, hematomas or postoperative bleeding. Based on the clinical observation of all patients during the follow-up period, the 45° cutting septoplasty technique was shown to be effective for the treatment of the crooked nose.
NASA Astrophysics Data System (ADS)
Usman, M.; Atkinson, D.; Heathfield, E.; Greil, G.; Schaeffter, T.; Prieto, C.
2015-04-01
Two major challenges in cardiovascular MRI are long scan times due to slow MR acquisition and motion artefacts due to respiratory motion. Recently, a Motion Corrected-Compressed Sensing (MC-CS) technique has been proposed for free breathing 2D dynamic cardiac MRI that addresses these challenges by simultaneously accelerating MR acquisition and correcting for any arbitrary motion in a compressed sensing reconstruction. In this work, the MC-CS framework is combined with parallel imaging for further acceleration, and is termed Motion Corrected Sparse SENSE (MC-SS). Validation of the MC-SS framework is demonstrated in eight volunteers and three patients for left ventricular functional assessment and results are compared with the breath-hold acquisitions as reference. A non-significant difference (P > 0.05) was observed in the volumetric functional measurements (end diastolic volume, end systolic volume, ejection fraction) and myocardial border sharpness values obtained with the proposed and gold standard methods. The proposed method achieves whole heart multi-slice coverage in 2 min under free breathing acquisition eliminating the time needed between breath-holds for instructions and recovery. This results in two-fold speed up of the total acquisition time in comparison to the breath-hold acquisition.
Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods
Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun
2016-01-01
This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, increasing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455
Grain growth prediction based on data assimilation by implementing 4DVar on multi-phase-field model
NASA Astrophysics Data System (ADS)
Ito, Shin-ichi; Nagao, Hiromichi; Kasuya, Tadashi; Inoue, Junya
2017-12-01
We propose a method to predict grain growth based on data assimilation by using a four-dimensional variational method (4DVar). When implemented on a multi-phase-field model, the proposed method allows us to calculate the predicted grain structures and uncertainties in them that depend on the quality and quantity of the observational data. We confirm through numerical tests involving synthetic data that the proposed method correctly reproduces the true phase-field assumed in advance. Furthermore, it successfully quantifies uncertainties in the predicted grain structures, where such uncertainty quantifications provide valuable information to optimize the experimental design.
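A toy strong-constraint 4DVar on a scalar linear model shows the structure of the cost function being minimized; this is a didactic stand-in, far from the multi-phase-field setting of the paper, and every name and value below is an assumption for illustration.

```python
import numpy as np

def fourdvar_scalar(xb, B, ys, R, a):
    """Toy strong-constraint 4DVar for the linear model x_{t+1} = a*x_t.

    Minimises J(x0) = (x0-xb)^2/(2B) + sum_t (a^t x0 - y_t)^2/(2R);
    since J is quadratic in x0, a single Newton step is exact."""
    ts = np.arange(1, len(ys) + 1)
    grad = (xb - xb) / B + np.sum(a**ts * (a**ts * xb - ys)) / R
    hess = 1.0 / B + np.sum(a ** (2 * ts)) / R
    return xb - grad / hess  # analysis state: background pulled to data

a_true, x_true = 0.95, 2.0
ts = np.arange(1, 11)
rng = np.random.default_rng(7)
obs = a_true**ts * x_true + rng.normal(0, 0.05, 10)  # noisy observations
print(fourdvar_scalar(xb=1.0, B=1.0, ys=obs, R=0.05**2, a=a_true))  # ~2.0
```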
Noise Power Spectrum Measurements in Digital Imaging With Gain Nonuniformity Correction.
Kim, Dong Sik
2016-08-01
The noise power spectrum (NPS) of an image sensor provides the spectral noise properties needed to evaluate sensor performance. Hence, measuring an accurate NPS is important. However, the fixed pattern noise from the sensor's nonuniform gain inflates the NPS, which is measured from images acquired by the sensor. Detrending the low-frequency fixed pattern is traditionally used to accurately measure the NPS. However, detrending methods cannot remove high-frequency fixed patterns. In order to efficiently correct the fixed pattern noise, a gain-correction technique based on a gain map can be used. The gain map is generated from the average of uniformly illuminated images without any objects. Increasing the number of images n used for averaging reduces the remaining photon noise in the gain map and yields accurate NPS values. However, for practical finite n, the photon noise also significantly inflates the NPS. In this paper, a nonuniform-gain image formation model is proposed and the performance of the gain correction is theoretically analyzed in terms of the signal-to-noise ratio (SNR). It is shown that the SNR is O(√n). An NPS measurement algorithm based on the gain map is then proposed for any given n. Under a weak nonuniform-gain assumption, another measurement algorithm based on the image difference is also proposed. For real radiography image detectors, the proposed algorithms are compared with traditional detrending and subtraction methods, and it is shown that as few as two images (n = 1) can provide an accurate NPS because of the compensation constant (1 + 1/n).
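A simplified numpy sketch of gain-map-corrected NPS estimation on synthetic flat fields, with the (1 + 1/n) compensation applied in the spirit described above; the normalization convention, pixel pitch, and all other values are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def nps_gain_corrected(flats, n_gain, pixel_pitch=0.1):
    """2D NPS of flat-field images after gain-map correction.

    A gain map is built from the first `n_gain` frames; remaining
    frames are divided by it and their fluctuation spectra averaged.
    The (1 + 1/n) factor compensates the photon noise the finite-n
    gain map itself adds to the corrected images."""
    gain = flats[:n_gain].mean(axis=0)
    corrected = flats[n_gain:] / gain
    ny, nx = corrected.shape[1:]
    spectra = [np.abs(np.fft.fft2(img - img.mean())) ** 2
               for img in corrected]
    nps = np.mean(spectra, axis=0) * pixel_pitch**2 / (nx * ny)
    return nps / (1.0 + 1.0 / n_gain)

rng = np.random.default_rng(8)
gain_map = 1 + 0.05 * rng.standard_normal((64, 64))  # fixed pattern
flats = rng.poisson(1000 * gain_map, (8, 64, 64)).astype(float)
print(nps_gain_corrected(flats, n_gain=4).mean())
```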
Ensemble density variational methods with self- and ghost-interaction-corrected functionals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pastorczak, Ewa; Pernal, Katarzyna, E-mail: pernalk@gmail.com
2014-05-14
Ensemble density functional theory (DFT) offers a way of predicting excited-state energies of atomic and molecular systems without referring to a density response function. Despite significant theoretical work, practical applications of the proposed approximations have been scarce and do not allow a fair judgement of the potential usefulness of ensemble DFT with available functionals. In this paper, we investigate two forms of ensemble density functionals formulated within the ensemble DFT framework: the Gross, Oliveira, and Kohn (GOK) functional proposed by Gross et al. [Phys. Rev. A 37, 2809 (1988)] alongside the orbital-dependent eDFT form of the functional introduced by Nagy [J. Phys. B 34, 2363 (2001)] (the acronym eDFT proposed in analogy to eHF, the ensemble Hartree-Fock method). Local and semi-local ground-state density functionals are employed in both approaches. Approximate ensemble density functionals contain not only spurious self-interaction but also the so-called ghost-interaction, which has no counterpart in ground-state DFT. We propose how to correct the GOK functional for both kinds of interactions in approximations that go beyond the exact-exchange functional. Numerical applications lead to the conclusion that functionals free of the ghost-interaction by construction, i.e., eDFT, yield much more reliable results than the approximate self- and ghost-interaction-corrected GOK functional. Additionally, a local density functional corrected for self-interaction employed in the eDFT framework yields excitation energies of accuracy comparable to that of the uncorrected semi-local eDFT functional.
Standing on the shoulders of giants: improving medical image segmentation via bias correction.
Wang, Hongzhi; Das, Sandhitsu; Pluta, John; Craige, Caryne; Altinay, Murat; Avants, Brian; Weiner, Michael; Mueller, Susanne; Yushkevich, Paul
2010-01-01
We propose a simple strategy to improve automatic medical image segmentation. The key idea is that without deep understanding of a segmentation method, we can still improve its performance by directly calibrating its results with respect to manual segmentation. We formulate the calibration process as a bias correction problem, which is addressed by machine learning using training data. We apply this methodology on three segmentation problems/methods and show significant improvements for all of them.
A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer
NASA Astrophysics Data System (ADS)
Zheng, G.; Cheng, Y.; He, K.; Duan, F.; Ma, Y.
2014-01-01
The Sunset Semi-Continuous Carbon Analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, here we identified a new type of SCCA calculation discrepancy caused by the default multi-point baseline correction method. When a certain threshold carbon load is exceeded, multi-point correction can cause significant total carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples with three temperature protocols. For ambient samples, 22%, 36% and 12% of TC was underestimated by the three protocols, respectively, with the corresponding thresholds being ~0, 20 and 25 μg C. For sucrose, however, such discrepancy was observed with only one of these protocols, indicating the need for a more refractory SCCA calibration substance. The discrepancy was less significant for the NIOSH (National Institute for Occupational Safety and Health)-like protocol than for the two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments). Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The proposed correction method was to use multi-point-corrected data below the determined threshold and single-point results beyond that threshold. The effectiveness of this correction method was supported by correlation with optical data.
A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer
NASA Astrophysics Data System (ADS)
Zheng, G. J.; Cheng, Y.; He, K. B.; Duan, F. K.; Ma, Y. L.
2014-07-01
The Sunset semi-continuous carbon analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, in this study we identified a new type of SCCA calculation discrepancy caused by the default multipoint baseline correction method. When exceeding a certain threshold carbon load, multipoint correction could cause significant total carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples, with two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments) (i.e., IMPshort and IMPlong) and one NIOSH (National Institute for Occupational Safety and Health)-like protocol (rtNIOSH). For ambient samples, the IMPshort, IMPlong and rtNIOSH protocols underestimated TC by 22, 36 and 12%, respectively, with the corresponding thresholds being ~ 0, 20 and 25 μgC. For sucrose, however, such discrepancy was observed only with the IMPshort protocol, indicating the need for a more refractory SCCA calibration substance. Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The proposed correction method was to use multipoint-corrected data below the determined threshold and single-point results beyond it. The effectiveness of this correction method was supported by correlation with optical data.
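The decision rule described above is simple enough to sketch directly. In the hedged example below, the thresholds follow the values reported in the abstract (~0, 20 and 25 μgC); the function and variable names are our own.

```python
# Hybrid baseline correction sketch: use the multipoint-corrected TC below
# the protocol's carbon-load threshold, single-point results beyond it.
THRESHOLD_UGC = {"IMPshort": 0.0, "IMPlong": 20.0, "rtNIOSH": 25.0}

def corrected_tc(tc_multipoint, tc_singlepoint, protocol):
    """Total carbon using the threshold rule (threshold applied to the load)."""
    if tc_multipoint <= THRESHOLD_UGC[protocol]:
        return tc_multipoint   # multipoint correction still reliable here
    return tc_singlepoint      # beyond the threshold: single-point baseline

print(corrected_tc(30.0, 33.5, "IMPlong"))   # 33.5: single-point result used
print(corrected_tc(10.0, 11.2, "IMPlong"))   # 10.0: multipoint result kept
```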
Nonrigid Autofocus Motion Correction for Coronary MR Angiography with a 3D Cones Trajectory
Ingle, R. Reeve; Wu, Holden H.; Addy, Nii Okai; Cheng, Joseph Y.; Yang, Phillip C.; Hu, Bob S.; Nishimura, Dwight G.
2014-01-01
Purpose: To implement a nonrigid autofocus motion correction technique to improve respiratory motion correction of free-breathing whole-heart coronary magnetic resonance angiography (CMRA) acquisitions using an image-navigated 3D cones sequence. Methods: 2D image navigators acquired every heartbeat are used to measure superior-inferior, anterior-posterior, and right-left translation of the heart during a free-breathing CMRA scan using a 3D cones readout trajectory. Various tidal respiratory motion patterns are modeled by independently scaling the three measured displacement trajectories. These scaled motion trajectories are used for 3D translational compensation of the acquired data, and a bank of motion-compensated images is reconstructed. From this bank, a gradient entropy focusing metric is used to generate a nonrigid motion-corrected image on a pixel-by-pixel basis. The performance of the autofocus motion correction technique is compared with rigid-body translational correction and no correction in phantom, volunteer, and patient studies. Results: Nonrigid autofocus motion correction yields improved image quality compared to rigid-body-corrected images and uncorrected images. Quantitative vessel sharpness measurements indicate superiority of the proposed technique in 14 out of 15 coronary segments from three patient and two volunteer studies. Conclusion: The proposed technique corrects nonrigid motion artifacts in free-breathing 3D cones acquisitions, improving image quality compared to rigid-body motion correction. PMID:24006292
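To make the autofocus idea concrete, here is a hedged numerical sketch: a bank of candidate images is scored with a gradient-entropy metric, and the output is assembled block-wise from the best-scoring member. The 16x16 block selection is a loose stand-in for the paper's pixel-by-pixel assembly, and the image bank itself is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
bank = [rng.normal(size=(64, 64)) for _ in range(5)]   # stand-in image bank

def gradient_entropy(img):
    """Entropy of the normalized gradient magnitude (lower = sharper)."""
    gy, gx = np.gradient(img.astype(float))
    g = np.hypot(gx, gy).ravel()
    p = g / (g.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

out = np.empty_like(bank[0])
B = 16                                    # block size for local selection
for i in range(0, 64, B):
    for j in range(0, 64, B):
        scores = [gradient_entropy(m[i:i+B, j:j+B]) for m in bank]
        out[i:i+B, j:j+B] = bank[int(np.argmin(scores))][i:i+B, j:j+B]
```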
Fast Electron Correlation Methods for Molecular Clusters without Basis Set Superposition Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamiya, Muneaki; Hirata, So; Valiev, Marat
2008-02-19
Two critical extensions to our fast, accurate, and easy-to-implement binary or ternary interaction method for weakly interacting molecular clusters [Hirata et al., Mol. Phys. 103, 2255 (2005)] have been proposed, implemented, and applied to water hexamers, hydrogen fluoride chains and rings, and neutral and zwitterionic glycine-water clusters, with excellent results in an initial performance assessment. Our original method included up to two- or three-body Coulomb, exchange, and correlation energies exactly and higher-order Coulomb energies in the dipole-dipole approximation. In this work, the dipole moments are replaced by atom-centered point charges determined so that they reproduce the electrostatic potentials of the cluster subunits as closely as possible and also self-consistently with one another in the cluster environment. They have been shown to lead to dramatic improvement in the description of short-range electrostatic potentials not only of large, charge-separated subunits like zwitterionic glycine but also of small subunits. Furthermore, basis set superposition errors (BSSE), known to plague direct evaluation of weak interactions, have been eliminated by combining the Valiron-Mayer function counterpoise (VMFC) correction with our binary or ternary interaction method in an economical fashion (quadratic scaling, n², with respect to the number of subunits n when n is small, and linear scaling when n is large). A new variant of VMFC has also been proposed in which three-body and all higher-order Coulomb effects on BSSE are estimated approximately. The BSSE-corrected ternary interaction method with atom-centered point charges reproduces the VMFC-corrected results of conventional electron correlation calculations within 0.1 kcal/mol. The proposed method is significantly more accurate and also more efficient than conventional correlation methods uncorrected for BSSE.
An IMU-to-Body Alignment Method Applied to Human Gait Analysis.
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-12-10
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.
Pavement crack detection combining non-negative feature with fast LoG in complex scene
NASA Astrophysics Data System (ADS)
Wang, Wanli; Zhang, Xiuhua; Hong, Hanyu
2015-12-01
Pavement crack detection is affected by many sources of interference in realistic situations, such as shadows, road signs, oil stains, and salt-and-pepper noise. Due to these unfavorable factors, existing crack detection methods have difficulty distinguishing cracks from the background correctly. How to extract crack information effectively is the key problem for a road crack detection system. To solve this problem, a novel method for pavement crack detection based on combining a non-negative feature with a fast LoG (Laplacian of Gaussian) filter is proposed. The two key novelties and benefits of this new approach are that 1) image pixel gray-value compensation is used to acquire a uniform image, and 2) the non-negative feature is combined with the fast LoG to extract crack information. The image preprocessing results demonstrate that the method is indeed able to homogenize the crack image more accurately than existing methods. A large number of experimental results demonstrate that the proposed approach can detect crack regions more correctly than traditional methods.
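A LoG response is the core filter named above; the hedged sketch below shows the basic idea on a synthetic image (SciPy's gaussian_laplace; the sigma and threshold are arbitrary choices, not the paper's tuned pipeline).

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

img = np.full((100, 100), 200.0)   # bright, uniform pavement
img[50, 10:90] = 60.0              # synthetic dark crack, one pixel wide

response = gaussian_laplace(img, sigma=1.5)    # LoG: positive on dark ridges
crack_mask = response > 3.0 * response.std()   # crude crack mask

print(crack_mask[50, 10:90].mean())  # most crack pixels are flagged
```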
Skin image illumination modeling and chromophore identification for melanoma diagnosis
NASA Astrophysics Data System (ADS)
Liu, Zhao; Zerubia, Josiane
2015-05-01
The presence of illumination variation in dermatological images has a negative impact on the automatic detection and analysis of cutaneous lesions. This paper proposes a new illumination modeling and chromophore identification method to correct lighting variation in skin lesion images, as well as to extract melanin and hemoglobin concentrations of human skin, based on an adaptive bilateral decomposition and a weighted polynomial curve fitting, with the knowledge of a multi-layered skin model. Different from state-of-the-art approaches based on the Lambert law, the proposed method, considering both specular and diffuse reflection of the skin, enables us to address highlight and strong shading effects that commonly exist in skin color images captured in an uncontrolled environment. The derived melanin and hemoglobin indices, directly related to pathological tissue conditions, tend to be less influenced by external imaging factors and are more efficient in describing pigmentation distributions. Experiments show that the proposed method gives better visual results and superior lesion segmentation when compared to two other illumination correction algorithms, both designed specifically for dermatological images. For computer-aided diagnosis of melanoma, sensitivity reaches 85.52% when using our chromophore descriptors, which is 8-20% higher than that achieved with other color descriptors. This demonstrates the benefit of the proposed method for automatic skin disease analysis.
A 3D inversion for all-space magnetotelluric data with static shift correction
NASA Astrophysics Data System (ADS)
Zhang, Kun
2017-04-01
Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method which works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed within the inversion. The method is a fully automatic computer processing technique with no additional cost, and it avoids extra field work and manual processing while producing good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure to the algorithm, improved its computational efficiency, reduced its memory requirements, and added topographic and marine factors. As a result, the 3D inversion can run on a general-purpose PC with high efficiency and accuracy, and all MT data from surface stations, seabed stations and underground stations can be used in the inversion algorithm.
Segmentation-free empirical beam hardening correction for CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schüller, Sören; Sawall, Stefan; Stannigel, Kai
2015-02-15
Purpose: The polychromatic nature of x-ray beams and its effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for general cupping, methods like water precorrection exist; they correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other correction techniques. If only the information of a single energy scan is used, there are two types of corrections. The first is a physical approach, whereby artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector; these assumptions may include the x-ray spectrum used, the detector response, and the physical attenuation and scatter properties of the intersected materials. The second is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physics-based technique both rely on a segmentation of the tissues present inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish image quality also disturb correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed herein works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed so that no additional calibration or parameter fitting is needed. Methods: To avoid the segmentation of tissues, the authors propose a histogram deformation of the primary reconstructed CT image. This step is essential for the proposed algorithm to be segmentation-free (sf). The deformation leads to a nonlinear accentuation of higher CT values. The original volume and the gray-value-deformed volume are monochromatically forward projected. The two projection sets are then monomially combined and reconstructed to generate sets of basis volumes which are used for correction. This is done by maximizing image flatness through the addition of a weighted sum of these basis images. sfEBHC is evaluated on polychromatic simulations, phantom measurements, and patient data. The raw data sets were acquired by a dual-source spiral CT scanner, a digital volume tomograph, and a dual-source micro-CT. Different phantom and patient data were used to illustrate the performance and wide range of usability of sfEBHC across different scanning scenarios. The artifact correction capabilities are compared to EBHC. Results: All investigated cases show equal or improved image quality compared to the standard EBHC approach. The artifact correction is capable of correcting beam hardening artifacts for different scan parameters and scan scenarios. Conclusions: sfEBHC generates beam-hardening-reduced images and is furthermore capable of dealing with images affected by high noise and strong artifacts. The algorithm can be used to recover structures which are hardly visible inside the beam-hardening-affected regions.
NASA Astrophysics Data System (ADS)
Shabani, H.; Doblas, A.; Saavedra, G.; Preza, C.
2018-02-01
The restored images in structured illumination microscopy (SIM) can be affected by residual fringes due to a mismatch between the illumination pattern and the sinusoidal model assumed by the restoration method. When a Fresnel biprism is used to generate a structured pattern, this pattern cannot be described by a pure sinusoidal function, since it is distorted by an envelope due to the biprism's edge. In this contribution, we investigate the effect of the envelope on the restored SIM images and propose a computational method to address it. The proposed approach to reduce the effect of the envelope consists of two parts. First, the envelope of the structured pattern, determined through calibration data, is removed from the raw SIM data in a preprocessing step. Second, a notch filter is applied to the images, which are restored using the well-known generalized Wiener filter, to remove any residual undesired fringes. The performance of our approach has been evaluated numerically by simulating the effect of the envelope on synthetic forward images of a 6-μm spherical bead generated using the real pattern and then restored using the SIM approach based on an ideal pure sinusoidal function, before and after the proposed correction. The simulation results show a 74% reduction in the contrast of the residual pattern when the proposed method is applied. Experimental results from a pollen grain sample also validate the proposed approach.
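The two-step correction lends itself to a compact 1D sketch. Below, a synthetic fringed profile is divided by a known envelope and the residual fringe is suppressed by a Fourier notch; the envelope shape and notch width are our own stand-ins for the calibrated quantities.

```python
import numpy as np

n = 256
x = np.arange(n)
envelope = np.exp(-((x - n / 2) ** 2) / (2 * 60.0 ** 2))     # calibration stand-in
raw = envelope * (1.0 + 0.5 * np.cos(2 * np.pi * 0.1 * x))   # fringed raw profile

flat = raw / np.maximum(envelope, 1e-3)      # step 1: remove the envelope

F = np.fft.fft(flat)
freqs = np.fft.fftfreq(n)
notch = np.abs(np.abs(freqs) - 0.1) > 0.005  # step 2: notch the fringe frequency
corrected = np.real(np.fft.ifft(F * notch))  # residual fringes suppressed
```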
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr; Lee, Taewon
2015-09-15
Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10-50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite accurate under a variety of conditions. Our GPU-based fast MCS implementation took approximately 3 s to generate each angular projection for a 6 cm thick breast, which is believed to make this process acceptable for clinical applications. In addition, the clinical preferences of three radiologists were evaluated; the preference for the proposed method compared to the preference for the convolution-based method was statistically meaningful (p < 0.05, McNemar test). Conclusions: The proposed fully iterative scatter correction method and the GPU-based fast MCS using tissue-composition ratio estimation successfully improved the image quality within a reasonable computational time, which may potentially increase the clinical utility of DBT.
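The composition-ratio step can be illustrated with a two-material mixture model. This is a hedged sketch with placeholder attenuation coefficients, not the paper's calibration:

```python
import numpy as np

MU_ADIPOSE = 0.046     # placeholder linear attenuation [1/mm]
MU_GLANDULAR = 0.060   # placeholder

def glandular_fraction(mu_voxel):
    """Glandular fraction implied by a voxel's reconstructed attenuation."""
    f = (mu_voxel - MU_ADIPOSE) / (MU_GLANDULAR - MU_ADIPOSE)
    return float(np.clip(f, 0.0, 1.0))

print(glandular_fraction(0.053))  # ~0.5 for a half-adipose, half-glandular voxel
```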
NASA Astrophysics Data System (ADS)
Woo, K. M.; Yu, S. S.; Barnard, J. J.
2013-06-01
It is well known that imperfections in the pulse power sources that drive linear induction accelerators can lead to time-varying fluctuations in the accelerating voltages, which in turn lead to longitudinal emittance growth. We show that this source of emittance growth is correctable, even in space-charge-dominated beams with significant transients induced by space-charge waves. Two correction methods are proposed, and their efficacy in reducing longitudinal emittance is demonstrated with three-dimensional particle-in-cell simulations.
Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean
2014-01-01
MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensus guideline to account for it. Here we compared the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" dataset derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, simple systematic sampling of records consistently ranked among the best performing methods across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. Nevertheless, systematic sampling appears to be the most efficient method for correcting sampling bias and should be advised in most cases.
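Systematic sampling of records, the best-performing correction here, amounts to spatial thinning on a regular grid. A minimal sketch (our own cell size and coordinates):

```python
import numpy as np

def systematic_thin(lon, lat, cell=0.5):
    """Keep one occurrence record per (cell x cell) degree grid cell."""
    keys = np.stack([np.floor(lon / cell), np.floor(lat / cell)], axis=1)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return np.sort(keep)

lon = np.array([0.10, 0.12, 0.13, 2.40, 2.45, 5.00])  # clustered sampling
lat = np.array([0.20, 0.21, 0.19, 1.10, 1.12, 3.30])
print(systematic_thin(lon, lat))  # indices of the retained records
```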
Can the behavioral sciences self-correct? A social epistemic study.
Romero, Felipe
2016-12-01
Advocates of the self-corrective thesis argue that scientific method will refute false theories and find closer approximations to the truth in the long run. I discuss a contemporary interpretation of this thesis in terms of frequentist statistics in the context of the behavioral sciences. First, I identify experimental replications and systematic aggregation of evidence (meta-analysis) as the self-corrective mechanism. Then, I present a computer simulation study of scientific communities that implement this mechanism to argue that frequentist statistics may converge upon a correct estimate or not depending on the social structure of the community that uses it. Based on this study, I argue that methodological explanations of the "replicability crisis" in psychology are limited and propose an alternative explanation in terms of biases. Finally, I conclude suggesting that scientific self-correction should be understood as an interaction effect between inference methods and social structures.
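The self-corrective mechanism the paper examines, replication plus meta-analytic aggregation, can be simulated in a few lines. This hedged toy (our own effect size and study count, with no publication bias or social structure) shows the baseline convergence that the paper's richer simulations then perturb:

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect, n_per_study = 0.3, 50

estimates, variances = [], []
for _ in range(200):                           # 200 independent replications
    sample = rng.normal(true_effect, 1.0, n_per_study)
    estimates.append(sample.mean())
    variances.append(sample.var(ddof=1) / n_per_study)

w = 1.0 / np.array(variances)                  # inverse-variance weights
pooled = np.sum(w * np.array(estimates)) / np.sum(w)
print(pooled)                                  # fixed-effect estimate, near 0.3
```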
NASA Astrophysics Data System (ADS)
Zhang, Zhiming; de Wulf, Robert R.; van Coillie, Frieke M. B.; Verbeke, Lieven P. C.; de Clercq, Eva M.; Ou, Xiaokun
2011-01-01
Mapping of vegetation using remote sensing in mountainous areas is considerably hampered by topographic effects on the spectral response pattern. A variety of topographic normalization techniques have been proposed to correct these illumination effects due to topography. The purpose of this study was to compare six different topographic normalization methods (Cosine correction, Minnaert correction, C-correction, Sun-canopy-sensor correction, two-stage topographic normalization, and slope matching technique) for their effectiveness in enhancing vegetation classification in mountainous environments. Since most of the vegetation classes in the rugged terrain of the Lancang Watershed (China) did not feature a normal distribution, artificial neural networks (ANNs) were employed as a classifier. Comparing the ANN classifications, none of the topographic correction methods could significantly improve ETM+ image classification overall accuracy. Nevertheless, at the class level, the accuracy of pine forest could be increased by using topographically corrected images. On the contrary, oak forest and mixed forest accuracies were significantly decreased by using corrected images. The results also showed that none of the topographic normalization strategies was satisfactorily able to correct for the topographic effects in severely shadowed areas.
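Two of the compared corrections reduce to short formulas. The hedged sketch below applies the cosine correction and the C-correction to synthetic radiance values; in practice cos(i), the local illumination, would come from a DEM-derived slope/aspect model rather than random numbers.

```python
import numpy as np

rng = np.random.default_rng(2)
sz = np.deg2rad(35.0)                               # solar zenith angle
cos_i = rng.uniform(0.2, 1.0, 1000)                 # local illumination, cos(i)
radiance = 80.0 * cos_i + rng.normal(0, 2.0, 1000)  # synthetic observations

cosine_corr = radiance * np.cos(sz) / cos_i         # cosine correction

b, a = np.polyfit(cos_i, radiance, 1)               # fit radiance = a + b*cos(i)
c = a / b                                           # C parameter
c_corr = radiance * (np.cos(sz) + c) / (cos_i + c)  # C-correction
```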
Error correcting circuit design with carbon nanotube field effect transistors
NASA Astrophysics Data System (ADS)
Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong
2018-03-01
In this work, a parallel error correcting circuit based on the (7, 4) Hamming code is designed and implemented with carbon nanotube field effect transistors (CNTFETs), and its function is validated by simulation in HSpice with the Stanford model. A grouping method able to correct multiple bit errors in 16-bit and 32-bit applications is proposed, and its error correction capability is analyzed. The performance of circuits implemented with CNTFETs and with traditional MOSFETs is also compared; the former shows a 34.4% reduction in layout area and a 56.9% reduction in power consumption.
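For readers unfamiliar with the underlying code, here is a hedged software sketch of (7, 4) Hamming encoding and single-bit syndrome correction (plain Python, not a transistor-level model of the circuit):

```python
import numpy as np

G = np.array([[1,1,0,1], [1,0,1,1], [1,0,0,0], [0,1,1,1],
              [0,1,0,0], [0,0,1,0], [0,0,0,1]])                    # 7x4 generator
H = np.array([[1,0,1,0,1,0,1], [0,1,1,0,0,1,1], [0,0,0,1,1,1,1]])  # parity checks

def encode(data4):
    """Encode 4 data bits into a 7-bit Hamming codeword."""
    return (G @ np.asarray(data4)) % 2

def decode(code7):
    """Correct a single-bit error via the syndrome, return the 4 data bits."""
    code7 = np.asarray(code7).copy()
    syndrome = (H @ code7) % 2
    pos = int(syndrome @ np.array([1, 2, 4]))   # 1-based error position
    if pos:
        code7[pos - 1] ^= 1
    return code7[[2, 4, 5, 6]]                  # data sits at positions 3,5,6,7

word = encode([1, 0, 1, 1])
word[4] ^= 1                                    # inject one bit error
print(decode(word))                             # -> [1 0 1 1]
```

As we understand the grouping idea, several such decoders run in parallel on disjoint bit groups of a 16-bit or 32-bit word, so one error per group, and hence multiple errors per word, can be corrected.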
A beam hardening and dispersion correction for x-ray dark-field radiography.
Pelzer, Georg; Anton, Gisela; Horn, Florian; Rieger, Jens; Ritter, André; Wandner, Johannes; Weber, Thomas; Michel, Thilo
2016-06-01
X-ray dark-field imaging promises information on the small-angle scattering properties even of large samples. However, the dark-field image is correlated with the object's attenuation and phase shift if a polychromatic x-ray spectrum is used. A method to remove part of these correlations is proposed. The experimental setup for image acquisition was modeled in a wave-field simulation to quantify the dark-field signals originating solely from a material's attenuation and phase shift. A calibration matrix was simulated for ICRU46 breast tissue. Using the simulated data, a dark-field image of a human mastectomy sample was corrected for the fingerprint of the attenuation and phase images. Comparing the simulated, attenuation-based dark-field values to a phantom measurement, good agreement was found. Applying the proposed method to mammographic dark-field data, a reduction of the dark-field background and anatomical noise was achieved. The contrast between microcalcifications and their surrounding background was increased. The authors show that the influence of beam hardening and dispersion can be quantified by simulation and, thus, measured image data can be corrected. The simulation allows one to determine the corresponding dark-field artifacts for a wide range of setup parameters, such as tube voltage and filtration. The application of the proposed method to mammographic dark-field data shows an increase in contrast compared to the original image, which might simplify further image-based diagnosis.
Predicting Correctness of Problem Solving from Low-Level Log Data in Intelligent Tutoring Systems
ERIC Educational Resources Information Center
Cetintas, Suleyman; Si, Luo; Xin, Yan Ping; Hord, Casey
2009-01-01
This paper proposes a learning based method that can automatically determine how likely a student is to give a correct answer to a problem in an intelligent tutoring system. Only log files that record students' actions with the system are used to train the model, therefore the modeling process doesn't require expert knowledge for identifying…
Correction on the distortion of Scheimpflug imaging for dynamic central corneal thickness
NASA Astrophysics Data System (ADS)
Li, Tianjie; Tian, Lei; Wang, Like; Hon, Ying; Lam, Andrew K. C.; Huang, Yifei; Wang, Yuanyuan; Zheng, Yongping
2015-05-01
The measurement of central corneal thickness (CCT) is important in ophthalmology. Most studies have concerned its value at rest, while few have focused on its dynamic change. The Corvis ST is the only commercial device currently available to visualize a two-dimensional image of dynamic corneal profiles during an air-puff indentation. However, the directly observed CCT is subject to Scheimpflug distortion, which can mislead clinical diagnosis. This study aimed to correct the distortion for better measurement of dynamic CCT. The optical path was first derived to account for the factors influencing the use of the Corvis ST. A correction method was then proposed to estimate the CCT at any time during air-puff indentation. Simulation results demonstrated the feasibility of the intuitive calibration for measuring stationary CCT and indicated the necessity of correction during the air puff. Experiments on three contact lenses and four human corneas verified the prediction that the CCT would be underestimated when the improper calibration was conducted for air, and overestimated when it was conducted on contact lenses made of polymethylmethacrylate. Using the proposed method, the CCT was observed to increase by 66±34 μm at highest concavity in 48 normal human corneas.
Hu, Min-Chun; Cheng, Ming-Hsun; Lan, Kun-Chan
2016-01-01
An automatic tongue diagnosis framework is proposed to analyze tongue images taken by smartphones. Different from conventional tongue diagnosis systems, our input tongue images are usually in low resolution and taken under unknown lighting conditions. Consequently, existing tongue diagnosis methods cannot be directly applied to give accurate results. We use an SVM (support vector machine) to predict the lighting condition and the corresponding color correction matrix according to the color difference of images taken with and without flash. We also modify the state-of-the-art work on fur and fissure detection for tongue images by taking hue information into consideration and adding a denoising step. Our method is able to correct the color of tongue images under different lighting conditions (e.g. fluorescent, incandescent, and halogen illuminants) and provides better accuracy in tongue feature detection with less processing complexity than the prior work. Unlike prior work, which only operates in a controlled environment, our system can adapt to different lighting conditions by employing a novel color correction parameter estimation scheme.
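A hedged sketch of the lighting-adaptive correction: an SVM picks the lighting class from a flash/no-flash color-difference feature, and the matching 3x3 color correction matrix is applied. The features and matrices below are made-up stand-ins for the calibrated ones.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training features: mean RGB difference with/without flash.
X = np.array([[30, 22, 10], [28, 20, 12], [5, 8, 25], [6, 9, 27]])
y = np.array(["incandescent", "incandescent", "fluorescent", "fluorescent"])
clf = SVC(kernel="linear").fit(X, y)

CCM = {  # placeholder per-illuminant 3x3 color correction matrices
    "incandescent": np.array([[1.2, -0.1, -0.1], [0.0, 1.0, 0.0], [0.0, 0.0, 0.9]]),
    "fluorescent":  np.array([[0.9, 0.0, 0.05], [0.0, 1.05, 0.0], [0.05, 0.0, 1.1]]),
}

feature = np.array([[29, 21, 11]])          # new image's flash difference
matrix = CCM[clf.predict(feature)[0]]       # predicted lighting -> matrix
pixel = np.array([120.0, 100.0, 90.0])
print(matrix @ pixel)                       # color-corrected RGB pixel
```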
Synchrophasor Data Correction under GPS Spoofing Attack: A State Estimation Based Approach
Fan, Xiaoyuan; Du, Liang; Duan, Dongliang
2017-02-01
GPS spoofing attack (GSA) has been shown to be one of the most imminent threats to almost all cyber-physical systems incorporating the civilian GPS signal. Specifically, for the current agenda of power grid modernization, it may greatly jeopardize the benefits provided by the pervasively installed phasor measurement units (PMUs). In this study, we consider the case where synchrophasor data from PMUs are compromised due to the presence of a single GSA, and show that they can be corrected by signal processing techniques. In particular, we introduce a statistical model for synchrophasor-based power system state estimation (SE), and then derive spoofing-matched algorithms for synchrophasor data correction against GPS spoofing attack. Different testing scenarios in IEEE 14-, 30-, 57- and 118-bus systems are simulated to show the proposed algorithms' performance on GSA detection and state estimation. Numerical results demonstrate that our proposed algorithms can consistently locate and correct the spoofed synchrophasor data with good accuracy as long as system observability is satisfied. Finally, the accuracy of state estimation is significantly improved compared with the traditional weighted least squares method and approaches the performance of the Genie-aided method.
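As background for the state-estimation machinery, the hedged toy below runs a linear weighted least-squares estimate and flags the measurement with the largest (crudely sigma-normalized) residual. This is generic bad-data detection, not the paper's spoofing-matched algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
H = rng.normal(size=(8, 3))            # toy measurement model: z = H x + e
x_true = np.array([1.0, -0.5, 0.2])
sigma = 0.01 * np.ones(8)
z = H @ x_true + rng.normal(0.0, sigma)
z[5] += 0.3                            # one corrupted (e.g., spoofed) channel

W = np.diag(1.0 / sigma**2)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)   # WLS state estimate

r = z - H @ x_hat                      # measurement residuals
print(int(np.argmax(np.abs(r) / sigma)))  # -> 5, the suspect measurement
```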
NASA Astrophysics Data System (ADS)
Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric
2018-02-01
Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks, by combining a respiratory motion correction approach with temporal regularization in a unique reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.
Improved Kalman Filter Method for Measurement Noise Reduction in Multi Sensor RFID Systems
Eom, Ki Hwan; Lee, Seung Joon; Kyung, Yeo Sun; Lee, Chang Won; Kim, Min Chul; Jung, Kyung Kwon
2011-01-01
Recently, the range of available Radio Frequency Identification (RFID) tags has been widened to include smart RFID tags which can monitor their varying surroundings. One of the most important factors for better performance of a smart RFID system is accurate measurement from various sensors. In the multi-sensing environment, some noisy signals are obtained because of the changing surroundings. We propose in this paper an improved Kalman filter method to reduce noise and obtain correct data. The performance of a Kalman filter is determined by the measurement and system noise covariances, usually called the R and Q variables in the Kalman filter algorithm. Choosing correct R and Q values is one of the most important design factors for good Kalman filter performance. For this reason, we propose an improved Kalman filter that advances the noise-reduction ability of the standard filter. Only the measurement noise covariance was considered, because the system architecture is simple, and it is adjusted by a neural network. With this method, more accurate data can be obtained with smart RFID tags. In a simulation, the proposed improved Kalman filter has 40.1%, 60.4% and 87.5% less Mean Squared Error (MSE) than the conventional Kalman filter method for a temperature sensor, humidity sensor and oxygen sensor, respectively. The performance of the proposed method was also verified with experiments. PMID:22346641
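The role of R (and Q) is easy to see in a scalar filter. The hedged sketch below fixes R by hand; the paper's contribution is to adjust it adaptively with a neural network instead.

```python
import numpy as np

rng = np.random.default_rng(5)
truth = 25.0 + 0.01 * np.arange(500)       # slowly drifting temperature
z = truth + rng.normal(0.0, 0.5, 500)      # noisy sensor readings

Q, R = 1e-4, 0.25                          # process / measurement noise covariances
x, P = z[0], 1.0
est = []
for zk in z:
    P = P + Q                              # predict
    K = P / (P + R)                        # Kalman gain
    x = x + K * (zk - x)                   # update with the new measurement
    P = (1.0 - K) * P
    est.append(x)

print(np.mean((np.array(est) - truth) ** 2))  # MSE well below the raw 0.25
```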
Automated 3D Ultrasound Image Segmentation to Aid Breast Cancer Image Interpretation
Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Yuan, Jie; Wang, Xueding; Carson, Paul L.
2015-01-01
Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer. PMID:26547117
A new method for spatial structure detection of complex inner cavities based on 3D γ-photon imaging
NASA Astrophysics Data System (ADS)
Xiao, Hui; Zhao, Min; Liu, Jiantang; Liu, Jiao; Chen, Hao
2018-05-01
This paper presents a new three-dimensional (3D) imaging method for detecting the spatial structure of a complex inner cavity based on positron annihilation and γ-photon detection. This method first labels a carrier solution with a radionuclide and injects it into the inner cavity, where positrons are generated. Subsequently, γ-photons are released from positron annihilation and recorded by the γ-photon detector ring. Finally, two-dimensional (2D) image slices of the inner cavity are reconstructed with the ordered-subset expectation maximization scheme and merged into a 3D image of the inner cavity. To eliminate artifacts in the reconstructed image due to scattered γ-photons, a novel angle-traversal model is proposed for γ-photon single-scattering correction, in which the path of a single-scattered γ-photon is analyzed from a spatial-geometry perspective. Two experiments are conducted to verify the effectiveness of the proposed correction model and the advantage of the proposed testing method in detecting the spatial structure of inner cavities, including the distribution of a gas-liquid multiphase mixture inside the cavity. These experiments indicate the potential of the proposed method as a new tool for accurately delineating the inner structures of complex industrial parts.
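The ordered-subset scheme named above is a variant of the classical MLEM update, which the hedged toy below applies with a random stand-in system matrix (OSEM would simply cycle this update over subsets of the rays):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.uniform(0.0, 1.0, size=(40, 16))   # toy system matrix: 40 rays, 16 voxels
x_true = rng.uniform(0.5, 2.0, 16)
y = rng.poisson(A @ x_true)                # measured gamma-photon counts

x = np.ones(16)                            # uniform initial image
sens = A.T @ np.ones(40)                   # sensitivity image, A^T 1
for _ in range(100):                       # MLEM: x *= A^T(y / Ax) / A^T 1
    proj = np.maximum(A @ x, 1e-12)
    x *= (A.T @ (y / proj)) / sens

print(np.abs(x - x_true).mean())           # approaches the true activity
```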
Joshi, Vinayak S; Reinhardt, Joseph M; Garvin, Mona K; Abramoff, Michael D
2014-01-01
The separation of the retinal vessel network into distinct arterial and venous vessel trees is of high interest. We propose an automated method for identification and separation of retinal vessel trees in a retinal color image by converting a vessel segmentation image into a vessel segment map and identifying the individual vessel trees by graph search. Orientation, width, and intensity of each vessel segment are utilized to find the optimal graph of vessel segments. The separated vessel trees are labeled as primary vessel or branches. We utilize the separated vessel trees for arterial-venous (AV) classification, based on the color properties of the vessels in each tree graph. We applied our approach to a dataset of 50 fundus images from 50 subjects. The proposed method resulted in an accuracy of 91.44% correctly classified vessel pixels as either artery or vein. The accuracy of correctly classified major vessel segments was 96.42%.
NASA Astrophysics Data System (ADS)
Roccia, S.; Gaulard, C.; Étilé, A.; Chakma, R.
2017-07-01
In the context of nuclear orientation, we propose a new method to correct multipole mixing ratios for asymmetries both in the geometry of the setup and in the detection system. The method is also robust against temperature fluctuations, beam intensity fluctuations, and uncertainties in the nuclear structure of the nuclei. Additionally, it provides a natural way to combine data from different detectors and make good use of all available statistics. We use this method to demonstrate the accuracy that can be reached with the PolarEx setup now installed at the ALTO facility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brabec, Jiri; van Dam, Hubertus JJ; Pittner, Jiri
2012-03-28
The recently proposed Universal State-Selective (USS) corrections [K. Kowalski, J. Chem. Phys. 134, 194107 (2011)] to approximate Multi-Reference Coupled Cluster (MRCC) energies can be applied to any type of MRCC theory based on the Jeziorski-Monkhorst [B. Jeziorski, H.J. Monkhorst, Phys. Rev. A 24, 1668 (1981)] exponential Ansatz. In this letter, we report on the performance of a simple USS correction to the Brillouin-Wigner MRCC (BW-MRCC) formalism employing single and double excitations (BW-MRCCSD). It is shown that the resulting formalism (USS-BW-MRCCSD), which uses the manifold of single and double excitations to construct the correction, can be related to a posteriori corrections utilized in routine BW-MRCCSD calculations. In several benchmark calculations, we compare the results of the USS-BW-MRCCSD method with results of the BW-MRCCSD approach employing a posteriori corrections and with results obtained with the Full Configuration Interaction (FCI) method.
Curvature correction of retinal OCTs using graph-based geometry detection
NASA Astrophysics Data System (ADS)
Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan
2013-05-01
In this paper, we present a new algorithm as an enhancement and preprocessing step for acquired optical coherence tomography (OCT) images of the retina. The proposed method is composed of two steps, the first of which is a denoising algorithm using wavelet diffusion based on a circularly symmetric Laplacian model; the second is a graph-based geometry detection and curvature correction according to the hyper-reflective complex layer in the retina. The proposed denoising algorithm improved the contrast-to-noise ratio from 0.89 to 1.49 and increased the signal-to-noise ratio (OCT image SNR) from 18.27 to 30.43 dB. By applying the proposed method for estimation of the interpolated curve in a fully automatic fashion, the mean ± SD unsigned border positioning error was calculated for normal and abnormal cases. Error values of 2.19 ± 1.25 and 8.53 ± 3.76 µm were obtained for 200 randomly selected slices without pathological curvature and 50 randomly selected slices with pathological curvature, respectively. An important aspect of this algorithm is its ability to detect curvature in strongly pathological images, which surpasses previously introduced methods; the method is also fast compared to the relatively low speed of similar methods.
Model-based sphere localization (MBSL) in x-ray projections
NASA Astrophysics Data System (ADS)
Sawall, Stefan; Maier, Joscha; Leinweber, Carsten; Funck, Carsten; Kuntz, Jan; Kachelrieß, Marc
2017-08-01
The detection of spherical markers in x-ray projections is an important task in a variety of applications, e.g. geometric calibration and detector distortion correction. Therein, the projection of the sphere center onto the detector is of particular interest, as the spherical beads used are not ideal point-like objects. Only a few methods have been proposed to estimate this position on the detector with sufficient accuracy, and surrogate positions, e.g. the center of gravity, are often used, impairing the results of subsequent algorithms. We propose to estimate the projection of the sphere center on the detector using a simulation-based method matching an artificial projection to the actual measurement. The proposed algorithm intrinsically corrects for all polychromatic effects included in the measurement and absent in the simulation by a polynomial which is estimated simultaneously. Furthermore, neither the acquisition geometry nor any object properties need to be known to find the center of the bead, other than the fact that the object is of spherical shape. Simulations show that the algorithm estimates the center projection with an error of less than 1% of the detector pixel size at realistic noise levels, and that the method is robust to the sphere material, sphere size, and acquisition parameters. A comparison to three reference methods using simulations and measurements indicates that the proposed method is an order of magnitude more accurate than these algorithms. The proposed method is an accurate algorithm for estimating the center of spherical markers in CT projections in the presence of polychromatic effects and noise.
A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.
Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping
2017-03-01
Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods, which require bias-corrected MRI, we present a high-order and L0-regularized variational model for joint bias correction and brain extraction. The model is composed of a data fitting term, a piecewise-constant regularization and a smooth regularization, constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we solve an alternating direction scheme, all subproblems of which have closed-form solutions. The method is tested on three T2-weighted acquisition configurations comprising a total of 50 rodent brain volumes, acquired at field strengths of 4.7 Tesla, 9.4 Tesla and 17.6 Tesla, respectively. On one hand, we compare the results of bias correction with N3 and N4 in terms of the coefficient of variation on 20 different tissues of the rodent brain. On the other hand, the results of brain extraction are compared against manually segmented gold standards, BET, BSE and 3-D PCNN, based on a number of metrics. With its high accuracy and efficiency, the proposed method can facilitate automatic processing of large-scale brain studies.
Sliding Window-Based Region of Interest Extraction for Finger Vein Images
Yang, Lu; Yang, Gongping; Yin, Yilong; Xiao, Rongyang
2013-01-01
Region of Interest (ROI) extraction is a crucial step in an automatic finger vein recognition system. The aim of ROI extraction is to decide which part of the image is suitable for finger vein feature extraction. This paper proposes a finger vein ROI extraction method which is robust to finger displacement and rotation. First, we determine the middle line of the finger, which is used to correct the image skew. Then, a sliding window is used to detect the phalangeal joints and thereby ascertain the height of the ROI. Last, for the skew-corrected image with the determined height, we obtain the ROI by using the internal tangents of the finger edges as the left and right boundaries. The experimental results show that the proposed method can extract the ROI more accurately and effectively than other methods, and thus improve the performance of a finger vein identification system. In addition, to acquire high-quality finger vein images during the capture process, we propose eight criteria for finger vein capture from different aspects, which should be helpful for finger vein acquisition. PMID:23507824
Robust mislabel logistic regression without modeling mislabel probabilities.
Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun
2018-03-01
Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term; that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
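A minimal sketch of the γ-divergence objective for logistic regression, following the standard γ-cross-entropy form; the value γ = 0.5, the BFGS optimizer, and the intercept handling are illustrative assumptions (the paper's two dedicated algorithms are not reproduced here). The fitted-density factor raised to the power γ automatically down-weights badly fit, likely mislabeled points.

```python
import numpy as np
from scipy.optimize import minimize

def gamma_logistic(X, y, gamma=0.5):
    """Fit logistic regression by maximizing the gamma-cross-entropy:
    mean of f(y_i|x_i)^gamma / (sum_y f(y|x_i)^(1+gamma))^(gamma/(1+gamma)).
    'gamma' and the optimizer choice are illustrative assumptions."""
    X1 = np.hstack([np.ones((X.shape[0], 1)), X])   # add intercept column

    def neg_obj(beta):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        f_obs = np.where(y == 1, p, 1.0 - p) ** gamma        # density at y_i
        norm = (p ** (1 + gamma) + (1 - p) ** (1 + gamma)) ** (gamma / (1 + gamma))
        return -np.mean(f_obs / norm)    # mislabeled points get small weight

    res = minimize(neg_obj, np.zeros(X1.shape[1]), method="BFGS")
    return res.x
```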
Motion and positional error correction for cone beam 3D-reconstruction with mobile C-arms.
Bodensteiner, C; Darolti, C; Schumacher, H; Matthäus, L; Schweikard, A
2007-01-01
CT images acquired by mobile C-arm devices can contain artefacts caused by positioning errors. We propose a data-driven method based on iterative 3D-reconstruction and 2D/3D-registration to correct projection data inconsistencies. With a 2D/3D-registration algorithm, transformations are computed to align the acquired projection images to a previously reconstructed volume. In an iterative procedure, the reconstruction algorithm uses the results of the registration step. This algorithm also reduces small motion artefacts within 3D-reconstructions. Experiments with simulated projections from real patient data show the feasibility of the proposed method. In addition, experiments with real projection data acquired with an experimental robotised C-arm device have been performed with promising results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Peter C.; Schreibmann, Eduard; Roper, Justin
2015-03-15
Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
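The abstract does not specify the form of the "comprehensive analysis" used to predict HU from MRI intensities. As a purely illustrative stand-in, the sketch below learns a per-intensity median HU lookup on the artifact-free slice and applies it to the corrupted slice; the binning scheme and the lookup itself are assumptions.

```python
import numpy as np

def correct_hu(mri_corrupt, mri_clean, ct_clean, n_bins=256):
    """Predict HU for a corrupted CT slice from its coregistered MRI, using
    the MRI-to-HU relation observed on a nearby artifact-free slice.
    The median-per-bin lookup is an assumption, not the authors' analysis."""
    bins = np.linspace(mri_clean.min(), mri_clean.max(), n_bins)
    idx = np.digitize(mri_clean.ravel(), bins)
    lut = np.zeros(n_bins + 1)
    for b in np.unique(idx):                        # median HU per MRI bin
        lut[b] = np.median(ct_clean.ravel()[idx == b])
    out = lut[np.digitize(mri_corrupt.ravel(), bins)]
    return out.reshape(mri_corrupt.shape)           # empty bins fall back to 0 HU
```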
Geometric correction method for 3d in-line X-ray phase contrast image reconstruction
2014-01-01
Background: Mechanical imperfections or misalignment of X-ray phase contrast imaging (XPCI) components cause projection data to be misplaced, and thus result in reconstructed computed tomography (CT) slice images that are blurred or show edge artifacts. Features of the biological microstructures under investigation are thereby destroyed, and the spatial resolution of the XPCI image is decreased. This makes data correction an essential pre-processing step for CT reconstruction in XPCI. Methods: To remove unexpected blurs and edge artifacts, a mathematical model for in-line XPCI is built in this paper by considering the primary geometric parameters, which include a rotation angle and a shift. Optimal geometric parameters are obtained by solving a maximization problem. An iterative approach is employed to solve the maximization problem using a two-step scheme, which performs a composite geometric transformation and then follows with a linear regression process. After applying the geometric transformation with optimal parameters to the projection data, the standard filtered back-projection algorithm is used to reconstruct CT slice images. Results: Numerical experiments were carried out on both synthetic and real in-line XPCI datasets. Experimental results demonstrate that the proposed method improves CT image quality by removing both blurring and edge artifacts at the same time, compared to existing correction methods. Conclusions: The method proposed in this paper provides an effective projection data correction scheme and significantly improves image quality by removing both blurring and edge artifacts at the same time for in-line XPCI. It is easy to implement and can also be extended to other XPCI techniques. PMID:25069768
Maikusa, Norihide; Yamashita, Fumio; Tanaka, Kenichiro; Abe, Osamu; Kawaguchi, Atsushi; Kabasawa, Hiroyuki; Chiba, Shoma; Kasahara, Akihiro; Kobayashi, Nobuhisa; Yuasa, Tetsuya; Sato, Noriko; Matsuda, Hiroshi; Iwatsubo, Takeshi
2013-06-01
Serial magnetic resonance imaging (MRI) images acquired from multisite and multivendor MRI scanners are widely used in measuring longitudinal structural changes in the brain. Precise and accurate measurements are important in understanding the natural progression of neurodegenerative disorders such as Alzheimer's disease. However, geometric distortions in MRI images decrease the accuracy and precision of volumetric or morphometric measurements. To solve this problem, the authors suggest a commercially available phantom-based distortion correction method that accommodates the variation in geometric distortion within MRI images obtained with multivendor MRI scanners. The authors' method is based on image warping using a polynomial function. The method detects fiducial points within a phantom image using phantom analysis software developed by the Mayo Clinic and calculates warping functions for distortion correction. To quantify the effectiveness of the method, the authors corrected phantom images obtained from multivendor MRI scanners and calculated the root-mean-square (RMS) of the fiducial errors and the circularity ratio as evaluation values. The authors also compared the performance of their method with that of a distortion correction method based on a spherical harmonics description of the generic gradient design parameters. Moreover, the authors evaluated whether this correction improves the test-retest reproducibility of voxel-based morphometry in human studies, using a Wilcoxon signed-rank test on uncorrected and corrected images. The root-mean-square errors and circularity ratios for all slices improved significantly (p < 0.0001) after the authors' distortion correction. Additionally, the authors' method was significantly better than the spherical-harmonics-based distortion correction method in reducing the root-mean-square errors and improving the circularity ratios (p < 0.001 and p = 0.0337, respectively). Moreover, the authors' method reduced the RMS error arising from gradient nonlinearity more than gradwarp methods. In human studies, the coefficient of variation of voxel-based morphometry analysis of the whole brain improved significantly from 3.46% to 2.70% after distortion correction of the whole gray matter using the authors' method (Wilcoxon signed-rank test, p < 0.05). The authors proposed a phantom-based distortion correction method to improve reproducibility in longitudinal structural brain analysis using multivendor MRI. The authors evaluated the method on phantom images in terms of two geometrical values and on human images in terms of test-retest reproducibility. The results showed that distortion was corrected significantly using the authors' method. In human studies, the reproducibility of voxel-based morphometry analysis for the whole gray matter significantly improved after distortion correction using the authors' method.
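The polynomial warping step reduces to an ordinary least-squares fit from detected fiducial positions to their known reference positions; the sketch below assumes a third-order 2-D monomial basis, which is an illustrative choice rather than the authors' exact parameterization.

```python
import numpy as np

def fit_polynomial_warp(detected, reference, order=3):
    """Least-squares 2-D polynomial mapping from detected fiducial points
    (N x 2 array) to their known reference positions (N x 2 array).
    'order' is an assumed polynomial degree."""
    x, y = detected[:, 0], detected[:, 1]
    cols = [x ** i * y ** j for i in range(order + 1)
            for j in range(order + 1 - i)]          # monomials of total degree <= order
    A = np.stack(cols, axis=1)
    cx, *_ = np.linalg.lstsq(A, reference[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, reference[:, 1], rcond=None)
    return cx, cy   # coefficients used to resample the distorted image
```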
NASA Astrophysics Data System (ADS)
Alessandri, S.; Monti, G.
2008-05-01
A simple procedure is proposed for the assessment of rectangular reinforced concrete columns under combined biaxial bending and axial load, and for the design of the correct amount of FRP strengthening for underdesigned concrete sections. Approximate closed-form equations are developed based on the load contour method originally proposed by Bresler for reinforced concrete sections. The 3D failure surface is approximated along its contours, at a constant axial load, by means of equations given as the sum of the acting-to-resisting moment ratios about the principal axes of the section, each raised to a power depending on the axial load, the steel reinforcement ratio, and the section shape. The method is extended to FRP-strengthened sections. Moreover, to make it possible to apply the load contour method in a more practical way, simple closed-form equations are developed for rectangular reinforced concrete sections with two-way steel reinforcement and FRP strengthening on each side. A comparison between the proposed approach and the fiber method (which is considered exact) shows that the simplified equations correctly represent the section interaction diagram.
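For reference, the load contour described above has the classical Bresler form; in standard notation (assumed here rather than taken from the paper):

```latex
\left(\frac{M_x}{M_{ux}(N)}\right)^{\alpha}
+ \left(\frac{M_y}{M_{uy}(N)}\right)^{\alpha} = 1,
\qquad \alpha = \alpha\left(N,\ \rho_s,\ b/h\right)
```

where M_x, M_y are the acting moments, M_ux(N), M_uy(N) the uniaxial resisting moments at axial load N, ρ_s the steel reinforcement ratio, and b/h the section aspect ratio.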
Tanaka, Kenichi; Kajimoto, Tsuyoshi; Hayashi, Takahiro; Asanuma, Osamu; Hori, Masakazu; Kamo, Ken-Ichi; Sumida, Iori; Takahashi, Yutaka; Tateoka, Kunihiko; Bengua, Gerard; Sakata, Koh-Ichi; Endo, Satoru
2018-04-11
This study aims to demonstrate the feasibility of a method for estimating the strength of a moving brachytherapy source during implantation in a patient. Experiments were performed under the same conditions as in the actual treatment, except that the source was not implanted into a patient. The brachytherapy source selected for this study was 125I with an air kerma strength of 0.332 U (μGy m2 h-1), and the detector used was a plastic scintillator with dimensions of 10 cm × 5 cm × 5 cm. A calibration factor to convert the counting rate of the detector to the source strength was measured, and the accuracy of the proposed method was then investigated for a manually driven source. The accuracy was found to be under 10% when the shielding effect of additional needles for implantation at other positions was corrected, and about 30% when the shielding was not corrected. Even without shielding correction, the proposed method can detect a dead or dropped source, implantation of a source with the wrong strength, and a mistake in the number of sources implanted. Furthermore, when the correction was applied, the achieved accuracy approached the 7% level required to identify an Oncoseed 6711 125I seed with an unintended strength among the commercially supplied values of 0.392, 0.462 and 0.533 U.
Yang, Minglei; Ding, Hui; Zhu, Lei; Wang, Guangzhi
2016-12-01
Ultrasound fusion imaging is an emerging tool that benefits a variety of clinical applications, such as image-guided diagnosis and treatment of hepatocellular carcinoma and unresectable liver metastases. However, respiratory liver motion-induced misalignment of multimodal images (i.e., fusion error) compromises the effectiveness and practicability of this method. The purpose of this paper is to develop a subject-specific liver motion model and an automatic registration-based method to correct the fusion error. An online-built subject-specific motion model and an automatic image registration method for 2D ultrasound-3D magnetic resonance (MR) images were combined to compensate for the respiratory liver motion. The key steps included: 1) Build a subject-specific liver motion model for the current subject online and perform the initial registration of pre-acquired 3D MR and intra-operative ultrasound images; 2) During fusion imaging, compensate for liver motion first using the motion model, and then use an automatic registration method to further correct the respiratory fusion error. Evaluation experiments were conducted on a liver phantom and five subjects. In the phantom study, the fusion error (superior-inferior axis) was reduced from 13.90±2.38 mm to 4.26±0.78 mm by using the motion model only. The fusion error further decreased to 0.63±0.53 mm by using the registration method. The registration method also decreased the rotation error from 7.06±0.21° to 1.18±0.66°. In the clinical study, the fusion error was reduced from 12.90±9.58 mm to 6.12±2.90 mm by using the motion model alone. Moreover, the fusion error decreased to 1.96±0.33 mm by using the registration method. The proposed method can effectively correct the respiration-induced fusion error and improve fusion image quality. This method can also reduce the dependency of error correction on the initial registration of ultrasound and MR images. Overall, the proposed method can improve the clinical practicability of ultrasound fusion imaging. Copyright © 2016 Elsevier Ltd. All rights reserved.
Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor
2011-05-14
In epidemiological studies, explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies, a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method considers a classical measurement error model for the exposures, as well as the conditional logistic regression likelihood as the disease model, together with a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the measurement error variability of the perfluorinated acids. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of the method's performance in this particular application with different measurement error variability was performed. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, a comparison of the inferences which are corrected for measurement error to those which ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed. In individually matched case-control studies, the use of the conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.
[Spectral scatter correction of coal samples based on quasi-linear local weighted method].
Lei, Meng; Li, Ming; Ma, Xiao-Ping; Miao, Yan-Zi; Wang, Jian-Sheng
2014-07-01
The present paper puts forth a new spectral correction method based on quasi-linear expressions and a local weighted function. The first stage of the method is to select three quasi-linear expressions to replace the original linear expression in the MSC method: quadratic, cubic and growth-curve expressions. Then the local weighted function is constructed by introducing four kernel functions: the Gaussian, Epanechnikov, Biweight and Triweight kernels. After adding this function to the basic estimation equation, the dependency between the original and ideal spectra is described more accurately and meticulously at each wavelength point. Furthermore, two analytical models were established, based respectively on PLS and a PCA-BP neural network, which can be used to estimate the accuracy of the corrected spectra. Finally, the optimal correction mode was determined from the analytical results for different combinations of quasi-linear expression and local weighted function. Spectra of the same coal sample have different noise ratios when the sample is prepared at different particle sizes. To validate the effectiveness of this method, the experiment analyzed the correction results of three spectral data sets with particle sizes of 0.2, 1 and 3 mm. The results show that the proposed method can eliminate the scattering influence and enhance the information of spectral peaks. This provides a more efficient way to enhance the correlation between corrected spectra and coal qualities significantly, and to improve the accuracy and stability of the analytical model substantially.
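One plausible reading of the quadratic-plus-Gaussian-kernel combination is sketched below: at each wavelength point, a locally weighted quadratic fit of the spectrum against a reference spectrum is computed, and the correction inverts the linear term in EMSC style. The kernel width and the inversion step are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def quasi_linear_msc(spectrum, reference, sigma=50.0):
    """Scatter-correct one spectrum against a reference using a quadratic
    (quasi-linear) fit with Gaussian local weights around each wavelength
    point. 'sigma' (in points) is an assumed kernel width."""
    n = spectrum.size
    grid = np.arange(n)
    corrected = np.empty(n)
    for k in range(n):
        w = np.exp(-0.5 * ((grid - k) / sigma) ** 2)       # Gaussian kernel weights
        c2, c1, c0 = np.polyfit(reference, spectrum, deg=2, w=w)
        # remove offset and quadratic scatter term, rescale by the linear term
        corrected[k] = (spectrum[k] - c0 - c2 * reference[k] ** 2) / c1
    return corrected
```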
WE-G-18A-03: Cone Artifacts Correction in Iterative Cone Beam CT Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, H; Folkerts, M; Jiang, S
Purpose: For iterative reconstruction (IR) in cone-beam CT (CBCT) imaging, data truncation along the superior-inferior (SI) direction causes severe cone artifacts in the reconstructed CBCT volume images. Not only does it reduce the effective SI coverage of the reconstructed volume, it also hinders the convergence of the IR algorithm. This is particularly a problem for regularization-based IR, where smoothing-type regularization operations tend to propagate the artifacts to a large area. Our purpose is to develop a practical cone artifact correction solution. Methods: We found that it is the missing data residing in the truncated cone area that leads to inconsistency between the calculated forward projections and the measured projections. We overcome this problem by using FDK-type reconstruction to estimate the missing data and by designing weighting factors to compensate for the inconsistency caused by the missing data. We validate the proposed methods in our multi-GPU low-dose CBCT reconstruction system on multiple patients' datasets. Results: Compared to FDK reconstruction with full datasets, while IR is able to reconstruct CBCT images using a subset of projection data, the severe cone artifacts degrade overall image quality. For a head-neck case under full-fan mode, 13 out of 80 slices are contaminated. It is even more severe in a pelvis case under half-fan mode, where 36 out of 80 slices are affected, leading to inferior soft-tissue delineation. By applying the proposed method, the cone artifacts are effectively corrected, with the mean intensity difference decreased from ∼497 HU to ∼39 HU for the contaminated slices. Conclusion: A practical and effective solution for cone artifact correction is proposed and validated in a CBCT IR algorithm. This study is supported in part by NIH (1R01CA154747-01)
NASA Astrophysics Data System (ADS)
Passow, Christian; Donner, Reik
2017-04-01
Quantile mapping (QM) is an established concept that allows correction of systematic biases in multiple quantiles of the distribution of a climatic observable. It shows remarkable results in correcting biases in historical simulations against observational data and outperforms simpler correction methods that relate only to the mean or variance. Since it has been shown that bias correction of future predictions or scenario runs with basic QM can result in misleading trends in the projection, adjusted, trend-preserving versions of QM were introduced in the form of detrended quantile mapping (DQM) and quantile delta mapping (QDM) (Cannon, 2015, 2016). Still, all previous versions and applications of QM-based bias correction rely on the assumption of time-independent quantiles over the investigated period, which can be misleading in the context of a changing climate. Here, we propose a novel combination of linear quantile regression (QR) with the classical QM method to introduce a consistent, time-dependent and trend-preserving approach to bias correction for historical and future projections. Since QR is a regression method, it is possible to estimate quantiles at the same resolution as the given data and to include trends or other dependencies. We demonstrate the performance of the new method of linear regression quantile mapping (RQM) in correcting biases of temperature and precipitation products from historical runs (1959-2005) of the COSMO model in climate mode (CCLM) from the Euro-CORDEX ensemble, relative to gridded E-OBS data of the same spatial and temporal resolution. A thorough comparison with established bias correction methods highlights the strengths and potential weaknesses of the new RQM approach. References: A.J. Cannon, S.R. Sorbie, T.Q. Murdock: Bias Correction of GCM Precipitation by Quantile Mapping - How Well Do Methods Preserve Changes in Quantiles and Extremes? Journal of Climate, 28, 6938, 2015. A.J. Cannon: Multivariate Bias Correction of Climate Model Outputs - Matching Marginal Distributions and Inter-variable Dependence Structure. Journal of Climate, 29, 7045, 2016.
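The static QM core that RQM extends can be sketched as follows; the quantile grid is an illustrative choice, and the time-dependent quantile-regression component described above is deliberately not shown.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_values):
    """Classical quantile mapping: send each model value through the model
    CDF and back through the observed inverse CDF. 'model_hist' and
    'obs_hist' are historical samples; the 99-point quantile grid is an
    illustrative choice."""
    qs = np.linspace(0.01, 0.99, 99)
    mq = np.quantile(model_hist, qs)                 # model quantiles
    oq = np.quantile(obs_hist, qs)                   # observed quantiles
    return np.interp(model_values, mq, oq)           # bias-corrected values
```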
Han, Lei; Shi, Lu; Yang, Yiling; Song, Dalei
2014-01-01
Geostationary meteorological satellite infrared (IR) channel data contain important spectral information for meteorological research and applications, but their spatial resolution is relatively low. The objective of this study is to obtain higher-resolution IR images. One common method of increasing resolution fuses the IR data with high-resolution visible (VIS) channel data. However, most existing image fusion methods focus only on visual performance, and often fail to take into account the thermal physical properties of the IR images. As a result, spectral distortion occurs frequently. To tackle this problem, we propose a thermal physical properties-based correction method for fusing geostationary meteorological satellite IR and VIS images. In our two-step process, the high-resolution structural features of the VIS image are first extracted and incorporated into the IR image using a regular multi-resolution fusion approach, such as multiwavelet analysis. This step significantly increases the visual details in the IR image, but fake thermal information may be included. Next, the Stefan-Boltzmann law is applied to correct the distortion, to retain or recover the thermal infrared nature of the fused image. The results of both the qualitative and quantitative evaluation demonstrate that the proposed physical correction method both improves the spatial resolution and preserves the infrared thermal properties. PMID:24919017
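The Stefan-Boltzmann step can be pictured as a blockwise energy constraint: each group of fused high-resolution pixels is rescaled so that it emits the same blackbody power (σT⁴) as the original coarse IR pixel it came from. The sketch below assumes pure blackbody pixels, brightness-temperature units, and a fixed block size; all are illustrative, not the paper's exact procedure.

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def thermal_correction(fused_bt, original_bt, block=8):
    """Rescale fused brightness temperatures (high-res array) so each
    'block' x 'block' tile emits the same blackbody power as the matching
    pixel of the original low-res IR image 'original_bt'."""
    out = fused_bt.copy()
    for i in range(0, fused_bt.shape[0], block):
        for j in range(0, fused_bt.shape[1], block):
            tile = out[i:i + block, j:j + block]
            target = SIGMA * original_bt[i // block, j // block] ** 4
            scale = (target / (SIGMA * np.mean(tile ** 4))) ** 0.25
            out[i:i + block, j:j + block] = tile * scale   # preserve emitted power
    return out
```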
Gauss-Seidel Iterative Method as a Real-Time Pile-Up Solver of Scintillation Pulses
NASA Astrophysics Data System (ADS)
Novak, Roman; Vencelj, Matjaž
2009-12-01
The pile-up rejection in nuclear spectroscopy has recently been confronted by several pile-up correction schemes that compensate for distortions of the signal and the subsequent energy spectra artifacts as the counting rate increases. We study here the real-time capability of the event-by-event correction method, which at its core translates to solving many sets of linear equations. Tight time limits and constrained front-end electronics resources make well-known direct solvers inappropriate. We propose a novel approach based on the Gauss-Seidel iterative method, which turns out to be a stable and cost-efficient solution for improving spectroscopic resolution in the front-end electronics. We show the method's convergence properties for a class of matrices that emerge in calorimetric processing of scintillation detector signals and demonstrate the ability of the method to support the relevant resolutions. The sole iteration-based error component can be brought below the sliding-window-induced errors in a reasonable number of iteration steps, thus allowing real-time operation. An area-efficient hardware implementation is proposed that fully utilizes the method's inherent parallelism.
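For concreteness, a plain Gauss-Seidel sweep for Ax = b looks as follows; the fixed iteration count stands in for the bounded latency budget of front-end electronics and is an illustrative choice, not the paper's hardware design.

```python
import numpy as np

def gauss_seidel(A, b, iters=10):
    """Gauss-Seidel iteration for Ax = b with a fixed sweep count, the kind
    of bounded-latency solve a front-end pipeline could afford."""
    x = np.zeros_like(b, dtype=float)
    n = b.size
    for _ in range(iters):
        for i in range(n):          # in-place sweep reuses freshly updated values
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x
```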
Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines
Julio L. Guardado; William T. Sommers
1977-01-01
The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...
Error correction in short time steps during the application of quantum gates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castro, L.A. de, E-mail: leonardo.castro@usp.br; Napolitano, R.D.J.
2016-04-15
We propose a modification of the standard quantum error-correction method to enable the correction of errors that occur due to interaction with a noisy environment during quantum gates, without modifying the codification used for memory qubits. Using a perturbative treatment of the noise that allows us to separate it from the ideal evolution of the quantum gate, we demonstrate that in certain cases it is necessary to divide the logical operation into short time steps intercalated by correction procedures. A prescription of how these gates can be constructed is provided, as well as a proof that, even for the cases when the division of the quantum gate into short time steps is not necessary, this method may be advantageous for reducing the total duration of the computation.
Attenuation-emission alignment in cardiac PET∕CT based on consistency conditions
Alessio, Adam M.; Kinahan, Paul E.; Champley, Kyle M.; Caldwell, James H.
2010-01-01
Purpose: In cardiac PET and PET/CT imaging, misaligned transmission and emission images are a common problem due to respiratory and cardiac motion. This misalignment leads to erroneous attenuation correction and can cause errors in perfusion mapping and quantification. This study develops and tests a method for automated alignment of attenuation and emission data. Methods: The CT-based attenuation map is iteratively transformed until the attenuation-corrected emission data minimize an objective function based on the Radon consistency conditions. The alignment process is derived from previous work by Welch et al. ["Attenuation correction in PET using consistency information," IEEE Trans. Nucl. Sci. 45, 3134-3141 (1998)] for stand-alone PET imaging. The process was evaluated with simulated data and measured patient data from multiple cardiac ammonia PET/CT exams. The alignment procedure was applied to simulations of five different noise levels with three different initial attenuation maps. For the measured patient data, the alignment procedure was applied to eight attenuation-emission combinations with initially acceptable alignment and eight combinations with unacceptable alignment. The initially acceptable alignment studies were forced out of alignment by a known amount and quantitatively evaluated for alignment and perfusion accuracy. The initially unacceptable studies were compared to the proposed aligned images in a blinded side-by-side review. Results: The proposed automatic alignment procedure reduced errors in the simulated data and iteratively approached global-minimum solutions with the patient data. In simulations, the alignment procedure reduced the root-mean-square error to less than 5 mm and the axial translation error to less than 1 mm. In patient studies, the procedure reduced the translation error by >50% and resolved perfusion artifacts after a known misalignment for the eight initially acceptable patient combinations. The side-by-side review of the proposed aligned attenuation-emission maps and the initially misaligned attenuation-emission maps revealed that reviewers preferred the proposed aligned maps in all cases, except one inconclusive case. Conclusions: The proposed alignment procedure offers an automatic method to reduce attenuation correction artifacts in cardiac PET/CT and provides a viable supplement to subjective manual realignment tools. PMID:20384256
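The Radon consistency conditions invoked here are commonly stated as the Helgason-Ludwig conditions; in standard notation (assumed, not quoted from the paper), the n-th moment of a consistent parallel-beam sinogram p(s, θ) must satisfy

```latex
H_n(\theta) = \int_{-\infty}^{\infty} s^{\,n}\, p(s,\theta)\, ds
\quad \text{is a homogeneous polynomial of degree } n \text{ in } \cos\theta,\ \sin\theta ,
```

so deviations of the attenuation-corrected emission data from this polynomial structure provide the objective that the attenuation-map transformation minimizes.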
Windschuh, Johannes; Siero, Jeroen C.W.; Zaiss, Moritz; Luijten, Peter R.; Klomp, Dennis W.J.; Hoogduin, Hans
2017-01-01
High field MRI is beneficial for chemical exchange saturation transfer (CEST) in terms of high SNR, CNR, and chemical shift dispersion. These advantages may, however, be counter-balanced by the increased transmit field inhomogeneity normally associated with high field MRI. The relatively high sensitivity of the CEST contrast to B1 inhomogeneity necessitates the development of correction methods, which is essential for the clinical translation of CEST. In this work, two B1 correction algorithms for the most studied CEST effects, amide-CEST and nuclear Overhauser enhancement (NOE), were analyzed. Both methods rely on fitting the multi-pool Bloch-McConnell equations to the densely sampled CEST spectra. In the first method, the correction is achieved by using a linear B1 correction of the calculated amide and NOE CEST effects. The second method uses the Bloch-McConnell fit parameters and the desired B1 amplitude to recalculate the CEST spectra, followed by the calculation of B1-corrected amide and NOE CEST effects. Both algorithms were systematically studied in Bloch-McConnell equations and in human data, and compared with the earlier proposed ideal interpolation-based B1 correction method. In the low-B1 regime of 0.15-0.50 μT (average power), a simple linear model was sufficient to mitigate B1 inhomogeneity effects on a par with the interpolation B1 correction, as demonstrated by a reduced correlation of the CEST contrast with B1 in both the simulations and the experiments. PMID:28111824
Joint de-blurring and nonuniformity correction method for infrared microscopy imaging
NASA Astrophysics Data System (ADS)
Jara, Anselmo; Torres, Sergio; Machuca, Guillermo; Ramírez, Wagner; Gutiérrez, Pablo A.; Viafora, Laura A.; Godoy, Sebastián E.; Vera, Esteban
2018-05-01
In this work, we present a new technique to simultaneously reduce two major degradation artifacts found in mid-wavelength infrared microscopy imagery, namely the inherent focal-plane array nonuniformity noise and the scene defocus introduced by the point spread function of the infrared microscope. We correct both nuisances using a novel, recursive method that combines the constant-range nonuniformity correction algorithm with a frame-by-frame deconvolution approach. The ability of the method to jointly compensate for both nonuniformity noise and blur is demonstrated using two different real mid-wavelength infrared microscopic video sequences, which were captured from two microscopic living organisms using a Janos-Sofradir mid-wavelength infrared microscopy setup. The performance of the proposed method is assessed on real and simulated infrared data by computing the root mean-square error and the roughness-Laplacian pattern index, which was specifically developed for the present work.
Testing the criterion for correct convergence in the complex Langevin method
NASA Astrophysics Data System (ADS)
Nagata, Keitaro; Nishimura, Jun; Shimasaki, Shinji
2018-05-01
Recently the complex Langevin method (CLM) has been attracting attention as a solution to the sign problem, which occurs in Monte Carlo calculations when the effective Boltzmann weight is not real positive. An undesirable feature of the method, however, is that in some parameter regions it can yield wrong results even if the Langevin process reaches equilibrium without any problem. In our previous work, we proposed a practical criterion for correct convergence based on the probability distribution of the drift term that appears in the complex Langevin equation. Here we demonstrate the usefulness of this criterion in two solvable theories with many dynamical degrees of freedom, i.e., two-dimensional Yang-Mills theory with a complex coupling constant and the chiral Random Matrix Theory for finite density QCD, which have been studied by the CLM before. Our criterion can indeed identify the parameter regions in which the CLM gives correct results.
Repeat-aware modeling and correction of short read errors.
Yang, Xiao; Aluru, Srinivas; Dorman, Karin S
2011-02-15
High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications, including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In the case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers that occur multiple times in the genome. Error detection and correction have mostly been applied to genomes with low repeat content, and remain a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under the GNU GPL3 license and the Boost Software V1.0 license at http://aluru-sun.ece.iastate.edu/doku.php?id=redeem. We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors for genomes with high repeat content.
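The k-mer counting and thresholding step that the repeat-aware model builds on can be sketched as below; k = 21 and the simple threshold rule are conventional illustrative choices, not the paper's settings (the paper instead estimates the threshold from inferred genomic frequencies).

```python
from collections import Counter

def count_kmers(reads, k=21):
    """Observed k-mer frequencies across a collection of read strings,
    the raw input to the repeat-aware statistical model."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def trusted(counts, threshold):
    """Validate k-mers whose observed frequency exceeds the threshold."""
    return {kmer for kmer, c in counts.items() if c > threshold}
```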
A downscaling method for the assessment of local climate change
NASA Astrophysics Data System (ADS)
Bruno, E.; Portoghese, I.; Vurro, M.
2009-04-01
The use of complementary models is necessary to study the impact of climate change scenarios on the hydrological response at different space-time scales. However, the structure of GCMs is such that their spatial resolution (hundreds of kilometres) is too coarse to describe the variability of extreme events at the basin scale (Burlando and Rosso, 2002). Bridging the space-time gap between climate scenarios and the usual scale of the inputs for hydrological prediction models is a fundamental requisite for evaluating climate change impacts on water resources. Since models operate a simplification of a complex reality, their results cannot be expected to fit the climate observations. Identifying local climate scenarios for impact analysis implies the definition of more detailed local scenarios by downscaling GCM or RCM results. Among the output correction methods, we consider the statistical approach by Déqué (2007), reported as a 'variable correction method', in which the correction of model outputs is obtained by a function built with the observation dataset and operating a quantile-quantile transformation (Q-Q transform). However, in the case of daily precipitation fields, the Q-Q transform is not able to correct the temporal properties of the model output concerning the dry-wet lacunarity process. An alternative correction method is proposed, based on a stochastic description of the arrival-duration-intensity processes in coherence with the Poissonian Rectangular Pulse (PRP) scheme (Eagleson, 1972). In this proposed approach, the Q-Q transform is applied to the PRP variables derived from the daily rainfall datasets. The corrected PRP parameters are consequently used for the synthetic generation of statistically homogeneous rainfall time series that mimic the persistency of daily observations for the reference period. The PRP parameters are then forced through the GCM scenarios to generate local-scale rainfall records for the 21st century. The statistical parameters characterizing daily storm occurrence, storm intensity and duration needed to apply the PRP scheme are considered among the STARDEX collection of extreme indices.
Automatic Analysis of Pronunciations for Children with Speech Sound Disorders.
Dudy, Shiran; Bedrick, Steven; Asgari, Meysam; Kain, Alexander
2018-07-01
Computer-Assisted Pronunciation Training (CAPT) systems aim to help a child learn the correct pronunciations of words. However, while there are many online commercial CAPT apps, there is no consensus among Speech Language Therapists (SLPs) or non-professionals about which CAPT systems, if any, work well. The prevailing assumption is that practicing with such programs is less reliable and thus does not provide the feedback necessary to allow children to improve their performance. The most common method for assessing pronunciation performance is the Goodness of Pronunciation (GOP) technique. Our paper proposes two new GOP techniques. We have found that pronunciation models that use explicit knowledge about erroneous pronunciation patterns can lead to more accurate classification of whether a phoneme was correctly pronounced. We evaluate the proposed pronunciation assessment methods against a baseline state-of-the-art GOP approach, and show that the proposed techniques lead to classification performance closer to that of a human expert.
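For context, the baseline GOP score mentioned above is commonly defined, following Witt and Young's formulation (notation assumed here, not taken from this paper), as the frame-normalized log ratio of the aligned phone's likelihood to that of the best competing phone:

```latex
\mathrm{GOP}(p) \;=\; \frac{1}{N_f}\,
\left|\log \frac{P\!\left(O^{(p)} \mid p\right)}{\max_{q \in Q} P\!\left(O^{(p)} \mid q\right)}\right| ,
```

where O^(p) are the acoustic frames aligned to phone p, N_f is the number of those frames, and Q is the phone set; small scores indicate correct pronunciation.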
Scatter correction method for x-ray CT using primary modulation: Phantom studies
Gao, Hewei; Fahrig, Rebecca; Bennett, N. Robert; Sun, Mingshan; Star-Lack, Josh; Zhu, Lei
2010-01-01
Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan©600 phantom, an anthropomorphic chest phantom, and the Catphan©600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan©600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast-to-noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of the method. The scatter-to-primary ratio estimation error on the Catphan©600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an elliptical annulus (30 cm in the minor axis and 38 cm in the major axis) and with a circular annulus (38 cm in diameter), respectively. Conclusions: In the three phantom studies, good scatter correction performance of the proposed method has been demonstrated using both image comparisons and quantitative analysis. The theory and experiments demonstrate that a strong primary modulation that possesses a low transmission factor and a high modulation frequency is preferred for high scatter correction accuracy. PMID:20229902
Real-time lens distortion correction: speed, accuracy and efficiency
NASA Astrophysics Data System (ADS)
Bax, Michael R.; Shahidi, Ramin
2014-11-01
Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
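A sketch of the polar tessellation idea: vertices are laid out on rings and spokes in the corrected image, and each vertex carries the distorted-image texture coordinate the GPU should sample. The radial model r_d = r(1 + k1·r² + k2·r⁴), the coefficients, and the mesh density are common illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def polar_mesh(k1, k2, rings=16, spokes=32):
    """Build a polar mesh in normalized image coordinates: (x, y) are the
    corrected vertex positions, (u, v) the distorted texture coordinates
    the graphics hardware samples from when rendering the corrected image."""
    r = np.linspace(0.0, 1.0, rings)[:, None]
    th = np.linspace(0.0, 2 * np.pi, spokes)[None, :]
    x, y = r * np.cos(th), r * np.sin(th)           # corrected vertex grid
    rd = r * (1 + k1 * r ** 2 + k2 * r ** 4)        # radial distortion model
    u, v = rd * np.cos(th), rd * np.sin(th)         # where to sample the source
    return (x, y), (u, v)
```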
NASA Astrophysics Data System (ADS)
Zhao, Shengmei; Wang, Le; Zou, Li; Gong, Longyan; Cheng, Weiwen; Zheng, Baoyu; Chen, Hanwu
2016-10-01
A free-space optical (FSO) communication link with multiplexed orbital angular momentum (OAM) modes has been demonstrated to largely enhance the system capacity without a corresponding increase in spectral bandwidth, but the performance of the link is unavoidably degraded by atmospheric turbulence (AT). In this paper, we propose a turbulence mitigation scheme to improve the AT tolerance of the OAM-multiplexed FSO communication link using both channel coding and wavefront correction. In the scheme, we first utilize a wavefront correction method to mitigate the phase distortion, and then use a channel code to further correct the errors in each OAM mode. The improvement of AT tolerance is discussed relative to the performance of the link with or without channel coding/wavefront correction. The results show that the bit error rate performance is improved greatly. The detrimental effect of AT on the OAM-multiplexed FSO communication link can be removed by the proposed scheme even in the relatively strong turbulence regime, such as Cn^2 = 3.6 × 10^-14 m^-2/3.
NASA Astrophysics Data System (ADS)
Xie, Shi-Peng; Luo, Li-Min
2012-06-01
The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used occasionally in previous studies. However, this method differs in that a scatter detecting blocker (SDB) is placed between the X-ray source and the tested object to model a self-adaptive scatter kernel. This study first evaluates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on SKS. Image quality can be improved by removing the scatter distribution. The results show that the method can effectively reduce the scatter artifacts and increase image quality. Our approach increases the image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique is significantly improved in our method by using a self-adaptive scatter kernel. This method is computationally efficient, easy to implement, and provides scatter correction using a single-scan acquisition.
Image enhancement by spectral-error correction for dual-energy computed tomography.
Park, Kyung-Kook; Oh, Chang-Hyun; Akay, Metin
2011-01-01
Dual-energy CT (DECT) was recently reintroduced to exploit the additional spectral information of X-ray attenuation, aiming for accurate density measurement and material differentiation. However, the spectral information lies in the difference between the low- and high-energy images or measurements, so it is difficult to acquire accurate spectral information due to the amplification of pixel noise in the resulting difference image. In this work, an image enhancement technique for DECT is proposed, based on the fact that the attenuation of a higher-density material decreases more rapidly as X-ray energy increases. We define as a spectral error the case in which a pixel pair of low- and high-energy images deviates far from the expected attenuation trend. After analyzing the spectral-error sources of DECT images, we propose a DECT image enhancement method consisting of three steps: water-reference offset correction, spectral-error correction, and anti-correlated noise reduction. The main idea of this work is to make spectral errors distributed like random noise over the true attenuation so that they can be suppressed by the well-known anti-correlated noise reduction. The proposed method suppressed the noise of liver lesions and improved the contrast between liver lesions and liver parenchyma in DECT contrast-enhanced abdominal images and their two-material decomposition.
On the weight of indels in genomic distances
2011-01-01
Background: Classical approaches to computing the genomic distance are usually limited to genomes with the same content and without duplicated markers. However, differences in gene content are frequently observed and can reflect important evolutionary aspects. A few polynomial-time algorithms that include genome rearrangements, insertions and deletions (or substitutions) have already been proposed. These methods often allow a block of contiguous markers to be inserted, deleted or substituted at once, but result in distance functions that do not respect the triangular inequality and hence do not constitute metrics. Results: In the present study we discuss the violation of the triangular inequality in some of the available methods and give a framework to establish an efficient correction for two recently proposed models, one that includes insertions, deletions and double cut and join (DCJ) operations, and one that includes substitutions and DCJ operations. Conclusions: We show that the proposed framework establishes the triangular inequality in both distances by summing a surcharge on indel operations and on substitutions that depends only on the number of markers affected by these operations. This correction can be applied a posteriori, without interfering with the already available formulas to compute these distances. We claim that this correction leads to distances that are biologically more plausible. PMID:22151784
NASA Astrophysics Data System (ADS)
Lyu, Jiang-Tao; Zhou, Chen
2017-12-01
Ionospheric refraction is one of the principal error sources limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of the radar wave is the most important procedure for ionospheric refraction correction. Traditionally, ionospheric models and ionospheric detection instruments, such as ionosondes or GPS receivers, are employed to obtain the electron density. However, neither method is capable of satisfying the correction accuracy requirements of advanced space target radar systems. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are utilized to calculate the electron density integral exactly along the propagation path of the radar wave, which yields an accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated with a P-band radar located in midlatitude China. The experimental results show that this novel technique is more accurate than traditional ionospheric model correction. The technique proposed in this study is very promising for high-accuracy radar detection and tracking of objects in geospace.
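The dual-frequency principle rests on the first-order ionospheric group delay being proportional to the total electron content (TEC) along the path and inversely proportional to frequency squared; differencing range measurements at two frequencies therefore isolates the TEC (standard notation, assumed here rather than taken from the paper):

```latex
\Delta R(f) \;=\; \frac{40.3}{f^{2}}\,\mathrm{TEC},
\qquad
\mathrm{TEC} \;=\; \frac{R(f_1) - R(f_2)}{40.3\,\left(f_1^{-2} - f_2^{-2}\right)} ,
```

with ranges in meters, frequencies in Hz, and TEC in electrons per square meter; the recovered TEC then gives the range correction at the operating frequency.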
SIZE DISTRIBUTION OF SEA-SALT EMISSIONS AS A FUNCTION OF RELATIVE HUMIDITY
This note presents a straightforward method to correct sea-salt-emission particle-size distributions according to local relative humidity. The proposed method covers a wide range of relative humidity (0.45 to 0.99) and its derivation incorporates recent laboratory results on sea-...
NASA Astrophysics Data System (ADS)
Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon
2016-03-01
In this paper, we propose a CT-CBCT registration method to accurately predict tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although the physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performance on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and on every other fraction CBCT, to which the GTV contours propagated by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean±std in cc) between the average of the manual segmentations and the automatic segmentations are 3.70±2.30 (B-spline), 1.25±1.78 (demons), 0.93±1.14 (optical flow), and 4.39±3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
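Histogram matching, the core of the intensity-correction step, can be sketched as below in its global form; the paper applies it locally and iteratively within the DIR loop, which this illustration omits.

```python
import numpy as np

def match_histogram(src, ref):
    """Map source intensities onto the reference distribution by matching
    cumulative histograms (the global form of local histogram matching)."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size
    r_cdf = np.cumsum(r_cnt) / ref.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)        # CDF-to-CDF lookup
    return mapped[s_idx].reshape(src.shape)
```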
An improved method to detect correct protein folds using partial clustering.
Zhou, Jianjun; Wishart, David S
2013-01-16
Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient "partial" clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either C(α) RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance.
Ciceri, E; Recchia, S; Dossi, C; Yang, L; Sturgeon, R E
2008-01-15
The development and validation of a method for the determination of mercury in sediments, using a sector field inductively coupled plasma mass spectrometer (SF-ICP-MS) for detection, are described. The use of isotope dilution (ID) calibration is shown to solve analytical problems related to matrix composition. Mass bias is corrected using an internal mass bias correction technique, validated against the traditional standard bracketing method. The overall analytical protocol is validated against NRCC PACS-2 marine sediment CRM. The estimated limit of detection is 12 ng/g. The proposed procedure was applied to the analysis of a real sediment core sampled at a depth of 160 m in Lake Como, where Hg concentrations ranged from 66 to 750 ng/g.
Quantitative magnetic resonance spectroscopy at 3T based on the principle of reciprocity.
Zoelch, Niklaus; Hock, Andreas; Henning, Anke
2018-05-01
Quantification of magnetic resonance spectroscopy signals using the phantom replacement method requires an adequate correction of differences between the acquisition of the reference signal in the phantom and the measurement in vivo. Applying the principle of reciprocity, sensitivity differences can be corrected at low field strength by measuring the RF transmitter gain needed to obtain a certain flip angle in the measured volume. However, at higher field strength the transmit sensitivity may differ from the reception sensitivity, which leads to wrongly estimated concentrations. To address this issue, a quantification approach based on the principle of reciprocity for use at 3T is proposed and validated thoroughly. In this approach, the RF transmitter gain is determined automatically using a volume-selective power optimization and complemented with information from relative reception sensitivity maps derived from contrast-minimized images to correct differences in transmission and reception sensitivity. In this way, a reliable measure of the local sensitivity is obtained. The proposed method is used to derive in vivo concentrations of brain metabolites and tissue water in two studies with different coil sets in a total of 40 healthy volunteers. The resulting molar concentrations are compared with results using internal water referencing (IWR) and the Electric REference To access In vivo Concentrations (ERETIC) method. With the proposed method, changes in coil loading and regional sensitivity due to B1 inhomogeneities are successfully corrected, as demonstrated in phantom and in vivo measurements. For the tissue water content, coefficients of variation between 2% and 3.5% were obtained (0.6-1.4% in a single subject). The coefficients of variation of the three major metabolites ranged from 3.4% to 14.5%. In general, the derived concentrations agree well with values estimated with IWR. Hence, the presented method is a valuable alternative to IWR, without the need for additional hardware such as ERETIC and with potential advantages in diseased tissue. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Watanabe, S.; Kim, H.; Utsumi, N.
2017-12-01
This study aims to develop a new approach to projecting hydrology under climate change using super ensemble experiments. The use of multiple ensembles is essential for estimating extremes, which is a major issue in the impact assessment of climate change. Hence, super ensemble experiments have recently been conducted by several research programs. While it is necessary to use multiple ensembles, running a hydrological simulation for each output of the ensemble simulations incurs considerable computational cost. To use super ensemble experiments effectively, we adopt a strategy of using the runoff projected by climate models directly. The general approach to hydrological projection is to conduct hydrological model simulations, including land-surface and river routing processes, using atmospheric boundary conditions projected by climate models as inputs. This study, on the other hand, runs only a river routing model using the runoff projected by climate models. In general, climate model output is systematically biased, so a preprocessing step that corrects such bias is necessary for impact assessments. Various bias correction methods have been proposed but, to the best of our knowledge, no method has been proposed for variables other than surface meteorology. Here, we propose a new method for utilizing the projected future runoff directly. The developed method estimates and corrects the bias based on a pseudo-observation, which is the result of a retrospective offline simulation. We show an application of this approach to the super ensemble experiments conducted under the program Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI). More than 400 ensemble experiments from multiple climate models are available. The results of validation using historical simulations by HAPPI indicate that the output of this approach can effectively reproduce retrospective runoff variability. Likewise, the bias of runoff from super ensemble climate projections is corrected, and the impact of climate change on hydrologic extremes is assessed in a cost-efficient way.
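The abstract does not specify the bias-correction estimator; one common choice that fits the described setup is empirical quantile mapping against the pseudo-observation, sketched here with synthetic runoff values:

```python
# A minimal sketch of quantile-mapping bias correction, one plausible way to
# map model runoff onto a "pseudo-observation" from a retrospective offline
# simulation (the paper's own estimator is not specified in the abstract).
import numpy as np

def quantile_map(model_hist, pseudo_obs, model_future):
    """Correct `model_future` runoff using the empirical CDFs of the
    historical model run and the pseudo-observation."""
    qs = np.linspace(0.0, 1.0, 101)
    mh = np.quantile(model_hist, qs)   # model climatology
    po = np.quantile(pseudo_obs, qs)   # target (pseudo-observed) climatology
    # For each future value, find its quantile in the model climatology,
    # then read off the pseudo-observation value at that quantile.
    return np.interp(model_future, mh, po)

# Example: a biased-high model is pulled toward the pseudo-observation
rng = np.random.default_rng(1)
pseudo = rng.gamma(2.0, 50.0, size=3650)          # synthetic daily runoff
model = pseudo * 1.3 + rng.normal(0, 5, 3650)     # systematically biased
future = model * 1.1                              # warmer-climate runoff
corrected = quantile_map(model, pseudo, future)
```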
NASA Astrophysics Data System (ADS)
Kondo, Yoshihisa; Yomo, Hiroyuki; Yamaguchi, Shinji; Davis, Peter; Miura, Ryu; Obana, Sadao; Sampei, Seiichi
This paper proposes multipoint-to-multipoint (MPtoMP) real-time broadcast transmission using network coding for ad-hoc networks like video game networks. We aim to achieve highly reliable MPtoMP broadcasting using IEEE 802.11 media access control (MAC), which does not include a retransmission mechanism. When each node detects packets from the other nodes in a sequence, the correctly detected packets are network-encoded, and the encoded packet is broadcast in the next sequence as a piggy-back to its native packet. To prevent an increase in per-packet overhead due to piggy-back packet transmission, the network coding vector for each node is exchanged among all nodes in the negotiation phase. Each user keeps using the same coding vector generated in the negotiation phase, and only coding information that indicates which user signals are included in the network coding process is transmitted along with the piggy-back packet. Our simulation results show that the proposed method can provide higher reliability than other schemes using multi-point relay (MPR) or redundant transmissions such as forward error correction (FEC). We also implemented the proposed method in a wireless testbed and show that it achieves high reliability in a real-world environment with a practical degree of complexity when installed on current wireless devices.
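The recovery step can be illustrated with a toy XOR (binary network coding) example; node names and packet contents are invented:

```python
# Toy illustration of the piggy-back idea: a node rebroadcasts the XOR of the
# packets it decoded last round, letting a neighbor recover one lost packet.
# Simplified to binary (XOR) coding over two nodes.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Round 1: node C hears A's packet but loses B's.
pkt_a = b"AAAA"
pkt_b = b"BBBB"
received_at_c = {"A": pkt_a}          # B's packet was lost at C

# Round 2: node A (which heard both packets) piggy-backs the network-coded
# packet XOR(A, B) along with its next native packet.
coded = xor(pkt_a, pkt_b)

# C recovers B's packet from the coded packet and its own copy of A's.
recovered_b = xor(coded, received_at_c["A"])
assert recovered_b == pkt_b
```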
Scene-based nonuniformity correction technique for infrared focal-plane arrays.
Liu, Yong-Jin; Zhu, Hong; Zhao, Yi-Gong
2009-04-20
A scene-based nonuniformity correction algorithm is presented to compensate for the gain and bias nonuniformity in infrared focal-plane array sensors; it can be separated into three parts. First, an interframe-prediction method is used to estimate the true scene, since nonuniformity correction is a typical blind-estimation problem in which both the scene values and the detector parameters are unavailable. Second, the estimated scene, along with its corresponding observed data obtained by the detectors, is employed to update the gain and the bias by means of a line-fitting technique. Finally, with these nonuniformity parameters, the compensated output of each detector is obtained by computing a very simple formula. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and its ability to capture temporal drifts in the nonuniformity parameters. The performance of every module is demonstrated with simulated and real infrared image sequences. Experimental results indicate that the proposed algorithm exhibits a superior correction effect.
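The "very simple formula" is the standard linear detector model; a minimal sketch of the line-fitting update and the per-pixel correction, assuming the true-scene estimate is already available (the interframe predictor is not shown), might look like this:

```python
# Sketch of the line-fitting step: per-detector least-squares fit of observed
# values against the estimated true scene recovers gain and bias, followed by
# the one-line correction. Not the paper's exact implementation.
import numpy as np

def fit_gain_bias(scene_est, observed):
    """scene_est, observed: (T, H, W) stacks over T frames.
    Fits observed = gain * scene + bias independently per pixel."""
    x_mean = scene_est.mean(axis=0)
    y_mean = observed.mean(axis=0)
    cov = ((scene_est - x_mean) * (observed - y_mean)).sum(axis=0)
    var = ((scene_est - x_mean) ** 2).sum(axis=0)
    gain = cov / np.maximum(var, 1e-12)       # per-pixel slope
    bias = y_mean - gain * x_mean             # per-pixel intercept
    return gain, bias

def correct(frame, gain, bias):
    """Compensated output: invert the linear detector response."""
    return (frame - bias) / np.maximum(gain, 1e-12)
```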
Super-resolution pupil filtering for visual performance enhancement using adaptive optics
NASA Astrophysics Data System (ADS)
Zhao, Lina; Dai, Yun; Zhao, Junlei; Zhou, Xiaojun
2018-05-01
Ocular aberration correction can significantly improve visual function of the human eye. However, even under ideal aberration correction conditions, pupil diffraction restricts the resolution of retinal images. Pupil filtering is a simple super-resolution (SR) method that can overcome this diffraction barrier. In this study, a 145-element piezoelectric deformable mirror was used as a pupil phase filter because of its programmability and high fitting accuracy. Continuous phase-only filters were designed based on Zernike polynomial series and fitted through closed-loop adaptive optics. SR results were validated using double-pass point spread function images. Contrast sensitivity was further assessed to verify the SR effect on visual function, and an F-test was conducted for nested models to statistically compare the resulting contrast sensitivity functions (CSFs). The results indicated that CSFs for the proposed SR filter were significantly higher than those for diffraction-limited correction (p < 0.05). As such, the proposed filter design could provide useful guidance for supernormal vision optical correction of the human eye.
"ON ALGEBRAIC DECODING OF Q-ARY REED-MULLER AND PRODUCT REED-SOLOMON CODES"
DOE Office of Scientific and Technical Information (OSTI.GOV)
SANTHI, NANDAKISHORE
We consider a list decoding algorithm recently proposed by Pellikaan-Wu for q-ary Reed-Muller codes RM_q(ℓ, m, n) of length n ≤ q^m when ℓ ≤ q. A simple and easily accessible correctness proof is given which shows that this algorithm achieves a relative error-correction radius of τ ≤ 1 − √(ℓq^(m−1)/n). This is an improvement over the proof using the one-point Algebraic-Geometric decoding method given in earlier work. The described algorithm can be adapted to decode product Reed-Solomon codes. We then propose a new low-complexity recursive algebraic decoding algorithm for product Reed-Solomon codes and Reed-Muller codes. This algorithm achieves a relative error-correction radius of τ ≤ ∏_{i=1}^{m} (1 − √(k_i/q)). This algorithm is then proved to outperform the Pellikaan-Wu algorithm in both complexity and error-correction radius over a wide range of code rates.
NASA Astrophysics Data System (ADS)
Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit
2008-12-01
Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANET. The proposed packet-based scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maloney, J. A.; Morozov, V. S.; Derbenev, Ya. S.
Muon colliders have been proposed for the next generation of particle accelerators that study high-energy physics at the energy and intensity frontiers. In this paper we study a possible implementation of muon ionization cooling, Parametric-resonance Ionization Cooling (PIC), in the twin helix channel. The resonant cooling method of PIC offers the potential to reduce emittance beyond that achievable with ionization cooling with ordinary magnetic focusing. We examine optimization of a variety of parameters, study the nonlinear dynamics in the twin helix channel and consider possible methods of aberration correction.
A library least-squares approach for scatter correction in gamma-ray tomography
NASA Astrophysics Data System (ADS)
Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro
2015-03-01
Scattered radiation is known to lead to distortion in reconstructed images in computed tomography (CT). The effects of scattered radiation are especially pronounced in non-scanning, multiple-source systems, which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for it. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of the transmission and scatter components in any given detector of the UoB gamma-ray tomography system.
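As an illustration of the library least-squares idea, the sketch below fits a measured detector spectrum as a non-negative combination of library transmission and scatter spectra and subtracts the fitted scatter component; the library shapes and counts are invented, not UoB calibration data:

```python
# Minimal library least-squares (LLS) sketch: model a measured spectrum as a
# non-negative mix of library spectra, then remove the fitted scatter part.
import numpy as np
from scipy.optimize import nnls

channels = 64
rng = np.random.default_rng(0)
energy = np.arange(channels)
lib_transmission = np.exp(-0.5 * ((energy - 40) / 3.0) ** 2)  # photopeak-like
lib_scatter = np.exp(-0.5 * ((energy - 25) / 8.0) ** 2)       # broad scatter
A = np.column_stack([lib_transmission, lib_scatter])

true_mix = np.array([1000.0, 300.0])
measured = A @ true_mix + rng.normal(0, 5, channels)          # noisy spectrum

coeff, _ = nnls(A, measured)            # non-negative least squares fit
scatter_estimate = coeff[1] * lib_scatter
corrected = measured - scatter_estimate  # scatter-corrected spectrum
```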
NASA Astrophysics Data System (ADS)
Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong
2018-06-01
An accurate algorithm combining Gram-Schmidt orthonormalization and least-squares ellipse fitting technology is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. By performing Gram-Schmidt orthonormalization on the pre-processed interferograms, the phase-shift error is corrected and a general ellipse form is derived. Then the background intensity error and the residual correction error can be compensated by the least-squares ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm can cope with two or three interferograms with environmental disturbance, low fringe number or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
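A minimal two-interferogram Gram-Schmidt demodulation sketch (the ellipse-fitting refinement described in the abstract is omitted; the Gaussian high-pass filter and its width are assumptions):

```python
# Two-frame Gram-Schmidt phase extraction sketch: high-pass the fringes,
# orthonormalize frame 2 against frame 1, take the per-pixel arctangent.
import numpy as np
from scipy.ndimage import gaussian_filter

def gs_phase(i1, i2, sigma=20.0):
    """Wrapped phase from two interferograms with an unknown phase shift."""
    u1 = i1 - gaussian_filter(i1, sigma)      # remove DC background
    u2 = i2 - gaussian_filter(i2, sigma)
    u1 = u1 / np.linalg.norm(u1)
    u2 = u2 - (u2 * u1).sum() * u1            # Gram-Schmidt orthogonalization
    u2 = u2 / np.linalg.norm(u2)
    return np.arctan2(u2, u1)                 # wrapped phase (global sign
                                              # ambiguity remains)

# Quick self-check on synthetic circular fringes
x, y = np.meshgrid(np.linspace(-3, 3, 256), np.linspace(-3, 3, 256))
phi = 2 * np.pi * (x ** 2 + y ** 2) / 4
f1 = 100 + 50 * np.cos(phi)
f2 = 100 + 50 * np.cos(phi + 1.1)             # unknown phase shift
wrapped = gs_phase(f1, f2)
```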
Lee, Wen-Chung
2014-02-05
The randomized controlled study is the gold-standard research method in biomedicine. In contrast, the validity of a (nonrandomized) observational study is often questioned because of unknown/unmeasured factors, which may have confounding and/or effect-modifying potential. In this paper, the author proposes a perturbation test to detect the bias of unmeasured factors and a perturbation adjustment to correct for such bias. The proposed method circumvents the problem of measuring unknowns by collecting the perturbations of unmeasured factors instead. Specifically, a perturbation is a variable that is readily available (or can be measured easily) and is potentially associated, though perhaps only very weakly, with unmeasured factors. The author conducted extensive computer simulations to provide a proof of concept. The simulations show that, as the number of perturbation variables obtained from data mining increased, the power of the perturbation test increased progressively, up to nearly 100%. In addition, after the perturbation adjustment, the bias decreased progressively, down to nearly 0%. The data-mining perturbation analysis described here is recommended for use in detecting and correcting the bias of unmeasured factors in observational studies.
Structured-Light Based 3d Laser Scanning of Semi-Submerged Structures
NASA Astrophysics Data System (ADS)
van der Lucht, J.; Bleier, M.; Leutert, F.; Schilling, K.; Nüchter, A.
2018-05-01
In this work we look at 3D acquisition of semi-submerged structures with a triangulation-based underwater laser scanning system. The motivation is that we want to simultaneously capture data above and below water to create a consistent model without any gaps. The employed structured light scanner consists of a machine vision camera and a green line laser. In order to reconstruct precise surface models of the object it is necessary to model and correct for the refraction of the laser line and camera rays at the water-air boundary. We derive a geometric model for the refraction at the air-water interface and propose a method for correcting the scans. Furthermore, we show how the water surface is directly estimated from sensor data. The approach is verified using scans captured with an industrial manipulator to achieve reproducible scanner trajectories with different incident angles. We show that the proposed method is effective for refractive correction and that it can be applied directly to the raw sensor data without requiring any external markers or targets.
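The core of any such correction is vector refraction at the water surface; a minimal sketch assuming a flat horizontal surface at z = 0 and nominal refractive indices:

```python
# Geometric sketch: refract a camera ray at a flat horizontal water surface
# (Snell's law in vector form) before triangulating an underwater point.
# The surface height, indices, and geometry are assumptions for illustration.
import numpy as np

N_AIR, N_WATER = 1.0, 1.33

def refract(d, n=np.array([0.0, 0.0, 1.0]), n1=N_AIR, n2=N_WATER):
    """Refract unit direction d at a surface with unit normal n (toward air)."""
    d = d / np.linalg.norm(d)
    cos_i = -np.dot(n, d)
    r = n1 / n2
    sin2_t = r ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        raise ValueError("total internal reflection")
    cos_t = np.sqrt(1.0 - sin2_t)
    return r * d + (r * cos_i - cos_t) * n   # transmitted unit direction

# Camera ray entering the water and hitting a point 0.5 m below the surface
origin = np.array([0.0, 0.0, 0.3])                 # camera 0.3 m above water
d = np.array([0.2, 0.0, -1.0])
d = d / np.linalg.norm(d)
entry = origin + (origin[2] / -d[2]) * d           # intersection with z = 0
under = refract(d)                                 # bent direction underwater
point = entry + (0.5 / -under[2]) * under          # true 3D point at z = -0.5
```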
Correction of amplitude-phase distortion for polarimetric active radar calibrator
NASA Astrophysics Data System (ADS)
Lin, Jianzhi; Li, Weixing; Zhang, Yue; Chen, Zengping
2015-01-01
The polarimetric active radar calibrator (PARC) is extensively used as an external test target for system distortion compensation and polarimetric calibration of high-resolution polarimetric radar. However, the signal undergoes distortion within the PARC itself, affecting the effectiveness of the compensation and the calibration. The effect of amplitude and phase distortion in the PARC on system distortion compensation was analyzed based on the method of paired echoes. A correction method was then proposed, which separates the ideal signals from the distorted signals. Experiments were carried out on real radar data, and the experimental results were in good agreement with the theoretical analysis. After the correction, the PARC can be better used as an external test target for system distortion compensation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Simonetto, Andrea
This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function without requiring the computation of its inverse. Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
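A minimal prediction-correction tracker on a toy time-varying quadratic with box constraints (the objective, step sizes, and projection are illustrative assumptions, not the paper's operators):

```python
# Sketch of a prediction-correction tracker for min_x f(x; t): predict with a
# Hessian-based step for the time drift, then correct with projected gradient.
# Toy objective: f(x; t) = 0.5 * ||x - c(t)||^2 with box constraints.
import numpy as np

def c(t):                        # time-varying target (assumption)
    return np.array([np.sin(t), np.cos(t)])

def grad(x, t):                  # gradient of f w.r.t. x
    return x - c(t)

def grad_xt(x, t, h=1e-4):       # mixed derivative (finite difference in t)
    return (grad(x, t + h) - grad(x, t - h)) / (2 * h)

hess = np.eye(2)                 # Hessian of the toy quadratic
proj = lambda x: np.clip(x, -0.8, 0.8)   # projection onto the box

x = np.zeros(2)
dt, alpha = 0.1, 0.5
for k in range(200):
    t = k * dt
    # Prediction: step along the predicted drift of the optimal trajectory
    x = proj(x - dt * np.linalg.solve(hess, grad_xt(x, t)))
    # Correction: projected gradient step at the new time instant
    x = proj(x - alpha * grad(x, t + dt))
```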
An IMU-to-Body Alignment Method Applied to Human Gait Analysis
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-01-01
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis. PMID:27973406
Do Examinees Understand Score Reports for Alternate Methods of Scoring Computer Based Tests?
ERIC Educational Resources Information Center
Whittaker, Tiffany A.; Williams, Natasha J.; Dodd, Barbara G.
2011-01-01
This study assessed the interpretability of scaled scores based on either number correct (NC) scoring for a paper-and-pencil test or one of two methods of scoring computer-based tests: an item pattern (IP) scoring method and a method based on equated NC scoring. The equated NC scoring method for computer-based tests was proposed as an alternative…
Truong, Trong-Kha; Guidon, Arnaud
2014-01-01
Purpose: To develop and compare three novel reconstruction methods designed to inherently correct for motion-induced phase errors in multi-shot spiral diffusion tensor imaging (DTI) without requiring a variable-density spiral trajectory or a navigator echo. Theory and Methods: The first method simply averages magnitude images reconstructed with sensitivity encoding (SENSE) from each shot, whereas the second and third methods rely on SENSE to estimate the motion-induced phase error for each shot, and subsequently use either a direct phase subtraction or an iterative conjugate gradient (CG) algorithm, respectively, to correct for the resulting artifacts. Numerical simulations and in vivo experiments on healthy volunteers were performed to assess the performance of these methods. Results: The first two methods suffer from a low signal-to-noise ratio (SNR) or from residual artifacts in the reconstructed diffusion-weighted images and fractional anisotropy maps. In contrast, the third method provides high-quality, high-resolution DTI results, revealing fine anatomical details such as a radial diffusion anisotropy in cortical gray matter. Conclusion: The proposed SENSE+CG method can inherently and effectively correct for phase errors, signal loss, and aliasing artifacts caused by both rigid and nonrigid motion in multi-shot spiral DTI, without increasing the scan time or reducing the SNR. PMID:23450457
40 CFR 22.19 - Prehearing information exchange; prehearing conference; other discovery.
Code of Federal Regulations, 2010 CFR
2010-07-01
... method of discovery sought, provide the proposed discovery instruments, and describe in detail the nature... finding that: (i) The information sought cannot reasonably be obtained by alternative methods of discovery... promptly supplement or correct the exchange when the party learns that the information exchanged or...
77 FR 2669 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-19
... methods and would inspect additional areas, and corrective actions if necessary. This proposed AD would... and 11.45, by any of the following methods: Federal eRulemaking Portal: Go to http://www.regulations...-mill cracking in the shear wrinkle areas Inspecting for vertical chem-mill cracking at specified...
Model-Based Segmentation of Cortical Regions of Interest for Multi-subject Analysis of fMRI Data
NASA Astrophysics Data System (ADS)
Engel, Karin; Brechmann, André; Toennies, Klaus
The high inter-subject variability of human neuroanatomy complicates the analysis of functional imaging data across subjects. We propose a method for the correct segmentation of cortical regions of interest based on the cortical surface. First results on the segmentation of Heschl's gyrus indicate the capability of our approach for correct comparison of functional activations in relation to individual cortical patterns.
Non-parametric combination and related permutation tests for neuroimaging.
Winkler, Anderson M; Webster, Matthew A; Brooks, Jonathan C; Tracey, Irene; Smith, Stephen M; Nichols, Thomas E
2016-04-01
In this work, we show how permutation methods can be applied to combination analyses such as those that include multiple imaging modalities, multiple data acquisitions of the same modality, or simply multiple hypotheses on the same data. Using the well-known definition of union-intersection tests and closed testing procedures, we use synchronized permutations to correct for such multiplicity of tests, allowing flexibility to integrate imaging data with different spatial resolutions, surface and/or volume-based representations of the brain, including non-imaging data. For the problem of joint inference, we propose and evaluate a modification of the recently introduced non-parametric combination (NPC) methodology, such that instead of a two-phase algorithm and large data storage requirements, the inference can be performed in a single phase, with reasonable computational demands. The method compares favorably to classical multivariate tests (such as MANCOVA), even when the latter is assessed using permutations. We also evaluate, in the context of permutation tests, various combining methods that have been proposed in the past decades, and identify those that provide the best control over error rate and power across a range of situations. We show that one of these, the method of Tippett, provides a link between correction for the multiplicity of tests and their combination. Finally, we discuss how the correction can solve certain problems of multiple comparisons in one-way ANOVA designs, and how the combination is distinguished from conjunctions, even though both can be assessed using permutation tests. We also provide a common algorithm that accommodates combination and correction. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
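A stripped-down illustration of synchronized permutations with min-p (Tippett) combining for two tests on the same subjects, using sign flipping for a one-sample design; this is a didactic sketch, not the NPC implementation evaluated in the paper:

```python
# Synchronized sign-flipping for two tests on the same subjects, combined
# with Tippett's (min-p) method; the combined statistic is re-evaluated under
# the same permutations to obtain its own p-value.
import numpy as np

rng = np.random.default_rng(0)
n, n_perm = 30, 2000
y1 = rng.normal(0.4, 1.0, n)      # synthetic effects, modality 1
y2 = rng.normal(0.3, 1.0, n)      # synthetic effects, modality 2

def tstat(y):
    return y.mean() / (y.std(ddof=1) / np.sqrt(len(y)))

obs = np.array([tstat(y1), tstat(y2)])
null = np.empty((n_perm, 2))
for b in range(n_perm):
    s = rng.choice([-1.0, 1.0], n)        # ONE flip vector, applied to BOTH
    null[b] = [tstat(s * y1), tstat(s * y2)]

# Per-test permutation p-values, then Tippett combining under the same flips
p_obs = (null >= obs).mean(axis=0)
p_null = np.array([(null >= null[b]).mean(axis=0) for b in range(n_perm)])
tippett_obs = p_obs.min()
p_combined = (p_null.min(axis=1) <= tippett_obs).mean()
```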
Fast polarimetric dehazing method for visibility enhancement in HSI colour space
NASA Astrophysics Data System (ADS)
Zhang, Wenfei; Liang, Jian; Ren, Liyong; Ju, Haijuan; Bai, Zhaofeng; Wu, Zhaoxin
2017-09-01
Image haze removal has attracted much attention in the optics and computer vision fields in recent years due to its wide applications. In particular, fast and real-time dehazing methods are of significance. In this paper, we propose a fast dehazing method in hue, saturation and intensity (HSI) colour space based on the polarimetric imaging technique. We implement the polarimetric dehazing method in the intensity channel, and the colour distortion of the image is corrected using the white patch retinex method. This approach not only preserves the capacity to restore detailed information, but also improves the efficiency of the polarimetric dehazing method. Comparison studies with state-of-the-art methods demonstrate that the proposed method obtains results of equal or better quality while running much faster. The proposed method is promising for real-time image and video haze removal applications.
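The white patch retinex step can be sketched in a few lines; the percentile white point is an assumption, and the polarimetric dehazing itself is not shown:

```python
# White-patch retinex sketch for the colour-correction step: rescale each RGB
# channel so its brightest (here, 99th-percentile) response maps to white.
import numpy as np

def white_patch(img, percentile=99):
    """img: float array (H, W, 3) with values in [0, 1]."""
    out = np.empty_like(img)
    for ch in range(3):
        white = np.percentile(img[..., ch], percentile)  # per-channel white
        out[..., ch] = np.clip(img[..., ch] / max(white, 1e-6), 0.0, 1.0)
    return out
```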
A new compound control method for sine-on-random mixed vibration test
NASA Astrophysics Data System (ADS)
Zhang, Buyun; Wang, Ruochen; Zeng, Falin
2017-09-01
Vibration environmental testing (VET) is one of the important and effective methods of supporting the strength design and the reliability and durability testing of mechanical products. A new separation control strategy was proposed for multiple-input multiple-output (MIMO) sine-on-random (SOR) mixed-mode vibration testing, an advanced and intensive type of VET. As the key element of the strategy, the correlation integral method was applied to separate the mixed signals, which include random and sinusoidal components. The feedback control formula of a MIMO linear random vibration system was systematically deduced in the frequency domain, and a Jacobi control algorithm was proposed in view of the elements of the power spectral density (PSD) matrix, such as auto-spectrum, coherence, and phase. To avoid excessive correction of the excitation in the sine vibration test, a compression factor was introduced to reduce the excitation correction, avoiding damage to the vibration table or other devices. The two methods were combined and applied in a MIMO SOR vibration test system. Finally, a verification test system with the vibration of a cantilever beam as the control object was established to verify the reliability and effectiveness of the proposed methods. The test results show that the exceedance values can be accurately controlled within the tolerance range of the references, and the method provides theoretical and practical support for mechanical engineering.
Unsupervised Learning —A Novel Clustering Method for Rolling Bearing Faults Identification
NASA Astrophysics Data System (ADS)
Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang
2017-12-01
To promptly process massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rolling bearings. Among these studies, supervised learning methods such as artificial neural networks, support vector machines and decision trees are commonly used. These methods can detect rolling bearing failure effectively, but achieving better detection results often requires a large number of training samples. Based on the above, a novel clustering method is proposed in this paper. The novel method is able to find the correct number of clusters automatically. The effectiveness of the proposed method is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect the fault types of small samples. Meanwhile, the diagnosis accuracy remains relatively high even for massive samples.
Correcting for Sample Contamination in Genotype Calling of DNA Sequence Data
Flickinger, Matthew; Jun, Goo; Abecasis, Gonçalo R.; Boehnke, Michael; Kang, Hyun Min
2015-01-01
DNA sample contamination is a frequent problem in DNA sequencing studies and can result in genotyping errors and reduced power for association testing. We recently described methods to identify within-species DNA sample contamination based on sequencing read data, showed that our methods can reliably detect and estimate contamination levels as low as 1%, and suggested strategies to identify and remove contaminated samples from sequencing studies. Here we propose methods to model contamination during genotype calling as an alternative to removal of contaminated samples from further analyses. We compare our contamination-adjusted calls to calls that ignore contamination and to calls based on uncontaminated data. We demonstrate that, for moderate contamination levels (5%–20%), contamination-adjusted calls eliminate 48%–77% of the genotyping errors. For lower levels of contamination, our contamination correction methods produce genotypes nearly as accurate as those based on uncontaminated data. Our contamination correction methods are useful generally, but are particularly helpful for sample contamination levels from 2% to 20%. PMID:26235984
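A toy version of contamination-aware genotype calling: reads are modeled as a mixture of the sample and a contaminant with known fraction alpha, and the contaminant genotype is marginalized under Hardy-Weinberg. The error model and priors are simplifications, not the paper's likelihood:

```python
# Contamination-adjusted genotype calling sketch at a biallelic site.
import numpy as np

def read_alt_prob(genotype, eps=0.01):
    """P(read shows ALT | genotype = number of ALT alleles in {0, 1, 2}),
    with a simple symmetric base-error rate eps."""
    p = genotype / 2.0
    return p * (1 - eps) + (1 - p) * eps

def call_genotype(n_ref, n_alt, alpha, af=0.3):
    """Maximize P(sample genotype | reads), marginalizing the contaminant
    genotype under Hardy-Weinberg with allele frequency `af`."""
    hw = [(1 - af) ** 2, 2 * af * (1 - af), af ** 2]
    best, best_logp = None, -np.inf
    for g_s in (0, 1, 2):
        like = 0.0
        for g_c in (0, 1, 2):
            # Reads come from the sample w.p. 1 - alpha, else the contaminant
            p_alt = (1 - alpha) * read_alt_prob(g_s) + alpha * read_alt_prob(g_c)
            like += hw[g_c] * p_alt ** n_alt * (1 - p_alt) ** n_ref
        logp = np.log(hw[g_s]) + np.log(like)
        if logp > best_logp:
            best, best_logp = g_s, logp
    return best

# A 10% contaminated het site with skewed allele balance still calls as het
print(call_genotype(n_ref=14, n_alt=6, alpha=0.10))   # -> 1
```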
[A correction method of baseline drift of discrete spectrum of NIR].
Hu, Ai-Qin; Yuan, Hong-Fu; Song, Chun-Feng; Li, Xiao-Yu
2014-10-01
In the present paper, a new correction method for the baseline drift of discrete spectra is proposed, combining cubic spline interpolation and the first-order derivative. A fitting spectrum is constructed by cubic spline interpolation, using the data points of the discrete spectrum as interpolation nodes. The fitting spectrum is differentiable, and the first-order derivative is applied to it to calculate the derivative spectrum. The values at the same wavelengths as the original discrete spectrum are taken from the derivative spectrum to constitute the first-derivative version of the discrete spectrum, thereby correcting its baseline drift. The effectiveness of the new method is demonstrated by comparing the performance of multivariate models built using the original spectra, directly differentiated spectra, and spectra pretreated by the new method. The results show that negative effects on multivariate model performance caused by baseline drift of discrete spectra can be effectively eliminated by the new method.
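A minimal sketch of the described pretreatment using SciPy: fit a cubic spline through the discrete spectrum, differentiate it analytically, and sample the derivative at the original wavelengths (the test spectrum is synthetic):

```python
# Baseline-drift correction sketch: cubic spline fit + analytic first
# derivative sampled back at the original wavelengths.
import numpy as np
from scipy.interpolate import CubicSpline

def derivative_spectrum(wavelengths, absorbance):
    spline = CubicSpline(wavelengths, absorbance)  # nodes = discrete spectrum
    return spline(wavelengths, 1)                  # first derivative at nodes

# Example: a Gaussian band on a linear baseline; the drift (slope) becomes a
# constant offset in the derivative while the band shape is preserved.
wl = np.linspace(1100, 2500, 200)
band = np.exp(-0.5 * ((wl - 1700) / 40) ** 2)
spectrum = band + 0.0005 * wl + 0.3                # band + baseline drift
d1 = derivative_spectrum(wl, spectrum)
```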
Segmentation-free empirical beam hardening correction for CT.
Schüller, Sören; Sawall, Stefan; Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich; Kachelrieß, Marc
2015-02-01
The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other correction techniques. If only the information from a single energy scan is used, there are two types of correction. The first is a physical approach, whereby artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions may include the spectrum used, the detector response, and the physical attenuation and scatter properties of the intersected materials. The second is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physics-based technique both rely on a segmentation of the tissues present inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish image quality also disturb correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed herein works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed so that no additional calibration or parameter fitting is needed. To avoid the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the proposed algorithm to be segmentation-free (sf). The deformation leads to a nonlinear accentuation of higher CT values. The original volume and the gray-value-deformed volume are monochromatically forward projected. The two projection sets are then monomially combined and reconstructed to generate sets of basis volumes which are used for correction. This is done by maximizing image flatness while adding a weighted sum of these basis images. sfEBHC is evaluated on polychromatic simulations, phantom measurements, and patient data. The raw data sets were acquired with a dual source spiral CT scanner, a digital volume tomograph, and a dual source micro CT. Different phantom and patient data were used to illustrate the performance and wide range of usability of sfEBHC across different scanning scenarios. The artifact correction capabilities are compared to EBHC. All investigated cases show equal or improved image quality compared to the standard EBHC approach. The artifact correction is capable of correcting beam hardening artifacts for different scan parameters and scan scenarios. sfEBHC generates beam-hardening-reduced images and is furthermore capable of dealing with images that are affected by high noise and strong artifacts. The algorithm can be used to recover structures which are hardly visible inside the beam-hardening-affected regions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
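For context, the correction vector and the spectral function it yields are conventionally defined as follows (standard definitions from the correction-vector literature; the abstract itself does not state them):

```latex
% Correction vector for operator A, ground state |0>, energy E_0, broadening eta:
|c(\omega)\rangle = \frac{1}{\omega + E_0 - H + i\eta}\, A\,|0\rangle,
\qquad
I(\omega) = -\frac{1}{\pi}\,\operatorname{Im}\,\langle 0|\,A^{\dagger}\,|c(\omega)\rangle .
```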
NASA Astrophysics Data System (ADS)
Hubert, Maxime; Pacureanu, Alexandra; Guilloud, Cyril; Yang, Yang; da Silva, Julio C.; Laurencin, Jerome; Lefebvre-Joud, Florence; Cloetens, Peter
2018-05-01
In X-ray tomography, ring-shaped artifacts present in the reconstructed slices are an inherent problem degrading the global image quality and hindering the extraction of quantitative information. To overcome this issue, we propose a strategy for suppression of ring artifacts originating from the coherent mixing of the incident wave and the object. We discuss the limits of validity of the empty beam correction in the framework of a simple formalism. We then deduce a correction method based on two-dimensional random sample displacement, with minimal cost in terms of spatial resolution, acquisition, and processing time. The method is demonstrated on bone tissue and on a hydrogen electrode of a ceramic-metallic solid oxide cell. Compared to the standard empty beam correction, we obtain high quality nanotomography images revealing detailed object features. The resulting absence of artifacts allows straightforward segmentation and posterior quantification of the data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, D; Gach, H; Li, H
Purpose: The daily treatment MRIs acquired on MR-IGRT systems, like diagnostic MRIs, suffer from intensity inhomogeneity associated with B1 and B0 inhomogeneities. An improved homomorphic unsharp mask (HUM) filtering method, automatic and robust body segmentation, and imaging field-of-view (FOV) detection methods were developed to compute the multiplicative slowly varying correction field and correct the intensity inhomogeneity. The goal is to improve and normalize the voxel intensity so that the images can be processed more accurately by quantitative methods (e.g., segmentation and registration) that require consistent image voxel intensity values. Methods: HUM methods have been widely used for years. A body mask is required; otherwise the body surface in the corrected image would be incorrectly bright due to the sudden intensity transition at the body surface. In this study, we developed an improved HUM-based correction method that includes three main components: 1) robust body segmentation on the normalized image gradient map, 2) robust FOV detection (needed for body segmentation) using region growing and morphologic filters, and 3) an effective implementation of HUM using repeated Gaussian convolution. Results: The proposed method was successfully tested on patient images of common anatomical sites (H/N, lung, abdomen and pelvis). Initial qualitative comparisons showed that this improved HUM method outperformed three recently published algorithms (FCM, LEMS, MICO) in both computation speed (by 50+ times) and robustness (in intermediate to severe inhomogeneity situations). Currently implemented in MATLAB, it takes 20 to 25 seconds to process a 3D MRI volume. Conclusion: Compared to more sophisticated MRI inhomogeneity correction algorithms, the improved HUM method is simple and effective. The inhomogeneity correction, body mask, and FOV detection methods developed in this study would be useful as preprocessing tools for many MRI-related research and clinical applications in radiotherapy. Authors have received research grants from ViewRay and Varian.
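One plausible reading of a HUM-style correction with masked (normalized) Gaussian smoothing is sketched below; the smoothing width and rescaling are assumptions, not the paper's exact implementation:

```python
# Homomorphic unsharp-mask sketch: estimate the slowly varying bias field by
# heavy Gaussian smoothing restricted to the body mask, then divide it out.
# Masked smoothing avoids the bright-surface artifact the abstract describes.
import numpy as np
from scipy.ndimage import gaussian_filter

def hum_correct(image, body_mask, sigma=32.0):
    """image: 2D float array; body_mask: same-shape boolean array."""
    m = body_mask.astype(float)
    # Normalized convolution: smooth(image * mask) / smooth(mask) keeps the
    # sharp body-air transition out of the bias estimate.
    num = gaussian_filter(image * m, sigma)
    den = np.maximum(gaussian_filter(m, sigma), 1e-6)
    bias = num / den
    corrected = image.copy()
    inside = body_mask
    corrected[inside] = (image[inside] / np.maximum(bias[inside], 1e-6)
                         * image[inside].mean())   # restore intensity scale
    return corrected
```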
Correction of MRI-induced geometric distortions in whole-body small animal PET-MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frohwein, Lynn J., E-mail: frohwein@uni-muenster.de; Schäfers, Klaus P.; Hoerr, Verena
Purpose: The fusion of positron emission tomography (PET) and magnetic resonance imaging (MRI) data can be a challenging task in whole-body PET-MRI. The quality of the registration between these two modalities in large field-of-views (FOV) is often degraded by geometric distortions of the MRI data. The distortions at the edges of large FOVs mainly originate from MRI gradient nonlinearities. This work describes a method to measure and correct for these kinds of geometric distortions in small animal MRI scanners to improve the registration accuracy of PET and MRI data. Methods: The authors have developed a geometric phantom which allows the measurement of geometric distortions in all spatial axes via control points. These control points are detected semiautomatically in both PET and MRI data with a subpixel accuracy. The spatial transformation between PET and MRI data is determined with these control points via 3D thin-plate splines (3D TPS). The transformation derived from the 3D TPS is finally applied to real MRI mouse data, which were acquired with the same scan parameters used in the phantom data acquisitions. Additionally, the influence of the phantom material on the homogeneity of the magnetic field is determined via field mapping. Results: The spatial shift according to the magnetic field homogeneity caused by the phantom material was determined to a mean of 0.1 mm. The results of the correction show that distortion with a maximum error of 4 mm could be reduced to less than 1 mm with the proposed correction method. Furthermore, the control point-based registration of PET and MRI data showed improved congruence after correction. Conclusions: The developed phantom has been shown to have no considerable negative effect on the homogeneity of the magnetic field. The proposed method yields an appropriate correction of the measured MRI distortion and is able to improve the PET and MRI registration. Furthermore, the method is applicable to whole-body small animal imaging routines including different standard MRI sequences.
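The control-point mapping can be illustrated with SciPy's thin-plate-spline RBF interpolator; the control-point coordinates and toy distortion below are synthetic, not phantom measurements:

```python
# 3D thin-plate-spline sketch: learn a smooth deformation from MRI control
# points to their PET (assumed undistorted) counterparts, then apply it to
# arbitrary MRI coordinates.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
mri_pts = rng.uniform(-30, 30, size=(60, 3))             # detected in MRI (mm)
distortion = 0.002 * mri_pts ** 2 * np.sign(mri_pts)     # toy gradient nonlinearity
pet_pts = mri_pts + distortion                           # matched PET positions

# One vector-valued 3D -> 3D TPS mapping fitted to the control points
tps = RBFInterpolator(mri_pts, pet_pts, kernel="thin_plate_spline")

# Undistort any MRI voxel coordinates by mapping them into PET space
query = rng.uniform(-25, 25, size=(5, 3))
corrected = tps(query)
```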
NASA Astrophysics Data System (ADS)
Rani Sharma, Anu; Kharol, Shailesh Kumar; Kvs, Badarinath; Roy, P. S.
In Earth observation, the atmosphere has a non-negligible influence on visible and infrared radiation, strong enough to modify the reflected electromagnetic signal and the at-target reflectance. Scattering of solar irradiance by atmospheric molecules and aerosols generates path radiance, which increases the apparent surface reflectance over dark surfaces, while absorption by aerosols and other molecules causes a loss of brightness in the scene as recorded by the satellite sensor. In order to derive precise surface reflectance from satellite image data, it is indispensable to apply an atmospheric correction that removes the effects of molecular and aerosol scattering. In the present study, we implemented a fast atmospheric correction algorithm for IRS-P6 AWiFS satellite data which can effectively retrieve surface reflectance under different atmospheric and surface conditions. The algorithm is based on MODIS climatology products and a simplified use of the Second Simulation of Satellite Signal in the Solar Spectrum (6S) radiative transfer code, which is used to generate look-up tables (LUTs). The algorithm requires information on aerosol optical depth (AOD) for correcting the satellite dataset. The proposed method is simple and easy to implement for estimating surface reflectance from the at-sensor recorded signal on a per-pixel basis. The atmospheric correction algorithm was tested on different IRS-P6 AWiFS false colour composites (FCC) covering the ICRISAT Farm, Patancheru, Hyderabad, India under varying atmospheric conditions. Ground measurements of surface reflectance representing different land use/land cover types, i.e., red soil, chickpea crop, groundnut crop and pigeon pea crop, were conducted to validate the algorithm, and we found a very good match between ground-measured surface reflectance and atmospherically corrected reflectance in all spectral bands. Further, we aggregated all datasets together and compared the retrieved AWiFS reflectance with the aggregated ground measurements, which showed a very good correlation of 0.96 in all four spectral bands (green, red, NIR and SWIR). To quantify the accuracy of the proposed method in estimating surface reflectance, the root mean square error (RMSE) associated with the method was evaluated; the analysis of ground-measured versus retrieved AWiFS reflectance yielded small RMSE values in all four spectral bands. EOS TERRA/AQUA MODIS-derived AOD exhibited a very good correlation of 0.92, and these data sets provide an effective means of carrying out atmospheric corrections in an operational way. Keywords: Atmospheric correction, 6S code, MODIS, Spectroradiometer, Sun-Photometer
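A 6S-style LUT correction typically ends with the standard Lambertian inversion; a sketch with placeholder per-band coefficients (real values would be interpolated from the LUT at the measured AOD):

```python
# Standard Lambertian atmospheric-correction inversion used with 6S-style
# LUTs: rho_s = y / (1 + S * y), where y = (rho_toa - rho_path) / (Td * Tu).
# All coefficient values below are illustrative placeholders, not LUT entries.
import numpy as np

def surface_reflectance(rho_toa, rho_path, t_down, t_up, s_alb):
    """Invert TOA reflectance to surface reflectance (Lambertian target)."""
    y = (rho_toa - rho_path) / (t_down * t_up)
    return y / (1.0 + s_alb * y)

rho_toa = np.array([0.12, 0.10, 0.30, 0.25])   # green, red, NIR, SWIR
rho_path = np.array([0.05, 0.03, 0.02, 0.01])  # path reflectance
t_down = np.array([0.85, 0.88, 0.92, 0.95])    # downward transmittance
t_up = np.array([0.90, 0.92, 0.95, 0.97])      # upward transmittance
s_alb = np.array([0.15, 0.12, 0.08, 0.05])     # spherical albedo
rho_s = surface_reflectance(rho_toa, rho_path, t_down, t_up, s_alb)
```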
Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue
2018-01-01
One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently given the energy limitation of sensor nodes. Network coding can increase the network throughput of a WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. Moreover, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. With reference to the theory of social networks, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of the social characteristic, coordinate with each other and can correct propagated errors whose fraction is even exactly 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments. PMID:29401668
A Web service substitution method based on service cluster nets
NASA Astrophysics Data System (ADS)
Du, YuYue; Gai, JunJing; Zhou, MengChu
2017-11-01
Service substitution is an important research topic in the fields of Web services and service-oriented computing. This work presents a novel method to analyse and substitute Web services. A new concept, called a Service Cluster Net Unit, is proposed based on Web service clusters. A service cluster is converted into a Service Cluster Net Unit, which is then used to analyse whether the services in the cluster can satisfy given service requests. Meanwhile, substitution methods for an atomic service and a composite service are proposed. The correctness of the proposed method is proved, and its effectiveness is shown and compared with a state-of-the-art method via an experiment. It can be readily applied to e-commerce service substitution to meet business automation needs.
NASA Astrophysics Data System (ADS)
Wu, Fan; Cao, Pin; Yang, Yongying; Li, Chen; Chai, Huiting; Zhang, Yihui; Xiong, Haoliang; Xu, Wenlin; Yan, Kai; Zhou, Lin; Liu, Dong; Bai, Jian; Shen, Yibing
2016-11-01
The inspection of surface defects is one of the significant parts of optical surface quality evaluation. Based on microscopic scattering dark-field imaging, sub-aperture scanning and stitching, the Surface Defects Evaluating System (SDES) can acquire a full-aperture image of defects on an optical element's surface and then extract geometric size and position information of the defects with image processing such as feature recognition. However, optical distortion existing in the SDES badly affects the inspection precision of surface defects. In this paper, a distortion correction algorithm based on a standard lattice pattern is proposed. Feature extraction, polynomial fitting and bilinear interpolation techniques, in combination with adjacent sub-aperture stitching, are employed to correct the optical distortion of the SDES automatically with high accuracy. Subsequently, in order to digitally evaluate surface defects against the American military standard MIL-PRF-13830B using the surface defect information obtained from the SDES, an American-standard-based digital evaluation algorithm is proposed, which mainly includes a judgment method for surface defect concentration. The judgment method establishes a weight region for each defect and calculates defect concentration from the overlap of weight regions. This algorithm takes full advantage of the convenience of matrix operations and has the merits of low complexity and fast execution, which makes it well suited for high-efficiency inspection of surface defects. Finally, various experiments are conducted and the correctness of these algorithms is verified. At present, these algorithms have been used in the SDES.
NASA Astrophysics Data System (ADS)
Ge, Zhouyang; Loiseau, Jean-Christophe; Tammisola, Outi; Brandt, Luca
2018-01-01
Aiming for the simulation of colloidal droplets in microfluidic devices, we present here a numerical method for two-fluid systems subject to surface tension and depletion forces among the suspended droplets. The algorithm is based on an efficient solver for the incompressible two-phase Navier-Stokes equations, and uses a mass-conserving level set method to capture the fluid interface. The four novel ingredients proposed here are, firstly, an interface-correction level set (ICLS) method; global mass conservation is achieved by performing an additional advection near the interface, with a correction velocity obtained by locally solving an algebraic equation, which is easy to implement in both 2D and 3D. Secondly, we report a second-order accurate geometric estimation of the curvature at the interface and, thirdly, the combination of the ghost fluid method with the fast pressure-correction approach enabling an accurate and fast computation even for large density contrasts. Finally, we derive a hydrodynamic model for the interaction forces induced by depletion of surfactant micelles and combine it with a multiple level set approach to study short-range interactions among droplets in the presence of attracting forces.
Technical devices in otoplasty to obtain a natural appearance.
Zaoli, G
1997-07-01
This article outlines the importance given by plastic surgeons to correcting external ear malformations. The objective is to create ears with a normal appearance. Despite more than 200 methods proposed by different authors, a procedure for otoplasty that gives constant and lasting surgical results has not yet been found. Following some critical remarks, both anatomical and surgical, the author proposes a method that, in his experience, gives good, more natural and reliable results and is free from failures.
On unified modeling, theory, and method for solving multi-scale global optimization problems
NASA Astrophysics Data System (ADS)
Gao, David Yang
2016-10-01
A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way so as to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles in correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.
Whole-heart coronary MRA with 3D affine motion correction using 3D image-based navigation.
Henningsson, Markus; Prieto, Claudia; Chiribiri, Amedeo; Vaillant, Ghislain; Razavi, Reza; Botnar, René M
2014-01-01
Robust motion correction is necessary to minimize respiratory motion artefacts in coronary MR angiography (CMRA). The state-of-the-art method uses a 1D feet-head translational motion correction approach, and data acquisition is limited to a small window in the respiratory cycle, which prolongs the scan by a factor of 2-3. The purpose of this work was to implement 3D affine motion correction for Cartesian whole-heart CMRA using a 3D navigator (3D-NAV) to allow for data acquisition throughout the whole respiratory cycle. 3D affine transformations for different respiratory states (bins) were estimated by using 3D-NAV image acquisitions which were acquired during the startup profiles of a steady-state free precession sequence. The calculated 3D affine transformations were applied to the corresponding high-resolution Cartesian image acquisition which had been similarly binned, to correct for respiratory motion between bins. Quantitative and qualitative comparisons showed no statistical difference between images acquired with the proposed method and the reference method using a diaphragmatic navigator with a narrow gating window. We demonstrate that 3D-NAV and 3D affine correction can be used to acquire Cartesian whole-heart 3D coronary artery images with 100% scan efficiency with similar image quality as with the state-of-the-art gated and corrected method with approximately 50% scan efficiency. Copyright © 2013 Wiley Periodicals, Inc.
Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy
NASA Technical Reports Server (NTRS)
Ford, G. E.
1986-01-01
To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated, and the accuracy of the correction of geometric errors in TM images has been analyzed. Theoretical evaluations and comparisons of existing methods for the design of linear transformations for dimensionality reduction are presented. These methods include the discrete Karhunen-Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), the Thematic Mapper (TM)-Tasseled Cap Linear Transformation and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed, referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Versions of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class-conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that, in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three-dimensional feature space. It is also shown experimentally that, for the proposed methods, the classes with high weights show the expected improvements in class-conditional probability of error estimates.
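For the KL-expansion step, a minimal sketch: reducing six-band pixel vectors to a three-dimensional feature space by projecting onto the leading eigenvectors of the sample covariance. This is the generic discrete KL (PCA) transform under our own naming, not the modified weighted variants proposed above.

```python
import numpy as np

def kl_transform(pixels, k=3):
    """Discrete Karhunen-Loeve reduction: project n-band pixel vectors
    (one per row) onto the k eigenvectors with largest eigenvalues."""
    centered = pixels - pixels.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(centered, rowvar=False))
    basis = eigvec[:, ::-1][:, :k]                # top-k components
    return centered @ basis, eigval[::-1]

# e.g. six reflective TM bands flattened to shape (n_pixels, 6):
# features3d, spectrum = kl_transform(tm_pixels, k=3)
```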
4D numerical observer for lesion detection in respiratory-gated PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lorsakul, Auranuch; Li, Quanzheng; Ouyang, Jinsong
2014-10-15
Purpose: Respiratory-gated positron emission tomography (PET)/computed tomography protocols reduce lesion smearing and improve lesion detection through a synchronized acquisition of emission data. However, objective assessment of the image-quality improvement gained from respiratory-gated PET has mainly been limited to a three-dimensional (3D) approach. This work proposes a 4D numerical observer that incorporates both spatial and temporal information for detection tasks in pulmonary oncology. Methods: The authors propose a 4D numerical observer constructed with a 3D channelized Hotelling observer for the spatial domain followed by a Hotelling observer for the temporal domain. Realistic 18F-fluorodeoxyglucose activity distributions were simulated using a 4D extended cardiac torso anthropomorphic phantom including 12 spherical lesions at different anatomical locations (lower, upper, anterior, and posterior) within the lungs. Simulated data based on Monte Carlo simulation were obtained using the GEANT4 application for tomographic emission (GATE). Fifty noise realizations of six respiratory-gated PET frames were simulated by GATE using a model of the Siemens Biograph mMR scanner geometry. PET sinograms of the thorax background and pulmonary lesions, which were simulated separately, were merged to generate different conditions of the lesions relative to the background (e.g., lesion contrast and motion). A conventional ordered subset expectation maximization (OSEM) reconstruction (5 iterations and 6 subsets) was used to obtain: (1) gated, (2) nongated, and (3) motion-corrected image volumes (a total of 3200 subimage volumes: 2400 gated, 400 nongated, and 400 motion-corrected). Lesion-detection signal-to-noise ratios (SNRs) were measured at different lesion-to-background contrast levels (3.5, 8.0, 9.0, and 20.0), lesion diameters (10.0, 13.0, and 16.0 mm), and respiratory motion displacements (17.6–31.3 mm). The proposed 4D numerical observer applied to multiple-gated images was compared to the conventional 3D approach applied to the nongated and motion-corrected images. Results: On average, the proposed 4D numerical observer improved the detection SNR by 48.6% (p < 0.005), whereas the 3D method on motion-corrected images improved it by 31.0% (p < 0.005), compared to the nongated method. For all of the different lesion conditions, the relative SNR measurement (Gain = SNR_Observed/SNR_Nongated) of the 4D method was significantly higher than that of the motion-corrected 3D method, by 13.8% (p < 0.02), where Gain_4D was 1.49 ± 0.21 and Gain_3D was 1.31 ± 0.15. For the lesion with the highest amplitude of motion, the 4D numerical observer yielded the highest observer-performance improvement (176%). For the lesion undergoing the smallest motion amplitude, the 4D method provided superior lesion detectability compared with the 3D method, which gave a detection SNR close to the nongated method. The investigation of the structure of the 4D numerical observer showed that a Laguerre-Gaussian channel matrix with a volumetric 3D channel function yielded higher lesion-detection performance than one with a 2D-stack channel function, whereas a different kind of channel designed to mimic the human visual system, i.e., difference-of-Gaussians, showed similar performance in detecting uniform, spherical lesions.
The investigation of the detection performance with increasing noise levels yielded detection SNR decreases of 27.6% and 41.5% for the nongated and gated methods, respectively. The investigation of lesion contrast and diameter showed that the proposed 4D observer preserved the linearity property of an optimal linear observer while motion was present. Furthermore, the investigation of the iteration and subset numbers of the OSEM algorithm demonstrated that these parameters had an impact on lesion detectability and that selecting optimal parameters could provide the maximum lesion-detection performance. The proposed 4D numerical observer outperformed the other observers for the lesion-detection task under various lesion conditions and motions. Conclusions: The 4D numerical observer shows substantial improvement in lesion detectability over the 3D observer method. The proposed 4D approach could potentially provide a more reliable objective assessment of the improvement from respiratory-gated PET for lesion-detection tasks. On the other hand, the 4D approach may be used as an upper bound to investigate the performance of motion correction methods. In future work, the authors will validate the proposed 4D approach on clinical data for detection tasks in pulmonary oncology.
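The spatial stage of such an observer reduces to a channelized Hotelling computation; the sketch below shows that stage alone (the temporal Hotelling stage stacked on top of it is omitted), with array shapes and the function name chosen by us.

```python
import numpy as np

def cho_snr(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer detection SNR.
    signal_imgs/noise_imgs: (n_realizations, n_pixels) image samples
    with and without the lesion; channels: (n_pixels, n_channels)."""
    v1 = signal_imgs @ channels                   # channel outputs
    v0 = noise_imgs @ channels
    dv = v1.mean(axis=0) - v0.mean(axis=0)        # mean channel signal
    S = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))
```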
Post-processing through linear regression
NASA Astrophysics Data System (ADS)
van Schaeybroeck, B.; Vannitsem, S.
2011-03-01
Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz 63 system, whose model version is affected by both initial-condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Contrary to the other techniques, GM degrades as the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade performance (GM and the best-member OLS with noise). At long lead times, the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread should be preferred.
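As a baseline for such schemes, a minimal OLS post-processing sketch: regress past observations on the corresponding forecast predictors, then apply the fitted map to new forecasts. Names and the intercept handling are our own; the TDTR, GM and EVMOS variants differ in how the fit is regularized or symmetrized.

```python
import numpy as np

def ols_postprocessor(X_train, y_train):
    """Fit y ~ a + X b over a training period and return a function
    that corrects new forecasts. X_train: (n_times, n_predictors)."""
    A = np.column_stack([np.ones(len(X_train)), X_train])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    def correct(X_new):
        return np.column_stack([np.ones(len(X_new)), X_new]) @ coef
    return correct
```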
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez-Castano, D. M.; Gonzalez, L. Brualla; Gago-Arias, M. A.
2012-01-15
Purpose: This work presents an alternative methodology for obtaining correction factors for ionization chamber (IC) dosimetry of small fields and composite fields such as IMRT. The method is based on the convolution/superposition (C/S) of an IC response function (RF) with the dose distribution in a certain plane which includes the chamber position. This method is an alternative to the full Monte Carlo (MC) approach that has been used previously by many authors for the same objective. Methods: The readout of an IC at a point inside a phantom irradiated by a certain beam can be obtained as the convolution of the dose spatial distribution caused by the beam and the IC two-dimensional RF. The proposed methodology has been applied successfully to predict the response of a PTW 30013 IC when measuring different nonreference fields, namely: output factors of 6 MV small fields, beam profiles of cobalt-60 narrow fields and 6 MV radiosurgery segments. The two-dimensional RF of a PTW 30013 IC was obtained by MC simulation of the absorbed dose to cavity air when the IC was scanned by a 0.6 × 0.6 mm² cross-section parallel pencil beam at low depth in a water phantom. For each of the cases studied, the results of the IC direct measurement were compared with the corresponding values obtained by the C/S method. Results: For all of the cases studied, the agreement between the IC direct measurement and the IC calculated response was excellent (better than 1.5%). Conclusions: This method could be implemented in a TPS in order to calculate dosimetric correction factors when an experimental IMRT treatment verification with an in-phantom ionization chamber is performed. The mis-response of the IC due to the nonreference conditions could be quickly corrected by this method rather than employing MC-derived correction factors. This method can be considered as an alternative to the plan-class associated correction factors proposed recently as part of an IAEA working group on nonstandard field dosimetry.
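The core C/S step is a plain 2D convolution; a minimal sketch, assuming the dose plane and the MC-derived response function are sampled on the same grid (names and the unit-area normalization are ours):

```python
import numpy as np
from scipy.signal import fftconvolve

def predicted_ic_response(dose_plane, response_fn):
    """Convolve the planar dose distribution with the chamber's 2D
    response function; the value at the chamber position approximates
    the chamber readout for this field."""
    rf = response_fn / response_fn.sum()          # unit-area RF
    return fftconvolve(dose_plane, rf, mode='same')
```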
NASA Astrophysics Data System (ADS)
Hardie, Russell C.; Rucci, Michael A.; Dapore, Alexander J.; Karch, Barry K.
2017-07-01
We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.
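A minimal sketch of the final deconvolution step: Wiener-filter the registered-and-averaged frame with the modeled PSF. The PSF model matched to the registration level is the paper's key ingredient and is not reproduced here; the noise-to-signal ratio and names below are assumptions.

```python
import numpy as np

def wiener_deconvolve(avg_img, psf, nsr=1e-2):
    """Wiener deconvolution of the averaged frame. psf: degradation
    point spread function, same shape as avg_img and centered; nsr:
    assumed noise-to-signal power ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf))        # transfer function
    W = np.conj(H) / (np.abs(H)**2 + nsr)         # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(avg_img) * W))
```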
NASA Astrophysics Data System (ADS)
Carrea, Dario; Abellan, Antonio; Humair, Florian; Matasci, Battista; Derron, Marc-Henri; Jaboyedoff, Michel
2016-03-01
Ground-based LiDAR has traditionally been used for surveying purposes via 3D point clouds. In addition to XYZ coordinates, an intensity value is also recorded by LiDAR devices. The intensity of the backscattered signal can be a significant source of information for various applications in geosciences. Previous attempts to account for the scattering of the laser signal usually model the surface as a perfect diffuse reflector. Nevertheless, experience on natural outcrops shows that rock surfaces do not behave as perfect diffuse reflectors. The geometry (or relief) of the scanned surfaces plays a major role in the recorded intensity values. Our study proposes a new terrestrial LiDAR intensity correction, which takes into consideration the range, the incidence angle and the geometry of the scanned surfaces. The proposed correction equation combines the classical radar equation for LiDAR with the bidirectional reflectance distribution function of the Oren-Nayar model. It is based on the idea that the surface geometry can be modelled by a relief of multiple micro-facets. This model is constrained by only one tuning parameter: the standard deviation of the slope angle distribution (σ_slope) of the micro-facets. Firstly, a series of tests was carried out in laboratory conditions on a 2 m² board covered by black/white matte paper (a perfect diffuse reflector) and scanned at different ranges and incidence angles. Secondly, other tests were carried out on rock blocks of different lithologies and surface conditions. These tests demonstrated that the non-perfect diffuse reflectance of rock surfaces can be practically handled by the proposed correction method. Finally, the intensity correction method was applied to a real case study, with two scans of the carbonate rock outcrop of the Dents-du-Midi (Swiss Alps), to improve lithological identification for geological mapping purposes. After correction, the intensity values are proportional to the intrinsic material reflectance and are independent of range, incidence angle and scanned surface geometry. The corrected intensity values significantly improve the material differentiation.
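A minimal sketch of such a correction under our own assumptions: monostatic geometry (illumination and viewing directions coincide), the standard Oren-Nayar A/B coefficients, and an inverse-square range term normalized to an arbitrary reference range; the authors' exact equation may differ.

```python
import numpy as np

def correct_lidar_intensity(I, rng, theta, sigma_slope, r_ref=10.0):
    """Remove range and incidence-angle effects from raw intensity I.
    theta: incidence angle (rad); sigma_slope: micro-facet slope std."""
    s2 = sigma_slope ** 2
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)              # Oren-Nayar terms
    B = 0.45 * s2 / (s2 + 0.09)
    f = np.cos(theta) * (A + B * np.sin(theta) * np.tan(theta))
    return I * (rng / r_ref) ** 2 / f             # range & angle removed
```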
3D Markov Process for Traffic Flow Prediction in Real-Time.
Ko, Eunjeong; Ahn, Jinyoung; Kim, Eun Yi
2016-01-25
Recently, correct estimation of traffic flow has come to be considered an essential component of intelligent transportation systems. In this paper, a new statistical method to predict traffic flows using time series analyses and geometric correlations is proposed. The novelty of the proposed method is two-fold: (1) a 3D heat map is designed to describe the traffic conditions between roads, which can effectively represent the correlations between spatially- and temporally-adjacent traffic states; and (2) the relationship between adjacent roads in the spatiotemporal domain is represented by cliques in a Markov random field (MRF), whose clique parameters are obtained by example-based learning. In order to assess the validity of the proposed method, it is tested using expressway traffic data provided by the Korean Expressway Corporation, and its performance is compared with existing approaches. The results demonstrate that the proposed method can predict traffic conditions with an accuracy of 85%, and this accuracy can be improved further.
Analysis of Power System Low Frequency Oscillation Based on Energy Shift Theory
NASA Astrophysics Data System (ADS)
Zhang, Junfeng; Zhang, Chunwang; Ma, Daqing
2018-01-01
In this paper, a new method for analyzing inter-area low-frequency oscillation based on an energy coefficient is proposed. The concept of the energy coefficient is introduced by constructing an energy function, and the low-frequency oscillation is analyzed according to the energy coefficient under the current operating conditions; meanwhile, the concept of model energy is proposed to analyze the energy exchange behavior between two generators. Not only does this method provide an explanation of low-frequency oscillation from the energy point of view, but it also helps further reveal the dynamic behavior of complex power systems. Case analyses of a four-machine two-area system and of the Jilin Power Grid demonstrate the correctness and effectiveness of the proposed method for low-frequency oscillation analysis of power systems.
Retinal image mosaicing using the radial distortion correction model
NASA Astrophysics Data System (ADS)
Lee, Sangyeol; Abràmoff, Michael D.; Reinhardt, Joseph M.
2008-03-01
Fundus camera imaging can be used to examine the retina to detect disorders. Similar to looking through a small keyhole into a large room, imaging the fundus with an ophthalmologic camera allows only a limited view at a time. Thus, the generation of a retinal montage from multiple images has the potential to increase diagnostic accuracy by providing a larger field of view. A method for mosaicing multiple retinal images using the radial distortion correction (RADIC) model is proposed in this paper. Our method determines the inter-image connectivity by detecting feature correspondences. The connectivity information is converted to a tree structure that describes the spatial relationships between the reference and target images for pairwise registration. The montage is generated by a cascading pairwise registration scheme starting from the anchor image and proceeding downward through the connectivity tree hierarchy. The RADIC model corrects the radial distortion that is due to the spherical-to-planar projection during retinal imaging. Therefore, after radial distortion correction, individual images can be properly mapped onto a montage space by a linear geometric transformation, e.g., an affine transform. Compared to most existing montaging methods, our method is unique in that only a single registration per image is required, because of the distortion correction property of the RADIC model. As a final step, distance-weighted intensity blending is employed to correct the inter-image differences in illumination encountered when forming the montage. Visual inspection of the experimental results for three mosaicing cases shows that our method can produce satisfactory montages.
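For the distortion-correction step, a minimal sketch using a generic one-parameter radial (division) model; this is a common stand-in, not necessarily the RADIC model itself, and the parameter k, the center and the names are assumptions.

```python
import numpy as np

def undistort_points(xd, yd, k, cx=0.0, cy=0.0):
    """Map radially distorted image coordinates to corrected ones with
    the division model r_u = r_d / (1 + k r_d^2) about center (cx, cy)."""
    x, y = np.asarray(xd) - cx, np.asarray(yd) - cy
    s = 1.0 / (1.0 + k * (x**2 + y**2))
    return cx + x * s, cy + y * s
```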
Yu, Yong-Jie; Xia, Qiao-Ling; Wang, Sheng; Wang, Bing; Xie, Fu-Wei; Zhang, Xiao-Bing; Ma, Yun-Ming; Wu, Hai-Long
2014-09-12
Peak detection and background drift correction (BDC) are key stages in using chemometric methods to analyze chromatographic fingerprints of complex samples. This study developed a novel chemometric strategy for simultaneous automatic chromatographic peak detection and BDC. A robust statistical method is used for intelligent estimation of the instrumental noise level, coupled with the first-order derivative of the chromatographic signal, to automatically extract chromatographic peaks from the data. A local curve-fitting strategy is then employed for BDC. Simulated and real liquid chromatographic data were designed with various kinds of background drift and degrees of peak overlap to verify the performance of the proposed strategy. The underlying chromatographic peaks can be automatically detected and reasonably integrated by this strategy, and chromatograms with BDC can be obtained precisely. The proposed method was used to analyze a complex gas chromatography dataset that monitored quality changes in plant extracts during a storage procedure. Copyright © 2014 Elsevier B.V. All rights reserved.
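A schematic of the detection idea, under our own assumptions (a MAD-based noise scale, a 3-sigma threshold, and apexes at downward zero-crossings of the derivative); the published strategy's exact noise estimator and integration logic are not reproduced here.

```python
import numpy as np

def detect_peak_apexes(signal):
    """Flag peak apexes: estimate the noise level of the first
    derivative robustly, keep samples where the derivative is clearly
    above noise, and mark downward zero-crossings of the derivative."""
    d = np.diff(signal)
    noise = 1.4826 * np.median(np.abs(d - np.median(d)))  # robust sigma
    active = np.abs(d) > 3.0 * noise
    apex = np.where(active[:-1] & (d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    return apex
```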
Methodological considerations for global analysis of cellular FLIM/FRET measurements
NASA Astrophysics Data System (ADS)
Adbul Rahim, Nur Aida; Pelet, Serge; Kamm, Roger D.; So, Peter T. C.
2012-02-01
Global algorithms can improve the analysis of fluorescence resonance energy transfer (FRET) measurements based on fluorescence lifetime microscopy. However, global analysis of FRET data is also susceptible to experimental artifacts. This work examines several common artifacts and suggests remedial experimental protocols. Specifically, we examined the accuracy of different methods for instrument response extraction and propose an adaptive method based on the mean lifetime of fluorescent proteins. We further examined the effects of image segmentation and a priori constraints on the accuracy of lifetime extraction. Methods to test the applicability of global analysis to cellular data are proposed and demonstrated. The accuracy of global fitting degrades with lower photon count. By systematically tracking the effect of the minimum photon count on lifetime and FRET prefactors when carrying out global analysis, we demonstrate a correction procedure to recover the correct FRET parameters, allowing us to obtain protein interaction information even in dim cellular regions with photon counts as low as 100 per decay curve.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Zhiliang; Lin, Liangjie; Lin, Yanqin, E-mail: linyq@xmu.edu.cn, E-mail: chenz@xmu.edu.cn
2014-09-29
In nuclear magnetic resonance (NMR), it is of great necessity and importance to obtain high-resolution spectra, especially under inhomogeneous magnetic fields. In this study, a method based on partial homogeneity is proposed for retrieving high-resolution one-dimensional NMR spectra under inhomogeneous fields. Signals from a series of small voxels, which offer high resolution owing to their small size, are recorded simultaneously. An inhomogeneity correction algorithm based on pattern recognition is then developed to automatically correct the influence of field inhomogeneity, thus yielding high-resolution information. Experiments on chemical solutions and fish spawn were carried out to demonstrate the performance of the proposed method. The proposed method serves as a single-radiofrequency-pulse high-resolution NMR spectroscopy under inhomogeneous fields and may provide an alternative for obtaining high-resolution spectra of in vivo living systems or chemical-reaction systems, where the performance of conventional techniques is usually degraded by field inhomogeneity.
Wei, Yu-Chun; Wang, Guo-Xiang; Cheng, Chun-Mei; Zhang, Jing; Sun, Xiao-Peng
2012-09-01
Suspended particulate material is the main factor affecting remote sensing inversion of chlorophyll-a concentration (Chla) in turbid water. Based on the optical properties of suspended material in water, the present paper proposes a linear baseline correction method to weaken the suspended particle contribution in the spectrum above a turbid water surface. The linear baseline is defined as the line connecting the reflectance at 450 and 750 nm, and baseline correction consists of subtracting this baseline from the spectral reflectance. Analysis of in situ field data from Meiliangwan, Taihu Lake, in April 2011 and March 2010 shows that linear baseline correction of the spectrum can improve the inversion precision of Chla and produce better model diagnostics. For the March 2010 data, the RMSE of the band-ratio model built from the original spectrum is 4.11 mg·m⁻³, while that built from the baseline-corrected spectrum is 3.58 mg·m⁻³. Meanwhile, the residual distribution and homoscedasticity of the model built from the baseline-corrected spectrum are clearly improved. The model RMSE for April 2011 shows a similar result. The authors suggest using linear baseline correction as a spectrum processing method to improve Chla inversion accuracy in turbid water without algal bloom.
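The correction itself is a one-liner; a minimal sketch, assuming a sampled reflectance spectrum with a wavelength axis (function and variable names are ours):

```python
import numpy as np

def linear_baseline_correct(wavelength, reflectance, lo=450.0, hi=750.0):
    """Subtract the straight line joining the reflectance values at
    450 and 750 nm from the whole spectrum."""
    i0 = int(np.argmin(np.abs(wavelength - lo)))
    i1 = int(np.argmin(np.abs(wavelength - hi)))
    slope = (reflectance[i1] - reflectance[i0]) / (
        wavelength[i1] - wavelength[i0])
    baseline = reflectance[i0] + slope * (wavelength - wavelength[i0])
    return reflectance - baseline
```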
NASA Astrophysics Data System (ADS)
Lárraga-Gutiérrez, José Manuel
2015-08-01
Recently, Alfonso et al. proposed a new formalism for the dosimetry of small and non-standard fields. The new formalism relies heavily on the calculation of detector-specific beam correction factors by Monte Carlo simulation methods, which account for the difference in the response of the detector between the small field and the machine-specific reference field. The correct calculation of the detector-specific beam correction factors demands accurate knowledge of the linear accelerator and of the detector geometry and composition materials. The present work shows that the field factors in water may be determined experimentally using the daisy chain correction method down to a field size of 1 cm × 1 cm for a specific set of detectors. The detectors studied were: three mini-ionization chambers (PTW-31014, PTW-31006, IBA-CC01), three silicon-based diodes (PTW-60018, IBA-SFD and IBA-PFD) and one synthetic diamond detector (PTW-60019). Monte Carlo simulations and experimental measurements were performed for a 6 MV photon beam at 10 cm depth in water with a source-to-axis distance of 100 cm. The results show that the differences between the experimental and Monte Carlo calculated field factors are less than 0.5% (with the exception of the IBA-PFD) for field sizes between 1.5 cm × 1.5 cm and 5 cm × 5 cm. For the 1 cm × 1 cm field size, the differences are within 2%. By using the daisy chain correction method, it is possible to determine measured field factors in water. The results suggest that the daisy chain correction method is not suitable for measurements performed with the IBA-PFD detector, owing to the presence of tungsten powder in the detector encapsulation material. The use of Monte Carlo calculated $k_{Q_{\text{clin}},Q_{\text{msr}}}^{f_{\text{clin}},f_{\text{msr}}}$ factors is encouraged for field sizes less than or equal to 1 cm × 1 cm for the dosimeters used in this work.
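A minimal sketch of daisy chaining as commonly practiced: the small-field detector is normalized at an intermediate field where an ionization chamber is still reliable, then chained to the reference field. The readings and field sizes in the comment are illustrative placeholders, not values from this study.

```python
def daisy_chain_field_factor(m_det_clin, m_det_int, m_ic_int, m_ic_ref):
    """Field factor of the clinical small field relative to the
    reference field: detector ratio (clinical/intermediate) chained
    with an ionization-chamber ratio (intermediate/reference)."""
    return (m_det_clin / m_det_int) * (m_ic_int / m_ic_ref)

# e.g. diode readings at 1x1 and 5x5 cm^2, chamber at 5x5 and 10x10 cm^2:
# omega = daisy_chain_field_factor(0.412, 0.981, 0.952, 1.000)
```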
2013-01-01
Background: In statistical modeling, finding the most favorable coding for an exploratory quantitative variable involves many tests. This raises a multiple testing problem and requires correction of the significance level. Methods: For each coding, a test of the nullity of the coefficient associated with the newly coded variable is computed. The selected coding is the one associated with the largest test statistic (or, equivalently, the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probab Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, was developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and thus, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results: The simulations we ran in this study showed good performance of the proposed methods. These methods are illustrated using data from a study of the relationship between cholesterol and dementia. Conclusion: The algorithms were implemented in R, and the associated CPMCGLM R package is available on CRAN. PMID:23758852
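The resampling idea can be sketched as a max-statistic permutation test: permute the response, refit every candidate coding, and compare the observed best statistic with the permutation distribution of the maximum. This generic sketch (the names and the score-test stand-in `fit_stat` are ours) illustrates the principle rather than the CPMCGLM implementation.

```python
import numpy as np

def max_stat_pvalue(X_codings, y, fit_stat, n_perm=999, seed=0):
    """Multiplicity-corrected p-value for the best coding.
    X_codings: list of candidate coded design columns;
    fit_stat(x, y): returns the test statistic for one coding."""
    rng = np.random.default_rng(seed)
    t_obs = max(fit_stat(x, y) for x in X_codings)
    exceed = sum(
        max(fit_stat(x, rng.permutation(y)) for x in X_codings) >= t_obs
        for _ in range(n_perm))
    return (exceed + 1) / (n_perm + 1)
```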
NASA Astrophysics Data System (ADS)
Oyama, Takuro; Ikabata, Yasuhiro; Seino, Junji; Nakai, Hiromi
2017-07-01
This Letter proposes a density functional treatment based on the two-component relativistic scheme at the infinite-order Douglas-Kroll-Hess (IODKH) level. The exchange-correlation energy and potential are calculated using the electron density based on the picture-change corrected density operator transformed by the IODKH method. Numerical assessments indicated that the picture-change uncorrected density functional terms generate significant errors, on the order of hartrees for heavy atoms. The present scheme was found to reproduce the energetics of the four-component treatment with high accuracy.
NASA Astrophysics Data System (ADS)
Shimansky, R. V.; Poleshchuk, A. G.; Korolkov, V. P.; Cherkashin, V. V.
2017-05-01
This paper presents a method for improving the accuracy of a circular laser writing system operating in a polar coordinate system for the fabrication of large-diameter diffractive optical elements, together with results of its use. An algorithm for correcting the positioning errors of a circular laser writing system, developed at the Institute of Automation and Electrometry, SB RAS, is proposed and tested. High-precision synthesized holograms fabricated by this method are described, along with the results of using these elements for testing the 6.5 m diameter aspheric mirror of the James Webb Space Telescope (JWST).
Application of wavelet multi-resolution analysis for correction of seismic acceleration records
NASA Astrophysics Data System (ADS)
Ansari, Anooshiravan; Noorzad, Assadollah; Zare, Mehdi
2007-12-01
During an earthquake, many stations record the ground motion, but only a few of the records can be corrected using conventional high-pass and low-pass filtering methods; the others are identified as highly contaminated by noise and are consequently useless. There are two major problems associated with these noisy records. First, since the signal-to-noise ratio (S/N) is low, it is not possible to discriminate between the original signal and the noise in either the frequency domain or the time domain; consequently, the noise cannot be cancelled using conventional filtering methods. The second problem is the non-stationary character of the noise: in many cases the characteristics of the noise vary over time, and in these situations frequency-domain correction schemes cannot be applied. When correcting acceleration signals contaminated with high-level non-stationary noise, an important question is whether it is possible to estimate the state of the noise in different bands of time and frequency. Wavelet multi-resolution analysis decomposes a signal into different time-frequency components and, besides introducing a suitable criterion for identifying the noise in each component, provides the required mathematical tool for correcting highly noisy acceleration records. In this paper, the characteristics of wavelet de-noising procedures are examined through the correction of selected real and synthetic acceleration time histories. It is concluded that this method provides a very flexible and efficient tool for the correction of very noisy and non-stationary records of ground acceleration. In addition, a two-step correction scheme is proposed for long-period correction of the acceleration records. This method has the advantage of producing stable results in the displacement time history and response spectrum.
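A minimal sketch of the decompose-threshold-reconstruct core, assuming the PyWavelets package, a db8 mother wavelet, and a per-band universal threshold with a MAD noise estimate; the paper's own noise-identification criterion is more elaborate.

```python
import numpy as np
import pywt

def wavelet_denoise(accel, wavelet='db8', level=6):
    """Multi-resolution denoising of an acceleration record:
    decompose, soft-threshold each detail band, reconstruct."""
    coeffs = pywt.wavedec(accel, wavelet, level=level)
    denoised = [coeffs[0]]                          # keep approximation
    for detail in coeffs[1:]:
        sigma = np.median(np.abs(detail)) / 0.6745  # robust noise scale
        thr = sigma * np.sqrt(2.0 * np.log(len(accel)))
        denoised.append(pywt.threshold(detail, thr, mode='soft'))
    return pywt.waverec(denoised, wavelet)
```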
Study of the integration of wind tunnel and computational methods for aerodynamic configurations
NASA Technical Reports Server (NTRS)
Browne, Lindsey E.; Ashby, Dale L.
1989-01-01
A study was conducted to determine the effectiveness of using a low-order panel code to estimate wind tunnel wall corrections. The corrections were found by two computations. The first computation included the test model and the surrounding wind tunnel walls, while in the second computation the wind tunnel walls were removed. The difference between the force and moment coefficients obtained by comparing these two cases allowed the determination of the wall corrections. The technique was verified by matching the test-section, wall-pressure signature from a wind tunnel test with the signature predicted by the panel code. To prove the viability of the technique, two cases were considered. The first was a two-dimensional high-lift wing with a flap that was tested in the 7- by 10-foot wind tunnel at NASA Ames Research Center. The second was a 1/32-scale model of the F/A-18 aircraft which was tested in the low-speed wind tunnel at San Diego State University. The panel code used was PMARC (Panel Method Ames Research Center). Results of this study indicate that the proposed wind tunnel wall correction method is comparable to other methods and that it also inherently includes the corrections due to model blockage and wing lift.
Grayscale imbalance correction in real-time phase measuring profilometry
NASA Astrophysics Data System (ADS)
Zhu, Lin; Cao, Yiping; He, Dawu; Chen, Cheng
2016-10-01
Grayscale imbalance correction in real-time phase measuring profilometry (RPMP) is proposed. In RPMP, sufficient information to reconstruct the 3D shape of the measured object is obtained in one twenty-fourth of a second. Only one color fringe pattern, whose R, G and B channels are coded as three sinusoidal phase-shifting gratings with an equivalent phase shift of 2π/3, is sent to a flash memory on a specialized digital light projector (SDLP). The SDLP then projects the fringe patterns in the R, G and B channels sequentially onto the measured object in one seventy-second of a second, while a monochrome CCD camera captures the corresponding deformed patterns synchronously with the SDLP. Because the deformed patterns from the three color channels are captured at different times, color crosstalk is avoided completely. However, owing to the monochrome CCD camera's different spectral sensitivities to the R, G and B channels, there will be grayscale imbalance among the deformed patterns captured in the respective channels, which may result in increased measurement errors or even failure to reconstruct the 3D shape. A new grayscale imbalance correction method based on least squares is therefore developed. The experimental results verify the feasibility of the proposed method.
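A minimal sketch of the retrieval step under our own assumptions: phase shifts of -2π/3, 0, +2π/3, and a crude gain/offset balancing (each channel normalized by its own mean and standard deviation) standing in for the paper's least-squares correction.

```python
import numpy as np

def phase_from_three_step(I1, I2, I3):
    """Balance the three channel images and retrieve the wrapped phase
    of 2*pi/3 phase-shifted sinusoidal fringes."""
    J = [(I - I.mean()) / (I.std() + 1e-12) for I in (I1, I2, I3)]
    J1, J2, J3 = J                                 # gain/offset balanced
    return np.arctan2(np.sqrt(3.0) * (J1 - J3), 2.0 * J2 - J1 - J3)
```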
Reduction of metal artifacts in x-ray CT images using a convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Yanbo; Chu, Ying; Yu, Hengyong
2017-09-01
Patients often have various metallic implants (e.g., dental fillings, prostheses), causing severe artifacts in x-ray CT images. Although a large number of metal artifact reduction (MAR) methods have been proposed in the past four decades, MAR is still one of the major problems in clinical x-ray CT. In this work, we develop a convolutional neural network (CNN) based MAR framework, which combines information from the original and corrected images to suppress artifacts. Before the MAR, we generate a group of data and train a CNN. First, we numerically simulate various metal artifact cases and build a dataset, which includes metal-free images (used as references), metal-inserted images and images corrected by various MAR methods. Then, ten thousand patches are extracted from the database to train the metal artifact reduction CNN. In the MAR stage, the original image and two corrected images are stacked as a three-channel input image for the CNN, and a CNN image with fewer artifacts is generated. The water-equivalent regions in the CNN image are set to a uniform value to yield a CNN prior, whose forward projections are used to replace the metal-affected projections, followed by FBP reconstruction. Experimental results demonstrate the superior metal artifact reduction capability of the proposed method compared with its competitors.
Removing flicker based on sparse color correspondences in old film restoration
NASA Astrophysics Data System (ADS)
Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran
2018-04-01
Archived film is an indispensable part of the long history of human civilization, and digital restoration of damaged film is now a mainstream trend. In this paper, we propose a sparse-color-correspondence-based technique to remove fading flicker from old films. Our approach, which combines multiple frames to establish a simple correction model, includes three key steps. First, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Second, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction to remove flicker. By combining multiple frames, our method takes the continuity of the input sequence into account, and the experimental results show that it can remove fading flicker efficiently.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, L; Zhu, L; Vedantham, S
2016-06-15
Purpose: The image quality of dedicated cone-beam breast CT (CBBCT) is fundamentally limited by substantial x-ray scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose to suppress x-ray scatter in CBBCT images using a deterministic forward projection model. Methods: We first use the 1st-pass FDK-reconstructed CBBCT images to segment fibroglandular and adipose tissue. Attenuation coefficients are assigned to the two tissues based on the x-ray spectrum used for image acquisition and forward projected to simulate scatter-free primary projections. We estimate the scatter by subtracting the simulated primary projection from the measured projection, and the resultant scatter map is further refined by a Fourier-domain fitting algorithm after discarding untrusted scatter information. The final scatter estimate is subtracted from the measured projection for effective scatter correction. In our implementation, the proposed scatter correction takes 0.5 seconds per projection. The method was evaluated using the overall image spatial non-uniformity (SNU) metric and the contrast-to-noise ratio (CNR) on 5 clinical datasets of BI-RADS 4/5 subjects. Results: For the 5 clinical datasets, our method reduced the SNU from 7.79% to 1.68% in the coronal view and from 6.71% to 3.20% in the sagittal view. The average CNR improved by a factor of 1.38 in the coronal view and 1.26 in the sagittal view. Conclusion: The proposed scatter correction approach requires no additional scans or prior images and uses a deterministic model for efficient calculation. Evaluation with clinical datasets demonstrates the feasibility and stability of the method. These features are attractive for clinical CBBCT and make our method distinct from other approaches. Supported partly by NIH R21EB019597, R21CA134128 and R01CA195512. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
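A minimal sketch of the estimate-and-subtract step, with Gaussian low-pass smoothing standing in for the paper's Fourier-domain fitting (scatter is assumed smooth and non-negative); the array names and smoothing width are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scatter_correct(measured, primary_sim, sigma=20.0):
    """Scatter estimate = low-pass(measured - simulated primary),
    clipped to be non-negative, then subtracted from the measurement."""
    scatter = gaussian_filter(measured - primary_sim, sigma)
    scatter = np.clip(scatter, 0.0, None)
    return measured - scatter
```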
Shidahara, Miho; Thomas, Benjamin A; Okamura, Nobuyuki; Ibaraki, Masanobu; Matsubara, Keisuke; Oyama, Senri; Ishikawa, Yoichi; Watanuki, Shoichi; Iwata, Ren; Furumoto, Shozo; Tashiro, Manabu; Yanai, Kazuhiko; Gonda, Kohsuke; Watabe, Hiroshi
2017-08-01
To suppress the partial volume effect (PVE) in brain PET, many algorithms have been proposed. However, each methodology has different properties owing to its assumptions and algorithms. The aim of this study was to investigate the differences among partial volume correction (PVC) methods for tau and amyloid PET studies. We investigated two of the most commonly used PVC methods, Müller-Gärtner (MG) and geometric transfer matrix (GTM), along with three other methods, for clinical tau and amyloid PET imaging. PET studies of one healthy control (HC) and one Alzheimer's disease (AD) patient were performed with both [18F]THK5351 and [11C]PIB using an Eminence STARGATE scanner (Shimadzu Inc., Kyoto, Japan). All PET images were corrected for PVE by the MG, GTM, Labbé (LABBE), Regional voxel-based (RBV), and Iterative Yang (IY) methods, with segmented or parcellated anatomical information processed by FreeSurfer, derived from individual MR images. The PVC results of the 5 algorithms were compared with the uncorrected data. In regions of high uptake of [18F]THK5351 and [11C]PIB, different PVC methods yielded different SUVRs. The degree of difference between PVE-uncorrected and -corrected data depends not only on the PVC algorithm but also on the type of tracer and the subject's condition. The presented PVC methods are straightforward to implement, but the corrected images require careful interpretation, as different methods result in different levels of recovery.
Singh, Harpreet; Maurya, Raj Kumar; Thakkar, Surbhi
2016-12-01
Complete transposition of teeth is a rather rare phenomenon. After correction of a transposed and malaligned lateral incisor and canine, attainment of appropriate individual antagonistic tooth torque is indispensable, which many orthodontists consider a herculean task. Here, a novel method is proposed that demonstrates the use of a Spec reverse torquing auxiliary as an effective adjunctive aid in conjunction with pre-adjusted edgewise brackets.
Shack-Hartmann wavefront sensor with large dynamic range by adaptive spot search method.
Shinto, Hironobu; Saita, Yusuke; Nomura, Takanori
2016-07-10
A Shack-Hartmann wavefront sensor (SHWFS), which consists of a microlens array and an image sensor, has been used to measure the wavefront aberrations of human eyes. However, a conventional SHWFS has a finite dynamic range that depends on the diameter of each microlens, and the dynamic range cannot easily be expanded without decreasing the spatial resolution. In this study, an adaptive spot search method to expand the dynamic range of an SHWFS is proposed. In the proposed method, spots are searched for with the help of their approximate displacements, measured with low spatial resolution and large dynamic range. With the proposed method, a wavefront can be correctly measured even if a spot moves beyond its detection area. The adaptive spot search is realized by using a special microlens array that generates both spots and discriminable patterns. The proposed method enables the dynamic range of an SHWFS to be expanded with a single shot and a short processing time. The performance of the proposed method is compared with that of a conventional SHWFS in optical experiments, and the dynamic range of the proposed method is quantitatively evaluated by numerical simulations.
A new frequency approach for light flicker evaluation in electric power systems
NASA Astrophysics Data System (ADS)
Feola, Luigi; Langella, Roberto; Testa, Alfredo
2015-12-01
In this paper, a new frequency-domain analytical estimator for light flicker is proposed, one that also takes into account the frequency components neglected by the classical methods in the literature. The proposed analytical solutions apply to any generic stationary signal affected by interharmonic distortion. The estimator is applied to numerous numerical case studies with the goal of showing (i) the correctness of, and the improvement offered by, the analytical approach with respect to other methods in the literature, and (ii) the accuracy of the results compared to those obtained by means of the classical International Electrotechnical Commission (IEC) flickermeter. The usefulness of the proposed analytical approach is that it can be included in signal-processing tools for interharmonic penetration studies for the integration of renewable energy sources in future smart grids.
Zhao, Huaqing; Rebbeck, Timothy R; Mitra, Nandita
2009-12-01
Confounding due to population stratification (PS) arises when differences in both allele and disease frequencies exist in a population of mixed racial/ethnic subpopulations. Genomic control, structured association, principal components analysis (PCA), and multidimensional scaling (MDS) approaches have been proposed to address this bias using genetic markers. However, confounding due to PS can also be due to non-genetic factors. Propensity scores are widely used to address confounding in observational studies but have not been adapted to deal with PS in genetic association studies. We propose a genomic propensity score (GPS) approach to correct for bias due to PS that considers both genetic and non-genetic factors. We compare the GPS method with PCA and MDS using simulation studies. Our results show that GPS can adequately adjust and consistently correct for bias due to PS. Under no/mild, moderate, and severe PS, GPS yielded estimates with bias close to 0 (mean = -0.0044, standard error = 0.0087). Under moderate or severe PS, the GPS method consistently outperforms the PCA method in terms of bias, coverage probability (CP), and type I error. Under moderate PS, the GPS method consistently outperforms the MDS method in terms of CP. PCA maintains relatively high power compared with both the MDS and GPS methods under the simulated situations. GPS and MDS are comparable in terms of statistical properties such as bias, type I error, and power. The GPS method provides a novel and robust tool for obtaining less-biased estimates of genetic associations that can consider both genetic and non-genetic factors. 2009 Wiley-Liss, Inc.
A bias-corrected estimator in multiple imputation for missing data.
Tomita, Hiroaki; Fujisawa, Hironori; Henmi, Masayuki
2018-05-29
Multiple imputation (MI) is one of the most popular methods to deal with missing data, and its use has been rapidly increasing in medical studies. Although MI is rather appealing in practice, since it is possible to use ordinary statistical methods on a complete data set once the missing values are fully imputed, the method of imputation is still problematic. If the missing values are imputed from some parametric model, the validity of the imputation is not necessarily ensured, and the final estimate for a parameter of interest can be biased unless the parametric model is correctly specified. Nonparametric methods have also been proposed for MI, but it is not straightforward to produce imputation values from nonparametrically estimated distributions. In this paper, we propose a new method for MI that obtains a consistent (or asymptotically unbiased) final estimate even if the imputation model is misspecified. The key idea is to use an imputation model from which imputation values are easily produced and to make a proper correction in the likelihood function after imputation, using as a weight the density ratio between the imputation model and the true conditional density function for the missing variable. Although the conditional density must be nonparametrically estimated, it is not used for the imputation. The performance of our method is evaluated by both theory and simulation studies. A real data analysis is also conducted to illustrate our method using the Duke Cardiac Catheterization Coronary Artery Disease Diagnostic Dataset. Copyright © 2018 John Wiley & Sons, Ltd.
Cao, Yanpeng; Tisse, Christel-Loic
2014-02-01
In this Letter, we propose an efficient and accurate solution to remove temperature-dependent nonuniformity effects introduced by the imaging optics. This single-image-based approach computes optics-related fixed pattern noise (FPN) by fitting the derivatives of a correction model to gradient components computed locally on an infrared image. A modified bilateral filtering algorithm is applied to local pixel output variations, so that the refined gradients are most likely caused by the nonuniformity associated with the optics. The estimated bias field is subtracted from the raw infrared imagery to compensate for the intensity variations caused by the optics. The proposed method is fundamentally different from the existing nonuniformity correction (NUC) techniques developed for focal plane arrays (FPAs) and provides an essential image processing functionality for achieving completely shutterless NUC in uncooled long-wave infrared (LWIR) imaging systems.
Automatic cortical segmentation in the developing brain.
Xue, Hui; Srinivasan, Latha; Jiang, Shuzhou; Rutherford, Mary; Edwards, A David; Rueckert, Daniel; Hajnal, Jo V
2007-01-01
The segmentation of neonatal cortex from magnetic resonance (MR) images is much more challenging than the segmentation of cortex in adults. The main reason is the inverted contrast between grey matter (GM) and white matter (WM) that occurs when myelination is incomplete. This causes mislabeled partial volume voxels, especially at the interface between GM and cerebrospinal fluid (CSF). We propose a fully automatic cortical segmentation algorithm, detecting these mislabeled voxels using a knowledge-based approach and correcting errors by adjusting local priors to favor the correct classification. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared to the classic EM scheme. The segmentation algorithm has been tested on 25 neonates with gestational ages ranging from approximately 27 to 45 weeks. Quantitative comparison to the manual segmentation demonstrates good performance of the method (mean Dice similarity: 0.758 ± 0.037 for GM and 0.794 ± 0.078 for WM).
New readout integrated circuit using continuous time fixed pattern noise correction
NASA Astrophysics Data System (ADS)
Dupont, Bertrand; Chammings, G.; Rapellin, G.; Mandier, C.; Tchagaspanian, M.; Dupont, Benoit; Peizerat, A.; Yon, J. J.
2008-04-01
LETI has been involved in IRFPA development since 1978, and its design department (LETI/DCIS) has focused on new ROIC architectures for many years. The trend is to integrate advanced functions into the CMOS design to achieve cost-efficient sensor production. The thermal imaging market today increasingly demands systems with instant-on capability and low power consumption. The purpose of this paper is to present the latest developments in continuous-time fixed pattern noise correction. Several architectures are proposed, some based on hardwired digital processing and some purely analog; both use scene-based algorithms. Moreover, a new method is proposed for the simultaneous correction of pixel offsets and sensitivities. To this end, a new readout integrated circuit architecture has been implemented in 0.18μm CMOS technology. The specification and application of the ROIC are discussed in detail.
75 FR 11502 - Schedule of Water Charges; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-11
DELAWARE RIVER BASIN COMMISSION. 18 CFR Part 410. Schedule of Water Charges; Correction. AGENCY: Delaware River Basin Commission. ACTION: Proposed rule; correction. SUMMARY: This document corrects the... of water charges. This correction clarifies that the amended rates are proposed to take effect in two...
Broadband astigmatism-corrected spectrometer design using a toroidal lens and a special filter
NASA Astrophysics Data System (ADS)
Ge, Xianying; Chen, Siying; Zhang, Yinchao; Chen, He; Guo, Pan; Mu, Taotao; Yang, Jian; Bu, Zhichao
2015-01-01
In this paper, a method to obtain a broadband, astigmatism-corrected spectrometer based on the existing Czerny-Turner design is proposed. The theory of astigmatism correction using a toroidal lens and a special filter is described in detail. Performance comparisons of the modified spectrometer and the traditional spectrometer are also presented. Results show that, with the new design, the RMS spot radius in the sagittal view is one-eightieth of that in the traditional spectrometer over a broad spectral range from 300 to 700 nm, without changing or moving any optical elements in the traditional spectrometer.
Multi-pose facial correction based on Gaussian process with combined kernel function
NASA Astrophysics Data System (ADS)
Shi, Shuyan; Ji, Ruirui; Zhang, Fan
2018-04-01
In order to improve the recognition rate across various poses, this paper proposes a facial correction method based on a Gaussian process that builds a nonlinear regression model between frontal and side faces with a combined kernel function. Face images with horizontal angles from -45° to +45° can be properly corrected to frontal faces. Finally, a Support Vector Machine is employed for face recognition. Experiments on the CAS-PEAL-R1 face database show that the Gaussian process can weaken the influence of pose changes and improve the accuracy of face recognition to a certain extent.
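A minimal sketch of such a regression with scikit-learn, assuming the combined kernel is an RBF (nonlinear) part plus a DotProduct (linear) part plus a white-noise term; the paper's actual kernel combination and features are not specified here, so all names below are illustrative.

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, DotProduct, WhiteKernel

# Combined kernel: nonlinear RBF part + linear part + noise term.
kernel = 1.0 * RBF(length_scale=10.0) + DotProduct() + WhiteKernel(1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

# X_side: side-face feature vectors; Y_front: frontal-face features.
# gp.fit(X_side, Y_front)
# Y_pred = gp.predict(X_side_new)   # corrected (frontalized) features
```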
Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar
Yang, Shouguo; Li, Yong; Zhang, Kunhui; Tang, Weiping
2015-01-01
A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters' outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the estimated data covariance matrix, and by requiring only a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correctness and efficiency of the proposed method are verified by computer simulation results. PMID:26694385
Scene-based nonuniformity correction algorithm based on interframe registration.
Zuo, Chao; Chen, Qian; Gu, Guohua; Sui, Xiubao
2011-06-01
In this paper, we present a simple and effective scene-based nonuniformity correction (NUC) method for infrared focal plane arrays based on interframe registration. The method estimates the global translation between two adjacent frames and minimizes the mean square error between the two properly registered images, so that any two detectors observing the same scene produce the same output value. In this way, the accumulation of registration error is avoided and NUC can be achieved. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and in its ability to capture temporal drifts in the nonuniformity parameters. The performance of the proposed technique is thoroughly studied on infrared image sequences with simulated nonuniformity and on infrared imagery with real nonuniformity. It shows significantly fast and reliable fixed-pattern noise reduction and obtains an effective frame-by-frame adaptive estimation of each detector's gain and offset.
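The per-pixel update can be sketched as an LMS step: once the previous corrected frame has been warped by the estimated global shift, it serves as the target the current corrected frame should match. The learning rate and names are our assumptions; the published algorithm's exact update rule may differ.

```python
import numpy as np

def lms_nuc_step(raw, target, gain, offset, lr=0.05):
    """One LMS update of per-pixel gain/offset. raw: current raw frame;
    target: previous corrected frame registered onto the current one.
    All arrays share one shape; gain/offset are updated in place."""
    corrected = gain * raw + offset
    err = corrected - target                      # same scene, so -> 0
    gain -= lr * err * raw                        # gradient of 0.5*err^2
    offset -= lr * err
    return corrected
```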
Li, Haitao; Ning, Xin; Li, Wenzhuo
2017-03-01
In order to improve the reliability and reduce the power consumption of high-speed BLDC motor systems, this paper presents a model-free adaptive control (MFAC) based position-sensorless drive with only a dc-link current sensor. The initial commutation points are obtained by detecting the back-EMF zero-crossing points and then delaying by 30 electrical degrees. To address the commutation error caused by the low-pass filter (LPF) and other factors, the relationship between the commutation error angle and the dc-link current is analyzed, a corresponding MFAC-based control method is proposed, and the commutation error is corrected by the controller in real time. Both simulation and experimental results show that the proposed correction method achieves the ideal commutation effect over the entire operating speed range. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
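For orientation, the generic compact-form MFAC recursion is sketched below in Python. The controlled output y (a commutation-error indicator derived from the dc-link current) and input u (the commutation-instant shift) are assumptions; the gains are illustrative, not the paper's tuning.

```python
def mfac_step(u, y, u_prev, y_prev, phi, y_ref,
              eta=0.5, mu=1.0, rho=0.6, lam=1.0):
    """One step of generic SISO compact-form MFAC (a sketch, not the
    paper's exact controller). Returns the next input and the updated
    pseudo partial derivative estimate."""
    du = u - u_prev
    # 1) projection-type update of the pseudo partial derivative phi
    phi = phi + eta * du / (mu + du**2) * (y - y_prev - phi * du)
    # 2) control update driving y toward y_ref (zero commutation error)
    u_next = u + rho * phi / (lam + phi**2) * (y_ref - y)
    return u_next, phi
```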
Generic distortion model for metrology under optical microscopes
NASA Astrophysics Data System (ADS)
Liu, Xingjian; Li, Zhongwei; Zhong, Kai; Chao, YuhJin; Miraldo, Pedro; Shi, Yusheng
2018-04-01
For metrology under optical microscopes, lens distortion is the dominant source of error. Previous distortion models and correction methods mostly rely on parametric distortion models, which require a priori knowledge of the microscope's lens system. However, because of the numerous optical elements in a microscope, distortion can hardly be represented by a simple parametric model. In this paper, a generic distortion model considering both symmetric and asymmetric distortions is developed. The model is obtained by using radial basis functions (RBFs) to interpolate the radius and distortion values of symmetric distortions (and the image coordinates and distortion rays of asymmetric distortions). An accurate and easy-to-implement distortion correction method is presented. With the proposed approach, quantitative measurement with better accuracy can be achieved, for example in digital image correlation for deformation measurement under an optical microscope. The proposed technique is verified by both synthetic and real data experiments.
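The symmetric part of such a model reduces to interpolating distortion against radius, as in the sketch below; the calibration values are synthetic and the kernel choice is an assumption. The asymmetric part would be handled analogously, interpolating a 2-D displacement over image coordinates instead of a scalar over radius.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical calibration data: distorted radii and the radial distortion
# measured at them (e.g. from a dot-grid target); values are synthetic.
r_cal = np.linspace(0.05, 1.0, 20)[:, None]
dr_cal = 0.08 * r_cal[:, 0]**3 - 0.02 * r_cal[:, 0]**5

# Symmetric part: 1-D RBF interpolation of distortion versus radius
sym = RBFInterpolator(r_cal, dr_cal, kernel='thin_plate_spline')

def undistort(points, centre):
    """Correct symmetric radial distortion for an (N, 2) array of pixels."""
    v = points - centre
    r = np.maximum(np.linalg.norm(v, axis=1, keepdims=True), 1e-9)
    return points - v / r * sym(r)[:, None]   # pull points back along their radii
```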
Optimized MLAA for quantitative non-TOF PET/MR of the brain
NASA Astrophysics Data System (ADS)
Benoit, Didier; Ladefoged, Claes N.; Rezaei, Ahmadreza; Keller, Sune H.; Andersen, Flemming L.; Højgaard, Liselotte; Hansen, Adam E.; Holm, Søren; Nuyts, Johan
2016-12-01
For quantitative tracer distribution in positron emission tomography, attenuation correction is essential. In a hybrid PET/CT system the CT images serve as a basis for generation of the attenuation map, but in PET/MR the MR images do not have a similarly simple relationship with the attenuation map. Hence attenuation correction in PET/MR systems is more challenging. Typically one of two MR sequences is used: the Dixon or the ultra-short echo time (UTE) technique. However, these sequences have some well-known limitations. In this study, a reconstruction technique based on a modified and optimized non-TOF MLAA is proposed for PET/MR brain imaging. The idea is to tune the parameters of the MLTR by applying information from an attenuation image computed from the UTE sequences and a T1w MR image. In this MLTR algorithm, an {αj} parameter is introduced and optimized in order to drive the algorithm to a final attenuation map most consistent with the emission data. Because non-TOF MLAA is used, a technique to reduce the cross-talk effect is proposed. In this study, the proposed algorithm is compared to common reconstruction methods, namely OSEM using a CT attenuation map, considered as the reference, and OSEM using the Dixon and UTE attenuation maps. To show the robustness and reproducibility of the proposed algorithm, a set of 204 [18F]FDG patients, 35 [11C]PiB patients and 1 [18F]FET patient is used. The results show that by choosing an optimized value of {αj} in MLTR, the proposed algorithm improves on the standard MR-based attenuation correction methods (i.e. OSEM using the Dixon or UTE attenuation maps), and the cross-talk and scale problems are reduced.
Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques.
Hofmann, Matthias; Pichler, Bernd; Schölkopf, Bernhard; Beyer, Thomas
2009-03-01
Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT, however, attenuation correction is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and more recently also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. MR-AC is work-in-progress, with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data.
Real-Time Multi-Target Localization from Unmanned Aerial Vehicles
Wang, Xuan; Liu, Jinghong; Zhou, Qianfei
2016-01-01
In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of the multiple targets are calculated using the homogeneous coordinate transformation. On this basis, two methods that can improve the accuracy of the multi-target localization are proposed: (1) a real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude was 1140 m, and the multi-target localization results were within the range of allowable error. After applying the lens distortion correction method to a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, the CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs that need multi-target geo-location functions. PMID:28029145
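The RLS stage can be illustrated with a plain recursive least squares estimator that fuses repeated geo-location fixes of a static target; the forgetting factor, initialization and measurement model below are generic assumptions, and the paper's dead-reckoning coupling is omitted.

```python
import numpy as np

class RLS:
    """Generic recursive least squares with forgetting factor `lam`
    (a sketch of the filtering stage, not the paper's exact formulation)."""
    def __init__(self, n, lam=0.98):
        self.x = np.zeros(n)          # estimated target coordinates
        self.P = np.eye(n) * 1e3      # large initial uncertainty
        self.lam = lam

    def update(self, z, H=None):
        H = np.eye(len(self.x)) if H is None else H
        S = self.lam * np.eye(len(z)) + H @ self.P @ H.T
        K = self.P @ H.T @ np.linalg.inv(S)        # gain
        self.x = self.x + K @ (z - H @ self.x)     # blend in the new fix
        self.P = (self.P - K @ H @ self.P) / self.lam
        return self.x

# Usage: one noisy (lat, lon) fix per image of the same target
rls = RLS(2)
for z in [np.array([30.001, 120.002]), np.array([29.999, 119.998])]:
    est = rls.update(z)
print(est)
```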
Studer, S; Naef, R; Schärer, P
1997-12-01
Esthetically correct treatment of a localized alveolar ridge defect is a frequent prosthetic challenge. Such defects can be overcome not only by a variety of prosthetic means, but also by several periodontal surgical techniques, notably soft tissue augmentations. Preoperative classification of the localized alveolar ridge defect can be of great use in evaluating the prognosis and the technical difficulties involved. A semiquantitative classification, dependent on the severity of vertical and horizontal dimensional loss, is proposed to supplement the recognized qualitative classification of a ridge defect. Various methods of soft tissue augmentation are evaluated, based on initial volumetric measurements. The roll flap technique is proposed when the problem is related to ridge quality (single-tooth defect with little horizontal and vertical loss). Larger defects in which a volumetric problem must be solved are corrected through the subepithelial connective tissue technique. Additional mucogingival problems (eg, insufficient gingival width, high frenum, gingival scarring, or tattoo) should not be corrected simultaneously with augmentation procedures. In these cases, the onlay transplant technique is favored.
Bjornerud, Atle; Sorensen, A Gregory; Mouridsen, Kim; Emblem, Kyrre E
2011-01-01
We present a novel contrast agent (CA) extravasation-correction method based on analysis of the tissue residue function for assessment of multiple hemodynamic parameters. The method enables semiquantitative determination of the transfer constant and can be used to distinguish between T1- and T2*-dominant extravasation effects, while being insensitive to variations in tissue mean transit time (MTT). Results in 101 patients with confirmed glioma suggest that leakage-corrected absolute cerebral blood volume (CBV) values obtained with the proposed method provide improved overall survival prediction compared with normalized CBV values combined with an established leakage-correction method. Using a standard gradient-echo echo-planar imaging sequence, ∼60% and 10% of tumors with detectable CA extravasation mainly exhibited T1- and T2*-dominant leakage effects, respectively. The remaining 30% of leaky tumors had mixed T1- and T2*-dominant effects. Using an MTT-sensitive correction method, our results show that CBV is underestimated when tumor MTT is significantly longer than MTT in the reference tissue. Furthermore, results from our simulations suggest that the relative contribution of T1- versus T2*-dominant extravasation effects is strongly dependent on the effective transverse relaxivity in the extravascular space and may thus be a potential marker for cellular integrity and tissue structure. PMID:21505483
Multi-Stage Target Tracking with Drift Correction and Position Prediction
NASA Astrophysics Data System (ADS)
Chen, Xin; Ren, Keyan; Hou, Yibin
2018-04-01
Most existing tracking methods struggle to combine accuracy with speed, and do not consider the shifts between sharp and blurred target appearance that often occur. In this paper, we propose a multi-stage tracking framework with two particular modules: position prediction and corrective measure. Tracking is based on a correlation filter, with a corrective-measure module to increase both speed and accuracy. Specifically, a convolutional network is used to handle the blur problem in realistic scenes; its training dataset is augmented with blurred images generated by three blur algorithms. We then propose a position prediction module to reduce the computational cost and make the tracker more robust to fast motion. Experimental results show that our tracking method is more robust than competing methods and more accurate on the benchmark sequences.
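The correlation-filter stage can be sketched with the standard MOSSE formulation in the Fourier domain; this is a textbook filter used here for illustration, not the paper's exact tracker, and preprocessing (windowing, log transform) is omitted for brevity.

```python
import numpy as np

def train_filter(patches, gauss_response, eps=1e-2):
    """MOSSE-style correlation filter: solve H = sum(G F*) / sum(F F*)
    over training patches, with a Gaussian peak as the desired response."""
    G = np.fft.fft2(gauss_response)
    A = sum(G * np.conj(np.fft.fft2(p)) for p in patches)
    B = sum(np.fft.fft2(p) * np.conj(np.fft.fft2(p)) for p in patches)
    return A / (B + eps)                      # eps regularizes empty spectra

def track(H, patch):
    """Apply the filter to a new patch; the response peak is the target."""
    resp = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(resp), resp.shape)
```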
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bechtel Nevada
1998-09-30
This corrective action plan proposes the closure method for the Area 9 Unexploded Ordnance (UXO) Landfill, Corrective Action Unit 453, located at the Tonopah Test Range. The Area 9 UXO Landfill consists of Corrective Action Site No. 09-55-001-0952 and comprises three individual landfill cells designated A9-1, A9-2, and A9-3. The three landfill cells received wastes from daily operations at Area 9 and from range cleanups performed after weapons testing. Cell locations and contents were not well documented due to the unregulated disposal practices commonly associated with early landfill operations. However, site process knowledge indicates that the landfill cells were used for solid waste disposal, including disposal of UXO.
Singh, Anushikha; Dutta, Malay Kishore
2017-12-01
The authentication and integrity verification of medical images is a critical and growing issue for patients in e-health services. Accurate identification of medical images and patient verification is an essential requirement to prevent errors in medical diagnosis. The proposed work presents an imperceptible watermarking system to address the security of medical fundus images for tele-ophthalmology applications and computer-aided automated diagnosis of retinal diseases. In the proposed work, the patient identity is embedded in the fundus image in the singular value decomposition domain with an adaptive quantization parameter, to maintain perceptual transparency for a variety of fundus images, whether healthy or disease-affected. Insertion of the watermark does not affect the automated image-processing diagnosis of retinal objects and pathologies, which ensures uncompromised computer-based diagnosis from the fundus image. The patient ID is correctly recovered from the watermarked fundus image for integrity verification at the diagnosis centre. The proposed watermarking system is tested on a comprehensive database of fundus images and the results are convincing: they indicate that the proposed watermarking method is imperceptible and does not affect computer-vision-based automated diagnosis of retinal diseases. Correct recovery of the patient ID from the watermarked fundus image makes the proposed watermarking system applicable to the authentication of fundus images for computer-aided diagnosis and tele-ophthalmology applications. Copyright © 2017 Elsevier B.V. All rights reserved.
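A common recipe for this kind of SVD-domain embedding is quantization-index modulation of a block's largest singular value, sketched below per watermark bit; the fixed step q stands in for the paper's adaptive quantization parameter, which would be chosen per image.

```python
import numpy as np

def embed_bit(block, bit, q=24.0):
    """Embed one bit into an image block by snapping its largest singular
    value to a quantization cell of the matching parity (generic SVD-QIM
    sketch; the adaptive choice of q is replaced by a fixed assumption)."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    cell = int(s[0] // q)
    if cell % 2 != bit:
        cell += 1                      # move to a cell with the right parity
    s[0] = (cell + 0.5) * q            # cell centre gives a robustness margin
    return U @ np.diag(s) @ Vt

def extract_bit(block, q=24.0):
    s = np.linalg.svd(block, compute_uv=False)
    return int(s[0] // q) % 2          # recover the parity, i.e. the bit
```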
NASA Astrophysics Data System (ADS)
Marchant, T. E.; Joshi, K. D.; Moore, C. J.
2018-03-01
Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values, are two methods proposed to allow heterogeneity-corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used, including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT, and several relevant dose metrics for target and organ-at-risk (OAR) volumes were calculated. Accuracy of CBCT-based dose metrics was determined using an ‘override ratio’ method, where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading-corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, the use of deformed CT leads to a slightly larger standard deviation of dose metric error than shading-corrected CBCT, with more dose metric errors greater than 2% observed (7% versus 1%).
Adaptive optics image restoration based on frame selection and multi-frame blind deconvolution
NASA Astrophysics Data System (ADS)
Tian, Y.; Rao, C. H.; Wei, K.
2008-10-01
Adaptive optics can only partially compensate for image blur caused by atmospheric turbulence, due to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. The appropriate frames, picked out by the frame selection technique, are then deconvolved. No a priori knowledge is required except the positivity constraint. The method has been applied to the restoration of images of celestial bodies observed by the 1.2 m telescope equipped with the 61-element adaptive optics system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.
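Frame selection typically ranks frames by a sharpness metric and keeps the best fraction for deconvolution; the gradient-energy criterion and the 10% keep ratio in the sketch below are generic assumptions, not the paper's stated choices.

```python
import numpy as np

def select_frames(frames, keep=0.1):
    """Rank frames by mean gradient energy (a simple sharpness proxy) and
    return the best fraction `keep` for multi-frame blind deconvolution."""
    def sharpness(f):
        gy, gx = np.gradient(f.astype(float))
        return (gx**2 + gy**2).mean()
    scores = np.array([sharpness(f) for f in frames])
    n = max(1, int(len(frames) * keep))
    best = np.argsort(scores)[::-1][:n]      # indices of the sharpest frames
    return [frames[i] for i in best]
```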
Subaperture correlation based digital adaptive optics for full field optical coherence tomography.
Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A
2013-05-06
This paper proposes a sub-aperture correlation based numerical phase correction method for interferometric full-field imaging systems, provided that the complex object field information can be extracted. The method corrects for wavefront aberration at the pupil/Fourier transform plane without the need for adaptive optics, spatial light modulators (SLMs) or additional cameras. We show that this method does not require knowledge of any system parameters. In a simulation study, we consider a full-field swept-source OCT (FF SSOCT) system to show the working principle of the algorithm. Experimental results are presented for a technical and a biological sample as a proof of principle.
Spatio-thermal depth correction of RGB-D sensors based on Gaussian processes in real-time
NASA Astrophysics Data System (ADS)
Heindl, Christoph; Pönitz, Thomas; Stübl, Gernot; Pichler, Andreas; Scharinger, Josef
2018-04-01
Commodity RGB-D sensors capture color images along with dense pixel-wise depth information in real-time. Typical RGB-D sensors are provided with a factory calibration and exhibit erratic depth readings due to coarse calibration values, ageing and thermal influence effects. This limits their applicability in computer vision and robotics. We propose a novel method to accurately calibrate depth considering spatial and thermal influences jointly. Our work is based on Gaussian Process Regression in a four dimensional Cartesian and thermal domain. We propose to leverage modern GPUs for dense depth map correction in real-time. For reproducibility we make our dataset and source code publicly available.
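The regression itself can be sketched with an off-the-shelf GP over a (u, v, z, T) input: pixel position, raw depth and sensor temperature against the depth error measured on a calibration target. The data below are synthetic and the kernel length scales are assumptions; the paper's GPU-based dense real-time evaluation is out of scope here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
# Hypothetical calibration samples: u, v [px], raw depth z [m], temperature T [C]
X = rng.uniform([0, 0, 0.5, 20], [640, 480, 4.0, 45], size=(500, 4))
err = 0.01 * X[:, 2]**2 + 0.002 * (X[:, 3] - 30)    # synthetic depth error [m]

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=[200, 200, 1.0, 5.0]) + WhiteKernel(1e-4),
    normalize_y=True).fit(X, err)

def correct_depth(u, v, z, T):
    """Subtract the predicted spatio-thermal depth error from a raw reading."""
    return z - gp.predict(np.array([[u, v, z, T]]))[0]
```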
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl
Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
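The core of Pauli blocking in an MC transport loop is a rejection step: a candidate final state is accepted with probability 1 - f(E'), so transitions into occupied states are suppressed. The sketch below shows this generic step; the Fermi-Dirac occupancy is a stand-in for the self-consistently updated distribution the paper's modified e-e algorithm would use.

```python
import numpy as np

rng = np.random.default_rng(2)

def fermi_dirac(E, mu=0.1, kT=0.0259):
    """Equilibrium occupancy (eV units); a placeholder for the simulated
    distribution function updated during the MC run."""
    return 1.0 / (1.0 + np.exp((E - mu) / kT))

def accept_scattering(E_final, occupancy=fermi_dirac):
    """Pauli-blocking rejection: accept a candidate scattering event with
    probability 1 - f(E_final), keeping the distribution below unity."""
    return rng.random() < 1.0 - occupancy(E_final)
```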
NASA Astrophysics Data System (ADS)
Huang, Wen-Min; Mou, Chung-Yu; Chang, Cheng-Hung
2010-02-01
While the scattering phase for several one-dimensional potentials can be derived exactly, less is known for multi-dimensional quantum systems. This work provides a method to extend one-dimensional phase knowledge to multi-dimensional quantization rules. The extension is illustrated with the example of Bogomolny's transfer operator method applied to two quantum wells bounded by step potentials of different heights. This generalized semiclassical method accurately determines the energy spectra of the systems, which indicates the substantial role of the proposed phase correction. Theoretically, the result can be extended to other semiclassical methods, such as the Gutzwiller trace formula, dynamical zeta functions, and the semiclassical Landauer-Büttiker formula. In practice, this recipe enhances the applicability of semiclassical methods to multi-dimensional quantum systems bounded by general soft potentials.
Image registration assessment in radiotherapy image guidance based on control chart monitoring.
Xia, Wenyao; Breen, Stephen L
2018-04-01
Image guidance with cone beam computed tomography in radiotherapy can guarantee the precision and accuracy of patient positioning prior to treatment delivery. During the image guidance process, operators must take considerable effort to evaluate the image guidance quality before correcting a patient's position. This work proposes an image registration assessment method based on control chart monitoring to reduce the effort required of the operator. Using a control chart plotted from the daily registration scores of each patient, the proposed method can quickly detect both alignment errors and image quality inconsistencies. The method therefore provides a clear guideline for operators to identify unacceptable image quality and unacceptable image registration with minimal effort. Experimental results on control charts from a clinical database of 10 patients undergoing prostate radiotherapy demonstrate that the proposed method can quickly identify out-of-control signals and find the special causes of out-of-control registration events.
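The monitoring itself follows the standard individuals-chart recipe: a centre line at the mean of the daily scores and 3-sigma limits with sigma estimated from the average moving range. The sketch below shows this generic construction; the paper's registration scoring metric is not reproduced.

```python
import numpy as np

def control_limits(scores):
    """Individuals (I) control chart limits from daily registration scores:
    sigma is estimated as mean moving range / d2, with d2 = 1.128 for n = 2."""
    scores = np.asarray(scores, float)
    sigma = np.abs(np.diff(scores)).mean() / 1.128
    centre = scores.mean()
    return centre - 3 * sigma, centre, centre + 3 * sigma

def out_of_control(scores):
    """Indices of fractions whose score falls outside the control limits."""
    lo, _, hi = control_limits(scores)
    return [i for i, s in enumerate(scores) if not lo <= s <= hi]

print(out_of_control([0.91, 0.93, 0.92, 0.90, 0.94, 0.61, 0.92]))  # -> [5]
```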
NASA Astrophysics Data System (ADS)
Meyer, Michael; Kalender, Willi A.; Kyriakou, Yiannis
2010-01-01
Scattered radiation is a major source of artifacts in flat detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for correction of scatter artifacts. The presented algorithm combines a convolution method to determine the spatial shape of the scatter intensity distribution with an object-size-dependent scaling of the scatter intensity distributions using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm, which in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, a figure of merit Q of 0.82, 0.76 and 0.77 was reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, a cupping reduction from 10.8% down to 2.1% was achieved. The performance of the correction method has limitations in the case of measurements with non-ideal detectors, imperfect intensity calibration, etc. An iterative approach to overcome most of these limitations is proposed. This approach is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. By this optimization, cupping of the measured water phantom was further reduced down to 0.9%. The algorithm was evaluated on a commercial system including truncated and non-homogeneous clinically relevant objects.
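The convolution step amounts to estimating scatter as a broad, smooth kernel applied to the measured projection, scaled and subtracted. The sketch below shows this generic form; the Gaussian kernel width and the fixed scale stand in for the paper's Monte Carlo derived, object-size-dependent scaling.

```python
import numpy as np
from scipy.signal import fftconvolve

def scatter_correct(projection, kernel_width=40.0, scale=0.15):
    """Convolution-based scatter estimate for one projection (a generic
    sketch; `kernel_width` and `scale` are assumptions, not the paper's
    calibrated, size-dependent values)."""
    y, x = np.mgrid[-64:65, -64:65]
    kernel = np.exp(-(x**2 + y**2) / (2 * kernel_width**2))
    kernel /= kernel.sum()                                # unit-sum smoothing kernel
    scatter = scale * fftconvolve(projection, kernel, mode='same')
    return np.clip(projection - scatter, 1e-6, None)      # keep intensities positive
```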
MR-guided adaptive focusing of therapeutic ultrasound beams in the human head
Marsac, Laurent; Chauvet, Dorian; Larrat, Benoît; Pernot, Mathieu; Robert, B.; Fink, Mathias; Boch, Anne-Laure; Aubry, Jean-François; Tanter, Mickaël
2012-01-01
Purpose: This study aims to demonstrate, using human cadavers, the feasibility of energy-based adaptive focusing of ultrasonic waves using Magnetic Resonance Acoustic Radiation Force Imaging (MR-ARFI) in the framework of non-invasive transcranial High Intensity Focused Ultrasound (HIFU) therapy. Methods: Energy-based adaptive focusing techniques were recently proposed in order to achieve aberration correction. We evaluate this method on a clinical brain HIFU system composed of 512 ultrasonic elements positioned inside a full-body 1.5 T clinical Magnetic Resonance (MR) imaging system. Cadaver heads were mounted onto a clinical Leksell stereotactic frame. The ultrasonic wave intensity at the chosen location was indirectly estimated by the MR system measuring the local tissue displacement induced by the acoustic radiation force of the ultrasound (US) beams. For aberration correction, a set of spatially encoded ultrasonic waves was transmitted from the ultrasonic array and the resulting local displacements were estimated with the MR-ARFI sequence for each emitted beam. A non-iterative inversion process was then performed in order to estimate the spatial phase aberrations induced by the cadaver skull. The procedure was first evaluated and optimized in a calf brain using a numerical aberrator mimicking human skull aberrations. The full method was then demonstrated using a fresh human cadaver head. Results: The corrected beam resulting from the direct inversion process was found to focus at the targeted location with an acoustic intensity 2.2 times higher than the conventional uncorrected beam. In addition, this corrected beam gave an acoustic intensity 1.5 times higher than the focusing pattern obtained with an aberration correction using transcranial acoustic simulation based on X-ray computed tomography (CT) scans. Conclusion: The proposed technique achieved near-optimal focusing in an intact human head for the first time. These findings confirm the strong potential of energy-based adaptive focusing of transcranial ultrasonic beams for clinical applications. PMID:22320825
NASA Astrophysics Data System (ADS)
Darazi, R.; Gouze, A.; Macq, B.
2009-01-01
Reproducing natural, real scenes as we see them in the everyday world is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information must be displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed in which the original left and right images are jointly coded. The main idea is to optimally exploit the correlation between the two images, by designing an efficient transform that reduces the redundancy in the stereo image pair. The approach is inspired by the lifting scheme (LS). The novelty of our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for both lossless and lossy coding. Experimental results show improvements in performance and complexity compared to recently proposed methods.
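The lifting structure guarantees exact invertibility whatever prediction is plugged in, which is what makes the hybrid prediction safe. The sketch below shows one analysis/synthesis step with disparity compensation plus a gain/offset luminance correction; a single global integer disparity is assumed for simplicity, and the paper's optimized prediction filter is omitted.

```python
import numpy as np

def stereo_lifting_analysis(left, right, disparity, gain=1.0, offset=0.0):
    """Predict the right image from the disparity-compensated, luminance-
    corrected left image (predict step), then form a low band (update step)."""
    cols = np.arange(left.shape[1])
    warped = left[:, np.clip(cols - disparity, 0, left.shape[1] - 1)]
    detail = right - (gain * warped + offset)   # residual band
    approx = left + detail / 2.0                # low band
    return approx, detail

def stereo_lifting_synthesis(approx, detail, disparity, gain=1.0, offset=0.0):
    """Exactly invert the two lifting steps to recover the stereo pair."""
    left = approx - detail / 2.0
    cols = np.arange(left.shape[1])
    warped = left[:, np.clip(cols - disparity, 0, left.shape[1] - 1)]
    right = detail + (gain * warped + offset)
    return left, right
```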
NASA Astrophysics Data System (ADS)
Zhang, Linna; Ding, Hongyan; Lin, Ling; Wang, Yimin; Guo, Xin
2017-12-01
A fiber is usually used as a probe in visible and near-infrared diffuse spectral measurement. However, the use of different fiber probes in the same measurement may cause data mismatch problems. Our group has researched the influence of the parameters of the fiber probe, including the aperture angle, on the diffuse spectrum using a modified Monte Carlo model. To eliminate the influence of the aperture angle, we proposed a fitted correction-coefficient equation to correct for its variation over the practical range; however, the limitations of this method were not discussed. In this work, we explored the collection efficiency in different optical environments with the Monte Carlo simulation method and found the suitable conditions, namely weakly absorbing and strongly scattering media, for the proposed correction. Furthermore, we attempted to explain the stability of the collection efficiency under these conditions. This work thus establishes the conditions under which the collection efficiency is applicable. The use of the collection efficiency can help reduce the influence of different measurement systems and is also helpful for model transfer.
Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling
NASA Astrophysics Data System (ADS)
Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang
2018-04-01
Digital Elevation Model (DEM) is one of the most important controlling factors determining the simulation accuracy of hydraulic models. However, the currently available global topographic data is confronted with limitations for application in 2-D hydraulic modeling, mainly due to the existence of vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. Firstly, we employ the global vegetation corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM to include both vegetation height and SRTM vegetation signal. Then, a newly released DEM, removing both vegetation bias and random errors (i.e. Multi-Error Removed DEM), is employed to overcome the limitation of height errors. Last, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficiency of spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of Google Earth platform and Remote Sensing imagery; and (c) removing the positive biases of the raised segment in the river networks based on bed slope to generate the hydraulically corrected DEM. The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in Huifa River Basin (China) is simulated on the original DEM, Bare-Earth DEM, Multi-Error removed DEM, and hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of four different DEMs and favorable results have been obtained on the corrected DEM.
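Step (c) of the HCM, removing the positive biases of raised segments along the extracted streams, can be illustrated very compactly: sampled from upstream to downstream, a river bed profile should never rise, so raised segments can be lowered to the running minimum. This is a simplified reading of the slope-based rule (the stream centreline is assumed to be already extracted and ordered).

```python
import numpy as np

def lower_raised_segments(profile):
    """Enforce a non-increasing elevation profile along a stream line sampled
    upstream-to-downstream, removing positive biases of raised segments
    (a simplified sketch of HCM step (c))."""
    return np.minimum.accumulate(np.asarray(profile, float))

# Example: a raised embankment crossing the channel is flattened out
print(lower_raised_segments([12.0, 11.5, 13.2, 13.0, 10.8, 10.9, 10.2]))
# -> [12.  11.5 11.5 11.5 10.8 10.8 10.2]
```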
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, J; Lasio, G; Chen, S
2015-06-15
Purpose: To develop a CBCT HU correction method using a patient-specific HU to mass density conversion curve, based on a novel image registration and organ mapping method, for head-and-neck radiation therapy. Methods: There are three steps to generate a patient-specific CBCT HU to mass density conversion curve. First, we developed a novel robust image registration method based on sparseness analysis to register the planning CT (PCT) and the CBCT. Second, a novel organ mapping method was developed to transfer the organs-at-risk (OAR) contours from the PCT to the CBCT, and the corresponding mean HU values of each OAR were measured in both the PCT and CBCT volumes. Third, a set of PCT and CBCT HU to mass density conversion curves was created based on the mean HU values of the OARs and the corresponding mass densities of the OARs in the PCT. We then compared our proposed conversion curve with the traditional Catphan phantom based CBCT HU to mass density calibration curve. Both curves were input into the treatment planning system (TPS) for dose calculation. Last, the PTV and OAR doses, DVHs and dose distributions of the CBCT plans were compared to the original treatment plan. Results: One head-and-neck case containing a pair of PCT and CBCT volumes was used. The dose differences between the PCT and CBCT plans using the proposed method are −1.33% for the mean PTV, 0.06% for PTV D95%, and −0.56% for the left neck. The dose differences between plans of the PCT and CBCT corrected using the Catphan-based method are −4.39% for the mean PTV, 4.07% for PTV D95%, and −2.01% for the left neck. Conclusion: The proposed CBCT HU correction method achieves better agreement with the original treatment plan than the traditional Catphan-based calibration method.
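Once the per-patient anchor points are available, the conversion curve itself is just a piecewise-linear lookup from mean OAR HU to the densities assigned in the planning CT, as sketched below; the anchor values are illustrative, not measured data from the study.

```python
import numpy as np

# Hypothetical per-patient anchors: mean CBCT HU of mapped OARs paired with
# the mass density of the same OARs in the planning CT (illustrative values)
cbct_hu = np.array([-950.0, -120.0, 0.0, 60.0, 900.0])   # air, fat, water, muscle, bone
density = np.array([0.001, 0.92, 1.00, 1.06, 1.61])      # g/cm^3

order = np.argsort(cbct_hu)                               # np.interp needs sorted x

def hu_to_density(hu):
    """Patient-specific piecewise-linear CBCT HU -> mass density lookup,
    suitable for import into a TPS as a conversion curve."""
    return np.interp(hu, cbct_hu[order], density[order])

print(hu_to_density([-500.0, 30.0, 400.0]))
```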
A New Shape Description Method Using Angular Radial Transform
NASA Astrophysics Data System (ADS)
Lee, Jong-Min; Kim, Whoi-Yul
Shape is one of the primary low-level image features in content-based image retrieval. In this paper we propose a new shape description method that consists of a rotationally invariant angular radial transform descriptor (IARTD). The IARTD is a feature vector that combines the magnitudes and aligned phases of the angular radial transform (ART) coefficients. A phase correction scheme is employed to produce the aligned phase so that the IARTD is invariant to rotation. The distance between two IARTDs is defined by combining differences in the magnitudes and aligned phases. In an experiment using the MPEG-7 shape dataset, the proposed method outperforms existing methods; the average bull's eye performance (BEP) of the proposed method is 57.69%, while the average BEPs of the invariant Zernike moments descriptor and the traditional ART are 41.64% and 36.51%, respectively.
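The ART coefficients underlying the descriptor come from projecting the shape onto a separable radial/angular basis; rotating the shape by an angle alpha multiplies F[n, m] by a phase exp(-j m alpha), which is exactly what the phase-alignment step exploits. Below is a direct numpy sketch of the standard MPEG-7 ART basis (orders and normalization are the usual conventions; the IARTD alignment itself is not reproduced).

```python
import numpy as np

def art_coefficients(img, n_max=3, m_max=12):
    """ART coefficients of a square grayscale/binary shape image over the
    unit disk: F[n, m] = sum over pixels of conj(V_nm) * f, with
    V_nm = R_n(rho) * exp(j*m*theta)/(2*pi), R_0 = 1, R_n = 2*cos(pi*n*rho)."""
    N = img.shape[0]
    y, x = np.mgrid[0:N, 0:N]
    cx = cy = (N - 1) / 2.0
    rho = np.hypot(x - cx, y - cy) / (N / 2.0)
    theta = np.arctan2(y - cy, x - cx)
    inside = rho <= 1.0                       # restrict support to the unit disk
    F = np.zeros((n_max, m_max), complex)
    for n in range(n_max):
        R = np.ones_like(rho) if n == 0 else 2.0 * np.cos(np.pi * n * rho)
        for m in range(m_max):
            V = R * np.exp(1j * m * theta) / (2.0 * np.pi)
            F[n, m] = np.sum(np.conj(V) * img * inside)
    return F   # |F| is rotation invariant; arg F shifts by -m*alpha under rotation
```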
Searching Remote Homology with Spectral Clustering with Symmetry in Neighborhood Cluster Kernels
Maulik, Ujjwal; Sarkar, Anasua
2013-01-01
Remote homology detection among proteins utilizing only the unlabelled sequences is a central problem in comparative genomics. The existing cluster kernel methods based on neighborhoods and profiles and the Markov clustering algorithms are currently the most popular methods for protein family recognition. The deviation from random walks with inflation, or the dependency on a hard threshold in the similarity measure, in those methods requires an enhancement for homology detection among multi-domain proteins. We propose to combine spectral clustering with neighborhood kernels in Markov similarity for enhancing sensitivity in detecting homology independent of “recent” paralogs. The spectral clustering approach with the new combined local alignment kernels more effectively exploits the unsupervised protein sequences globally, reducing inter-cluster walks. When combined with corrections based on a modified symmetry-based proximity norm deemphasizing outliers, the technique proposed in this article outperforms other state-of-the-art cluster kernels among all twelve implemented kernels. The comparison with the state-of-the-art string and mismatch kernels also shows the superior performance scores provided by the proposed kernels. A similar performance improvement is also found on an existing large dataset. Therefore the proposed spectral clustering framework over combined local alignment kernels with modified symmetry-based correction achieves superior performance for unsupervised remote homolog detection, even in multi-domain and promiscuous-domain proteins from Genolevures database families, with better biological relevance. Source code available upon request. Contact: sarkar@labri.fr. PMID:23457439
NASA Astrophysics Data System (ADS)
Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin
2018-05-01
Temperature is usually treated as a disturbance in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct for the effect of temperature variations. However, temperature can also be considered a constructive parameter that provides detailed chemical information when systematically varied during the measurement. Our group has researched the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method was proposed to improve the prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method was proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compared the prediction performance of PLS models built with the random sampling method and with the proposed methods. The results of the experimental studies showed that the prediction performance was improved by using the proposed methods. Therefore, the MTCS and DTCS methods are promising alternatives for improving prediction accuracy in near-infrared spectral measurement.
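One plausible reading of the MTCS idea is stratified sampling over measured temperature levels, so the calibration set spans the whole temperature range rather than clustering at one level. The sketch below implements that interpretation; the grouping rule and equal per-level share are assumptions, not the paper's exact selection rule.

```python
import numpy as np

def mtcs_select(temps, n_cal, rng=None):
    """Select calibration-sample indices with an (assumed) equal share drawn
    from each measured temperature level, so the calibration set covers the
    full temperature distribution."""
    rng = np.random.default_rng(0) if rng is None else rng
    temps = np.asarray(temps, float)
    idx = np.arange(len(temps))
    levels = np.unique(np.round(temps))          # group samples by temperature level
    per_level = max(1, n_cal // len(levels))
    chosen = []
    for t in levels:
        pool = idx[np.round(temps) == t]
        chosen.extend(rng.choice(pool, size=min(per_level, len(pool)), replace=False))
    return np.array(sorted(chosen))

# Usage: temps measured per sample, pick a 12-sample calibration set
print(mtcs_select(np.repeat([25.0, 30.0, 35.0, 40.0], 10), 12))
```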