Sample records for correction algorithm improved

  1. Geometry Correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network

    NASA Astrophysics Data System (ADS)

    Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao

    2018-03-01

    Aiming at the disadvantages of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed. An adaptive genetic algorithm (AGA) and an RBF neural network are introduced into this algorithm. Combined with the geometry correction principle for UAV remote sensing images, the AGA-RBF algorithm and its solving steps are presented in order to realize geometry correction for UAV remote sensing. The correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network separately with the AGA and the LMS algorithm. Finally, experiments show that the AGA-RBF algorithm has the advantages of high correction accuracy, fast running speed and strong generalization ability.
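
    No implementation is given in the record; as a rough, hypothetical sketch of the RBF half of such a scheme (the adaptive-genetic structure search is omitted, and ordinary least squares stands in for LMS training), one could fit a Gaussian-RBF mapping from distorted image coordinates to reference coordinates using control points:

    ```python
    import numpy as np

    def rbf_design_matrix(points, centers, sigma):
        """Gaussian RBF activations for each point/center pair."""
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def fit_rbf_correction(img_xy, ground_xy, centers, sigma=200.0):
        """Solve for weights W so that Phi(img_xy) @ W ~= ground_xy."""
        phi = rbf_design_matrix(img_xy, centers, sigma)
        W, *_ = np.linalg.lstsq(phi, ground_xy, rcond=None)
        return W

    def apply_rbf_correction(img_xy, centers, W, sigma=200.0):
        return rbf_design_matrix(img_xy, centers, sigma) @ W

    # toy usage with synthetic control points and a smooth distortion
    rng = np.random.default_rng(0)
    img = rng.uniform(0, 1000, size=(40, 2))         # distorted coordinates
    truth = img + 5.0 * np.sin(img / 120.0)          # synthetic ground truth
    centers = img[::4]                               # a subset as RBF centers
    W = fit_rbf_correction(img, truth, centers)
    resid = apply_rbf_correction(img, centers, W) - truth
    print("mean residual (pixels):", float(np.abs(resid).mean()))
    ```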

  2. Improved artificial bee colony algorithm for wavefront sensor-less system in free space optical communication

    NASA Astrophysics Data System (ADS)

    Niu, Chaojun; Han, Xiang'e.

    2015-10-01

    Adaptive optics (AO) technology is an effective way to alleviate the effect of turbulence on free space optical communication (FSO). A new adaptive compensation method can be used without a wavefront sensor. The artificial bee colony (ABC) algorithm is a population-based heuristic evolutionary algorithm inspired by the intelligent foraging behaviour of the honeybee swarm, with the advantages of simplicity, a good convergence rate, robustness and few parameters to set. In this paper, we simulate the application of the improved ABC algorithm to correct the distorted wavefront and prove its effectiveness. We then simulate the application of the ABC algorithm, the differential evolution (DE) algorithm and the stochastic parallel gradient descent (SPGD) algorithm to the FSO system, and analyze their wavefront correction capabilities by comparing the coupling efficiency, the error rate and the intensity fluctuation under different turbulence conditions before and after correction. The results show that the ABC algorithm has a much faster correction speed than the DE algorithm and better correction ability for strong turbulence than the SPGD algorithm. Intensity fluctuation can be effectively reduced in strong turbulence, but less so in weak turbulence.
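
    For context, the SPGD baseline that the paper compares against can be sketched in a few lines; the merit function below is a toy stand-in for whatever a real sensorless AO system would measure (e.g. coupling efficiency), not the paper's simulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    true_phase = rng.normal(0.0, 1.0, size=32)       # unknown aberration

    def measure_metric(control):
        residual = control - true_phase              # residual wavefront
        return np.exp(-np.sum(residual ** 2) / 32.0) # higher is better

    control = np.zeros(32)                           # corrector control vector
    gain, amplitude = 0.8, 0.05
    for _ in range(2000):
        delta = amplitude * rng.choice([-1.0, 1.0], size=32)  # random perturbation
        j_plus = measure_metric(control + delta)     # two perturbed measurements
        j_minus = measure_metric(control - delta)
        control += gain * (j_plus - j_minus) * delta # two-sided SPGD update
    print("final metric:", measure_metric(control))
    ```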

  3. Algorithm Updates for the Fourth SeaWiFS Data Reprocessing

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Patt, Frederick S.; Barnes, Robert A.; Eplee, Robert E., Jr.; Franz, Bryan A.; Robinson, Wayne D.; Feldman, Gene Carl; Bailey, Sean W.

    2003-01-01

    The efforts to improve the data quality for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data products have continued, following the third reprocessing of the global data set in May 2000. Analyses have been ongoing to address all aspects of the processing algorithms, particularly the calibration methodologies, atmospheric correction, and data flagging and masking. All proposed changes were subjected to rigorous testing, evaluation and validation. The results of these activities culminated in the fourth reprocessing, which was completed in July 2002. The algorithm changes, which were implemented for this reprocessing, are described in the chapters of this volume. Chapter 1 presents an overview of the activities leading up to the fourth reprocessing, and summarizes the effects of the changes. Chapter 2 describes the modifications to the on-orbit calibration, specifically the focal plane temperature correction and the temporal dependence. Chapter 3 describes the changes to the vicarious calibration, including the stray light correction to the Marine Optical Buoy (MOBY) data and improved data screening procedures. Chapter 4 describes improvements to the near-infrared (NIR) band correction algorithm. Chapter 5 describes changes to the atmospheric correction and the oceanic property retrieval algorithms, including out-of-band corrections, NIR noise reduction, and handling of unusual conditions. Chapter 6 describes various changes to the flags and masks, to increase the number of valid retrievals, improve the detection of the flag conditions, and add new flags. Chapter 7 describes modifications to the level-1a and level-3 algorithms, to improve the navigation accuracy, correct certain types of spacecraft time anomalies, and correct a binning logic error. Chapter 8 describes the algorithm used to generate the SeaWiFS photosynthetically available radiation (PAR) product. Chapter 9 describes a coupled ocean-atmosphere model, which is used in one of the changes described in Chapter 4. Finally, Chapter 10 describes a comparison of results from the third and fourth reprocessings along the U.S. Northeast coast.

  4. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors, such as exposure time, temperature and amplifier choice, affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity. Calibration techniques often have to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms. Ideally the results will provide better NUC performance, resulting in less residual non-uniformity, as well as reduce the need for recalibration. This dissertation will consider new approaches to nonlinear NUC such as higher-order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms will be compared with common linear non-uniformity correction algorithms. Performance will be compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance will be improved by identifying and replacing bad pixels prior to correction. Two bad pixel identification and replacement techniques will be investigated and compared. Performance will be presented in the form of simulation results as well as before-and-after images taken with short wave infrared cameras. The initial results show, using a third-order polynomial with 16-bit precision, significant improvement over the one- and two-point correction algorithms. All algorithms have been implemented in software with satisfactory results, and the third-order gain equalization non-uniformity correction algorithm has been implemented in hardware.
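
    As a hedged illustration of the calibration-based polynomial idea, not the dissertation's specific gain-equalization algorithm, the sketch below fits a per-pixel cubic that maps raw flat-field readings at several known flux levels onto the array-mean response; the toy sensor model and all names are assumptions:

    ```python
    import numpy as np

    H, W, K = 4, 4, 6                                # tiny array, 6 cal levels
    rng = np.random.default_rng(2)
    flux = np.linspace(0.1, 1.0, K)                  # known calibration inputs
    gain = 1.0 + 0.1 * rng.normal(size=(H, W))       # per-pixel photoresponse
    raw = gain[None] * flux[:, None, None] ** 1.1    # nonlinear, nonuniform data

    target = raw.mean(axis=(1, 2))                   # array-mean response
    coefs = np.empty((4, H, W))
    for i in range(H):
        for j in range(W):
            # per-pixel cubic mapping: raw reading -> uniform target response
            coefs[:, i, j] = np.polyfit(raw[:, i, j], target, deg=3)

    def correct(frame):
        out = np.zeros_like(frame)
        for i in range(frame.shape[0]):
            for j in range(frame.shape[1]):
                out[i, j] = np.polyval(coefs[:, i, j], frame[i, j])
        return out

    test = gain * 0.55 ** 1.1                        # uncalibrated flux level
    c = correct(test)
    print("residual nonuniformity:", float(c.std() / c.mean()))
    ```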

  5. Improved forest change detection with terrain illumination corrected Landsat images

    USDA-ARS's Scientific Manuscript database

    An illumination correction algorithm has been developed to improve the accuracy of forest change detection from Landsat reflectance data. This algorithm is based on an empirical rotation model and was tested on the Landsat imagery pair over Cherokee National Forest, Tennessee, Uinta-Wasatch-Cache N...

  6. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    PubMed

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local optimum problem of traditional particle swarm optimization in the process of estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and does not fall into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared to the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that with the improved entropy minimum algorithm. This algorithm can be applied to the correction of the MR image bias field.
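
    A minimal sketch of an adaptive-inertia PSO follows; the premature-convergence indicator used here (normalized spread of personal-best fitness) is a generic choice and not necessarily the indicator designed in the paper, and the objective is a toy function:

    ```python
    import numpy as np

    def objective(x):                                # toy multimodal objective
        return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0, axis=-1)

    rng = np.random.default_rng(3)
    n, dim = 30, 5
    pos = rng.uniform(-5.0, 5.0, size=(n, dim))
    vel = np.zeros((n, dim))
    pbest = pos.copy()
    pbest_val = objective(pos)

    for it in range(300):
        # premature-convergence indicator: small spread of personal bests
        spread = pbest_val.std() / (abs(pbest_val.mean()) + 1e-12)
        w = 0.9 - 0.5 * min(1.0, spread)             # clustered swarm -> higher inertia
        gbest = pbest[pbest_val.argmin()]
        r1 = rng.random((n, dim))
        r2 = rng.random((n, dim))
        vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
        pos = pos + vel
        val = objective(pos)
        better = val < pbest_val                     # update personal bests
        pbest[better] = pos[better]
        pbest_val[better] = val[better]

    print("best value:", float(pbest_val.min()))
    ```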

  7. Optimizing wavefront-guided corrections for highly aberrated eyes in the presence of registration uncertainty

    PubMed Central

    Shi, Yue; Queener, Hope M.; Marsack, Jason D.; Ravikumar, Ayeswarya; Bedell, Harold E.; Applegate, Raymond A.

    2013-01-01

    Dynamic registration uncertainty of a wavefront-guided correction with respect to underlying wavefront error (WFE) inevitably decreases retinal image quality. A partial correction may improve average retinal image quality and visual acuity in the presence of registration uncertainties. The purpose of this paper is to (a) develop an algorithm to optimize wavefront-guided correction that improves visual acuity given registration uncertainty and (b) test the hypothesis that these corrections provide improved visual performance in the presence of these uncertainties as compared to a full-magnitude correction or a correction by Guirao, Cox, and Williams (2002). A stochastic parallel gradient descent (SPGD) algorithm was used to optimize the partial-magnitude correction for three keratoconic eyes based on measured scleral contact lens movement. Given its high correlation with logMAR acuity, the retinal image quality metric log visual Strehl was used as a predictor of visual acuity. Predicted values of visual acuity with the optimized corrections were validated by regressing measured acuity loss against predicted loss. Measured loss was obtained from normal subjects viewing acuity charts that were degraded by the residual aberrations generated by the movement of the full-magnitude correction, the correction by Guirao, and optimized SPGD correction. Partial-magnitude corrections optimized with an SPGD algorithm provide at least one line improvement of average visual acuity over the full magnitude and the correction by Guirao given the registration uncertainty. This study demonstrates that it is possible to improve the average visual acuity by optimizing wavefront-guided correction in the presence of registration uncertainty. PMID:23757512

  8. Nonuniformity correction for an infrared focal plane array based on diamond search block matching.

    PubMed

    Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian

    2016-05-01

    In scene-based nonuniformity correction algorithms, artificial ghosting and image blurring degrade the correction quality severely. In this paper, an improved algorithm based on the diamond search block matching algorithm and the adaptive learning rate is proposed. First, accurate transform pairs between two adjacent frames are estimated by the diamond search block matching algorithm. Then, based on the error between the corresponding transform pairs, the gradient descent algorithm is applied to update correction parameters. During the process of gradient descent, the local standard deviation and a threshold are utilized to control the learning rate to avoid the accumulation of matching error. Finally, the nonuniformity correction would be realized by a linear model with updated correction parameters. The performance of the proposed algorithm is thoroughly studied with four real infrared image sequences. Experimental results indicate that the proposed algorithm can reduce the nonuniformity with less ghosting artifacts in moving areas and can also overcome the problem of image blurring in static areas.

  9. A 3D inversion for all-space magnetotelluric data with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, Kun

    2017-04-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high frequency range, and the correction is completed within the inversion. The method is an automatic computer processing technique with no additional cost, avoiding extra field work and indoor processing, and gives good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure to the algorithm, improved its computational efficiency, reduced its memory requirements and added topographic and marine factors, so the 3D inversion can run on a general PC with high efficiency and accuracy. All MT data from surface stations, seabed stations and underground stations can be used in the inversion algorithm.

  10. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    NASA Astrophysics Data System (ADS)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media has been proved possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of optimization to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proved significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demand on the dynamic range of detection devices. The proposed method holds potential for applications such as high-resolution imaging in deep tissue.

  11. Solution for the nonuniformity correction of infrared focal plane arrays.

    PubMed

    Zhou, Huixin; Liu, Shangqian; Lai, Rui; Wang, Dabao; Cheng, Yubao

    2005-05-20

    Based on the S-curve model of the detector response of infrared focal plane arrays (IRFPAs), an improved two-point correction algorithm is presented. The algorithm first transforms the nonlinear image data into linear data and then uses the normal two-point algorithm to correct the linear data. The algorithm can effectively overcome the influence of the nonlinearity of the detector's response, and it improves the correction precision and enlarges the dynamic range of the response. A real-time imaging-signal-processing system for IRFPAs based on a digital signal processor and field-programmable gate arrays is also presented. The nonuniformity correction capability of the presented solution is validated by experimental imaging with a 128 x 128 pixel IRFPA camera prototype.
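
    The abstract's two-step idea, linearize first and then apply the classic two-point gain/offset correction, can be sketched as follows; the logistic S-curve and its inverse are assumed forms used purely for illustration:

    ```python
    import numpy as np

    def linearize(y, y_max=1.0, k=6.0):
        """Invert an assumed logistic response y = y_max / (1 + exp(-k*(x - 0.5)))."""
        y = np.clip(y, 1e-6, y_max - 1e-6)
        return 0.5 + np.log(y / (y_max - y)) / k

    def two_point_coeffs(Y1, Y2, T1, T2):
        """Classic two-point NUC from two flat fields at radiances T1 < T2."""
        gain = (T2 - T1) / (Y2 - Y1)                 # per-pixel gain
        offset = T1 - gain * Y1                      # per-pixel offset
        return gain, offset

    # toy focal plane: per-pixel gain/offset inside an S-curve response
    rng = np.random.default_rng(4)
    g = 1.0 + 0.05 * rng.normal(size=(4, 4))
    b = 0.02 * rng.normal(size=(4, 4))
    sensor = lambda x: 1.0 / (1.0 + np.exp(-6.0 * (g * x + b - 0.5)))

    Y1, Y2 = linearize(sensor(0.2)), linearize(sensor(0.8))   # two flat fields
    gain, offset = two_point_coeffs(Y1, Y2, 0.2, 0.8)
    corrected = gain * linearize(sensor(0.5)) + offset        # unseen flux level
    print("residual spread:", float(corrected.std()))         # ~0 after correction
    ```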

  12. Cardiac MRI in mice at 9.4 Tesla with a transmit-receive surface coil and a cardiac-tailored intensity-correction algorithm.

    PubMed

    Sosnovik, David E; Dai, Guangping; Nahrendorf, Matthias; Rosen, Bruce R; Seethamraju, Ravi

    2007-08-01

    To evaluate the use of a transmit-receive surface (TRS) coil and a cardiac-tailored intensity-correction algorithm for cardiac MRI in mice at 9.4 Tesla (9.4T). Fast low-angle shot (FLASH) cines, with and without delays alternating with nutations for tailored excitation (DANTE) tagging, were acquired in 13 mice. An intensity-correction algorithm was developed to compensate for the sensitivity profile of the surface coil, and was tailored to account for the unique distribution of noise and flow artifacts in cardiac MR images. Image quality was extremely high and allowed fine structures such as trabeculations, valve cusps, and coronary arteries to be clearly visualized. The tag lines created with the surface coil were also sharp and clearly visible. Application of the intensity-correction algorithm improved signal intensity, tissue contrast, and image quality even further. Importantly, the cardiac-tailored properties of the correction algorithm prevented noise and flow artifacts from being significantly amplified. The feasibility and value of cardiac MRI in mice with a TRS coil has been demonstrated. In addition, a cardiac-tailored intensity-correction algorithm has been developed and shown to improve image quality even further. The use of these techniques could produce significant potential benefits over a broad range of scanners, coil configurations, and field strengths. (c) 2007 Wiley-Liss, Inc.

  13. Filtering method of star control points for geometric correction of remote sensing image based on RANSAC algorithm

    NASA Astrophysics Data System (ADS)

    Tan, Xiangli; Yang, Jungang; Deng, Xinpu

    2018-04-01

    In the process of geometric correction of remote sensing images, a large number of redundant control points can result in low correction accuracy. In order to solve this problem, a control point filtering algorithm based on RANdom SAmple Consensus (RANSAC) is proposed. The basic idea of the RANSAC algorithm is to use the smallest possible data set to estimate the model parameters and then enlarge this set with consistent data points. In this paper, unlike traditional methods of geometric correction using Ground Control Points (GCPs), simulation experiments are carried out to correct remote sensing images using visible stars as control points. In addition, the accuracy of geometric correction without Star Control Point (SCP) optimization is also shown. The experimental results show that the SCP filtering method based on the RANSAC algorithm greatly improves the accuracy of remote sensing image correction.
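
    A generic RANSAC filter for control-point pairs might look like the sketch below; the affine transform model and the inlier tolerance are illustrative assumptions, not necessarily the paper's choices:

    ```python
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares affine transform mapping src -> dst."""
        A = np.hstack([src, np.ones((len(src), 1))])         # (N, 3)
        M, *_ = np.linalg.lstsq(A, dst, rcond=None)          # (3, 2)
        return M

    def ransac_filter(src, dst, n_iter=500, tol=2.0, rng=None):
        if rng is None:
            rng = np.random.default_rng(5)
        best = np.zeros(len(src), dtype=bool)
        for _ in range(n_iter):
            idx = rng.choice(len(src), size=3, replace=False)  # minimal sample
            M = fit_affine(src[idx], dst[idx])
            pred = np.hstack([src, np.ones((len(src), 1))]) @ M
            inliers = np.linalg.norm(pred - dst, axis=1) < tol
            if inliers.sum() > best.sum():
                best = inliers
        return best, fit_affine(src[best], dst[best])        # refit on consensus set

    rng = np.random.default_rng(5)
    src = rng.uniform(0, 512, size=(60, 2))
    M_true = np.array([[1.01, 0.02], [-0.02, 0.99], [5.0, -3.0]])
    dst = np.hstack([src, np.ones((60, 1))]) @ M_true
    dst[:10] += rng.normal(0, 25, size=(10, 2))              # simulated gross outliers
    inliers, M = ransac_filter(src, dst)
    print("kept", int(inliers.sum()), "of", len(src), "control points")
    ```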

  14. Improved Algorithm For Finite-Field Normal-Basis Multipliers

    NASA Technical Reports Server (NTRS)

    Wang, C. C.

    1989-01-01

    Improved algorithm reduces complexity of calculations that must precede design of Massey-Omura finite-field normal-basis multipliers, used in error-correcting-code equipment and cryptographic devices. Algorithm represents an extension of development reported in "Algorithm To Design Finite-Field Normal-Basis Multipliers" (NPO-17109), NASA Tech Briefs, Vol. 12, No. 5, page 82.

  15. A three-dimensional model-based partial volume correction strategy for gated cardiac mouse PET imaging

    NASA Astrophysics Data System (ADS)

    Dumouchel, Tyler; Thorn, Stephanie; Kordos, Myra; DaSilva, Jean; Beanlands, Rob S. B.; deKemp, Robert A.

    2012-07-01

    Quantification in cardiac mouse positron emission tomography (PET) imaging is limited by the imaging spatial resolution. Spillover of left ventricle (LV) myocardial activity into adjacent organs results in partial volume (PV) losses, leading to underestimation of myocardial activity. A PV correction method was developed to restore the accuracy of the activity distribution for FDG mouse imaging. The PV correction model was based on convolving an LV image estimate with a 3D point spread function. The LV model was described regionally by a five-parameter profile including myocardial, background and blood activities, which were separated into three compartments by the endocardial radius and myocardium wall thickness. The PV correction was tested with digital simulations and a physical 3D mouse LV phantom. In vivo cardiac FDG mouse PET imaging was also performed. Following imaging, the mice were sacrificed and the tracer biodistribution in the LV and liver tissue was measured using a gamma-counter. The PV correction algorithm improved recovery from 50% of the truth to within 5% for the simulated and measured phantom data, and improved image uniformity by 5-13%. The PV correction algorithm improved the mean myocardial LV recovery from 0.56 (0.54) to 1.13 (1.10) without (with) scatter and attenuation corrections. The mean image uniformity was improved from 26% (26%) to 17% (16%) without (with) scatter and attenuation corrections applied. Scatter and attenuation corrections were not observed to significantly impact PV-corrected myocardial recovery or image uniformity. The image-based PV correction algorithm can increase the accuracy of PET image activity and improve the uniformity of the activity distribution in normal mice. The algorithm may be applied using different tracers, in transgenic models that affect myocardial uptake, or in different species, provided there is sufficient image quality and similar contrast between the myocardium and surrounding structures.

  16. An improved non-uniformity correction algorithm and its GPU parallel implementation

    NASA Astrophysics Data System (ADS)

    Cheng, Kuanhong; Zhou, Huixin; Qin, Hanlin; Zhao, Dong; Qian, Kun; Rong, Shenghui

    2018-05-01

    The performance of the SLP-THP based non-uniformity correction algorithm is seriously affected by the result of the SLP filter, which often leads to image blurring and ghosting artifacts. To address this problem, an improved SLP-THP based non-uniformity correction method with a curvature constraint is proposed, putting forward a new way to estimate the spatial low-frequency component. First, the details and contours of the input image are obtained separately by minimizing the local Gaussian curvature and the mean curvature of the image surface. Then, a guided filter is utilized to combine these two parts to obtain the estimate of the spatial low-frequency component. Finally, this SLP component is fed into the SLP-THP method to achieve non-uniformity correction. The performance of the proposed algorithm was verified on several real and simulated infrared image sequences. The experimental results indicate that the proposed algorithm can reduce the non-uniformity without losing detail. A GPU-based parallel implementation that runs 150 times faster than the CPU version is also presented, showing that the proposed algorithm has great potential for real-time application.

  17. Assessment of a Bidirectional Reflectance Distribution Correction of Above-Water and Satellite Water-Leaving Radiance in Coastal Waters

    NASA Technical Reports Server (NTRS)

    Hlaing, Soe; Gilerson, Alexander; Harmel, Tristan; Tonizzo, Alberto; Weidemann, Alan; Arnone, Robert; Ahmed, Samir

    2012-01-01

    Water-leaving radiances, retrieved from in situ or satellite measurements, need to be corrected for the bidirectional properties of the measured light in order to standardize the data and make them comparable with each other. The current operational algorithm for the correction of bidirectional effects from the satellite ocean color data is optimized for typical oceanic waters. However, versions of bidirectional reflectance correction algorithms specifically tuned for typical coastal waters and other case 2 conditions are particularly needed to improve the overall quality of those data. In order to analyze the bidirectional reflectance distribution function (BRDF) of case 2 waters, a dataset of typical remote sensing reflectances was generated through radiative transfer simulations for a large range of viewing and illumination geometries. Based on this simulated dataset, a case 2 water focused remote sensing reflectance model is proposed to correct above-water and satellite water-leaving radiance data for bidirectional effects. The proposed model is first validated with a one year time series of in situ above-water measurements acquired by collocated multispectral and hyperspectral radiometers, which have different viewing geometries installed at the Long Island Sound Coastal Observatory (LISCO). Match-ups and intercomparisons performed on these concurrent measurements show that the proposed algorithm outperforms the algorithm currently in use at all wavelengths, with average improvement of 2.4% over the spectral range. LISCO's time series data have also been used to evaluate improvements in match-up comparisons of Moderate Resolution Imaging Spectroradiometer satellite data when the proposed BRDF correction is used in lieu of the current algorithm. It is shown that the discrepancies between coincident in-situ sea-based and satellite data decreased by 3.15% with the use of the proposed algorithm.

  18. Application and assessment of a robust elastic motion correction algorithm to dynamic MRI.

    PubMed

    Herrmann, K-H; Wurdinger, S; Fischer, D R; Krumbein, I; Schmitt, M; Hermosillo, G; Chaudhuri, K; Krishnan, A; Salganicoff, M; Kaiser, W A; Reichenbach, J R

    2007-01-01

    The purpose of this study was to assess the performance of a new motion correction algorithm. Twenty-five dynamic MR mammography (MRM) data sets and 25 contrast-enhanced three-dimensional peripheral MR angiographic (MRA) data sets which were affected by patient motion of varying severity were selected retrospectively from routine examinations. Anonymized data were registered by a new experimental elastic motion correction algorithm. The algorithm works by computing a similarity measure for the two volumes that takes into account expected signal changes due to the presence of a contrast agent while penalizing other signal changes caused by patient motion. A conjugate gradient method is used to find the best possible set of motion parameters that maximizes the similarity measures across the entire volume. Images before and after correction were visually evaluated and scored by experienced radiologists with respect to reduction of motion, improvement of image quality, disappearance of existing lesions or creation of artifactual lesions. It was found that the correction improves image quality (in 76% of MRM and 96% of MRA cases) and diagnosability (in 60% of MRM and 96% of MRA cases).

  19. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    PubMed

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate lower-order polynomial interferences, a new quantitative calibration algorithm, "Baseline Correction Combined Partial Least Squares" (BCC-PLS), which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into the PLS weight selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirements of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. The BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.
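
    The paper embeds the baseline constraint inside the PLS weight selection; the simplified stand-in below instead projects low-order polynomial baselines out of each spectrum before a conventional PLS fit on synthetic data (using scikit-learn's PLSRegression), which illustrates the idea but is not the BCC-PLS algorithm itself:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def remove_poly_baseline(X, wavenumbers, degree=2):
        """Project polynomial components up to 'degree' out of each row of X."""
        t = (wavenumbers - wavenumbers.mean()) / wavenumbers.std()
        P = np.vander(t, degree + 1)                 # (n_channels, degree+1)
        Q, _ = np.linalg.qr(P)
        return X - (X @ Q) @ Q.T                     # orthogonal projection

    # synthetic spectra: analyte peak plus sloping baselines plus noise
    rng = np.random.default_rng(6)
    wn = np.linspace(800, 1800, 200)
    peak = np.exp(-(wn[None] - 1100.0) ** 2 / 400.0)
    y = rng.uniform(0, 1, size=(50, 1))
    baseline = rng.normal(size=(50, 1)) * (wn[None] / 1800.0)
    X = y * peak + baseline + 0.01 * rng.normal(size=(50, 200))

    pls = PLSRegression(n_components=3)
    pls.fit(remove_poly_baseline(X, wn), y)
    pred = pls.predict(remove_poly_baseline(X, wn))
    print("RMSE:", float(np.sqrt(np.mean((pred - y) ** 2))))
    ```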

  20. Correction of rotational distortion for catheter-based en face OCT and OCT angiography

    PubMed Central

    Ahsen, Osman O.; Lee, Hsiang-Chieh; Giacomelli, Michael G.; Wang, Zhao; Liang, Kaicheng; Tsai, Tsung-Han; Potsaid, Benjamin; Mashimo, Hiroshi; Fujimoto, James G.

    2015-01-01

    We demonstrate a computationally efficient method for correcting the nonuniform rotational distortion (NURD) in catheter-based imaging systems to improve endoscopic en face optical coherence tomography (OCT) and OCT angiography. The method performs nonrigid registration using fiducial markers on the catheter to correct rotational speed variations. Algorithm performance is investigated with an ultrahigh-speed endoscopic OCT system and micromotor catheter. Scan nonuniformity is quantitatively characterized, and artifacts from rotational speed variations are significantly reduced. Furthermore, we present endoscopic en face OCT and OCT angiography images of human gastrointestinal tract in vivo to demonstrate the image quality improvement using the correction algorithm. PMID:25361133

  21. Research on correction algorithm of laser positioning system based on four quadrant detector

    NASA Astrophysics Data System (ADS)

    Gao, Qingsong; Meng, Xiangyong; Qian, Weixian; Cai, Guixia

    2018-02-01

    This paper first introduces the basic principle of the four-quadrant detector, and a laser positioning experiment system is built based on it. In actual applications, a four-quadrant laser positioning system suffers not only interference from background light and detector dark-current noise, but also the influence of random noise, system stability and spot equivalent error, none of which can be ignored, so system calibration and correction are very important. This paper analyzes the various factors contributing to the system positioning error and then proposes an algorithm for correcting it. The results of simulation and experiment show that the modified algorithm can reduce the effect of system error on positioning and improve the positioning accuracy.
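
    For background, the textbook four-quadrant position estimate, which a correction algorithm like the paper's would refine, is a simple sum-and-difference ratio:

    ```python
    # Standard four-quadrant formula (not the paper's corrected algorithm):
    # with quadrant powers A (top-right), B (top-left), C (bottom-left),
    # D (bottom-right), the normalized spot displacement is estimated as:
    def qd_position(A, B, C, D):
        total = A + B + C + D
        x = ((A + D) - (B + C)) / total              # right minus left
        y = ((A + B) - (C + D)) / total              # top minus bottom
        return x, y

    # a correction step like the paper's could then map (x, y) through a
    # calibrated nonlinearity, e.g. a fitted polynomial x_true = p(x_meas)
    print(qd_position(0.30, 0.20, 0.20, 0.30))       # spot shifted to the right
    ```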

  22. Development of a novel three-dimensional deformable mirror with removable influence functions for high precision wavefront correction in adaptive optics system

    NASA Astrophysics Data System (ADS)

    Huang, Lei; Zhou, Chenlu; Gong, Mali; Ma, Xingkun; Bian, Qi

    2016-07-01

    The deformable mirror (DM) is a widely used wavefront corrector in adaptive optics systems, especially in astronomical, imaging and laser optics. A new DM structure, the 3D DM, is proposed, which has removable actuators and can correct different aberrations with different actuator arrangements. A 3D DM consists of several reflection mirrors; every mirror has a single actuator and is independent of the others. Two kinds of actuator arrangement algorithm are compared: a random disturbance algorithm (RDA) and a global arrangement algorithm (GAA). The correction effects of these two algorithms are analyzed and compared through numerical simulation. The simulation results show that a 3D DM with removable actuators can obviously improve the correction effects.

  23. An improved non-uniformity correction algorithm and its hardware implementation on FPGA

    NASA Astrophysics Data System (ADS)

    Rong, Shenghui; Zhou, Huixin; Wen, Zhigang; Qin, Hanlin; Qian, Kun; Cheng, Kuanhong

    2017-09-01

    The non-uniformity of infrared focal plane arrays (IRFPAs) severely degrades infrared image quality, so an effective non-uniformity correction (NUC) algorithm is necessary for an IRFPA imaging and application system. However, traditional scene-based NUC algorithms suffer from image blurring and artificial ghosting, and few effective hardware platforms have been proposed to implement them. Thus, this paper proposes an improved neural-network based NUC algorithm that uses a guided image filter and a projection-based motion detection algorithm. First, the guided image filter is utilized to obtain an accurate desired image and decrease artificial ghosting. Then, the projection-based motion detection algorithm is utilized to determine whether the correction coefficients should be updated or not; in this way the problem of image blurring is overcome. Finally, an FPGA-based hardware design is introduced to realize the proposed NUC algorithm. Real and simulated infrared image sequences are utilized to verify the performance of the proposed algorithm. Experimental results indicate that the proposed NUC algorithm can effectively eliminate the fixed pattern noise with less image blurring and artificial ghosting. The proposed hardware design occupies fewer logic elements in the FPGA and spends fewer clock cycles to process one frame of image.

  4. "ON ALGEBRAIC DECODING OF Q-ARY REED-MULLER AND PRODUCT REED-SOLOMON CODES"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SANTHI, NANDAKISHORE

    We consider a list decoding algorithm recently proposed by Pellikaan-Wu for q-ary Reed-Muller codes RM_q(ℓ, m, n) of length n ≤ q^m when ℓ ≤ q. A simple and easily accessible correctness proof is given which shows that this algorithm achieves a relative error-correction radius of τ ≤ 1 − √(ℓ·q^(m−1)/n). This is an improvement over the proof using the one-point Algebraic-Geometric decoding method given in earlier work. The described algorithm can be adapted to decode product Reed-Solomon codes. We then propose a new low-complexity recursive algebraic decoding algorithm for product Reed-Solomon codes and Reed-Muller codes. This algorithm achieves a relative error-correction radius of τ ≤ ∏_{i=1}^{m} (1 − √(k_i/q)). This algorithm is then proved to outperform the Pellikaan-Wu algorithm in both complexity and error-correction radius over a wide range of code rates.

  25. Assessment of a bidirectional reflectance distribution correction of above-water and satellite water-leaving radiance in coastal waters.

    PubMed

    Hlaing, Soe; Gilerson, Alexander; Harmel, Tristan; Tonizzo, Alberto; Weidemann, Alan; Arnone, Robert; Ahmed, Samir

    2012-01-10

    Water-leaving radiances, retrieved from in situ or satellite measurements, need to be corrected for the bidirectional properties of the measured light in order to standardize the data and make them comparable with each other. The current operational algorithm for the correction of bidirectional effects from the satellite ocean color data is optimized for typical oceanic waters. However, versions of bidirectional reflectance correction algorithms specifically tuned for typical coastal waters and other case 2 conditions are particularly needed to improve the overall quality of those data. In order to analyze the bidirectional reflectance distribution function (BRDF) of case 2 waters, a dataset of typical remote sensing reflectances was generated through radiative transfer simulations for a large range of viewing and illumination geometries. Based on this simulated dataset, a case 2 water focused remote sensing reflectance model is proposed to correct above-water and satellite water-leaving radiance data for bidirectional effects. The proposed model is first validated with a one year time series of in situ above-water measurements acquired by collocated multispectral and hyperspectral radiometers, which have different viewing geometries installed at the Long Island Sound Coastal Observatory (LISCO). Match-ups and intercomparisons performed on these concurrent measurements show that the proposed algorithm outperforms the algorithm currently in use at all wavelengths, with average improvement of 2.4% over the spectral range. LISCO's time series data have also been used to evaluate improvements in match-up comparisons of Moderate Resolution Imaging Spectroradiometer satellite data when the proposed BRDF correction is used in lieu of the current algorithm. It is shown that the discrepancies between coincident in-situ sea-based and satellite data decreased by 3.15% with the use of the proposed algorithm. This confirms the advantages of the proposed model over the current one, demonstrating the need for a specific case 2 water BRDF correction algorithm as well as the feasibility of enhancing performance of current and future satellite ocean color remote sensing missions for monitoring of typical coastal waters. © 2012 Optical Society of America

  26. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    PubMed Central

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813

  27. Improved multivariate polynomial factoring algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P.S.

    1978-10-01

    A new algorithm for factoring multivariate polynomials over the integers based on an algorithm by Wang and Rothschild is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining the leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included.

  28. The SEASAT altimeter wet tropospheric range correction revisited

    NASA Technical Reports Server (NTRS)

    Tapley, D. B.; Lundberg, J. B.; Born, G. H.

    1984-01-01

    An expanded set of radiosonde observations was used to calculate the wet tropospheric range correction for the brightness temperature measurements of the SEASAT scanning multichannel microwave radiometer (SMMR). The accuracy of the conventional algorithm for the wet tropospheric range correction was evaluated. On the basis of the expanded observational data set, the algorithm was found to have a bias of about 1.0 cm and a standard deviation of 2.8 cm. In order to improve the algorithm, exact linear, quadratic and logarithmic relationships between brightness temperatures and range corrections were determined. Various combinations of measurement parameters were used to reduce the standard deviation between SEASAT SMMR and radiosonde observations to about 2.1 cm. The performance of the various range correction formulas is compared in a table.

  29. Improved neural network based scene-adaptive nonuniformity correction method for infrared focal plane arrays.

    PubMed

    Lai, Rui; Yang, Yin-tang; Zhou, Duan; Li, Yue-jin

    2008-08-20

    An improved scene-adaptive nonuniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPAs) is proposed. This method simultaneously estimates the infrared detectors' parameters and eliminates the fixed pattern noise (FPN) caused by nonuniformity using a neural network (NN) approach. In the learning process of neuron parameter estimation, the traditional LMS algorithm is replaced with the newly presented variable step size (VSS) normalized least-mean-square (NLMS) adaptive filtering algorithm, which yields faster convergence, smaller misadjustment and lower computational cost. In addition, a new NN structure is designed to estimate the desired target value, which improves the calibration precision considerably. The proposed NUC method achieves high correction performance, as validated quantitatively by experiments with a simulated testing sequence and a real infrared image sequence.
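
    A toy VSS-NLMS weight update is sketched below; the specific variable step-size rule is an illustrative assumption rather than the one presented in the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    true_w = np.array([0.5, -0.3, 0.8, 0.1])         # unknown system to identify
    w = np.zeros(4)
    mu_max, mu_min, alpha = 1.0, 0.05, 0.95
    p = 0.0                                          # smoothed error power
    for _ in range(3000):
        x = rng.normal(size=4)                       # input (neighborhood) vector
        d = true_w @ x + 0.01 * rng.normal()         # desired target value
        e = d - w @ x                                # a-priori error
        p = alpha * p + (1 - alpha) * e * e          # track error power
        mu = np.clip(p / (p + 0.01), mu_min, mu_max) # variable step size:
        w += mu * e * x / (x @ x + 1e-8)             # large early, small near convergence
    print("weight error:", float(np.linalg.norm(w - true_w)))
    ```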

  30. Hysteresis modeling of magnetic shape memory alloy actuator based on Krasnosel'skii-Pokrovskii model.

    PubMed

    Zhou, Miaolei; Wang, Shoubin; Gao, Wei

    2013-01-01

    As a new type of intelligent material, magnetic shape memory alloy (MSMA) performs well in actuator manufacturing applications. Compared with traditional actuators, an MSMA actuator has advantages such as fast response and large deformation; however, the hysteresis nonlinearity of the MSMA actuator restricts further improvement of its control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least squares estimation algorithm are proposed. In order to demonstrate the validity of the proposed modeling approach, simulation experiments are performed with the improved gradient correction algorithm and the variable step-size recursive least squares estimation algorithm, respectively. Simulation results for both identification algorithms demonstrate that the proposed modeling approach can establish an effective and accurate hysteresis model for the MSMA actuator, providing a foundation for improving its control precision.

  31. Hysteresis Modeling of Magnetic Shape Memory Alloy Actuator Based on Krasnosel'skii-Pokrovskii Model

    PubMed Central

    Wang, Shoubin; Gao, Wei

    2013-01-01

    As a new type of intelligent material, magnetic shape memory alloy (MSMA) performs well in actuator manufacturing applications. Compared with traditional actuators, an MSMA actuator has advantages such as fast response and large deformation; however, the hysteresis nonlinearity of the MSMA actuator restricts further improvement of its control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least squares estimation algorithm are proposed. In order to demonstrate the validity of the proposed modeling approach, simulation experiments are performed with the improved gradient correction algorithm and the variable step-size recursive least squares estimation algorithm, respectively. Simulation results for both identification algorithms demonstrate that the proposed modeling approach can establish an effective and accurate hysteresis model for the MSMA actuator, providing a foundation for improving its control precision. PMID:23737730

  32. An Enhanced MWR-Based Wet Tropospheric Correction for Sentinel-3: Inheritance from Past ESA Altimetry Missions

    NASA Astrophysics Data System (ADS)

    Lazaro, Clara; Fernandes, Joanna M.

    2015-12-01

    The GNSS-derived Path Delay (GPD) and the Data Combination (DComb) algorithms were developed by the University of Porto (U.Porto), in the scope of different projects funded by ESA, to compute a continuous and improved wet tropospheric correction (WTC) for use in satellite altimetry. Both algorithms are mission independent and are based on a linear space-time objective analysis procedure that combines various wet path delay data sources. A new algorithm that gets the best of each aforementioned algorithm (GNSS-derived Path Delay Plus, GPD+) has been developed at U.Porto in the scope of the SL_cci project, where the use of datasets that are consistent and stable in time is of major importance. The algorithm has been applied to the eight main altimetric missions (TOPEX/Poseidon, Jason-1, Jason-2, ERS-1, ERS-2, Envisat, CryoSat-2 and SARAL). The upcoming Sentinel-3 possesses a two-channel on-board radiometer similar to those deployed on ERS-1/2 and Envisat. Consequently, fine-tuning the GPD+ algorithm to these missions' datasets will enrich it by increasing its capability to deal quickly with Sentinel-3 data. Foreseeing that the computation of an improved MWR-based WTC for use with Sentinel-3 data will be required, this study focuses on the results obtained for the ERS-1/2 and Envisat missions, which are expected to give insight into the computation of this correction for the upcoming ESA altimetric mission. The various WTC corrections available for each mission (in general, the original correction derived from the on-board MWR, the model correction and the one derived from GPD+) are inter-compared, either directly or using various sea level anomaly variance statistical analyses. Results show that the GPD+ algorithm is efficient in generating global and continuous datasets, corrected for land and ice contamination and for spurious measurements of instrumental origin, with significant impacts on all ESA missions.

  33. Investigation of optical current transformer signal processing method based on an improved Kalman algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Yan; Ge, Jin-ming; Zhang, Guo-qing; Yu, Wen-bin; Liu, Rui-tong; Fan, Wei; Yang, Ying-xuan

    2018-01-01

    This paper explores the problem of signal processing in optical current transformers (OCTs). Based on the noise characteristics of OCTs, such as overlapping signal and noise frequency bands, low signal-to-noise ratios, and difficulty in acquiring statistical features of the noise power, an improved standard Kalman filtering algorithm is proposed for direct current (DC) signal processing. The state-space model of the OCT DC measurement system is first established, and mixed noise is then handled by adding it to the measurement and state parameters. According to the minimum mean squared error criterion, the state prediction and update equations of the improved Kalman algorithm are deduced from the established model. An improved central difference Kalman filter is proposed for alternating current (AC) signal processing, which improves the sampling strategy and the treatment of colored noise. Real-time estimation and correction of noise are achieved by designing recursive AC and DC noise filters. Experimental results show that the improved signal processing algorithms have a good filtering effect on AC and DC signals with the mixed noise of the OCT. Furthermore, the proposed algorithm achieves real-time correction of noise during the OCT filtering process.
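
    As a generic illustration of the filtering idea only (the paper's state-space model for the OCT measurement chain is more elaborate), a minimal scalar Kalman filter for a noisy DC level looks like this:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    true_dc = 2.0
    z = true_dc + 0.3 * rng.normal(size=500)         # noisy DC measurements

    x, P = 0.0, 1.0                                  # state estimate, covariance
    Q, R = 1e-6, 0.09                                # process / measurement noise
    for zk in z:
        P = P + Q                                    # predict (static DC state)
        K = P / (P + R)                              # Kalman gain
        x = x + K * (zk - x)                         # measurement update
        P = (1 - K) * P
    print("estimate:", x)
    ```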

  34. The whole space three-dimensional magnetotelluric inversion algorithm with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, K.

    2016-12-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high frequency range, and the correction is completed within the inversion. The method is an automatic computer processing technique with no additional cost, avoiding extra field work and indoor processing, and gives good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure to the algorithm, improved its computational efficiency, reduced its memory requirements and added topographic and marine factors, so the 3D inversion can run on a general PC with high efficiency and accuracy. All MT data from surface stations, seabed stations and underground stations can be used in the inversion algorithm. A verification and application example of the 3D inversion algorithm is shown in Figure 1. The comparison in Figure 1 shows that the inversion model reflects all the anomalous bodies and the terrain clearly regardless of the data type (impedance, tipper, or impedance and tipper), and that the resolution of the bodies' boundaries can be improved by using tipper data. The algorithm is very effective for inversion with topography, and so is very useful for studies of the continental shelf with continuous exploration across land, marine and underground settings. The three-dimensional electrical model of the ore zone reflects basic information on the strata, rocks and structure. Although it cannot indicate the ore body position directly, important clues for prospecting are provided by delineation of the diorite pluton uplift range. The test results show that high-quality data processing and an efficient inversion method for electromagnetic data are an important guarantee for porphyry ore exploration.

  35. A Novel Grid SINS/DVL Integrated Navigation Algorithm for Marine Application

    PubMed Central

    Kang, Yingyao; Zhao, Lin; Cheng, Jianhua; Fan, Xiaoliang

    2018-01-01

    Integrated navigation algorithms under the grid frame have been proposed based on the Kalman filter (KF) to solve the problem of navigation in some special regions. However, in the existing study of grid strapdown inertial navigation system (SINS)/Doppler velocity log (DVL) integrated navigation algorithms, the Earth models of the filter dynamic model and the SINS mechanization are not unified. Besides, traditional integrated systems with the KF based correction scheme are susceptible to measurement errors, which would decrease the accuracy and robustness of the system. In this paper, an adaptive robust Kalman filter (ARKF) based hybrid-correction grid SINS/DVL integrated navigation algorithm is designed with the unified reference ellipsoid Earth model to improve the navigation accuracy in middle-high latitude regions for marine application. Firstly, to unify the Earth models, the mechanization of grid SINS is introduced and the error equations are derived based on the same reference ellipsoid Earth model. Then, a more accurate grid SINS/DVL filter model is designed according to the new error equations. Finally, a hybrid-correction scheme based on the ARKF is proposed to resist the effect of measurement errors. Simulation and experiment results show that, compared with the traditional algorithms, the proposed navigation algorithm can effectively improve the navigation performance in middle-high latitude regions by the unified Earth models and the ARKF based hybrid-correction scheme. PMID:29373549

  36. Improved algorithm for computerized detection and quantification of pulmonary emphysema at high-resolution computed tomography (HRCT)

    NASA Astrophysics Data System (ADS)

    Tylen, Ulf; Friman, Ola; Borga, Magnus; Angelhed, Jan-Erik

    2001-05-01

    Emphysema is characterized by destruction of lung tissue with the development of small or large holes within the lung. These areas have Hounsfield unit (HU) values approaching -1000. It is possible to detect and quantify such areas using a simple density mask technique. However, the edge-enhancement reconstruction algorithm, gravity, and motion of the heart and vessels during scanning cause artefacts. The purpose of our work was to construct an algorithm that detects such image artefacts and corrects them. The first step applies inverse filtering to the image, removing much of the effect of the edge-enhancement reconstruction algorithm. The next step computes the antero-posterior density gradient caused by gravity and corrects for it. Motion artefacts are corrected in a third step by normalized averaging, thresholding and region growing. Twenty healthy volunteers were investigated, 10 with slight emphysema and 10 without. Using the simple density mask technique it was not possible to separate persons with disease from those without. Our algorithm improved the separation of the two groups considerably. It needs further refinement, but may form a basis for further development of methods for computerized diagnosis and quantification of emphysema by HRCT.
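
    The simple density-mask baseline that the paper improves on can be stated in a few lines; the -950 HU threshold and the synthetic data are illustrative assumptions:

    ```python
    import numpy as np

    def emphysema_index(hu_volume, lung_mask, threshold=-950):
        """Fraction of lung voxels below an HU threshold (plain density mask)."""
        lung = hu_volume[lung_mask]
        return float((lung < threshold).mean())

    rng = np.random.default_rng(9)
    vol = rng.normal(-860, 60, size=(8, 64, 64))     # synthetic lung HU values
    mask = np.ones(vol.shape, dtype=bool)            # stand-in lung segmentation
    print("emphysema index:", emphysema_index(vol, mask))
    ```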

  37. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems

    PubMed Central

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
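
    The coarse-alignment stage described above can be illustrated with a generic FFT-based integer-pixel offset estimate (a sketch under assumed inputs, not the paper's implementation):

    ```python
    import numpy as np

    def integer_offset(img_a, img_b):
        """Integer-pixel shift of img_a relative to img_b via cross-correlation."""
        A = np.fft.fft2(img_a - img_a.mean())
        B = np.fft.fft2(img_b - img_b.mean())
        xcorr = np.fft.ifft2(A * np.conj(B)).real    # circular cross-correlation
        dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
        # map wrapped indices to signed shifts
        if dy > img_a.shape[0] // 2: dy -= img_a.shape[0]
        if dx > img_a.shape[1] // 2: dx -= img_a.shape[1]
        return dy, dx

    rng = np.random.default_rng(10)
    base = rng.normal(size=(64, 64))
    shifted = np.roll(base, (3, -5), axis=(0, 1))
    print(integer_offset(shifted, base))             # expect (3, -5)
    ```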

  38. Simulation of co-phase error correction of optical multi-aperture imaging system based on stochastic parallel gradient descent algorithm

    NASA Astrophysics Data System (ADS)

    He, Xiaojun; Ma, Haotong; Luo, Chuanxin

    2016-10-01

    The optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system; the difficulty lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with current methods, the SPGD method avoids having to detect the co-phase error directly. This paper analyzes the influence of piston error and tilt error on image quality in a double-aperture imaging system, introduces the basic principle of the SPGD algorithm, and discusses the influence of the algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced; an adaptive gain coefficient can solve this problem appropriately. These results can provide a theoretical reference for co-phase error correction in multi-aperture imaging systems.

  39. Validation of the Thematic Mapper radiometric and geometric correction algorithms

    NASA Technical Reports Server (NTRS)

    Fischel, D.

    1984-01-01

    The radiometric and geometric correction algorithms for the Thematic Mapper are critical to subsequent successful information extraction. Earlier Landsat scanners, known as Multispectral Scanners, produce imagery which exhibits striping due to mismatching of detector gains and biases. The Thematic Mapper exhibits the same phenomenon at three levels: detector-to-detector, scan-to-scan, and multiscan striping. The cause of these variations has been traced to variations in the dark current of the detectors. An alternative formulation has been tested and shown to be very satisfactory. Unfortunately, the Thematic Mapper detectors exhibit saturation effects while viewing extensive cloud areas, which are not easily corrected. The geometric correction algorithm has been shown to be remarkably reliable. Only minor and modest improvements are indicated and shown to be effective.

  20. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short-wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating-point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing across multiple data sets provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
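
    A higher-order polynomial NUC amounts to fitting, for every pixel, a polynomial that maps raw counts back to a known flat-field flux. The sketch below assumes a stack of uniform-illumination frames at known levels; all names and numbers are illustrative.

    ```python
    import numpy as np

    def fit_polynomial_nuc(raw_stack, levels, order=3):
        """Per-pixel polynomial fit: raw counts -> true flux.
        raw_stack: (L, H, W) frames at L known uniform levels."""
        L, H, W = raw_stack.shape
        x = raw_stack.reshape(L, -1)
        coeffs = np.empty((order + 1, H * W))
        for p in range(H * W):                 # per-pixel fit
            coeffs[:, p] = np.polyfit(x[:, p], levels, order)
        return coeffs.reshape(order + 1, H, W)

    def apply_nuc(frame, coeffs):
        """Evaluate each pixel's polynomial (Horner's scheme)."""
        out = np.zeros_like(frame, dtype=float)
        for c in coeffs:
            out = out * frame + c
        return out

    # Demo: mismatched gains plus a mild quadratic nonlinearity
    rng = np.random.default_rng(1)
    levels = np.linspace(0.1, 1.0, 8)
    gain = rng.uniform(0.8, 1.2, (4, 4))
    resp = gain * levels[:, None, None]
    raw = resp + 0.05 * resp ** 2
    c = fit_polynomial_nuc(raw, levels)
    print(np.abs(apply_nuc(raw[5], c) - levels[5]).max())  # small residual
    ```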

  1. Correction of WindScat Scatterometric Measurements by Combining with AMSR Radiometric Data

    NASA Technical Reports Server (NTRS)

    Song, S.; Moore, R. K.

    1996-01-01

    The SeaWinds scatterometer on the Advanced Earth Observing Satellite-2 (ADEOS-2) will determine surface wind vectors by measuring the radar cross section. Multiple measurements will be made at different points in a wind-vector cell. When dense clouds and rain are present, the signal will be attenuated, thereby giving erroneous results for the wind. This report describes algorithms that use the Advanced Microwave Scanning Radiometer (AMSR) on ADEOS-2 to correct for the attenuation. One can determine attenuation from a radiometer measurement based on the excess brightness temperature measured, i.e., the difference between the total measured brightness temperature and the contribution from surface emission. A major problem that the algorithm must address is determining the surface contribution. Two basic approaches were developed for this: one uses the scattering coefficient measured along with the brightness temperature, and the other uses the brightness temperature alone. For both methods, best results occur if the wind from the preceding wind-vector cell can be used as an input to the algorithm. In the method based on the scattering coefficient, the wind direction from the preceding cell is needed; in the method using brightness temperature alone, the wind speed. If neither is available, the algorithm can still work, but the corrections will be less accurate. Both correction methods require iterative solutions. Simulations show that the algorithms make significant improvements in the measured scattering coefficient and thus in the retrieved wind vector. For stratiform rains, the errors without correction can be quite large, so the correction makes a major improvement. For systems of separated convective cells, the initial error is smaller and the correction, although about the same percentage, has a smaller effect.
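
    The excess-brightness-temperature idea can be illustrated with a single-layer radiative-transfer inversion. The model below, with an assumed effective atmospheric temperature, is a simplification for illustration only; the report's algorithms also iterate on the surface emission term.

    ```python
    import numpy as np

    def attenuation_from_excess_tb(tb_measured, tb_surface, t_atm=275.0):
        """Invert Tb = Tb_s * e^-tau + T_atm * (1 - e^-tau) for the optical
        depth tau, then return the two-way loss applied to sigma0 (in dB)."""
        transmittance = (t_atm - tb_measured) / (t_atm - tb_surface)
        tau = -np.log(np.clip(transmittance, 1e-6, 1.0))
        two_way_loss_db = 10.0 * np.log10(np.exp(-2.0 * tau))
        return tau, two_way_loss_db

    tau, loss = attenuation_from_excess_tb(tb_measured=180.0, tb_surface=150.0)
    print(f"tau = {tau:.3f}, sigma0 correction = {-loss:.2f} dB")
    ```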

  2. Diagnostic Performance of a Novel Coronary CT Angiography Algorithm: Prospective Multicenter Validation of an Intracycle CT Motion Correction Algorithm for Diagnostic Accuracy.

    PubMed

    Andreini, Daniele; Lin, Fay Y; Rizvi, Asim; Cho, Iksung; Heo, Ran; Pontone, Gianluca; Bartorelli, Antonio L; Mushtaq, Saima; Villines, Todd C; Carrascosa, Patricia; Choi, Byoung Wook; Bloom, Stephen; Wei, Han; Xing, Yan; Gebow, Dan; Gransar, Heidi; Chang, Hyuk-Jae; Leipsic, Jonathon; Min, James K

    2018-06-01

    Motion artifact can reduce the diagnostic accuracy of coronary CT angiography (CCTA) for coronary artery disease (CAD). The purpose of this study was to compare the diagnostic performance of an algorithm dedicated to correcting coronary motion artifact with the performance of standard reconstruction methods in a prospective international multicenter study. Patients referred for clinically indicated invasive coronary angiography (ICA) for suspected CAD prospectively underwent an investigational CCTA examination free from heart rate-lowering medications before they underwent ICA. Blinded core laboratory interpretations of motion-corrected and standard reconstructions for obstructive CAD (≥ 50% stenosis) were compared with ICA findings. Segments unevaluable owing to artifact were considered obstructive. The primary endpoint was per-subject diagnostic accuracy of the intracycle motion correction algorithm for obstructive CAD found at ICA. Among 230 patients who underwent CCTA with the motion correction algorithm and standard reconstruction, 92 (40.0%) had obstructive CAD on the basis of ICA findings. At a mean heart rate of 68.0 ± 11.7 beats/min, the motion correction algorithm reduced the number of nondiagnostic scans compared with standard reconstruction (20.4% vs 34.8%; p < 0.001). Diagnostic accuracy for obstructive CAD with the motion correction algorithm (62%; 95% CI, 56-68%) was not significantly different from that of standard reconstruction on a per-subject basis (59%; 95% CI, 53-66%; p = 0.28) but was superior on a per-vessel basis: 77% (95% CI, 74-80%) versus 72% (95% CI, 69-75%) (p = 0.02). The motion correction algorithm was superior in subgroups of patients with severely obstructive (≥ 70%) stenosis, heart rate ≥ 70 beats/min, and vessels in the atrioventricular groove. The motion correction algorithm studied reduces artifacts and improves diagnostic performance for obstructive CAD on a per-vessel basis and in selected subgroups on a per-subject basis.

  3. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.

  4. Pentacam Scheimpflug quantitative imaging of the crystalline lens and intraocular lens.

    PubMed

    Rosales, Patricia; Marcos, Susana

    2009-05-01

    To implement geometrical and optical distortion correction methods for anterior segment Scheimpflug images obtained with a commercially available system (Pentacam, Oculus Optikgeräte GmbH). Ray tracing algorithms were implemented to obtain corrected ocular surface geometry from the original images captured by the Pentacam's CCD camera. As details of the optical layout were not fully provided by the manufacturer, an iterative procedure (based on imaging of calibrated spheres) was developed to estimate the camera lens specifications. The correction procedure was tested on Scheimpflug images of a physical water-cell model eye (with a polymethylmethacrylate cornea and a commercial IOL of known dimensions) and of a normal human eye previously measured with a geometrically and optically distortion-corrected Scheimpflug camera (Topcon SL-45 [Topcon Medical Systems Inc] from the Vrije University, Amsterdam, Holland). Uncorrected Scheimpflug images show flatter surfaces and thinner lenses than in reality. The application of geometrical and optical distortion correction algorithms improves the accuracy of the estimated anterior lens radii of curvature by 30% to 40% and of the estimated posterior lens radii by 50% to 100%. The average error in the retrieved radii was 0.37 and 0.46 mm for the anterior and posterior lens radii of curvature, respectively, and 0.048 mm for lens thickness. The Pentacam Scheimpflug system can be used to obtain quantitative information on the geometry of the crystalline lens, provided that geometrical and optical distortion correction algorithms are applied, within the accuracy of state-of-the-art phakometry and biometry. The techniques could improve with exact knowledge of the technical specifications of the instrument, improved edge detection algorithms, consideration of aspheric and non-rotationally symmetrical surfaces, and introduction of a crystalline lens gradient index.

  5. Bidirectional Contrast agent leakage correction of dynamic susceptibility contrast (DSC)-MRI improves cerebral blood volume estimation and survival prediction in recurrent glioblastoma treated with bevacizumab.

    PubMed

    Leu, Kevin; Boxerman, Jerrold L; Lai, Albert; Nghiemphu, Phioanh L; Pope, Whitney B; Cloughesy, Timothy F; Ellingson, Benjamin M

    2016-11-01

    To evaluate a leakage correction algorithm for T1 and T2* artifacts arising from contrast agent extravasation in dynamic susceptibility contrast magnetic resonance imaging (DSC-MRI) that accounts for bidirectional contrast agent flux, and to compare relative cerebral blood volume (rCBV) estimates and overall survival (OS) stratification from this model with those from the unidirectional and uncorrected models in patients with recurrent glioblastoma (GBM). We determined median rCBV within contrast-enhancing tumor before and after bevacizumab treatment in patients (75 scans on 1.5T, 19 scans on 3.0T) with recurrent GBM without leakage correction and with application of the unidirectional and bidirectional leakage correction algorithms to determine whether rCBV stratifies OS. Decreased post-bevacizumab rCBV from baseline using the bidirectional leakage correction algorithm significantly correlated with longer OS (Cox, P = 0.01), whereas rCBV change using the unidirectional model (P = 0.43) or the uncorrected rCBV values (P = 0.28) did not. Estimates of rCBV computed with the two leakage correction algorithms differed on average by 14.9%. Accounting for T1 and T2* leakage contamination in DSC-MRI using a two-compartment, bidirectional rather than unidirectional exchange model might improve post-bevacizumab survival stratification in patients with recurrent GBM. J. Magn. Reson. Imaging 2016;44:1229-1237. © 2016 International Society for Magnetic Resonance in Medicine.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.

    Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered as a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiations, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with breast tissue equivalent phantom and calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: image without scatter correction, image with scatter correction using the pinhole-array interpolation method, and image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of the background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of background DE calcification signals using scatter-uncorrected data was reduced by 58% with scatter-corrected data by the algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When applying the proposed algorithmic scatter correction to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method in scatter correction for DEDM; moreover, this method is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated using a 5-cm-thick phantom with calcifications and homogeneous background. The method should be tested on structured backgrounds to more accurately gauge effectiveness.

  7. Model-based sensor-less wavefront aberration correction in optical coherence tomography.

    PubMed

    Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel

    2015-12-15

    Several sensor-less wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method takes advantage of the well-known NEWUOA optimization algorithm and utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method significantly outperforms the NEWUOA method. The DONE algorithm is tested on OCT images and shows a significantly improved image quality.
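
    A conventional coordinate search is easy to sketch: optimize one aberration coefficient at a time, keeping whichever candidate value maximizes the signal metric. The Gaussian metric below is a stand-in for the OCT signal; all names are illustrative.

    ```python
    import numpy as np

    def coordinate_search(metric, n_modes, candidates, sweeps=3):
        """Sensor-less correction: per mode, scan candidate coefficients and
        keep the one that maximizes the metric; repeat for a few sweeps."""
        coeffs = np.zeros(n_modes)
        for _ in range(sweeps):
            for m in range(n_modes):
                coeffs[m] = max(candidates,
                                key=lambda c: metric(np.concatenate(
                                    [coeffs[:m], [c], coeffs[m + 1:]])))
        return coeffs

    true_aberration = np.array([0.4, -0.6, 0.2])
    metric = lambda c: np.exp(-np.sum((c + true_aberration) ** 2))
    print(coordinate_search(metric, 3, np.linspace(-1, 1, 21)))
    # approximately -true_aberration
    ```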

  8. ECHO: A reference-free short-read error correction algorithm

    PubMed Central

    Kao, Wei-Chun; Chan, Andrew H.; Song, Yun S.

    2011-01-01

    Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short-reads, without the need of a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters whose optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO is able to improve the accuracy of previous error-correction methods by severalfold to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. Also, it is shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth. PMID:21482625

  9. Simultaneous quaternion estimation (QUEST) and bias determination

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1989-01-01

    Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm minimizing Wahba's loss function are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.

  10. Atmospheric correction over case 2 waters with an iterative fitting algorithm: relative humidity effects.

    PubMed

    Land, P E; Haigh, J D

    1997-12-20

    In algorithms for the atmospheric correction of visible and near-IR satellite observations of the Earth's surface, it is generally assumed that the spectral variation of aerosol optical depth is characterized by an Ångström power law or similar dependence. In an iterative fitting algorithm for atmospheric correction of ocean color imagery over case 2 waters, this assumption leads to an inability to retrieve the aerosol type and causes spectral effects actually produced by the water constituents to be attributed to aerosol spectral variations. An improvement to this algorithm is described in which the spectral variation of optical depth is calculated as a function of aerosol type and relative humidity, and an attempt is made to retrieve the relative humidity in addition to aerosol type. The aerosol is treated as a mixture of aerosol components (e.g., soot), rather than of aerosol types (e.g., urban). We demonstrate the improvement over the previous method by using simulated case 1 and case 2 Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, although the retrieval of relative humidity was not successful.

  11. Coarse Alignment Technology on Moving base for SINS Based on the Improved Quaternion Filter Algorithm.

    PubMed

    Zhang, Tao; Zhu, Yongyun; Zhou, Feng; Yan, Yaxiong; Tong, Jinwu

    2017-06-17

    Initial alignment of the strapdown inertial navigation system (SINS) is intended to determine the initial attitude matrix in a short time with certain accuracy. The alignment accuracy of the quaternion filter algorithm is remarkable, but its convergence rate is slow. To solve this problem, this paper proposes an improved quaternion filter algorithm for faster initial alignment based on the error model of the quaternion filter algorithm. The improved quaternion filter algorithm constructs the K matrix based on the principle of the optimal quaternion algorithm, and rebuilds the measurement model to contain acceleration and velocity errors, making the convergence rate faster. A Doppler velocity log (DVL) provides the reference velocity for the improved quaternion filter alignment algorithm. In order to demonstrate the performance of the improved quaternion filter algorithm in the field, a turntable experiment and a vehicle test were carried out. The results of the experiments show that the convergence rate of the proposed improved quaternion filter is faster than that of the traditional quaternion filter algorithm. In addition, the improved quaternion filter algorithm also demonstrates advantages in terms of correctness, effectiveness, and practicability.
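
    The "optimal quaternion algorithm" underlying the K-matrix construction is typically Davenport's q-method. Below is a minimal sketch in a scalar-last quaternion convention, assuming weighted unit-vector pairs; the paper's extended measurement model with velocity and acceleration errors is not reproduced.

    ```python
    import numpy as np

    def davenport_q(body_vecs, ref_vecs, weights):
        """Quaternion (x, y, z, w) minimizing Wahba's loss: the dominant
        eigenvector of Davenport's K matrix."""
        B = sum(a * np.outer(b, r)
                for a, b, r in zip(weights, body_vecs, ref_vecs))
        sigma = np.trace(B)
        z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
        K = np.zeros((4, 4))
        K[:3, :3] = B + B.T - sigma * np.eye(3)
        K[:3, 3] = K[3, :3] = z
        K[3, 3] = sigma
        evals, evecs = np.linalg.eigh(K)      # eigenvalues in ascending order
        return evecs[:, -1]

    def attitude_matrix(q):
        """Rotation mapping reference vectors into the body frame."""
        x, y, z, w = q
        v = np.array([x, y, z])
        vx = np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])
        return (w**2 - v @ v) * np.eye(3) + 2 * np.outer(v, v) - 2 * w * vx

    rng = np.random.default_rng(2)
    A_true = attitude_matrix([0.0, 0.0, np.sin(0.4), np.cos(0.4)])
    refs = [v / np.linalg.norm(v) for v in rng.standard_normal((3, 3))]
    bodys = [A_true @ r for r in refs]
    q = davenport_q(bodys, refs, [1.0, 1.0, 1.0])
    print(np.allclose(attitude_matrix(q), A_true))   # True (q and -q coincide)
    ```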

  12. Fast automatic correction of motion artifacts in shoulder MRI

    NASA Astrophysics Data System (ADS)

    Manduca, Armando; McGee, Kiaran P.; Welch, Edward B.; Felmlee, Joel P.; Ehman, Richard L.

    2001-07-01

    The ability to correct certain types of MR images for motion artifacts from the raw data alone, by iterative optimization of an image quality measure, has recently been demonstrated. In the first study on a large data set of clinical images, we showed that such an autocorrection technique significantly improved the quality of clinical rotator cuff images, and performed almost as well as navigator echo correction while never degrading an image. One major criticism of such techniques is that they are computationally intensive, and reports of the processing time required have ranged from a few minutes to tens of minutes per slice. In this paper we describe a variety of improvements to our algorithm as well as approaches to correct sets of adjacent slices efficiently. The resulting algorithm is able to correct 256×256×20 clinical shoulder data sets for motion at an effective rate of 1 second per image on a standard commercial workstation. Future improvements in processor speeds and/or the use of specialized hardware will translate directly to corresponding reductions in this calculation time.
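
    This style of autocorrection can be sketched as a search over candidate motion parameters, scoring each by an image metric computed after applying the corresponding k-space phase correction. The single-shift motion model, the gradient-entropy metric, and all names below are illustrative simplifications.

    ```python
    import numpy as np

    def gradient_entropy(img):
        """Lower when the image is sharper (gradients concentrated at edges)."""
        gy, gx = np.gradient(img)
        g = np.hypot(gy, gx).ravel() + 1e-12
        p = g / g.sum()
        return -np.sum(p * np.log(p))

    def autocorrect_rows(kspace, rows, shifts):
        """Try candidate in-plane shifts for the given k-space rows (applied
        as phase ramps) and keep the one minimizing the entropy metric."""
        freq = np.fft.fftfreq(kspace.shape[1])
        best_shift, best_metric = None, np.inf
        for s in shifts:
            k = kspace.copy()
            k[rows, :] *= np.exp(2j * np.pi * freq * s)  # undo an s-pixel shift
            m = gradient_entropy(np.abs(np.fft.ifft2(k)))
            if m < best_metric:
                best_shift, best_metric = s, m
        return best_shift

    # Demo: the second half of k-space was acquired after a 4-pixel motion
    img = np.zeros((64, 64)); img[20:40, 25:35] = 1.0
    k = np.fft.fft2(img)
    rows = np.arange(32, 64)
    k[rows, :] *= np.exp(-2j * np.pi * np.fft.fftfreq(64) * 4)
    print(autocorrect_rows(k, rows, shifts=range(-8, 9)))   # -> 4
    ```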

  13. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed in order to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which could correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons that are produced within an item are created with the same energy distribution. This report will discuss the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.

  14. A simplified procedure for correcting both errors and erasures of a Reed-Solomon code using the Euclidean algorithm

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.

    1987-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
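
    The decoder's skeleton (run the extended Euclidean algorithm from the modified initial condition and stop once the remainder degree drops below the correction bound) can be sketched compactly. For brevity the sketch below works over GF(2), storing polynomials as Python integers; an actual RS decoder performs the same iteration over GF(2^m), starting from the Forney syndrome and erasure locator polynomials as described above.

    ```python
    def deg(p):                    # degree of a GF(2) polynomial (int bitmask)
        return p.bit_length() - 1

    def gf2_mul(a, b):
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            b >>= 1
        return r

    def gf2_divmod(a, b):
        q = 0
        while deg(a) >= deg(b) >= 0:
            shift = deg(a) - deg(b)
            q ^= 1 << shift
            a ^= b << shift
        return q, a

    def key_equation_euclid(syndrome, two_t):
        """Extended Euclid on (x^{2t}, S(x)); stop when deg(remainder) < t.
        Returns (locator-like, evaluator-like) polynomials."""
        r_prev, r = 1 << two_t, syndrome
        t_prev, t = 0, 1
        while deg(r) >= two_t // 2:
            q, rem = gf2_divmod(r_prev, r)
            r_prev, r = r, rem
            t_prev, t = t, t_prev ^ gf2_mul(q, t)  # subtraction is XOR in GF(2)
        return t, r

    loc, ev = key_equation_euclid(0b1010, two_t=4)   # S(x) = x^3 + x
    print(bin(loc), bin(ev))
    ```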

  15. Atmospheric correction of SeaWiFS imagery for turbid coastal and inland waters.

    PubMed

    Ruddick, K G; Ovidio, F; Rijkeboer, M

    2000-02-20

    The standard SeaWiFS atmospheric correction algorithm, designed for open ocean water, has been extended for use over turbid coastal and inland waters. Failure of the standard algorithm over turbid waters can be attributed to invalid assumptions of zero water-leaving radiance for the near-infrared bands at 765 and 865 nm. In the present study these assumptions are replaced by the assumptions of spatial homogeneity of the 765:865-nm ratios for aerosol reflectance and for water-leaving reflectance. These two ratios are imposed as calibration parameters after inspection of the Rayleigh-corrected reflectance scatterplot. The performance of the new algorithm is demonstrated for imagery of Belgian coastal waters and yields physically realistic water-leaving radiance spectra. A preliminary comparison with in situ radiance spectra for the Dutch Lake Markermeer shows significant improvement over the standard atmospheric correction algorithm. An analysis is made of the sensitivity of results to the choice of calibration parameters, and perspectives for application of the method to other sensors are briefly discussed.
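
    With the two homogeneity assumptions, splitting the Rayleigh-corrected NIR signal into aerosol and water parts is a per-pixel 2x2 linear solve. The ratio values below are illustrative placeholders for the scene-calibrated parameters.

    ```python
    def split_nir(rho_rc_765, rho_rc_865, eps_aerosol, alpha_water):
        """Solve  rho_rc(765) = eps*rho_a(865) + alpha*rho_w(865)
                  rho_rc(865) =     rho_a(865) +       rho_w(865)
        for the aerosol and water-leaving reflectances at 865 nm."""
        det = eps_aerosol - alpha_water
        rho_a_865 = (rho_rc_765 - alpha_water * rho_rc_865) / det
        rho_w_865 = (eps_aerosol * rho_rc_865 - rho_rc_765) / det
        return rho_a_865, rho_w_865

    # Illustrative numbers only; both ratios are calibrated per scene
    print(split_nir(0.015, 0.012, eps_aerosol=1.06, alpha_water=1.9))
    ```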

  16. Improvement of dose calculation in radiation therapy due to metal artifact correction using the augmented likelihood image reconstruction.

    PubMed

    Ziemann, Christian; Stille, Maik; Cremers, Florian; Buzug, Thorsten M; Rades, Dirk

    2018-04-17

    Metal artifacts caused by high-density implants lead to incorrectly reconstructed Hounsfield units in computed tomography images. This can result in a loss of accuracy in dose calculation in radiation therapy. This study investigates the potential of the metal artifact reduction algorithms, Augmented Likelihood Image Reconstruction and linear interpolation, in improving dose calculation in the presence of metal artifacts. In order to simulate a pelvis with a double-sided total endoprosthesis, a polymethylmethacrylate phantom was equipped with two steel bars. Artifacts were reduced by applying the Augmented Likelihood Image Reconstruction, a linear interpolation, and a manual correction approach. Using the treatment planning system Eclipse™, identical planning target volumes for an idealized prostate as well as structures for bladder and rectum were defined in corrected and noncorrected images. Volumetric modulated arc therapy plans have been created with double arc rotations with and without avoidance sectors that mask out the prosthesis. The irradiation plans were analyzed for variations in the dose distribution and their homogeneity. Dosimetric measurements were performed using isocentric positioned ionization chambers. Irradiation plans based on images containing artifacts lead to a dose error in the isocenter of up to 8.4%. Corrections with the Augmented Likelihood Image Reconstruction reduce this dose error to 2.7%, corrections with linear interpolation to 3.2%, and manual artifact correction to 4.1%. When applying artifact correction, the dose homogeneity was slightly improved for all investigated methods. Furthermore, the calculated mean doses are higher for rectum and bladder if avoidance sectors are applied. Streaking artifacts cause an imprecise dose calculation within irradiation plans. Using a metal artifact correction algorithm, the planning accuracy can be significantly improved. Best results were accomplished using the Augmented Likelihood Image Reconstruction algorithm. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  17. Corner detection and sorting method based on improved Harris algorithm in camera calibration

    NASA Astrophysics Data System (ADS)

    Xiao, Ying; Wang, Yonghong; Dan, Xizuo; Huang, Anqi; Hu, Yue; Yang, Lianxiang

    2016-11-01

    In the traditional Harris corner detection algorithm, the threshold used to eliminate false corners is selected manually. In order to detect corners automatically, an improved algorithm combining Harris with the circular boundary theory of corners is proposed in this paper. After accurate corner coordinates are detected using the Harris and Forstner algorithms, false corners within the chessboard pattern of the calibration plate are eliminated automatically using circular boundary theory. Moreover, a corner sorting method based on an improved calibration plate is proposed to eliminate false background corners and sort the remaining corners in order. Experimental results show that the proposed algorithms can eliminate all false corners and sort the remaining corners correctly and automatically.
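
    The underlying Harris response is standard and easy to sketch with SciPy; the automatic-threshold and circular-boundary steps of the paper are not reproduced here.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def harris_response(img, sigma=1.0, k=0.04):
        """R = det(M) - k*trace(M)^2 from the smoothed structure tensor M."""
        Iy, Ix = np.gradient(img.astype(float))
        Sxx = gaussian_filter(Ix * Ix, sigma)
        Syy = gaussian_filter(Iy * Iy, sigma)
        Sxy = gaussian_filter(Ix * Iy, sigma)
        return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

    # A bright square on black: the strongest responses sit at its corners
    img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
    R = harris_response(img)
    print(np.unravel_index(np.argmax(R), R.shape))   # near a square corner
    ```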

  18. Quantitative Electron Probe Microanalysis: State of the Art

    NASA Technical Reports Server (NTRS)

    Carpenter, P. K.

    2005-01-01

    Quantitative electron-probe microanalysis (EPMA) has improved due to better instrument design and X-ray correction methods. Design improvement of the electron column and X-ray spectrometer has resulted in measurement precision that exceeds analytical accuracy. Wavelength-dispersive spectrometers (WDS) have layered-dispersive diffraction crystals with improved light-element sensitivity. Newer energy-dispersive spectrometers (EDS) have Si-drift detector elements, thin-window designs, and digital processing electronics with X-ray throughput approaching that of WDS systems. Using these systems, digital X-ray mapping coupled with spectrum imaging is a powerful compositional mapping tool. Improvements in analytical accuracy are due to better X-ray correction algorithms, mass absorption coefficient data sets, and analysis methods for complex geometries. ZAF algorithms have been superseded by Phi(pz) algorithms that better model the depth distribution of primary X-ray production. Complex thin-film and particle geometries are treated using Phi(pz) algorithms, and results agree well with Monte Carlo simulations. For geological materials, X-ray absorption dominates the corrections and depends on the accuracy of mass absorption coefficient (MAC) data sets. However, few MACs have been experimentally measured, and the use of fitted coefficients continues due to the general success of the analytical technique. A polynomial formulation of the Bence-Albee alpha-factor technique, calibrated using Phi(pz) algorithms, is used to critically evaluate accuracy issues; accuracy approaches 2% relative and is limited by measurement precision for ideal cases, but for many elements the analytical accuracy is unproven. The EPMA technique has improved to the point where it is frequently used instead of the petrographic microscope for reconnaissance work. Examples of stagnant research areas are: WDS detector design, characterization of calibration standards, and the need for more complete treatment of the continuum X-ray fluorescence correction.

  19. Optimization-based scatter estimation using primary modulation for computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao

    Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. The simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of the CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy in scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and the fourth-generation CT.

  20. Correction factor for ablation algorithms used in corneal refractive surgery with gaussian-profile beams

    NASA Astrophysics Data System (ADS)

    Jimenez, Jose Ramón; González Anera, Rosario; Jiménez del Barco, Luis; Hita, Enrique; Pérez-Ocón, Francisco

    2005-01-01

    We provide a correction factor to be added to ablation algorithms when a Gaussian beam is used in photorefractive laser surgery. This factor, which quantifies the effect of pulse overlapping, depends on beam radius and spot size. We also deduce the expected post-surgical corneal radius and asphericity when this factor is considered. Data on 141 eyes operated on with LASIK (laser in situ keratomileusis) using a Gaussian profile show that the discrepancy between experimental and expected data on corneal power is significantly lower when the correction factor is used. For an effective improvement of post-surgical visual quality, this factor should be applied in ablation algorithms that do not consider the effects of pulse overlapping with a Gaussian beam.

  1. Speed and accuracy improvements in FLAASH atmospheric correction of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael W.; Berk, Alexander; Bernstein, Lawrence S.; Lee, Jamine; Fox, Marsha

    2012-11-01

    Remotely sensed spectral imagery of the earth's surface can be used to fullest advantage when the influence of the atmosphere has been removed and the measurements are reduced to units of reflectance. Here, we provide a comprehensive summary of the latest version of the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes atmospheric correction algorithm. We also report some new code improvements for speed and accuracy. These include the re-working of the original algorithm in C-language code parallelized with message passing interface and containing a new radiative transfer look-up table option, which replaces executions of the MODTRAN model. With computation times now as low as ~10 s per image per computer processor, automated, real-time, on-board atmospheric correction of hyper- and multi-spectral imagery is within reach.

  2. Automated interferometric synthetic aperture microscopy and computational adaptive optics for improved optical coherence tomography.

    PubMed

    Xu, Yang; Liu, Yuan-Zhi; Boppart, Stephen A; Carney, P Scott

    2016-03-10

    In this paper, we introduce an algorithm framework for the automation of interferometric synthetic aperture microscopy (ISAM). Under this framework, common processing steps such as dispersion correction, Fourier domain resampling, and computational adaptive optics aberration correction are carried out as metrics-assisted parameter search problems. We further present the results of this algorithm applied to phantom and biological tissue samples and compare with manually adjusted results. With the automated algorithm, near-optimal ISAM reconstruction can be achieved without manual adjustment. At the same time, the technical barrier for the nonexpert using ISAM imaging is also significantly lowered.
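
    A metrics-assisted parameter search can be illustrated on dispersion correction alone: scan a second-order dispersion coefficient, apply it as a quadratic spectral phase, and keep the value that maximizes a sharpness metric. The metric and the 1-D demo below are illustrative, not the authors' implementation.

    ```python
    import numpy as np

    def sharpness(a_line):
        """Peakier intensity profiles score higher."""
        i = np.abs(a_line) ** 2
        i /= i.sum()
        return np.sum(i ** 2)

    def search_dispersion(spectrum, k, a2_grid):
        """Scan the 2nd-order dispersion coefficient a2; the correction is a
        quadratic spectral phase exp(-i * a2 * (k - k0)^2)."""
        k0 = k.mean()
        scores = [sharpness(np.fft.ifft(
                      spectrum * np.exp(-1j * a2 * (k - k0) ** 2)))
                  for a2 in a2_grid]
        return a2_grid[int(np.argmax(scores))]

    # Demo: a single reflector blurred by a known dispersion of 30.0
    k = np.linspace(-1, 1, 512)
    blurred = np.exp(2j * np.pi * 40 * k) * np.exp(1j * 30.0 * (k - k.mean()) ** 2)
    print(search_dispersion(blurred, k, np.linspace(0, 60, 121)))   # ~30.0
    ```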

  3. High-order flux correction/finite difference schemes for strand grids

    NASA Astrophysics Data System (ADS)

    Katz, Aaron; Work, Dalon

    2015-02-01

    A novel high-order method combining unstructured flux correction along body surfaces and high-order finite differences normal to surfaces is formulated for unsteady viscous flows on strand grids. The flux correction algorithm is applied in each unstructured layer of the strand grid, and the layers are then coupled together via a source term containing derivatives in the strand direction. Strand-direction derivatives are approximated to high-order via summation-by-parts operators for first derivatives and second derivatives with variable coefficients. We show how this procedure allows for the proper truncation error canceling properties required for the flux correction scheme. The resulting scheme possesses third-order design accuracy, but often exhibits fourth-order accuracy when higher-order derivatives are employed in the strand direction, especially for highly viscous flows. We prove discrete conservation for the new scheme and time stability in the absence of the flux correction terms. Results in two dimensions are presented that demonstrate improvements in accuracy with minimal computational and algorithmic overhead over traditional second-order algorithms.

  4. Evaluation of atmospheric correction algorithms for processing SeaWiFS data

    NASA Astrophysics Data System (ADS)

    Ransibrahmanakul, Varis; Stumpf, Richard; Ramachandran, Sathyadev; Hughes, Kent

    2005-08-01

    To enable the production of the best chlorophyll products from SeaWiFS data, NOAA (CoastWatch and NOS) evaluated the various atmospheric correction algorithms by comparing the satellite-derived water reflectance from each algorithm with in situ data. Gordon and Wang (1994) introduced a method to correct for Rayleigh and aerosol scattering in the atmosphere so that water reflectance may be derived from the radiance measured at the top of the atmosphere. However, since the correction assumed near-infrared scattering to be negligible in coastal waters (an invalid assumption), the method overestimates the atmospheric contribution and consequently underestimates water reflectance for the lower wavelength bands on extrapolation. Several improved methods to estimate the near-infrared correction exist: Siegel et al. (2000); Ruddick et al. (2000); Stumpf et al. (2002); and Stumpf et al. (2003), where an absorbing aerosol correction is also applied along with an additional 1.01% calibration adjustment for the 412 nm band. The evaluation shows that the near-infrared correction developed by Stumpf et al. (2003) results in an overall minimum error for U.S. waters. As of July 2004, NASA (SeaDAS) has selected this as the default method for the atmospheric correction used to produce chlorophyll products.

  5. Pre-correction of distorted Bessel-Gauss beams without wavefront detection

    NASA Astrophysics Data System (ADS)

    Fu, Shiyao; Wang, Tonglu; Zhang, Zheyuan; Zhai, Yanwang; Gao, Chunqing

    2017-12-01

    By utilizing the rapid phase solution of the Gerchberg-Saxton algorithm, we experimentally demonstrate a scheme to correct, with good performance, distorted Bessel-Gauss beams resulting from inhomogeneous media such as a weakly turbulent atmosphere. A probe Gaussian beam is employed and propagates coaxially with the Bessel-Gauss modes through the turbulence. No wavefront sensor is used; instead, a matrix detector captures the probe Gaussian beam, and the correction phase mask is then computed by feeding the probe beam into the Gerchberg-Saxton algorithm. The experimental results indicate that both single and multiplexed BG beams can be corrected well, in terms of improved mode purity and mitigated interchannel cross talk.
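
    The Gerchberg-Saxton core is a two-plane amplitude-constraint iteration. The sketch below assumes the planes are linked by a plain Fourier transform (a far-field stand-in for the experiment's actual propagation geometry); convergence is to a phase consistent with both intensities, up to the algorithm's usual ambiguities.

    ```python
    import numpy as np

    def gerchberg_saxton(source_amp, target_amp, iters=200):
        """Recover the phase linking two measured amplitudes, assuming the
        planes are related by a 2-D Fourier transform."""
        phase = np.zeros_like(source_amp)
        for _ in range(iters):
            far = np.fft.fft2(source_amp * np.exp(1j * phase))
            far = target_amp * np.exp(1j * np.angle(far))  # enforce far amp
            near = np.fft.ifft2(far)
            phase = np.angle(near)                         # enforce source amp
        return phase

    # Demo: probe Gaussian distorted by an astigmatism-like phase screen
    x = np.linspace(-1, 1, 64)
    X, Y = np.meshgrid(x, x)
    src = np.exp(-(X**2 + Y**2) / 0.2)
    true_phase = 2.0 * (X**2 - Y**2)
    tgt = np.abs(np.fft.fft2(src * np.exp(1j * true_phase)))
    est = gerchberg_saxton(src, tgt)      # correction mask would be -est
    ```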

  6. Improved HDRG decoders for qudit and non-Abelian quantum error correction

    NASA Astrophysics Data System (ADS)

    Hutter, Adrian; Loss, Daniel; Wootton, James R.

    2015-03-01

    Hard-decision renormalization group (HDRG) decoders are an important class of decoding algorithms for topological quantum error correction. Due to their versatility, they have been used to decode systems with fractal logical operators, color codes, qudit topological codes, and non-Abelian systems. In this work, we develop a method of performing HDRG decoding which combines strengths of existing decoders and further improves upon them. In particular, we increase the minimal number of errors necessary for a logical error in a system of linear size L from Θ(L^{2/3}) to Ω(L^{1-ε}) for any ε > 0. We apply our algorithm to decoding D(Z_d) quantum double models and a non-Abelian anyon model with Fibonacci-like fusion rules, and show that it indeed significantly outperforms previous HDRG decoders. Furthermore, we provide the first study of continuous error correction with imperfect syndrome measurements for the D(Z_d) quantum double models. The parallelized runtime of our algorithm is poly(log L) for the perfect measurement case. In the continuous case with imperfect syndrome measurements, the averaged runtime is O(1) for Abelian systems, while continuous error correction for non-Abelian anyons stays an open problem.

  7. A robust method for removal of glint effects from satellite ocean colour imagery

    NASA Astrophysics Data System (ADS)

    Singh, R. K.; Shanmugam, P.

    2014-12-01

    Removal of the glint effects from satellite imagery for accurate retrieval of water-leaving radiances is a complicated problem, since the glint contribution to the measured signal depends on many factors such as viewing geometry, sun elevation and azimuth, illumination conditions, wind speed and direction, and the water refractive index. To simplify the situation, existing glint correction models describe the extent of the glint-contaminated region and its contribution to the radiance essentially as a function of the wind speed and sea surface slope, which often leads to a tremendous loss of information with considerable scientific and financial impact. Even with the tilt capability of modern sensors for glint avoidance, glint contamination is severe in satellite-derived ocean colour products in the equatorial and sub-tropical regions. To rescue a significant portion of data presently discarded as "glint contaminated" and to improve the accuracy of water-leaving radiances in the glint-contaminated regions, we developed a glint correction algorithm which is dependent only on the satellite-derived Rayleigh-corrected radiance and absorption by clear waters. The new algorithm is capable of achieving meaningful retrievals of ocean radiances from the glint-contaminated pixels unless saturated by strong glint in any of the wavebands. It takes into consideration the combination of the background absorption of radiance by water and the spectral glint function, to accurately minimize the glint contamination effects and produce robust ocean colour products. The new algorithm is implemented along with an aerosol correction method and its performance is demonstrated for many MODIS-Aqua images over the Arabian Sea, one of the regions that are heavily affected by sunglint due to their geographical location. The results with and without sunglint correction are compared, indicating major improvements in the derived products with sunglint correction. When compared to the results of an existing model in the SeaDAS processing system, the new algorithm has the best performance in terms of yielding physically realistic water-leaving radiance spectra and improving the accuracy of the ocean colour products. Validation of MODIS-Aqua derived water-leaving radiances with in situ data also corroborates the above results. Unlike the standard models, the new algorithm performs well in variable illumination and wind conditions and does not require any auxiliary data besides the Rayleigh-corrected radiance itself. Exploitation of signals observed by sensors looking within regions affected by bright white sunglint is possible with the present algorithm when the requirement of a stable response over a wide dynamical range for these sensors is fulfilled.

  8. Continuous Glucose Monitoring in Subjects with Type 1 Diabetes: Improvement in Accuracy by Correcting for Background Current

    PubMed Central

    Youssef, Joseph El; Engle, Julia M.; Massoud, Ryan G.; Ward, W. Kenneth

    2010-01-01

    Background: A cause of suboptimal accuracy in amperometric glucose sensors is the presence of a background current (current produced in the absence of glucose) that is not accounted for. We hypothesized that a mathematical correction for the estimated background current of a commercially available sensor would lead to greater accuracy compared to a situation in which we assumed the background current to be zero. We also tested whether increasing the frequency of sensor calibration would improve sensor accuracy. Methods: This report includes analysis of 20 sensor datasets from seven human subjects with type 1 diabetes. Data were divided into a training set for algorithm development and a validation set on which the algorithm was tested. A range of potential background currents was tested. Results: Use of a background current correction of 4 nA led to a substantial improvement in accuracy (improvement of absolute relative difference or absolute difference of 3.5-5.5 units). An increase in calibration frequency led to a modest accuracy improvement, with an optimum at every 4 h. Conclusions: Compared to no correction, a correction for the estimated background current of a commercially available glucose sensor led to greater accuracy and better detection of hypoglycemia and hyperglycemia. The accuracy-optimizing scheme presented here can be implemented in real time. PMID:20879968
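
    The correction itself is simple arithmetic on the calibrated sensor model. The sketch below assumes a linear sensitivity (the value is invented for illustration) and the roughly 4 nA background reported in this study.

    ```python
    def glucose_from_current(i_sensor_nA, sens_nA_per_mgdl, i_background_nA=4.0):
        """Model i = sensitivity * glucose + background; subtract the estimated
        background before dividing by the calibrated sensitivity."""
        return (i_sensor_nA - i_background_nA) / sens_nA_per_mgdl

    # 20 nA with a (hypothetical) sensitivity of 0.2 nA per mg/dL:
    print(glucose_from_current(20.0, 0.2))        # 80 mg/dL, corrected
    print(glucose_from_current(20.0, 0.2, 0.0))   # 100 mg/dL, zero-background
    ```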

  9. Calculation of the Respiratory Modulation of the Photoplethysmogram (DPOP) Incorporating a Correction for Low Perfusion

    PubMed Central

    Addison, Paul S.; Wang, Rui; McGonigle, Scott J.; Bergese, Sergio D.

    2014-01-01

    DPOP quantifies respiratory modulations in the photoplethysmogram. It has been proposed as a noninvasive surrogate for pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. The correlation between DPOP and PPV may degrade due to low perfusion effects. We implemented an automated DPOP algorithm with an optional correction for low perfusion. These two algorithm variants (DPOPa and DPOPb) were tested on data from 20 mechanically ventilated OR patients split into a benign “stable region” subset and a whole record “global set.” Strong correlation was found between DPOP and PPV for both algorithms when applied to the stable data set: R = 0.83/0.85 for DPOPa/DPOPb. However, a marked improvement was found when applying the low perfusion correction to the global data set: R = 0.47/0.73 for DPOPa/DPOPb. Sensitivities, Specificities, and AUCs were 0.86, 0.70, and 0.88 for DPOPa/stable region; 0.89, 0.82, and 0.92 for DPOPb/stable region; 0.81, 0.61, and 0.73 for DPOPa/global region; 0.83, 0.76, and 0.86 for DPOPb/global region. An improvement was found in all results across both data sets when using the DPOPb algorithm. Further, DPOPb showed marked improvements, both in terms of its values, and correlation with PPV, for signals exhibiting low percent modulations. PMID:25177348

  10. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm.

    PubMed

    Ahmadian, Alireza; Ay, Mohammad R; Bidgoli, Javad H; Sarkar, Saeed; Zaidi, Habib

    2008-10-01

    Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium with high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μ-map), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and second, object classification to bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated μ-maps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique in clinical setting. More importantly, correction of oral contrast artifacts improved the readability and interpretation of the PET scan and showed substantial decrease of the SUV (104.3%) after correction. An automated segmentation algorithm for classification of irregular shapes of regions containing contrast medium was developed for wider applicability of the SCC algorithm for correction of oral contrast artifacts during the CTAC procedure. The algorithm is being refined and further validated in clinical setting.
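
    The final step described above, converting (remapped) CT numbers to 511 keV attenuation coefficients, is conventionally a piecewise-linear (bilinear) calibration. The breakpoint and coefficients below are representative textbook values, not the paper's exact calibration curve.

    ```python
    import numpy as np

    def hu_to_mu_511(hu, mu_water=0.096, mu_bone=0.172, hu_bone=1000.0):
        """Bilinear CT-to-mu conversion (cm^-1): an air-water segment below
        0 HU and a shallower water-bone segment above it."""
        hu = np.asarray(hu, dtype=float)
        soft = mu_water * (hu + 1000.0) / 1000.0
        bone = mu_water + hu * (mu_bone - mu_water) / hu_bone
        return np.where(hu <= 0.0, np.clip(soft, 0.0, None), bone)

    print(hu_to_mu_511([-1000, 0, 1000]))   # ~[0.0, 0.096, 0.172]
    ```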

  11. Study on improved Ip-iq APF control algorithm and its application in microgrid

    NASA Astrophysics Data System (ADS)

    Xie, Xifeng; Shi, Hua; Deng, Haiying

    2018-01-01

    In order to enhance the tracking speed and accuracy of harmonic detection by the ip-iq algorithm, a novel ip-iq control algorithm based on instantaneous reactive power theory is presented. The improved algorithm adds a lead-correction link to adjust the zero point of the detection system, and fuzzy self-tuning adaptive PI control is introduced to dynamically adjust the DC-link voltage, which meets the harmonic compensation requirements of the microgrid. Simulation and experimental results verify that the proposed method is feasible and effective in a microgrid.
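
    The ip-iq detection stage can be sketched in a single-phase synchronous-detection form (one common convention; three-phase implementations apply a Clarke transform first). The PLL angle, filter order, and cutoff below are illustrative choices.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def ipiq_harmonic_reference(i_load, theta, fs, cutoff_hz=15.0):
        """Rotate the load current into a synchronous frame using the PLL
        angle theta, low-pass so the fundamental becomes DC, rotate back,
        and subtract to obtain the harmonic reference for the APF."""
        ip = i_load * np.sin(theta)
        iq = i_load * np.cos(theta)
        b, a = butter(2, cutoff_hz / (fs / 2))
        i_fund = 2.0 * (filtfilt(b, a, ip) * np.sin(theta) +
                        filtfilt(b, a, iq) * np.cos(theta))
        return i_load - i_fund

    fs, f1 = 10000, 50
    t = np.arange(0.0, 0.2, 1.0 / fs)
    theta = 2 * np.pi * f1 * t
    i_load = 10 * np.sin(theta) + 2 * np.sin(5 * theta) + np.sin(7 * theta)
    i_h = ipiq_harmonic_reference(i_load, theta, fs)  # ~ 5th + 7th harmonics
    ```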

  12. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  13. The NOAA-NASA CZCS Reanalysis Effort

    NASA Technical Reports Server (NTRS)

    Gregg, Watson W.; Conkright, Margarita E.; OReilly, John E.; Patt, Frederick S.; Wang, Meng-Hua; Yoder, James; Casey-McCabe, Nancy; Koblinsky, Chester J. (Technical Monitor)

    2001-01-01

    Satellite observations of global ocean chlorophyll span over two decades. However, incompatibilities between processing algorithms prevent us from quantifying natural variability. We applied a comprehensive reanalysis to the Coastal Zone Color Scanner (CZCS) archive, called the NOAA-NASA CZCS Reanalysis (NCR) Effort. NCR consisted of 1) algorithm improvement (AI), where CZCS processing algorithms were improved using modernized atmospheric correction and bio-optical algorithms, and 2) blending, where in situ data were incorporated into the CZCS AI to minimize residual errors. The results indicated major improvement over the previously available CZCS archive. Global spatial and seasonal patterns of NCR chlorophyll indicated remarkable correspondence with modern sensors, suggesting compatibility. The NCR permits quantitative analyses of interannual and interdecadal trends in global ocean chlorophyll.

  14. Atmospheric correction over coastal waters using multilayer neural networks

    NASA Astrophysics Data System (ADS)

    Fan, Y.; Li, W.; Charles, G.; Jamet, C.; Zibordi, G.; Schroeder, T.; Stamnes, K. H.

    2017-12-01

    Standard atmospheric correction (AC) algorithms work well in open ocean areas where the water inherent optical properties (IOPs) are correlated with pigmented particles. However, the IOPs of turbid coastal waters may independently vary with pigmented particles, suspended inorganic particles, and colored dissolved organic matter (CDOM). In turbid coastal waters standard AC algorithms often exhibit large inaccuracies that may lead to negative water-leaving radiances (Lw) or remote sensing reflectance (Rrs). We introduce a new atmospheric correction algorithm for coastal waters based on a multilayer neural network (MLNN) machine learning method. We use a coupled atmosphere-ocean radiative transfer model to simulate the Rayleigh-corrected radiance (Lrc) at the top of the atmosphere (TOA) and the Rrs just above the surface simultaneously, and train an MLNN to derive the aerosol optical depth (AOD) and Rrs directly from the TOA Lrc. The SeaDAS NIR algorithm, the SeaDAS NIR/SWIR algorithm, and the MODIS version of the Case 2 regional water - CoastColour (C2RCC) algorithm are included in the comparison with AERONET-OC measurements. The results show that the MLNN algorithm significantly improves retrieval of normalized Lw in blue bands (412 nm and 443 nm) and yields minor improvements in green and red bands. These results indicate that the MLNN algorithm is suitable for application in turbid coastal waters. Application of the MLNN algorithm to MODIS Aqua images in several coastal areas also shows that it is robust and resilient to contamination due to sunglint or adjacency effects of land and cloud edges. The MLNN algorithm is very fast once the neural network has been properly trained and is therefore suitable for operational use. A significant advantage of the MLNN algorithm is that it does not need SWIR bands, which implies significant cost reduction for dedicated OC missions. A recent effort has been made to extend the MLNN AC algorithm to extreme atmospheric conditions (i.e., heavily polluted continental aerosols) over coastal areas by including additional aerosol and ocean models to generate the training dataset. Preliminary tests show very good results. Results of applying the extended MLNN algorithm to VIIRS images over the Yellow Sea and East China Sea areas with extreme atmospheric and marine conditions will be provided.
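
    As a rough illustration of the MLNN retrieval described above (direct regression from TOA Rayleigh-corrected radiance plus geometry to AOD and Rrs), the Python sketch below trains a small multilayer network on stand-in data. The band count, layer sizes, and random inputs are assumptions for illustration only; a real training set would come from coupled atmosphere-ocean radiative transfer simulations.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Stand-in for an RT-simulated training set: inputs are Lrc in a few
        # bands plus sun/view geometry; targets are AOD and Rrs per band.
        n_samples, n_bands = 5000, 8
        X = np.hstack([rng.uniform(0.0, 0.1, (n_samples, n_bands)),    # Lrc
                       rng.uniform(0.0, 70.0, (n_samples, 3))])        # angles
        y = np.hstack([rng.uniform(0.0, 0.5, (n_samples, 1)),          # AOD
                       rng.uniform(0.0, 0.02, (n_samples, n_bands))])  # Rrs

        mlnn = MLPRegressor(hidden_layer_sizes=(64, 64), activation="relu",
                            max_iter=500, random_state=0)
        mlnn.fit(X, y)

        # After training, retrieval is one fast forward pass per pixel,
        # which is what makes such schemes attractive operationally.
        aod_and_rrs = mlnn.predict(X[:5])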

  15. Clinical Evaluation of 68Ga-PSMA-11 and 68Ga-RM2 PET Images Reconstructed With an Improved Scatter Correction Algorithm.

    PubMed

    Wangerin, Kristen A; Baratto, Lucia; Khalighi, Mohammad Mehdi; Hope, Thomas A; Gulaka, Praveen K; Deller, Timothy W; Iagaru, Andrei H

    2018-06-06

    Gallium-68-labeled radiopharmaceuticals pose a challenge for scatter estimation because their targeted nature can produce high contrast in regions such as the kidneys and bladder. Even small errors in the scatter estimate can result in washout artifacts. Administration of diuretics can reduce these artifacts, but they may result in adverse events. Here, we investigated the ability of algorithmic modifications to mitigate washout artifacts and eliminate the need for diuretics or other interventions. The model-based scatter algorithm was modified to account for PET/MRI scanner geometry and the challenges of non-FDG tracers. Fifty-three clinical 68Ga-RM2 and 68Ga-PSMA-11 whole-body images were reconstructed using the baseline scatter algorithm. For comparison, reconstruction was also performed with modified sampling in the single-scatter estimation and with an offset in the scatter tail-scaling process. None of the patients received furosemide to attempt to decrease the accumulation of radiopharmaceuticals in the bladder. The images were scored independently by three blinded reviewers using a 5-point Likert scale. The scatter algorithm improvements significantly decreased or completely eliminated the washout artifacts. When comparing the baseline and most improved algorithm, the image quality increased and image artifacts were reduced for both 68Ga-RM2 and 68Ga-PSMA-11 in the kidney and bladder regions. Image reconstruction with the improved scatter correction algorithm mitigated washout artifacts and recovered diagnostic image quality in 68Ga PET, indicating that the use of diuretics may be avoided.

  16. Retrieval of atmospheric properties from hyper and multispectral imagery with the FLAASH atmospheric correction algorithm

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael; Berk, Alexander; Anderson, Gail; Gardner, James; Felde, Gerald

    2005-10-01

    Atmospheric Correction Algorithms (ACAs) are used in applications of remotely sensed Hyperspectral and Multispectral Imagery (HSI/MSI) to correct for atmospheric effects on measurements acquired by air and space-borne systems. The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm is a forward-model based ACA created for HSI and MSI instruments which operate in the visible through shortwave infrared (Vis-SWIR) spectral regime. Designed as a general-purpose, physics-based code for inverting at-sensor radiance measurements into surface reflectance, FLAASH provides a collection of spectral analysis and atmospheric retrieval methods including: a per-pixel vertical water vapor column estimate, determination of aerosol optical depth, estimation of scattering for compensation of adjacency effects, detection/characterization of clouds, and smoothing of spectral structure resulting from an imperfect atmospheric correction. To further improve the accuracy of the atmospheric correction process, FLAASH will also detect and compensate for sensor-introduced artifacts such as optical smile and wavelength mis-calibration. FLAASH relies on the MODTRAN™ radiative transfer (RT) code as the physical basis behind its mathematical formulation, and has been developed in parallel with upgrades to MODTRAN in order to take advantage of the latest improvements in speed and accuracy. For example, the rapid, high fidelity multiple scattering (MS) option available in MODTRAN4 can greatly improve the accuracy of atmospheric retrievals over the 2-stream approximation. In this paper, advanced features available in FLAASH are described, including the principles and methods used to derive atmospheric parameters from HSI and MSI data. Results are presented from processing of Hyperion, AVIRIS, and LANDSAT data.

  17. Shading correction algorithm for cone-beam CT in radiotherapy: extensive clinical validation of image quality improvement

    NASA Astrophysics Data System (ADS)

    Joshi, K. D.; Marchant, T. E.; Moore, C. J.

    2017-03-01

    A shading correction algorithm for the improvement of cone-beam CT (CBCT) images (Phys. Med. Biol. 53 5719-33) has been further developed, optimised and validated extensively using 135 clinical CBCT images of patients undergoing radiotherapy treatment of the pelvis, lungs and head and neck. An automated technique has been developed to efficiently analyse the large number of clinical images. Small regions of similar tissue (for example fat tissue) are automatically identified using CT images. The same regions on the corresponding CBCT image are analysed to ensure that they do not contain pixels representing multiple types of tissue. The mean value of all selected pixels and the non-uniformity, defined as the median absolute deviation of the mean values in each small region, are calculated. Comparisons between CT and raw and corrected CBCT images are then made. Analysis of fat regions in pelvis images shows an average difference in mean pixel value between CT and CBCT of 136.0 HU in raw CBCT images, which is reduced to 2.0 HU after the application of the shading correction algorithm. The average difference in non-uniformity of fat pixels is reduced from 33.7 in raw CBCT to 2.8 in shading-corrected CBCT images. Similar results are obtained in the analysis of lung and head and neck images.
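
    The non-uniformity figure used above (median absolute deviation of the mean values of small same-tissue regions) is simple to compute once region masks have been identified on the CT and transferred to the CBCT. A minimal Python sketch, with all names illustrative:

        import numpy as np

        def non_uniformity(image, region_masks):
            """Median absolute deviation of the mean pixel value across small
            same-tissue regions: a shading/non-uniformity score (HU)."""
            means = np.array([image[m].mean() for m in region_masks])
            return float(np.median(np.abs(means - np.median(means))))

        def mean_difference(ct, cbct, region_masks):
            """Average CT-CBCT difference of regional mean pixel values."""
            d = [ct[m].mean() - cbct[m].mean() for m in region_masks]
            return float(np.mean(d))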

  18. Alignment algorithms and per-particle CTF correction for single particle cryo-electron tomography.

    PubMed

    Galaz-Montoya, Jesús G; Hecksel, Corey W; Baldwin, Philip R; Wang, Eryu; Weaver, Scott C; Schmid, Michael F; Ludtke, Steven J; Chiu, Wah

    2016-06-01

    Single particle cryo-electron tomography (cryoSPT) extracts features from cryo-electron tomograms, followed by 3D classification, alignment and averaging to generate improved 3D density maps of such features. Robust methods to correct for the contrast transfer function (CTF) of the electron microscope are necessary for cryoSPT to reach its resolution potential. Many factors can make CTF correction for cryoSPT challenging, such as lack of eucentricity of the specimen stage, inherent low dose per image, specimen charging, beam-induced specimen motions, and defocus gradients resulting both from specimen tilting and from unpredictable ice thickness variations. Current CTF correction methods for cryoET make at least one of the following assumptions: that the defocus at the center of the image is the same across the images of a tilt series, that the particles all lie at the same Z-height in the embedding ice, and/or that the specimen, the cryo-electron microscopy (cryoEM) grid and/or the carbon support are flat. These experimental conditions are not always met. We have developed a CTF correction algorithm for cryoSPT without making any of the aforementioned assumptions. We also introduce speed and accuracy improvements and a higher degree of automation to the subtomogram averaging algorithms available in EMAN2. Using motion-corrected images of isolated virus particles as a benchmark specimen, recorded with a DE20 direct detection camera, we show that our CTF correction and subtomogram alignment routines can yield subtomogram averages close to 4/5 Nyquist frequency of the detector under our experimental conditions. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Alignment Algorithms and Per-Particle CTF Correction for Single Particle Cryo-Electron Tomography

    PubMed Central

    Galaz-Montoya, Jesús G.; Hecksel, Corey W.; Baldwin, Philip R.; Wang, Eryu; Weaver, Scott C.; Schmid, Michael F.; Ludtke, Steven J.; Chiu, Wah

    2016-01-01

    Single particle cryo-electron tomography (cryoSPT) extracts features from cryo-electron tomograms, followed by 3D classification, alignment and averaging to generate improved 3D density maps of such features. Robust methods to correct for the contrast transfer function (CTF) of the electron microscope are necessary for cryoSPT to reach its resolution potential. Many factors can make CTF correction for cryoSPT challenging, such as lack of eucentricity of the specimen stage, inherent low dose per image, specimen charging, beam-induced specimen motions, and defocus gradients resulting both from specimen tilting and from unpredictable ice thickness variations. Current CTF correction methods for cryoET make at least one of the following assumptions: that the defocus at the center of the image is the same across the images of a tilt series, that the particles all lie at the same Z-height in the embedding ice, and/or that the specimen grid and carbon support are flat. These experimental conditions are not always met. We have developed a CTF correction algorithm for cryoSPT without making any of the aforementioned assumptions. We also introduce speed and accuracy improvements and a higher degree of automation to the subtomogram averaging algorithms available in EMAN2. Using motion-corrected images of isolated virus particles as a benchmark specimen, recorded with a DE20 direct detection camera, we show that our CTF correction and subtomogram alignment routines can yield subtomogram averages close to 4/5 Nyquist frequency of the detector under our experimental conditions. PMID:27016284

  20. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: A postmortem study

    PubMed Central

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee

    2013-01-01

    Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left–right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left–right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left–right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications. PMID:24320536
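
    Standard FCM, used here on both raw and bias-corrected images, alternates centroid and membership updates. A minimal Python sketch for 1-D voxel intensities (cluster count, fuzzifier, and the density estimate at the end are illustrative):

        import numpy as np

        def fuzzy_c_means(x, c=2, m=2.0, n_iter=100, tol=1e-6, seed=0):
            """Standard FCM on a 1-D array of voxel intensities.
            Returns (centers, memberships of shape (len(x), c))."""
            rng = np.random.default_rng(seed)
            u = rng.dirichlet(np.ones(c), size=len(x))   # fuzzy memberships
            for _ in range(n_iter):
                um = u ** m
                centers = um.T @ x / um.sum(axis=0)      # weighted means
                d = np.abs(x[:, None] - centers[None, :]) + 1e-12
                inv = d ** (-2.0 / (m - 1.0))
                u_new = inv / inv.sum(axis=1, keepdims=True)
                if np.abs(u_new - u).max() < tol:
                    u = u_new
                    break
                u = u_new
            return centers, u

        # Illustrative density estimate: fraction of fuzzy membership in the
        # brighter (fibroglandular) cluster over all breast voxels.
        # centers, u = fuzzy_c_means(breast_voxel_intensities)
        # dense_fraction = u[:, centers.argmax()].sum() / len(u)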

  1. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: a postmortem study.

    PubMed

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q; Ducote, Justin L; Su, Min-Ying; Molloi, Sabee

    2013-12-01

    Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left-right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of bias field. The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left-right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left-right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications.

  2. O2 A Band Studies for Cloud Detection and Algorithm Improvement

    NASA Technical Reports Server (NTRS)

    Chance, K. V.

    1996-01-01

    Detection of cloud parameters from space-based spectrometers can employ the vibrational bands of O2 in the b¹Σg⁺ → X³Σg⁻ spin-forbidden electronic transition manifold, particularly the Δν = 0 A band. The GOME instrument uses the A band in the Initial Cloud Fitting Algorithm (ICFA). The work reported here consists of making substantial improvements in the line-by-line spectral database for the A band, testing whether an additional correction to the line shape function is necessary in order to correctly model the atmospheric transmission in this band, and calculating prototype cloud and ground template spectra for comparison with satellite measurements.

  3. Antenna pattern correction for the Nimbus-7 SMMR

    NASA Technical Reports Server (NTRS)

    Milman, A. S.

    1986-01-01

    This paper describes the philosophy and method used to develop the antenna pattern correction (APC) algorithm that was used on the data from the Scanning Multichannel Microwave Radiometer (SMMR) on Nimbus-7. There are limitations on what can be accomplished with such a procedure; these limitations are explored with the aid of Fourier analysis, even though the algorithm used on the SMMR data does not perform any Fourier transforms. The resulting analysis showed that, for the SMMR instrument, no useful improvement could be made in the data in terms of reduction of side lobes, but the quality of the sea surface temperature retrievals could be improved considerably by matching the antenna beamwidths at the different frequencies.

  4. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
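
    A toy version of the prediction-correction idea, for the unconstrained time-varying quadratic f(x; t) = 0.5*||x - r(t)||^2 whose gradient is x - r(t): the prediction step descends a model of the next cost built from the current gradient plus a causal finite-difference estimate of its time drift, so no Hessian inverse is ever formed; the correction step then descends the newly sampled cost. Step sizes and iteration counts are illustrative assumptions, not the paper's settings.

        import numpy as np

        def track(r_of_t, x0, t0=0.0, h=0.05, alpha=0.5,
                  n_pred=3, n_corr=3, n_steps=200):
            """Prediction-correction tracking of argmin_x 0.5*||x - r(t)||^2."""
            x = np.asarray(x0, dtype=float)
            t = t0
            drift = r_of_t(t - h) - r_of_t(t)   # ~ h * d(grad)/dt at fixed x
            xs = []
            for _ in range(n_steps):
                for _ in range(n_pred):         # predict the optimum at t + h
                    x = x - alpha * ((x - r_of_t(t)) + drift)
                t_new = t + h                   # the next cost is now revealed
                for _ in range(n_corr):         # correct on the sampled cost
                    x = x - alpha * (x - r_of_t(t_new))
                drift = r_of_t(t) - r_of_t(t_new)  # refresh causal drift
                t = t_new
                xs.append(x.copy())
            return np.array(xs)

        # Track a reference moving on the unit circle
        r = lambda t: np.array([np.cos(t), np.sin(t)])
        trajectory = track(r, x0=[1.0, 0.0])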

  5. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    PubMed

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added in the PSO to raise the ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
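
    The NAPSO ingredients named above can be sketched compactly: a PSO loop with simulated-annealing acceptance of personal bests and natural selection (elite particles cloned over the worst). The toy objective stands in for an SVM cross-validation error over (C, gamma); every constant here is an illustrative assumption, not the paper's configuration.

        import numpy as np

        def napso_minimize(f, bounds, n_particles=30, n_iter=100, w=0.7,
                           c1=1.5, c2=1.5, elite_frac=0.2, T0=1.0, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            x = rng.uniform(lo, hi, (n_particles, lo.size))
            v = np.zeros_like(x)
            pbest = x.copy()
            pval = np.array([f(p) for p in x])
            g, g_val = pbest[pval.argmin()].copy(), float(pval.min())
            for k in range(n_iter):
                T = T0 * (1.0 - k / n_iter) + 1e-9       # annealing schedule
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                val = np.array([f(p) for p in x])
                # SA acceptance: improvements always kept, some worse moves too
                delta = np.maximum(val - pval, 0.0)
                accept = rng.random(n_particles) < np.exp(-delta / T)
                pbest[accept], pval[accept] = x[accept], val[accept]
                if pval.min() < g_val:                   # best-ever solution
                    g, g_val = pbest[pval.argmin()].copy(), float(pval.min())
                # natural selection: clone the best particles over the worst
                n_elite = max(1, int(elite_frac * n_particles))
                order = pval.argsort()
                x[order[-n_elite:]] = x[order[:n_elite]]
                v[order[-n_elite:]] = 0.0
            return g, g_val

        # Toy stand-in for an SVM validation-error surface over (C, gamma)
        best, err = napso_minimize(lambda p: (p[0] - 3.0)**2 + (p[1] + 1.0)**2,
                                   bounds=[(-5, 5), (-5, 5)])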

  6. Algorithms For Integrating Nonlinear Differential Equations

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Walker, K. P.

    1994-01-01

    Improved algorithms have been developed for use in the numerical integration of systems of nonhomogeneous, nonlinear, first-order, ordinary differential equations. In comparison with conventional integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. The attainable accuracies are demonstrated by applying the algorithms to systems of nonlinear, first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.

  7. Spectral correction algorithm for multispectral CdTe x-ray detectors

    NASA Astrophysics Data System (ADS)

    Christensen, Erik D.; Kehres, Jan; Gu, Yun; Feidenhans'l, Robert; Olsen, Ulrik L.

    2017-09-01

    Compared to the dual energy scintillator detectors widely used today, pixelated multispectral X-ray detectors show the potential to improve material identification in various radiography and tomography applications used for industrial and security purposes. However, detector effects, such as charge sharing and photon pileup, distort the measured spectra in high flux pixelated multispectral detectors. These effects significantly reduce the detectors' capabilities to be used for material identification, which requires accurate spectral measurements. We have developed a semi-analytical computational algorithm for multispectral CdTe X-ray detectors which corrects the measured spectra for severe spectral distortions caused by the detector. The algorithm is developed for the Multix ME100 CdTe X-ray detector, but could potentially be adapted for any pixelated multispectral CdTe detector. The calibration of the algorithm is based on simple attenuation measurements of commercially available materials using standard laboratory sources, making the algorithm applicable in any X-ray setup. The validation of the algorithm has been done using experimental data acquired with both standard lab equipment and synchrotron radiation. The experiments show that the algorithm is fast, reliable even at X-ray fluxes up to 5 Mph/s/mm², and greatly improves the accuracy of the measured X-ray spectra, making the algorithm very useful for both security and industrial applications where multispectral detectors are used.

  8. Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altube, Patricia; Bech, Joan; Argemí, Oriol

    In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual-pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.
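
    The circular-statistics idea can be sketched as follows: velocities map to phases on the circle, so comparisons between neighbouring gates stay meaningful under extended-Nyquist aliasing; a gate is flagged when its phase deviates from the circular mean of its neighbours and is replaced by that local reference. This Python fragment is a simplified stand-in for the published estimator, with window size and threshold as illustrative assumptions.

        import numpy as np

        def correct_dual_prf_outliers(v, v_ny, win=3, thresh=0.8):
            """v: 2-D dual-PRF radial velocity field (m/s); v_ny: extended
            Nyquist velocity (m/s). Returns (corrected field, outlier mask)."""
            phi = np.pi * v / v_ny              # velocity -> phase angle
            z = np.exp(1j * phi)
            k = win // 2
            zp = np.pad(z, k, mode="edge")
            ref = np.zeros_like(z)
            for i in range(v.shape[0]):
                for j in range(v.shape[1]):
                    block = zp[i:i + win, j:j + win]
                    ref[i, j] = (block.sum() - z[i, j]) / (win * win - 1)
            ref_phase = np.angle(ref)           # circular mean of neighbours
            dev = np.abs(np.angle(np.exp(1j * (phi - ref_phase))))
            outliers = dev > thresh             # circular distance, radians
            v_corr = v.copy()
            v_corr[outliers] = v_ny * ref_phase[outliers] / np.pi
            return v_corr, outliers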

  9. Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing

    DOE PAGES

    Altube, Patricia; Bech, Joan; Argemí, Oriol; ...

    2017-07-18

    In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual-pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.

  10. A DSP-based neural network non-uniformity correction algorithm for IRFPA

    NASA Astrophysics Data System (ADS)

    Liu, Chong-liang; Jin, Wei-qi; Cao, Yang; Liu, Xiu

    2009-07-01

    An effective neural network non-uniformity correction (NUC) algorithm based on DSP is proposed in this paper. The non-uniform response of infrared focal plane array (IRFPA) detectors produces corrupted images with fixed-pattern noise (FPN). We introduce and analyze the artificial neural network scene-based non-uniformity correction (SBNUC) algorithm. A design of a DSP-based NUC development platform for IRFPA is described. The DSP hardware platform has low power consumption, with a 32-bit fixed-point DSP TMS320DM643 as the kernel processor. The dependability and expansibility of the software have been improved by the DSP/BIOS real-time operating system and Reference Framework 5. In order to achieve real-time performance, the calibration parameter update is set at a lower task priority than video input and output in DSP/BIOS. In this way, calibration parameter updating will not affect the video streams. The work flow of the system and the strategy of real-time realization are introduced. Experiments on real infrared imaging sequences demonstrate that this algorithm requires only a few frames to obtain high quality corrections. It is computationally efficient and suitable for all kinds of non-uniformity.
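
    At its core, the neural-network SBNUC scheme referenced above is a per-pixel LMS update that pulls each corrected pixel toward a retina-like local spatial average. A minimal floating-point Python sketch (learning rate and window size are illustrative; the DSP implementation works in fixed point):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def sbnuc_lms(frames, eta=1e-3, win=5):
            """Scene-based NUC: per-pixel gain/offset adapted so each pixel
            tracks the local spatial mean of the corrected image."""
            g = np.ones_like(frames[0], dtype=float)   # per-pixel gain
            o = np.zeros_like(g)                       # per-pixel offset
            for x in frames:                           # image sequence
                y = g * x + o                          # corrected frame
                target = uniform_filter(y, size=win)   # local spatial mean
                e = y - target                         # per-pixel error
                g -= eta * e * x                       # LMS updates
                o -= eta * e
            return g, o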

  11. WE-AB-207A-08: BEST IN PHYSICS (IMAGING): Advanced Scatter Correction and Iterative Reconstruction for Improved Cone-Beam CT Imaging On the TrueBeam Radiotherapy Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, A; Paysan, P; Brehm, M

    2016-06-15

    Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts like cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. Conclusion: The combination of an advanced scatter correction with iterative reconstruction substantially improves CBCT image quality. It is anticipated that clinically acceptable reconstruction times will result from a multi-GPU implementation (the algorithms are under active development and not yet commercially available). All authors are employees of and (may) own stock of Varian Medical Systems.

  12. A two-dimensional matrix correction for off-axis portal dose prediction errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, Daniel W.; Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263; Kumaraswamy, Lalith

    2013-05-15

    Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. ['An effective correction algorithm for off-axis portal dosimetry errors,' Med. Phys. 36, 4089-4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Employing the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone. As in the 1D correction case, the 2D algorithm leaves the portal dosimetry process virtually unchanged in the central portion of the detector, and thus these correction algorithms are not needed for centrally located fields of moderate size (at least, in the case of 6 MV beam energy). Conclusion: The 2D correction improves the portal dosimetry results for those fields for which the 1D correction proves insufficient, especially in the inplane, off-axis regions of the detector. This 2D correction neglects the relatively smaller discrepancies that may be caused by backscatter from nonuniform machine components downstream from the detecting layer.
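
    The essence of the 2D matrix correction can be written in a few lines: form a per-pixel measured-to-predicted ratio map from calibration fields that span the detecting surface, then multiply every predicted image by it. Clinical commissioning involves more care (dose-linearity verification, field coverage), so treat this Python fragment as a schematic only:

        import numpy as np

        def build_correction_matrix(predicted_imgs, measured_imgs):
            """Average per-pixel measured/predicted ratio over calibration
            fields spanning the detecting surface."""
            ratios = [m / p for m, p in zip(measured_imgs, predicted_imgs)]
            return np.mean(ratios, axis=0)

        def apply_correction(predicted_img, correction_matrix):
            # Applied pixelwise to each prediction before comparison
            return predicted_img * correction_matrix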

  13. Experimental Validation of Advanced Dispersed Fringe Sensing (ADFS) Algorithm Using Advanced Wavefront Sensing and Correction Testbed (AWCT)

    NASA Technical Reports Server (NTRS)

    Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver

    2012-01-01

    Large-aperture telescopes commonly feature segmented mirrors, and a coarse phasing step is needed to bring these individual segments into the fine-phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique, and a variant of it is currently being used for JWST. An Advanced Dispersed Fringe Sensing (ADFS) algorithm was recently developed to improve the performance and robustness of previous DFS algorithms, with better accuracy and a unique solution. The first part of the paper introduces the basic ideas and essential features of the ADFS algorithm and presents some algorithm sensitivity study results. The second part of the paper describes the full details of the algorithm validation process on the Advanced Wavefront Sensing and Correction Testbed (AWCT): first, the optimization of the DFS hardware of AWCT to ensure data accuracy and reliability is illustrated. Then, a few carefully designed algorithm validation experiments are implemented, and the corresponding data analysis results are shown. Finally, fiducial calibration using the Range-Gate-Metrology technique is carried out, and an algorithm accuracy of <10 nm or <1% is demonstrated.

  14. Adaptive noise correction of dual-energy computed tomography images.

    PubMed

    Maia, Rafael Simon; Jacob, Christian; Hara, Amy K; Silva, Alvin C; Pavlicek, William; Mitchell, J Ross

    2016-04-01

    Noise reduction in material density images is a necessary preprocessing step for the correct interpretation of dual-energy computed tomography (DECT) images. In this paper we describe a new method based on local adaptive processing to reduce noise in DECT images. An adaptive neighborhood Wiener (ANW) filter was implemented and customized to use local characteristics of material density images. The ANW filter employs a three-level wavelet approach, combined with the application of an anisotropic diffusion filter. Material density images and virtual monochromatic images are noise-corrected with the two resulting noise maps. The algorithm was applied and quantitatively evaluated on a set of 36 images. From that set of images, three are shown here, and nine more are shown in the online supplementary material. Processed images had higher signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) than the raw material density images. The average improvements in SNR and CNR for the material density images were 56.5% and 54.75%, respectively. We developed a new DECT noise reduction algorithm, and demonstrate through a series of quantitative analyses that it improves the quality of material density images and virtual monochromatic images.
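
    The core of an adaptive neighborhood Wiener filter is the classic local-statistics shrinkage sketched below; the paper's three-level wavelet decomposition and anisotropic diffusion stages are omitted, and the window size and noise estimate are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def adaptive_wiener(img, win=5, noise_var=None):
            """Each pixel is pulled toward its local mean by an amount set by
            how the local variance compares with the noise variance."""
            img = img.astype(float)
            mu = uniform_filter(img, win)                    # local mean
            var = uniform_filter(img * img, win) - mu * mu   # local variance
            if noise_var is None:
                noise_var = float(np.mean(var))              # crude estimate
            gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
            return mu + gain * (img - mu)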

  15. Weighted divergence correction scheme and its fast implementation

    NASA Astrophysics Data System (ADS)

    Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun

    2017-05-01

    Forcing experimental volumetric velocity fields to satisfy mass conservation principles has been proved beneficial for improving the quality of measured data. A number of correction methods, including the divergence correction scheme (DCS), have been proposed to remove divergence errors from measured velocity fields. For tomographic particle image velocimetry (TPIV) data, the measurement uncertainty for the velocity component along the light-thickness direction is typically much larger than for the other two components. Such biased measurement errors weaken the performance of traditional correction methods. This paper proposes a variant of the existing DCS that adds weighting coefficients to the three velocity components, named the weighted DCS (WDCS). The generalized cross validation (GCV) method is employed to choose suitable weighting coefficients. A fast algorithm for DCS or WDCS is developed, making the correction process significantly cheaper to implement. WDCS has strong advantages when correcting velocity components with biased noise levels. Numerical tests validate the accuracy and efficiency of the fast algorithm, the effectiveness of the GCV method, and the advantages of WDCS. Lastly, DCS and WDCS are employed to process experimental velocity fields from a TPIV measurement of a turbulent boundary layer, showing that WDCS achieves better performance than DCS in improving some flow statistics.
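
    For a periodic velocity field, the unweighted DCS step amounts to projecting the measured field onto its divergence-free part, which an FFT does cheaply; WDCS additionally scales the correction per component with GCV-chosen weights, which this Python sketch omits.

        import numpy as np

        def divergence_free_projection(u, v, w, dx=1.0):
            """Remove the divergent part of a periodic 3-D velocity field:
            solve laplacian(phi) = div(u) spectrally and subtract grad(phi)."""
            kx = 2j * np.pi * np.fft.fftfreq(u.shape[0], dx)[:, None, None]
            ky = 2j * np.pi * np.fft.fftfreq(u.shape[1], dx)[None, :, None]
            kz = 2j * np.pi * np.fft.fftfreq(u.shape[2], dx)[None, None, :]
            U, V, W = np.fft.fftn(u), np.fft.fftn(v), np.fft.fftn(w)
            div = kx * U + ky * V + kz * W        # spectral divergence
            k2 = kx ** 2 + ky ** 2 + kz ** 2
            k2[0, 0, 0] = 1.0                     # avoid division by zero
            phi = div / k2
            U, V, W = U - kx * phi, V - ky * phi, W - kz * phi
            return (np.fft.ifftn(U).real,
                    np.fft.ifftn(V).real,
                    np.fft.ifftn(W).real)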

  16. Correction of partial volume effect in (18)F-FDG PET brain studies using coregistered MR volumes: voxel based analysis of tracer uptake in the white matter.

    PubMed

    Coello, Christopher; Willoch, Frode; Selnes, Per; Gjerstad, Leif; Fladby, Tormod; Skretting, Arne

    2013-05-15

    A voxel-based algorithm to correct for partial volume effect in PET brain volumes is presented. This method (named LoReAn) is based on MRI-based segmentation of anatomical regions and accurate measurements of the effective point spread function of the PET imaging process. The objective is to correct for the spill-out of activity from high-uptake anatomical structures (e.g. grey matter) into low-uptake anatomical structures (e.g. white matter) in order to quantify physiological uptake in the white matter. The new algorithm is presented and validated against the state-of-the-art region-based geometric transfer matrix (GTM) method with synthetic and clinical data. Using synthetic data, both bias and coefficient of variation were improved in the white matter region using LoReAn compared to GTM. Increasing the number of anatomical regions does not affect the bias (<5%), and misregistration affects the LoReAn and GTM algorithms equally. The LoReAn algorithm appears to be a simple and promising voxel-based algorithm for studying metabolism in white matter regions. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Fast correction approach for wavefront sensorless adaptive optics based on a linear phase diversity technique.

    PubMed

    Yue, Dan; Nie, Haitao; Li, Ye; Ying, Changsheng

    2018-03-01

    Wavefront sensorless (WFSless) adaptive optics (AO) systems have been widely studied in recent years. To reach optimum results, such systems require an efficient correction method. This paper presents a fast wavefront correction approach for a WFSless AO system based mainly on the linear phase diversity (PD) technique. The fast closed-loop control algorithm is set up based on the linear relationship between the drive voltage of the deformable mirror (DM) and the far-field images of the system, which is obtained through the linear PD algorithm combined with the influence function of the DM. A large number of phase screens under different turbulence strengths are simulated to test the performance of the proposed method. The numerical simulation results show that the method has a fast convergence rate and strong correction ability: a few correction iterations achieve good results and effectively improve the imaging quality of the system, while requiring fewer CCD measurements.

  18. Multi-model data fusion to improve an early warning system for hypo-/hyperglycemic events.

    PubMed

    Botwey, Ransford Henry; Daskalaki, Elena; Diem, Peter; Mougiakakou, Stavroula G

    2014-01-01

    Correct predictions of future blood glucose levels in individuals with Type 1 Diabetes (T1D) can be used to provide early warning of upcoming hypo-/hyperglycemic events and thus to improve the patient's safety. To increase prediction accuracy and efficiency, various approaches have been proposed which combine multiple predictors to produce superior results compared to single predictors. Three methods for model fusion are presented and comparatively assessed. Data from 23 T1D subjects under sensor-augmented pump (SAP) therapy were used in two adaptive data-driven models (an autoregressive model with output correction - cARX, and a recurrent neural network - RNN). Data fusion techniques based on i) Dempster-Shafer Evidential Theory (DST), ii) Genetic Algorithms (GA), and iii) Genetic Programming (GP) were used to merge the complementary performances of the prediction models. The fused output is used in a warning algorithm to issue alarms of upcoming hypo-/hyperglycemic events. The fusion schemes showed improved performance with lower root mean square errors, lower time lags, and higher correlation. In the warning algorithm, median daily false alarms (DFA) of 0.25% and 100% correct alarms (CA) were obtained for both event types. The detection times (DT) before occurrence of events were 13.0 and 12.1 min, respectively, for hypo- and hyperglycemic events. Compared to the cARX and RNN models, and a linear fusion of the two, the proposed fusion schemes represent a significant improvement.

  19. Improving best-phase image quality in cardiac CT by motion correction with MAM optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohkohl, Christopher; Bruder, Herbert; Stierstorfer, Karl

    2013-03-15

    Purpose: Research in image reconstruction for cardiac CT aims at using motion correction algorithms to improve the image quality of the coronary arteries. The key to those algorithms is motion estimation, which is currently based on 3-D/3-D registration to align the structures of interest in images acquired in multiple heart phases. The need for an extended scan data range covering several heart phases is critical in terms of radiation dose to the patient and limits the clinical potential of the method. Furthermore, literature reports only slight quality improvements of the motion-corrected images when compared to the most quiet phase (best-phase) that was actually used for motion estimation. In this paper a motion estimation algorithm is proposed which does not require an extended scan range but works with a short scan data interval, and which markedly improves the best-phase image quality. Methods: Motion estimation is based on the definition of motion artifact metrics (MAM) to quantify motion artifacts in a 3-D reconstructed image volume. The authors use two different MAMs: entropy and positivity. By adjusting the motion field parameters, the MAM of the resulting motion-compensated reconstruction is optimized using a gradient descent procedure. In this way motion artifacts are minimized. For a fast and practical implementation, only analytical methods are used for motion estimation and compensation. Both the MAM-optimization and a 3-D/3-D registration-based motion estimation algorithm were investigated by means of a computer-simulated vessel with a cardiac motion profile. Image quality was evaluated using normalized cross-correlation (NCC) with the ground truth template and root-mean-square deviation (RMSD). Four coronary CT angiography patient cases were reconstructed to evaluate the clinical performance of the proposed method. Results: For the MAM-approach, the best-phase image quality could be improved for all investigated heart phases, with a maximum improvement of the NCC value by 100% and of the RMSD value by 81%. The corresponding maximum improvements for the registration-based approach were 20% and 40%. In phases with very rapid motion the registration-based algorithm obtained better image quality, while the image quality of the MAM algorithm was superior in phases with less motion. The image quality improvement of the MAM optimization was visually confirmed for the different clinical cases. Conclusions: The proposed method allows a software-based best-phase image quality improvement in coronary CT angiography. A short scan data interval at the target heart phase is sufficient; no additional scan data in other cardiac phases are required. The algorithm is therefore directly applicable to any standard cardiac CT acquisition protocol.
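
    Of the two metrics, entropy is the simpler to state: streaks and blur spread the gray-value histogram, so lower entropy indicates fewer motion artifacts. A minimal Python sketch (bin count and HU range are illustrative):

        import numpy as np

        def motion_artifact_metric(vol, bins=256, hu_range=(-1000.0, 1000.0)):
            """Entropy of the gray-value histogram of a reconstructed volume;
            MAM optimization adjusts motion-field parameters to minimize it."""
            hist, _ = np.histogram(vol, bins=bins, range=hu_range)
            p = hist / max(hist.sum(), 1)
            p = p[p > 0]
            return float(-(p * np.log(p)).sum())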

  20. Expeditious reconciliation for practical quantum key distribution

    NASA Astrophysics Data System (ADS)

    Nakassis, Anastase; Bienfang, Joshua C.; Williams, Carl J.

    2004-08-01

    The paper proposes algorithmic and environmental modifications to the extant reconciliation algorithms within the BB84 protocol so as to speed up reconciliation and privacy amplification. These algorithms have been known to be a performance bottleneck [1] and can process data at rates that are six times slower than the quantum channel they serve [2]. As improvements in single-photon sources and detectors are expected to improve the quantum channel throughput by two or three orders of magnitude, it becomes imperative to improve the performance of the classical software. We developed a Cascade-like algorithm that relies on a symmetric formulation of the problem, error estimation through the segmentation process, outright elimination of segments with many errors, Forward Error Correction, recognition of the distinct data subpopulations that emerge as the algorithm runs, the ability to operate on massive amounts of data (of the order of 1 Mbit), and a few other minor improvements. The data from the experimental algorithm we developed show that by operating on massive arrays of data we can improve software performance by better than three orders of magnitude while retaining nearly as many bits (typically more than 90%) as the algorithms that were designed for optimal bit retention.
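
    The primitive underlying Cascade-like reconciliation is binary parity bisection: when a block's parity differs between Alice and Bob, exchanging log2(n) sub-block parities locates one error. A minimal Python sketch; in a real protocol the parities cross the classical channel, and the leaked bits are accounted for during privacy amplification.

        import numpy as np

        def parity(bits, idx):
            return int(np.sum(bits[idx]) % 2)

        def binary_locate(alice, bob, idx):
            """Locate one error inside a block whose parities disagree."""
            idx = np.asarray(idx)
            while len(idx) > 1:
                half = idx[: len(idx) // 2]
                # Recurse into whichever half still has mismatched parity
                if parity(alice, half) != parity(bob, half):
                    idx = half
                else:
                    idx = idx[len(idx) // 2:]
            return int(idx[0])

        rng = np.random.default_rng(1)
        a = rng.integers(0, 2, 64)
        b = a.copy()
        b[37] ^= 1                                # one transmission error
        block = np.arange(64)
        if parity(a, block) != parity(b, block):
            b[binary_locate(a, b, block)] ^= 1    # find and flip the bit
        assert np.array_equal(a, b)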

  1. A Novel Segment-Based Approach for Improving Classification Performance of Transport Mode Detection.

    PubMed

    Guvensan, M Amac; Dusun, Burak; Can, Baris; Turkmen, H Irem

    2017-12-30

    Transportation planning and solutions have an enormous impact on city life. To minimize transport duration, urban planners should understand and elaborate the mobility of a city. Thus, researchers look toward monitoring people's daily activities, including transportation type and duration, by taking advantage of individuals' smartphones. This paper introduces a novel segment-based transport mode detection architecture in order to improve the results of traditional classification algorithms in the literature. The proposed post-processing algorithm, namely the Healing algorithm, aims to correct the misclassification results of machine-learning-based solutions. Our real-life test results show that the Healing algorithm can achieve up to a 40% improvement in classification results. As a result, the implemented mobile application can predict eight classes, including stationary, walking, car, bus, tram, train, metro and ferry, with a success rate of 95%, thanks to the proposed multi-tier architecture and Healing algorithm.
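
    A post-processing pass in the spirit of the Healing algorithm can be as simple as replacing each predicted label by the majority label of its neighbourhood; the published algorithm operates on segments, so this sliding-window Python version is only an illustrative stand-in.

        from collections import Counter

        def heal(labels, win=5):
            """Smooth isolated misclassifications by sliding majority vote."""
            k = win // 2
            return [Counter(labels[max(0, i - k): i + k + 1]).most_common(1)[0][0]
                    for i in range(len(labels))]

        modes = ["bus", "bus", "car", "bus", "bus", "bus",
                 "walk", "walk", "walk"]
        print(heal(modes))
        # ['bus', 'bus', 'bus', 'bus', 'bus', 'bus', 'walk', 'walk', 'walk']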

  2. Bidirectional reflectance function in coastal waters: modeling and validation

    NASA Astrophysics Data System (ADS)

    Gilerson, Alex; Hlaing, Soe; Harmel, Tristan; Tonizzo, Alberto; Arnone, Robert; Weidemann, Alan; Ahmed, Samir

    2011-11-01

    The current operational algorithm for the correction of bidirectional effects in satellite ocean color data is optimized for typical oceanic waters. However, versions of bidirectional reflectance correction algorithms specifically tuned for typical coastal waters and other case 2 conditions are particularly needed to improve the overall quality of those data. In order to analyze the bidirectional reflectance distribution function (BRDF) of case 2 waters, a dataset of typical remote sensing reflectances was generated through radiative transfer simulations for a large range of viewing and illumination geometries. Based on this simulated dataset, a remote sensing reflectance model focused on case 2 waters is proposed to correct above-water and satellite water-leaving radiance data for bidirectional effects. The proposed model is first validated with a one-year time series of in situ above-water measurements acquired by collocated multi- and hyperspectral radiometers with different viewing geometries, installed at the Long Island Sound Coastal Observatory (LISCO). Match-ups and intercomparisons performed on these concurrent measurements show that the proposed algorithm outperforms the algorithm currently in use at all wavelengths.

  3. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM

    PubMed Central

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei

    2018-01-01

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model’s performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added in the PSO to raise the ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM’s parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models’ performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942

  4. Chlorophyll-a Algorithms for Oligotrophic Oceans: A Novel Approach Based on Three-Band Reflectance Difference

    NASA Technical Reports Server (NTRS)

    Hu, Chuanmin; Lee, Zhongping; Franz, Bryan

    2011-01-01

    A new empirical algorithm is proposed to estimate surface chlorophyll-a concentrations (Chl) in the global ocean for Chl less than or equal to 0.25 milligrams per cubic meter (approximately 77% of the global ocean area). The algorithm is based on a color index (CI), defined as the difference between remote sensing reflectance (Rrs, sr⁻¹) in the green and a reference formed linearly between Rrs in the blue and red. For low Chl waters, in situ data showed a tighter (and therefore better) relationship between CI and Chl than between traditional band-ratios and Chl, which was further validated using global data collected concurrently by ship-borne and SeaWiFS satellite instruments. Model simulations showed that for low Chl waters, compared with the band-ratio algorithm, the CI-based algorithm (CIA) was more tolerant to changes in the chlorophyll-specific backscattering coefficient, and performed similarly for different relative contributions of non-phytoplankton absorption. Simulations using existing atmospheric correction approaches further demonstrated that the CIA was much less sensitive than band-ratio algorithms to various errors induced by instrument noise and imperfect atmospheric correction (including sun glint and whitecap corrections). Image and time-series analyses of SeaWiFS and MODIS/Aqua data also showed improved performance in terms of reduced image noise, more coherent spatial and temporal patterns, and consistency between the two sensors. The reduction in noise and other errors is particularly useful to improve the detection of various ocean features such as eddies. Preliminary tests over MERIS and CZCS data indicate that the new approach should be generally applicable to all existing and future ocean color instruments.
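
    The color index is compact enough to state directly. The Python sketch below assumes SeaWiFS-style band centers; the power-law coefficients follow the published form Chl = 10^(a + b*CI) with values reported for the SeaWiFS tuning of this approach, and should be re-derived for any other sensor.

        def color_index(rrs443, rrs555, rrs670):
            """CI: Rrs in the green minus a linear baseline between the blue
            and red bands (SeaWiFS wavelengths assumed)."""
            return rrs555 - (rrs443 + (555.0 - 443.0) / (670.0 - 443.0)
                             * (rrs670 - rrs443))

        def chl_from_ci(ci):
            """Empirical CI-to-Chl relation, Chl = 10**(a + b*CI); the
            coefficients below are the reported SeaWiFS tuning and are
            quoted here for illustration only."""
            return 10.0 ** (-0.4909 + 191.6590 * ci)

        # Intended for oligotrophic water (roughly Chl <= 0.25 mg per cubic
        # meter); operational use blends back to a band-ratio algorithm
        # above that range.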

  5. Update on laser vision correction using wavefront analysis with the CustomCornea system and LADARVision 193-nm excimer laser

    NASA Astrophysics Data System (ADS)

    Maguen, Ezra I.; Salz, James J.; McDonald, Marguerite B.; Pettit, George H.; Papaioannou, Thanassis; Grundfest, Warren S.

    2002-06-01

    A study was undertaken to assess whether results of laser vision correction with the LADARVision 193-nm excimer laser (Alcon Autonomous Technologies) can be improved with the use of wavefront analysis generated by a proprietary system including a Hartmann-Shack sensor and expressed using Zernike polynomials. A total of 82 eyes underwent LASIK in several centers with an improved algorithm, using the CustomCornea system. A subgroup of 48 eyes of 24 patients was randomized so that one eye underwent conventional treatment and the fellow eye underwent treatment based on wavefront analysis. Treatment parameters were equal for each type of refractive error. Of all eyes, 83% had uncorrected vision of 20/20 or better and 95% were 20/25 or better. In all groups, uncorrected visual acuities did not improve significantly in eyes treated with wavefront analysis compared to conventional treatments. Higher-order aberrations were consistently better corrected in eyes undergoing treatment based on wavefront analysis at 6 months after LASIK. In addition, the number of eyes with reduced RMS was significantly higher in the subset of eyes treated with a wavefront algorithm (38% vs. 5%). Wavefront technology may improve the outcomes of laser vision correction with the LADARVision excimer laser. Further refinements of the technology and clinical trials will contribute to this goal.

  6. Multi-frequency interpolation in spiral magnetic resonance fingerprinting for correction of off-resonance blurring.

    PubMed

    Ostenson, Jason; Robison, Ryan K; Zwart, Nicholas R; Welch, E Brian

    2017-09-01

    Magnetic resonance fingerprinting (MRF) pulse sequences often employ spiral trajectories for data readout. Spiral k-space acquisitions are vulnerable to blurring in the spatial domain in the presence of static field off-resonance. This work describes a blurring correction algorithm for use in spiral MRF and demonstrates its effectiveness in phantom and in vivo experiments. Results show that the image quality of T1 and T2 parametric maps is improved by application of this correction. The MRF correction has a negligible effect on the concordance correlation coefficient and improves the coefficient of variation in regions of off-resonance relative to uncorrected measurements. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Evaluation of Residual Static Corrections by Hybrid Genetic Algorithm Steepest Ascent Autostatics Inversion.Application southern Algerian fields

    NASA Astrophysics Data System (ADS)

    Eladj, Said; bansir, fateh; ouadfeul, sid Ali

    2016-04-01

    The application of a genetic algorithm starts with an initial population of chromosomes representing a "model space". Chromosome chains are preferentially reproduced based on their fitness relative to the total population, so a good chromosome has a greater opportunity to produce offspring than other chromosomes in the population. The advantage of the HGA/SAA combination is the use of a global search approach over a large population of local maxima, which significantly improves the performance of the method. To define the parameters of the Hybrid Genetic Algorithm Steepest Ascent Autostatics (HGA/SAA) job, we first evaluated, by testing the "Steepest Ascent" stage, the optimal parameters for the data used: 1) the number of hill-climbing iterations, set to 40, which defines the participation of the SA algorithm in this hybrid approach; and 2) the minimum eigenvalue for SA, set to 0.8, which is linked to the data quality and S/N ratio. To assess the performance of hybrid genetic algorithms in the inversion for estimating residual static corrections, tests were performed to determine the number of generations of HGA/SAA. Using the values of residual static corrections already calculated by the SAA and CSAA approaches, learning proved very effective in building the cross-correlation table. To determine the optimal number of generations, we conducted a series of tests ranging from 10 to 200 generations. Application to real seismic data from southern Algeria allowed us to judge the performance and capacity of the inversion with the hybrid HGA/SAA method. This experience clarified the influence of the quality of the corrections estimated from SAA/CSAA and the optimum number of HGA generations required for satisfactory performance. Twenty (20) generations were enough to improve the continuity and resolution of seismic horizons, which will allow a more accurate structural interpretation. Key words: hybrid genetic algorithm, number of generations, model space, local maxima, number of hill-climbing iterations, minimum eigenvalue, cross-correlation table

  8. A finite element method to correct deformable image registration errors in low-contrast regions

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Kim, Jinkoo; Li, Haisen; Nurushev, Teamour; Movsas, Benjamin; Chetty, Indrin J.

    2012-06-01

    Image-guided adaptive radiotherapy requires deformable image registration to map radiation dose back and forth between images. The purpose of this study is to develop a novel method to improve the accuracy of an intensity-based image registration algorithm in low-contrast regions. A computational framework has been developed in this study to improve the quality of the ‘demons’ registration. For each voxel in the registration's target image, the standard deviation of image intensity in a neighborhood of this voxel was calculated. A mask for high-contrast regions was generated based on these standard deviations. In the masked regions, a tetrahedral mesh was refined recursively so that a sufficient number of tetrahedral nodes in these regions could be selected as driving nodes. An elastic system driven by the displacements of the selected nodes was formulated using a finite element method (FEM) and implemented on the refined mesh. The displacements of these driving nodes were generated with the ‘demons’ algorithm. The solution of the system was derived using a conjugate gradient method, and interpolated to generate a displacement vector field for the registered images. The FEM correction method was compared with the ‘demons’ algorithm on the computed tomography (CT) images of lung and prostate patients. The performance of the FEM correction relative to the ‘demons’ registration was analyzed based on the physical properties of their deformation maps, and quantitatively evaluated through a benchmark model developed specifically for this study. Compared to the benchmark model, the ‘demons’ registration had a maximum error of 1.2 cm, which was corrected by the FEM to 0.4 cm, and the average error of the ‘demons’ registration was reduced from 0.17 to 0.11 cm. For the CT images of lung and prostate patients, the deformation maps generated by the ‘demons’ algorithm were found to be unrealistic in several places. In these places, the displacement differences between the ‘demons’ registrations and their FEM corrections were in the range of 0.4 to 1.1 cm. The mesh refinement and FEM simulation were implemented in a single-threaded application which requires about 45 min of computation time on a 2.6 GHz computer. This study has demonstrated that the FEM can be integrated with intensity-based image registration algorithms to improve their registration accuracy, especially in low-contrast regions.
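
    The masking step described above is simple to render concretely: compute a per-voxel local standard deviation of intensity and threshold it. The window size and threshold in this sketch are illustrative assumptions, not the values used in the study.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def high_contrast_mask(image, window=5, threshold=30.0):
            # Mask voxels whose neighborhood intensity std exceeds a threshold;
            # local variance is computed as E[x^2] - E[x]^2 with box filters.
            img = image.astype(float)
            mean = uniform_filter(img, size=window)
            mean_sq = uniform_filter(img ** 2, size=window)
            local_std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
            return local_std > threshold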

  9. An empirical method to correct for temperature-dependent variations in the overlap function of CHM15k ceilometers

    NASA Astrophysics Data System (ADS)

    Hervo, Maxime; Poltera, Yann; Haefele, Alexander

    2016-07-01

    Imperfections in a lidar's overlap function lead to artefacts in the background, range and overlap-corrected lidar signals. These artefacts can erroneously be interpreted as an aerosol gradient or, in extreme cases, as a cloud base leading to false cloud detection. A correct specification of the overlap function is hence crucial in the use of automatic elastic lidars (ceilometers) for the detection of the planetary boundary layer or of low cloud. In this study, an algorithm is presented to correct such artefacts. It is based on the assumption of a homogeneous boundary layer and a correct specification of the overlap function down to a minimum range, which must be situated within the boundary layer. The strength of the algorithm lies in a sophisticated quality-check scheme which allows the reliable identification of favourable atmospheric conditions. The algorithm was applied to 2 years of data from a CHM15k ceilometer from the company Lufft. Backscatter signals corrected for background, range and overlap were compared using the overlap function provided by the manufacturer and the one corrected with the presented algorithm. Differences between corrected and uncorrected signals reached up to 45 % in the first 300 m above ground. The amplitude of the correction turned out to be temperature dependent and was larger for higher temperatures. A linear model of the correction as a function of the instrument's internal temperature was derived from the experimental data. Case studies and a statistical analysis of the strongest gradient derived from corrected signals reveal that the temperature model is capable of a high-quality correction of overlap artefacts, in particular those due to diurnal variations. The presented correction method has the potential to significantly improve the detection of the boundary layer with gradient-based methods because it removes false candidates and hence simplifies the attribution of the detected gradients to the planetary boundary layer. A particularly significant benefit can be expected for the detection of shallow stable layers typical of night-time situations. The algorithm is completely automatic and does not require any on-site intervention but requires the definition of an adequate instrument-specific configuration. It is therefore suited for use in large ceilometer networks.
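
    As a concrete rendering of the temperature model, the sketch below fits a per-range-gate linear dependence of the overlap correction on the instrument's internal temperature by least squares, then applies it to new data. The array shapes, variable names and the per-gate linear form are assumptions for illustration; the published model was derived from the quality-checked cases selected by the algorithm.

        import numpy as np

        def fit_temperature_model(corrections, temperatures):
            # corrections: (n_cases, n_gates) overlap-correction factors;
            # temperatures: (n_cases,) internal temperature per case.
            # Returns least-squares slope and intercept per range gate.
            T = np.column_stack([temperatures, np.ones_like(temperatures)])
            coef, *_ = np.linalg.lstsq(T, corrections, rcond=None)  # (2, n_gates)
            return coef[0], coef[1]

        def apply_temperature_model(slope, intercept, temperature):
            # Predicted correction per range gate at the given temperature.
            return slope * temperature + intercept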

  10. A single-stage flux-corrected transport algorithm for high-order finite-volume methods

    DOE PAGES

    Chaplin, Christopher; Colella, Phillip

    2017-05-08

    We present a new limiter method for solving the advection equation using a high-order, finite-volume discretization. The limiter is based on the flux-corrected transport algorithm. Here, we modify the classical algorithm by introducing a new computation for solution bounds at smooth extrema, as well as improving the preconstraint on the high-order fluxes. We compute the high-order fluxes via a method-of-lines approach with fourth-order Runge-Kutta as the time integrator. For computing low-order fluxes, we select the corner-transport upwind method due to its improved stability over donor-cell upwind. Several spatial differencing schemes are investigated for the high-order flux computation, including centered-difference and upwind schemes. We show that the upwind schemes perform well on account of the dissipation of high-wavenumber components. The new limiter method retains high-order accuracy for smooth solutions and accurately captures fronts in discontinuous solutions. Further, we need only apply the limiter once per complete time step.
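
    For orientation, the sketch below shows the classical one-dimensional FCT pattern that the paper builds on: a donor-cell low-order flux, a Lax-Wendroff-type high-order flux, and Boris-Book limiting of the antidiffusive flux so that no new extrema are created. It is a textbook-style illustration, not the paper's single-stage, fourth-order method with corner-transport upwind and modified extremum bounds.

        import numpy as np

        def fct_step(u, a, dx, dt):
            # One FCT step for u_t + a u_x = 0 on a periodic grid, assuming
            # a > 0 and Courant number c = a*dt/dx <= 1. Interface i+1/2 is
            # stored at index i.
            c = a * dt / dx
            f_low = a * u                                   # donor-cell flux
            f_high = (a * (u + np.roll(u, -1)) / 2
                      - a * c * (np.roll(u, -1) - u) / 2)   # Lax-Wendroff flux
            # Transported-diffused (low-order) solution.
            u_td = u - (dt / dx) * (f_low - np.roll(f_low, 1))
            ad = f_high - f_low                             # antidiffusive flux
            s = np.sign(ad)
            b1 = s * (np.roll(u_td, -2) - np.roll(u_td, -1)) * dx / dt
            b2 = s * (u_td - np.roll(u_td, 1)) * dx / dt
            ad_lim = s * np.maximum(0.0, np.minimum.reduce([np.abs(ad), b1, b2]))
            return u_td - (dt / dx) * (ad_lim - np.roll(ad_lim, 1))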

  11. Beam-induced motion correction for sub-megadalton cryo-EM particles.

    PubMed

    Scheres, Sjors Hw

    2014-08-13

    In electron cryo-microscopy (cryo-EM), the electron beam that is used for imaging also causes the sample to move. This motion blurs the images and limits the resolution attainable by single-particle analysis. In a previous Research article (Bai et al., 2013) we showed that correcting for this motion by processing movies from fast direct-electron detectors allowed structure determination to near-atomic resolution from 35,000 ribosome particles. In this Research advance article, we show that an improved movie processing algorithm is applicable to a much wider range of specimens. The new algorithm estimates straight movement tracks by considering multiple particles that are close to each other in the field of view, and models the fall-off of high-resolution information content by radiation damage in a dose-dependent manner. Application of the new algorithm to four data sets illustrates its potential for significantly improving cryo-EM structures, even for particles that are smaller than 200 kDa. Copyright © 2014, Scheres.
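
    As a schematic of the track-estimation idea, the sketch below pools the noisy per-frame positions of particles that are close together in the field of view and fits a straight line in each coordinate. The pooling by simple averaging and the independent least-squares fits are illustrative assumptions; the published algorithm estimates tracks jointly and additionally down-weights high-resolution information with a dose-dependent radiation-damage model.

        import numpy as np

        def straight_track(frame_times, positions):
            # positions: (n_particles, n_frames, 2) noisy x,y coordinates of
            # neighboring particles; frame_times: (n_frames,) exposure times.
            # Returns the fitted straight movement track, shape (n_frames, 2).
            mean_pos = positions.mean(axis=0)        # pool nearby particles
            track = np.empty_like(mean_pos)
            for k in range(2):                       # x and y independently
                slope, intercept = np.polyfit(frame_times, mean_pos[:, k], deg=1)
                track[:, k] = slope * frame_times + intercept
            return track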

  12. Single molecule sequencing-guided scaffolding and correction of draft assemblies.

    PubMed

    Zhu, Shenglong; Chen, Danny Z; Emrich, Scott J

    2017-12-06

    Although single molecule sequencing is still improving, the lengths of the generated sequences are an inherent advantage in genome assembly. Prior work that utilizes long reads for genome assembly has mostly focused on correcting sequencing errors and improving the contiguity of de novo assemblies. We propose a disassembling-reassembling approach for both correcting structural errors in the draft assembly and scaffolding a target assembly based on error-corrected single molecule sequences. To achieve this goal, we formulate a maximum alternating path cover problem. We prove that this problem is NP-hard, and solve it with a 2-approximation algorithm. Our experimental results show that our approach can improve the structural correctness of target assemblies at the cost of some contiguity, even with relatively small amounts of long-read data. In addition, our reassembling process can also serve as a competitive scaffolder relative to well-established assembly benchmarks.

  13. HST image restoration: A comparison of pre- and post-servicing mission results

    NASA Technical Reports Server (NTRS)

    Hanisch, R. J.; Mo, J.

    1992-01-01

    A variety of image restoration techniques (e.g., Wiener filter, Lucy-Richardson, MEM) have been applied quite successfully to the aberrated HST images. The HST servicing mission (scheduled for late 1993 or early 1994) will install a corrective optics system (COSTAR) for the Faint Object Camera and spectrographs and replace the Wide Field/Planetary Camera with a second generation instrument (WF/PC-II) having its own corrective elements. The image quality is expected to be improved substantially with these new instruments. What then is the role of image restoration for the HST in the long term? Through a series of numerical experiments using model point-spread functions for both aberrated and unaberrated optics, we find that substantial improvements in image resolution can be obtained for post-servicing mission data using the same or similar algorithms as being employed now to correct aberrated images. Included in our investigations are studies of the photometric integrity of the restoration algorithms and explicit models for HST pointing errors (spacecraft jitter).
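
    As an illustration of one of the restoration techniques named above, the sketch below implements the basic Richardson-Lucy iteration. The flat initial estimate, iteration count and small-value guard are illustrative choices; restoration of real HST frames would use instrument- and field-dependent PSF models.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
            # Multiplicative update: estimate *= (image / (estimate * psf)) * psf_mirror
            psf = psf / psf.sum()
            psf_mirror = psf[::-1, ::-1]
            estimate = np.full_like(image, image.mean(), dtype=float)
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = image / np.maximum(blurred, eps)
                estimate *= fftconvolve(ratio, psf_mirror, mode="same")
            return estimate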

  14. TU-H-206-04: An Effective Homomorphic Unsharp Mask Filtering Method to Correct Intensity Inhomogeneity in Daily Treatment MR Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, D; Gach, H; Li, H

    Purpose: The daily treatment MRIs acquired on MR-IGRT systems, like diagnostic MRIs, suffer from intensity inhomogeneity associated with B1 and B0 inhomogeneities. An improved homomorphic unsharp mask (HUM) filtering method, automatic and robust body segmentation, and imaging field-of-view (FOV) detection methods were developed to compute the multiplicative, slowly varying correction field and correct the intensity inhomogeneity. The goal is to improve and normalize the voxel intensity so that the images can be processed more accurately by quantitative methods (e.g., segmentation and registration) that require consistent image voxel intensity values. Methods: HUM methods have been widely used for years. A body mask is required; otherwise the body surface in the corrected image would be incorrectly bright due to the sudden intensity transition at the body surface. In this study, we developed an improved HUM-based correction method that includes three main components: 1) robust body segmentation on the normalized image gradient map, 2) robust FOV detection (needed for body segmentation) using region growing and morphologic filters, and 3) an effective implementation of HUM using repeated Gaussian convolution. Results: The proposed method was successfully tested on patient images of common anatomical sites (H/N, lung, abdomen and pelvis). Initial qualitative comparisons showed that this improved HUM method outperformed three recently published algorithms (FCM, LEMS, MICO) in both computation speed (by 50+ times) and robustness (in intermediate to severe inhomogeneity situations). Currently implemented in MATLAB, it takes 20 to 25 seconds to process a 3D MRI volume. Conclusion: Compared to more sophisticated MRI inhomogeneity correction algorithms, the improved HUM method is simple and effective. The inhomogeneity correction, body mask, and FOV detection methods developed in this study would be useful as preprocessing tools for many MRI-related research and clinical applications in radiotherapy. Authors have received research grants from ViewRay and Varian.
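
    A minimal sketch of the core HUM step, assuming a body mask has already been computed: the bias field is estimated by mask-aware (normalized) repeated Gaussian smoothing, which avoids the spurious brightening at the body surface mentioned above. The sigma, the number of passes, and the normalization to the in-mask mean are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def hum_correct(image, body_mask, sigma=25.0, n_passes=3, eps=1e-6):
            img = image.astype(float)
            masked = np.where(body_mask, img, 0.0)
            weight = body_mask.astype(float)
            for _ in range(n_passes):               # repeated Gaussian convolution
                masked = gaussian_filter(masked, sigma)
                weight = gaussian_filter(weight, sigma)
            # Normalized convolution: smooth estimate of the bias inside the mask.
            bias = masked / np.maximum(weight, eps)
            mean_in_mask = img[body_mask].mean()
            return np.where(body_mask,
                            img * mean_in_mask / np.maximum(bias, eps),
                            img)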

  15. Wavefront sensorless adaptive optics OCT with the DONE algorithm for in vivo human retinal imaging [Invited]

    PubMed Central

    Verstraete, Hans R. G. W.; Heisler, Morgan; Ju, Myeong Jin; Wahl, Daniel; Bliek, Laurens; Kalkman, Jeroen; Bonora, Stefano; Jian, Yifan; Verhaegen, Michel; Sarunic, Marinko V.

    2017-01-01

    In this report, which is an international collaboration of OCT, adaptive optics, and control research, we demonstrate the Data-based Online Nonlinear Extremum-seeker (DONE) algorithm to guide the image based optimization for wavefront sensorless adaptive optics (WFSL-AO) OCT for in vivo human retinal imaging. The ocular aberrations were corrected using a multi-actuator adaptive lens after linearization of the hysteresis in the piezoelectric actuators. The DONE algorithm succeeded in drastically improving image quality and the OCT signal intensity, up to a factor seven, while achieving a computational time of 1 ms per iteration, making it applicable for many high speed applications. We demonstrate the correction of five aberrations using 70 iterations of the DONE algorithm performed over 2.8 s of continuous volumetric OCT acquisition. Data acquired from an imaging phantom and in vivo from human research volunteers are presented. PMID:28736670

  16. Wavefront sensorless adaptive optics OCT with the DONE algorithm for in vivo human retinal imaging [Invited].

    PubMed

    Verstraete, Hans R G W; Heisler, Morgan; Ju, Myeong Jin; Wahl, Daniel; Bliek, Laurens; Kalkman, Jeroen; Bonora, Stefano; Jian, Yifan; Verhaegen, Michel; Sarunic, Marinko V

    2017-04-01

    In this report, which is an international collaboration of OCT, adaptive optics, and control research, we demonstrate the Data-based Online Nonlinear Extremum-seeker (DONE) algorithm to guide the image based optimization for wavefront sensorless adaptive optics (WFSL-AO) OCT for in vivo human retinal imaging. The ocular aberrations were corrected using a multi-actuator adaptive lens after linearization of the hysteresis in the piezoelectric actuators. The DONE algorithm succeeded in drastically improving image quality and the OCT signal intensity, up to a factor seven, while achieving a computational time of 1 ms per iteration, making it applicable for many high speed applications. We demonstrate the correction of five aberrations using 70 iterations of the DONE algorithm performed over 2.8 s of continuous volumetric OCT acquisition. Data acquired from an imaging phantom and in vivo from human research volunteers are presented.

  17. Improved chemical identification from sensor arrays using intelligent algorithms

    NASA Astrophysics Data System (ADS)

    Roppel, Thaddeus A.; Wilson, Denise M.

    2001-02-01

    Intelligent signal processing algorithms are shown to improve identification rates significantly in chemical sensor arrays. This paper focuses on the use of independently derived sensor status information to modify the processing of sensor array data by using a fast, easily implemented "best-match" approach to filling in missing sensor data. Most fault conditions of interest (e.g., stuck high, stuck low, sudden jumps, excess noise) can be detected relatively simply by adjunct data processing, or by on-board circuitry. The objective then is to devise, implement, and test methods for using this information to improve identification rates in the presence of faulted sensors. In one typical example studied, utilizing separately derived, a priori knowledge about the health of the sensors in the array improved the chemical identification rate of an artificial neural network from below 10 percent correct to over 99 percent correct. While this study focuses experimentally on chemical sensor arrays, the results are readily extensible to other types of sensor platforms.
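
    A minimal sketch of the "best-match" fill-in described above: find the fault-free library pattern closest to the live reading on the healthy channels, and substitute its values on the faulted channels before classification. The Euclidean metric and all names are illustrative assumptions.

        import numpy as np

        def best_match_fill(reading, healthy, library):
            # reading: (n_sensors,) live reading with unreliable entries;
            # healthy: boolean mask of working sensors (from fault detection);
            # library: (n_patterns, n_sensors) fault-free reference patterns.
            d = np.linalg.norm(library[:, healthy] - reading[healthy], axis=1)
            best = library[np.argmin(d)]
            filled = reading.copy()
            filled[~healthy] = best[~healthy]   # replace only faulted channels
            return filled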

  18. The Effectiveness of Neurofeedback Training in Algorithmic Thinking Skills Enhancement.

    PubMed

    Plerou, Antonia; Vlamos, Panayiotis; Triantafillidis, Chris

    2017-01-01

    Although research on learning difficulties is overall at an advanced stage, studies related to algorithmic thinking difficulties are limited, since interest in this field has been raised only recently. In this paper, an interactive evaluation screener enhanced with neurofeedback elements, referring to the evaluation of algorithmic task solving, is proposed. The effect of HCI, color, narration and neurofeedback elements was evaluated in the case of algorithmic task assessment. Results suggest enhanced performance of the neurofeedback-trained group in terms of totally correct and optimal algorithmic task solutions. Furthermore, the findings suggest that skills concerning the way an algorithm is conceived, designed, applied and evaluated are essentially improved.

  19. Testing for a slope-based decoupling algorithm in a woofer-tweeter adaptive optics system.

    PubMed

    Cheng, Tao; Liu, WenJin; Yang, KangJian; He, Xin; Yang, Ping; Xu, Bing

    2018-05-01

    It is well known that using two or more deformable mirrors (DMs) can improve the compensation ability of an adaptive optics (AO) system. However, to maintain the stability of an AO system, the correlation between the multiple DMs must be suppressed during the correction. In this paper, we propose a slope-based decoupling algorithm to simultaneously control multiple DMs. In order to examine the validity and practicality of this algorithm, a typical woofer-tweeter (W-T) AO system was set up. For the W-T system, a theoretical model was simulated, and the results indicated in theory that the presented algorithm can selectively make the woofer and tweeter correct different spatial-frequency aberrations and suppress the cross coupling between the dual DMs. At the same time, the experimental results for the W-T AO system were consistent with the results of the simulation, which demonstrated in practice that this algorithm is practical for an AO system with dual DMs.

  20. Underwater terrain-aided navigation system based on combination matching algorithm.

    PubMed

    Li, Peijuan; Sheng, Guoliang; Zhang, Xiaofei; Wu, Jingqiu; Xu, Baochun; Liu, Xing; Zhang, Yao

    2018-07-01

    Considering that a terrain-aided navigation (TAN) system based on the iterated closest contour point (ICCP) algorithm diverges easily when the error of the indicative track from the strapdown inertial navigation system (SINS) is large, a Kalman filter is incorporated into the traditional ICCP algorithm: the difference between the matching result and the SINS output is used as the measurement of the Kalman filter, the cumulative error of the SINS is corrected in time by filter feedback, and the indicative track used in ICCP is thereby improved. The mathematical model of the autonomous underwater vehicle (AUV) integrated navigation system and the observation model of TAN are built. A proper number of matching points is designated by comparing simulation results for matching time and matching precision. Simulation experiments are carried out according to the ICCP algorithm and the mathematical model. It can be concluded from the simulation experiments that navigation accuracy and stability are improved with the proposed combinational algorithm when a proper number of matching points is engaged. The integrated navigation system is effective in prohibiting the divergence of the indicative track and can meet the underwater, long-term, high-precision requirements of a navigation system for autonomous underwater vehicles. Copyright © 2017. Published by Elsevier Ltd.
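
    The feedback idea can be sketched compactly: treat the ICCP matching result minus the SINS-indicated position as a measurement of the SINS position error, estimate that error with a Kalman filter, and feed the estimate back to correct the indicated track. The 2-state random-walk error model and the noise levels below are assumptions for illustration, not the paper's full SINS error model.

        import numpy as np

        def kalman_feedback(sins_track, iccp_matches, q=1e-3, r=25.0):
            # sins_track, iccp_matches: (n, 2) east/north positions.
            # Returns the feedback-corrected indicative track.
            x = np.zeros(2)                # estimated SINS position error
            P = np.eye(2) * 100.0
            Q, R = np.eye(2) * q, np.eye(2) * r
            corrected = np.empty_like(sins_track, dtype=float)
            for k, (s, m) in enumerate(zip(sins_track, iccp_matches)):
                P = P + Q                              # predict (random-walk error)
                z = m - s                              # measurement: match - SINS
                K = P @ np.linalg.inv(P + R)           # Kalman gain (H = I)
                x = x + K @ (z - x)
                P = (np.eye(2) - K) @ P
                corrected[k] = s + x                   # feedback correction
            return corrected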

  1. An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT

    NASA Astrophysics Data System (ADS)

    Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.

    2009-06-01

    A new method is proposed for scatter correction of cone-beam CT images. A coarse reconstruction is used in initial iteration steps. Modelling of the x-ray tube spectra and the detector response is included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code: BEAMnrc/EGSnrc (NRCC, Canada). Agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved, as was CT-number accuracy in reconstructions. This latter improvement was less marked where the expected CT number of a material was very different from that of the background material in which it was embedded.
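
    Once the Monte Carlo scatter has been scored at the fixed node points, the per-projection bookkeeping reduces to a simple interpolation; a minimal sketch, with node placement and scoring assumed to be supplied by the Monte Carlo stage:

        import numpy as np

        def scatter_profile(node_u, node_scatter, detector_u):
            # node_u: sorted node positions (detector coordinates);
            # node_scatter: Monte Carlo scatter scored at those nodes;
            # detector_u: all detector pixel positions.
            return np.interp(detector_u, node_u, node_scatter)

        # Scatter-corrected projection (conceptually):
        # corrected = raw_projection - scatter_profile(node_u, node_scatter, pixels)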

  2. A Comprehensive Two-Dimensional Retention Time Alignment Algorithm To Enhance Chemometric Analysis of Comprehensive Two-Dimensional Separation Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierce, Karisa M.; Wood, Lianna F.; Wright, Bob W.

    2005-12-01

    A comprehensive two-dimensional (2D) retention time alignment algorithm was developed using a novel indexing scheme. The algorithm is termed comprehensive because it functions to correct the entire chromatogram in both dimensions and it preserves the separation information in both dimensions. Although the algorithm is demonstrated by correcting comprehensive two-dimensional gas chromatography (GC x GC) data, the algorithm is designed to correct shifting in all forms of 2D separations, such as LC x LC, LC x CE, CE x CE, and LC x GC. This 2D alignment algorithm was applied to three different data sets composed of replicate GC x GC separations of (1) three 22-component control mixtures, (2) three gasoline samples, and (3) three diesel samples. The three data sets were collected using slightly different temperature or pressure programs to engender significant retention time shifting in the raw data and then demonstrate subsequent corrections of that shifting upon comprehensive 2D alignment of the data sets. Thirty 12-min GC x GC separations from three 22-component control mixtures were used to evaluate the 2D alignment performance (10 runs/mixture). The average standard deviation of the first column retention time improved 5-fold from 0.020 min (before alignment) to 0.004 min (after alignment). Concurrently, the average standard deviation of second column retention time improved 4-fold from 3.5 ms (before alignment) to 0.8 ms (after alignment). Alignment of the 30 control mixture chromatograms took 20 min. The quantitative integrity of the GC x GC data following 2D alignment was also investigated. The mean integrated signal was determined for all components in the three 22-component mixtures for all 30 replicates. The average percent difference in the integrated signal for each component before and after alignment was 2.6%. Singular value decomposition (SVD) was applied to the 22-component control mixture data before and after alignment to show the restoration of trilinearity to the data, since trilinearity benefits chemometric analysis. By applying comprehensive 2D retention time alignment to all three data sets (control mixtures, gasoline samples, and diesel samples), classification by principal component analysis (PCA) substantially improved, resulting in 100% accurate scores clustering.

  3. Algorithms for selecting informative marker panels for population assignment.

    PubMed

    Rosenberg, Noah A

    2005-11-01

    Given a set of potential source populations, genotypes of an individual of unknown origin at a collection of markers can be used to predict the correct source population of the individual. For improved efficiency, informative markers can be chosen from a larger set of markers to maximize the accuracy of this prediction. However, selecting the loci that are individually most informative does not necessarily produce the optimal panel. Here, using genotypes from eight species--carp, cat, chicken, dog, fly, grayling, human, and maize--this univariate accumulation procedure is compared to new multivariate "greedy" and "maximin" algorithms for choosing marker panels. The procedures generally suggest similar panels, although the greedy method often recommends inclusion of loci that are not chosen by the other algorithms. In seven of the eight species, when applied to five or more markers, all methods achieve at least 94% assignment accuracy on simulated individuals, with one species--dog--producing this level of accuracy with only three markers, and the eighth species--human--requiring approximately 13-16 markers. The new algorithms produce substantial improvements over use of randomly selected markers; where differences among the methods are noticeable, the greedy algorithm leads to slightly higher probabilities of correct assignment. Although none of the approaches necessarily chooses the panel with optimal performance, the algorithms all likely select panels with performance near enough to the maximum that they all are suitable for practical use.
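
    A minimal sketch of the greedy strategy described above: grow the panel one marker at a time, always adding the marker whose inclusion most increases assignment accuracy. The accuracy function (e.g., the fraction of simulated individuals assigned to their correct source population) is assumed to be supplied by the caller.

        def greedy_panel(markers, accuracy, panel_size):
            # markers: iterable of candidate marker ids;
            # accuracy(panel) -> float, estimated assignment accuracy of a panel.
            panel = []
            remaining = list(markers)
            while remaining and len(panel) < panel_size:
                best = max(remaining, key=lambda m: accuracy(panel + [m]))
                panel.append(best)
                remaining.remove(best)
            return panel

    As the abstract notes, such greedy growth is not guaranteed to find the globally optimal panel, but in practice it lands close enough to the maximum to be usable.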

  4. NOAA-NASA Coastal Zone Color Scanner reanalysis effort.

    PubMed

    Gregg, Watson W; Conkright, Margarita E; O'Reilly, John E; Patt, Frederick S; Wang, Menghua H; Yoder, James A; Casey, Nancy W

    2002-03-20

    Satellite observations of global ocean chlorophyll span more than two decades. However, incompatibilities between processing algorithms prevent us from quantifying natural variability. We applied a comprehensive reanalysis to the Coastal Zone Color Scanner (CZCS) archive, called the National Oceanic and Atmospheric Administration and National Aeronautics and Space Administration (NOAA-NASA) CZCS reanalysis (NCR) effort. NCR consisted of (1) algorithm improvement (AI), where CZCS processing algorithms were improved with modernized atmospheric correction and bio-optical algorithms, and (2) blending, where in situ data were incorporated into the CZCS AI to minimize residual errors. Global spatial and seasonal patterns of NCR chlorophyll indicated remarkable correspondence with modern sensors, suggesting compatibility. The NCR permits quantitative analyses of interannual and interdecadal trends in global ocean chlorophyll.

  5. SU-F-J-198: A Cross-Platform Adaptation of An a Priori Scatter Correction Algorithm for Cone-Beam Projections to Enable Image- and Dose-Guided Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, A; Casares-Magaz, O; Elstroem, U

    Purpose: Cone-beam CT (CBCT) imaging may enable image- and dose-guided proton therapy, but is challenged by image artefacts. The aim of this study was to demonstrate the general applicability of a previously developed a priori scatter correction algorithm to allow CBCT-based proton dose calculations. Methods: The a priori scatter correction algorithm used a plan CT (pCT) and raw cone-beam projections acquired with the Varian On-Board Imager. The projections were initially corrected for bow-tie filtering and beam hardening and subsequently reconstructed using the Feldkamp-Davis-Kress algorithm (rawCBCT). The rawCBCTs were intensity normalised before a rigid and deformable registration were applied on the pCTs to the rawCBCTs. The resulting images were forward projected onto the same angles as the raw CB projections. The two projections were subtracted from each other, Gaussian and median filtered, and then subtracted from the raw projections and finally reconstructed to the scatter-corrected CBCTs. For evaluation, water equivalent path length (WEPL) maps (from anterior to posterior) were calculated on different reconstructions of three data sets (CB projections and pCT) of three parts of an Alderson phantom. Finally, single beam spot scanning proton plans (0–360 deg gantry angle in steps of 5 deg; using PyTRiP) treating a 5 cm central spherical target in the pCT were re-calculated on scatter-corrected CBCTs with identical targets. Results: The scatter-corrected CBCTs resulted in sub-mm mean WEPL differences relative to the rigid registration of the pCT for all three data sets. These differences were considerably smaller than what was achieved with the regular Varian CBCT reconstruction algorithm (1–9 mm mean WEPL differences). Target coverage in the re-calculated plans was generally improved using the scatter-corrected CBCTs compared to the Varian CBCT reconstruction. Conclusion: We have demonstrated the general applicability of a priori CBCT scatter correction, potentially opening for CBCT-based image/dose-guided proton therapy, including adaptive strategies. Research agreement with Varian Medical Systems, not connected to the present project.

  6. Voidage correction algorithm for unresolved Euler-Lagrange simulations

    NASA Astrophysics Data System (ADS)

    Askarishahi, Maryam; Salehi, Mohammad-Sadegh; Radl, Stefan

    2018-04-01

    The effect of grid coarsening on the predicted total drag force and heat exchange rate in dense gas-particle flows is investigated using the Euler-Lagrange (EL) approach. We demonstrate that grid coarsening may reduce the predicted total drag force and exchange rate. Surprisingly, exchange coefficients predicted by the EL approach deviate more significantly from the exact value than the results of Euler-Euler (EE)-based calculations. The voidage gradient is identified as the root cause of this peculiar behavior. Consequently, we propose a correction algorithm based on a sigmoidal function to predict the voidage experienced by individual particles. Our correction algorithm can significantly improve the prediction of exchange coefficients in EL models, which is tested for simulations involving Euler grid cell sizes between 2d_p and 12d_p. It is most relevant in simulations of dense polydisperse particle suspensions featuring steep voidage profiles. For these suspensions, classical approaches may result in an error of the total exchange rate of up to 30%.
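
    Since the paper's calibrated correlation is not reproduced in the abstract, the sketch below only illustrates the general shape of such a correction: a sigmoidal blend between the particle's cell voidage and a smoothed neighborhood voidage, switched on by the local voidage gradient. The functional form, the scaling by particle diameter, and the constant k are assumptions, not the published model.

        import numpy as np

        def corrected_voidage(eps_cell, eps_smooth, grad_eps, d_p, k=10.0):
            # eps_cell: voidage of the particle's Euler cell;
            # eps_smooth: smoothed neighborhood voidage;
            # grad_eps: |grad(eps)| at the particle; d_p: particle diameter.
            w = 1.0 / (1.0 + np.exp(-k * grad_eps * d_p))   # sigmoid weight
            return (1.0 - w) * eps_cell + w * eps_smooth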

  7. An improved finger-vein recognition algorithm based on template matching

    NASA Astrophysics Data System (ADS)

    Liu, Yueyue; Di, Si; Jin, Jian; Huang, Daoping

    2016-10-01

    Finger-vein recognition has become one of the most popular biometric identification methods, and investigation of recognition algorithms has always been a key point in this field. So far, many applicable algorithms have been developed. However, some problems remain in practice; for example, variance in finger position may lead to image distortion and shifting, and matching parameters determined empirically during the identification process may also reduce the adaptability of an algorithm. Focusing on the above problems, this paper proposes an improved finger-vein recognition algorithm based on template matching. In order to enhance the robustness of the algorithm to image distortion, the least-squares error method is adopted to correct for an oblique finger. During feature extraction, a local adaptive threshold method is adopted. As regards matching scores, we optimized the translation preferences as well as the matching distance between the input images and registered images on the basis of the Naoto Miura algorithm. Experimental results indicate that the proposed method effectively improves robustness under finger shifting and rotation conditions.

  8. Fast transform decoding of nonsystematic Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Cheung, K.-M.; Reed, I. S.; Shiozaki, A.

    1989-01-01

    A Reed-Solomon (RS) code is considered to be a special case of a redundant residue polynomial (RRP) code, and a fast transform decoding algorithm to correct both errors and erasures is presented. This decoding scheme is an improvement of the decoding algorithm for the RRP code suggested by Shiozaki and Nishida, and can be realized readily on very large scale integration chips.

  9. Station Correction Uncertainty in Multiple Event Location Algorithms and the Effect on Error Ellipses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne

    Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
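
    A minimal sketch of the weighting idea, assuming per-arrival standard deviations for the three error terms are available: each arrival's weight in the location inversion is the inverse of its total variance, so stations with consistent (low-variance) corrections get more influence.

        import numpy as np

        def arrival_weights(sigma_meas, sigma_model, sigma_corr):
            # Each argument: (n_arrivals,) standard deviations for measurement
            # error, model error, and station-correction uncertainty.
            total_var = sigma_meas**2 + sigma_model**2 + sigma_corr**2
            return 1.0 / total_var

        # These weights would then scale the rows of the travel-time residual
        # system solved in the (multiple event) location inversion.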

  10. A stable second order method for training back propagation networks

    NASA Technical Reports Server (NTRS)

    Nachtsheim, Philip R.

    1993-01-01

    A simple method for improving the learning rate of the back-propagation algorithm is described. The basis of the method is that approximate second order corrections can be incorporated in the output units. The extended method leads to significant improvements in the convergence rate.

  11. Histogram-driven cupping correction (HDCC) in CT

    NASA Astrophysics Data System (ADS)

    Kyriakou, Y.; Meyer, M.; Lapp, R.; Kalender, W. A.

    2010-04-01

    Typical cupping correction methods are pre-processing methods which require either pre-calibration measurements or simulations of standard objects to approximate and correct for beam hardening and scatter. Some of them require knowledge of the spectra, detector characteristics, etc. The aim of this work was to develop a practical histogram-driven cupping correction (HDCC) method to post-process the reconstructed images. We use a polynomial representation of the raw data generated by forward projection of the reconstructed images; forward and backprojection are performed on graphics processing units (GPU). The coefficients of the polynomial are optimized using a simplex minimization of the joint entropy of the CT image and its gradient. The algorithm was evaluated using simulations and measurements of homogeneous and inhomogeneous phantoms. For the measurements, a C-arm flat-detector CT (FD-CT) system with a 30×40 cm2 detector, a kilovoltage on-board imager (radiation therapy simulator) and a micro-CT system were used. The algorithm reduced cupping artifacts both in simulations and measurements using a fourth-order polynomial and was in good agreement with the reference. The minimization algorithm required fewer than 70 iterations to adjust the coefficients, performing only a linear combination of basis images and thus executing without time-consuming operations. HDCC reduced cupping artifacts without the necessity of pre-calibration or other scan information, enabling a retrospective improvement of CT image homogeneity. However, the method can work with other cupping correction algorithms or in a calibration manner as well.
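
    A minimal sketch of the optimization loop, assuming the basis images (reconstructions of successive powers of the forward-projected raw data) have been precomputed so that each cost evaluation is only a linear combination; the histogram settings and the Sobel gradient are illustrative choices, not the published implementation.

        import numpy as np
        from scipy.ndimage import sobel
        from scipy.optimize import minimize

        def joint_entropy(img, bins=64):
            # Joint entropy of the image and its gradient magnitude.
            grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
            h, _, _ = np.histogram2d(img.ravel(), grad.ravel(), bins=bins)
            p = h[h > 0] / h.sum()
            return -np.sum(p * np.log(p))

        def hdcc(basis_images, c0):
            # basis_images: list of reconstructions of p, p^2, ..., p^n;
            # c0: initial polynomial coefficients (e.g., [1, 0, 0, 0]).
            def cost(c):
                img = sum(ci * bi for ci, bi in zip(c, basis_images))
                return joint_entropy(img)
            return minimize(cost, c0, method="Nelder-Mead").x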

  12. Communication: An efficient and accurate perturbative correction to initiator full configuration interaction quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Blunt, Nick S.

    2018-06-01

    We present a perturbative correction within initiator full configuration interaction quantum Monte Carlo (i-FCIQMC). In the existing i-FCIQMC algorithm, a significant number of spawned walkers are discarded due to the initiator criteria. Here we show that these discarded walkers have a form that allows the calculation of a second-order Epstein-Nesbet correction, which may be accumulated in a trivial and inexpensive manner, yet substantially improves i-FCIQMC results. The correction is applied to the Hubbard model and the uniform electron gas and molecular systems.
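
    Schematically, a second-order Epstein-Nesbet correction accumulated from the spawns rejected by the initiator criteria can be written as (a generic rendering of the standard EN2 expression, not necessarily the paper's exact working equation):

        E^{(2)} = \sum_{a} \frac{\left| \langle D_a | \hat{H} | \Psi \rangle \right|^2}{E - \langle D_a | \hat{H} | D_a \rangle},

    where the sum runs over determinants $D_a$ reached only by discarded spawns, $\Psi$ is the current stochastic wavefunction, and $E$ is the current energy estimate. Each discarded spawning event contributes one term to the numerator, which is why the correction can be accumulated in a trivial and inexpensive manner during the existing i-FCIQMC dynamics.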

  13. Brain extraction in partial volumes T2*@7T by using a quasi-anatomic segmentation with bias field correction.

    PubMed

    Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S

    2018-02-01

    Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences for several types of post-extraction processing, such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on T1- and T2-weighted images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly on T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes and is entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring correct initialization by the user and knowledge of the software; these methods cannot deal with partial volumes and/or need information from an atlas, which is not available for T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly deteriorates voxel intensity structures, making segmentation tasks difficult. The proposed method can overcome all these difficulties, reaching good results for brain extraction using only T2*FLASH@7T volumes. The development of this work will lead to an improvement of automatic brain lesion segmentation in T2*FLASH@7T volumes, which becomes more important when lesions such as those of cortical Multiple Sclerosis need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Clinical evaluation of the iterative metal artifact reduction algorithm for CT simulation in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Axente, Marian; Von Eyben, Rie; Hristov, Dimitre, E-mail: dimitre.hristov@stanford.edu

    2015-03-15

    Purpose: To clinically evaluate an iterative metal artifact reduction (IMAR) algorithm prototype in the radiation oncology clinic setting by testing for accuracy in CT number retrieval, relative dosimetric changes in regions affected by artifacts, and improvements in anatomical and shape conspicuity of corrected images. Methods: A phantom with known material inserts was scanned in the presence/absence of metal with different configurations of placement and sizes. The relative change in CT numbers from the reference data (CT with no metal) was analyzed. The CT studies were also used for dosimetric tests in which dose distributions from both photon and proton beams were calculated. Dose differences and gamma analysis were calculated to quantify the relative changes between doses calculated on the different CT studies. Data from eight patients (all different treatment sites) were also used to quantify the differences between dose distributions before and after correction with IMAR, with no reference standard. A ranking experiment was also conducted to analyze the relative confidence of physicians delineating anatomy in the near vicinity of the metal implants. Results: IMAR-corrected images proved to accurately retrieve CT numbers in the phantom study, independent of metal insert configuration, size of the metal, and acquisition energy. For plastic water, the mean difference between corrected images and reference images was −1.3 HU across all scenarios (N = 37) with a 90% confidence interval of [−2.4, −0.2] HU. While deviations were relatively higher in images with more metal content, IMAR was able to effectively correct the CT numbers independent of the quantity of metal. Residual errors in the CT numbers, as well as some induced by the correction algorithm, were found in the IMAR-corrected images. However, the dose distributions calculated on IMAR-corrected images were closer to the reference data in phantom studies. Relative spatial differences in the dose distributions in the regions affected by the metal artifacts were also observed in patient data. However, in the absence of a reference ground truth (a CT set without metal inserts), these differences should not be interpreted as improvement/deterioration of the accuracy of the calculated dose. With the limited data presented, it was observed that proton dosimetry was affected more than photon dosimetry, as expected. Physicians were significantly more confident contouring anatomy in the regions affected by artifacts. While site-specific preferences were detected, all indicated that they would consistently use IMAR-corrected images. Conclusions: The IMAR correction algorithm could be readily implemented in an existing clinical workflow upon commercial release. While residual errors still exist in IMAR-corrected images, these images present better overall conspicuity of the patient/phantom geometry and offer more accurate CT numbers for improved local dosimetry. The variety of different scenarios included herein attests to the utility of the evaluated IMAR algorithm for a wide range of radiotherapy clinical scenarios.

  15. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    DOE PAGES

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; ...

    2016-06-10

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. As a result, this is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.

  16. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    NASA Astrophysics Data System (ADS)

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; Holmes, Aimee; Luke, Edward

    2016-06-01

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.
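
    A minimal sketch of a Gaussian naive Bayes phase classifier over radar/lidar features (e.g., reflectivity, higher Doppler-spectrum moments, lidar depolarization), returning per-class probabilities rather than a hard label so that the classification carries its own uncertainty. The feature choice and the independent-Gaussian likelihoods are illustrative assumptions, not the published classifier.

        import numpy as np

        def fit_nb(X, y, classes):
            # X: (n, d) features from training data of known phase; y: labels.
            # Per class: feature means, variances, and prior probability.
            return {c: (X[y == c].mean(axis=0),
                        X[y == c].var(axis=0) + 1e-6,
                        np.mean(y == c)) for c in classes}

        def phase_probabilities(model, x):
            # Posterior probability of each phase class for one observation x.
            logp = {}
            for c, (mu, var, prior) in model.items():
                ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
                logp[c] = np.log(prior) + ll
            m = max(logp.values())
            w = {c: np.exp(v - m) for c, v in logp.items()}
            z = sum(w.values())
            return {c: v / z for c, v in w.items()}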

  17. Experimental testing of four correction algorithms for the forward scattering spectrometer probe

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.

    1992-01-01

    Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by the Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that compared best to the PDPA and NDMI data was that developed by Baumgardner, Strapp, and Dye (1985). The algorithm developed by Lock and Hovenac (1989) for correcting sizing errors in the FSSP was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.

  18. Evaluation of MERIS products from Baltic Sea coastal waters rich in CDOM

    NASA Astrophysics Data System (ADS)

    Beltrán-Abaunza, J. M.; Kratzer, S.; Brockmann, C.

    2013-11-01

    In this study, retrievals of the medium resolution imaging spectrometer (MERIS) reflectances and water quality products using 4 different freely available coastal processing algorithms are assessed by comparison against sea-truthing data. The study is based on a pair-wise comparison using processor-dependent quality flags for the retrieval of valid common macro-pixels. This assessment is required in order to ensure the reliability of monitoring systems based on MERIS data, such as the Swedish coastal and lake monitoring system (http://vattenkvalitet.se). The results show that pre-processing with the Improved Contrast between Ocean and Land (ICOL) processor, correcting for adjacency effects, improves the retrieval of spectral reflectance for all processors. Therefore, it is recommended that the ICOL processor should be applied when Baltic coastal waters are investigated. Chlorophyll was retrieved best using the FUB (Free University of Berlin) processing algorithm, although overestimations in the range 18-26.5%, dependent on the compared pairs, were obtained. At low chlorophyll concentrations (< 2.5 mg m-3), random errors dominated in the retrievals with the MEGS (MERIS ground segment processor) processor. The lowest bias and random errors were obtained with MEGS for suspended particulate matter, for which overestimations in the range of 8-16% were found. Only the FUB-retrieved CDOM (Coloured Dissolved Organic Matter) correlates with in situ values. However, a large systematic underestimation appears in the estimates, which may nevertheless be corrected for by using a local correction factor. MEGS has the potential to be used as an operational processing algorithm for the Himmerfjärden bay and adjacent areas, but it requires further improvement of the atmospheric correction for the blue bands and better definition at relatively low chlorophyll concentrations in the presence of high CDOM attenuation.

  19. Evaluation of MERIS products from Baltic Sea coastal waters rich in CDOM

    NASA Astrophysics Data System (ADS)

    Beltrán-Abaunza, J. M.; Kratzer, S.; Brockmann, C.

    2014-05-01

    In this study, retrievals of the medium resolution imaging spectrometer (MERIS) reflectances and water quality products using four different freely available coastal processing algorithms are assessed by comparison against sea-truthing data. The study is based on a pair-wise comparison using processor-dependent quality flags for the retrieval of valid common macro-pixels. This assessment is required in order to ensure the reliability of monitoring systems based on MERIS data, such as the Swedish coastal and lake monitoring system (http://vattenkvalitet.se). The results show that the pre-processing with the Improved Contrast between Ocean and Land (ICOL) processor, correcting for adjacency effects, improves the retrieval of spectral reflectance for all processors. Therefore, it is recommended that the ICOL processor should be applied when Baltic coastal waters are investigated. Chlorophyll was retrieved best using the FUB (Free University of Berlin) processing algorithm, although overestimations in the range 18-26.5%, dependent on the compared pairs, were obtained. At low chlorophyll concentrations (< 2.5 mg m-3), data dispersion dominated in the retrievals with the MEGS (MERIS ground segment processor) processor. The lowest bias and data dispersion were obtained with MEGS for suspended particulate matter, for which overestimations in the range of 8-16% were found. Only the FUB-retrieved CDOM (coloured dissolved organic matter) correlates with in situ values. However, a large systematic underestimation appears in the estimates, which may nevertheless be corrected for by using a local correction factor. The MEGS has the potential to be used as an operational processing algorithm for the Himmerfjärden bay and adjacent areas, but it requires further improvement of the atmospheric correction for the blue bands and better definition at relatively low chlorophyll concentrations in the presence of high CDOM attenuation.

  20. Analytical-Based Partial Volume Recovery in Mouse Heart Imaging

    NASA Astrophysics Data System (ADS)

    Dumouchel, Tyler; deKemp, Robert A.

    2011-02-01

    Positron emission tomography (PET) is a powerful imaging modality that has the ability to yield quantitative images of tracer activity. Physical phenomena such as photon scatter, photon attenuation, random coincidences and spatial resolution limit quantification potential and must be corrected to preserve the accuracy of reconstructed images. This study focuses on correcting the partial volume effects that arise in mouse heart imaging when resolution is insufficient to resolve the true tracer distribution in the myocardium. The correction algorithm is based on fitting 1D profiles through the myocardium in gated PET images to derive myocardial contours along with blood, background and myocardial activity. This information is interpolated onto a 2D grid and convolved with the tomograph's point spread function to derive regional recovery coefficients enabling partial volume correction. The point spread function was measured by placing a line source inside a small animal PET scanner. PET simulations were created based on noise properties measured from a reconstructed PET image and on the digital MOBY phantom. The algorithm can estimate the myocardial activity to within 5% of the truth when different wall thicknesses, backgrounds and noise properties are encountered that are typical of healthy FDG mouse scans. The method also significantly improves partial volume recovery in simulated infarcted tissue. The algorithm offers a practical solution to the partial volume problem without the need for co-registered anatomic images and offers a basis for improved quantitative 3D heart imaging.
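
    A minimal sketch of the recovery-coefficient idea along a single 1D profile: blur the fitted activity model with the scanner PSF and take the ratio of blurred to true activity inside the myocardium; the measured myocardial mean is then divided by this coefficient. The Gaussian PSF and the precomputed region model are illustrative assumptions standing in for the paper's fitted profiles and measured point spread function.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def recovery_coefficient(model_profile, myo_mask, psf_sigma):
            # model_profile: 1D fitted activity model along a profile through
            # the wall (blood, myocardium, background levels already filled in);
            # myo_mask: boolean myocardium samples; psf_sigma: PSF in samples.
            blurred = gaussian_filter1d(model_profile, psf_sigma)
            return blurred[myo_mask].mean() / model_profile[myo_mask].mean()

        # Partial-volume-corrected myocardial activity (conceptually):
        # corrected_mean = measured_mean / recovery_coefficient(...)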

  1. Intensity inhomogeneity correction of SD-OCT data using macular flatspace.

    PubMed

    Lang, Andrew; Carass, Aaron; Jedynak, Bruno M; Solomon, Sharon D; Calabresi, Peter A; Prince, Jerry L

    2018-01-01

    Images of the retina acquired using optical coherence tomography (OCT) often suffer from intensity inhomogeneity problems that degrade both the quality of the images and the performance of automated algorithms utilized to measure structural changes. This intensity variation has many causes, including off-axis acquisition, signal attenuation, multi-frame averaging, and vignetting, making it difficult to correct the data in a fundamental way. This paper presents a method for inhomogeneity correction by acting to reduce the variability of intensities within each layer. In particular, the N3 algorithm, which is popular in neuroimage analysis, is adapted to work for OCT data. N3 works by sharpening the intensity histogram, which reduces the variation of intensities within different classes. To apply it here, the data are first converted to a standardized space called macular flat space (MFS). MFS allows the intensities within each layer to be more easily normalized by removing the natural curvature of the retina. N3 is then run on the MFS data using a modified smoothing model, which improves the efficiency of the original algorithm. We show that our method more accurately corrects gain fields on synthetic OCT data when compared to running N3 on non-flattened data. It also reduces the overall variability of the intensities within each layer, without sacrificing contrast between layers, and improves the performance of registration between OCT images. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. A novel baseline correction method using convex optimization framework in laser-induced breakdown spectroscopy quantitative analysis

    NASA Astrophysics Data System (ADS)

    Yi, Cancan; Lv, Yong; Xiao, Han; Ke, Ke; Yu, Xun

    2017-12-01

    For the laser-induced breakdown spectroscopy (LIBS) quantitative analysis technique, baseline correction is an essential part of LIBS data preprocessing. Baseline drift is a widely occurring phenomenon, generated by fluctuations of the laser energy, inhomogeneity of sample surfaces and background noise, and it has aroused the interest of many researchers. Most of the prevalent algorithms need to preset some key parameters, such as a suitable spline function and the fitting order, and thus lack adaptability. Based on the characteristics of LIBS, such as the sparsity of spectral peaks and the low-pass-filtered character of the baseline, a novel baseline correction and spectral data denoising method is studied in this paper. The improved technique utilizes a convex optimization scheme to form a non-parametric baseline correction model. Meanwhile, an asymmetric penalty function is employed to enhance the signal-to-noise ratio (SNR) of the LIBS signal and improve reconstruction precision. Furthermore, an efficient iterative algorithm is applied to the optimization process, so as to ensure the convergence of the algorithm. To validate the proposed method, the concentration analysis of chromium (Cr), manganese (Mn) and nickel (Ni) contained in 23 certified high-alloy steel samples is assessed by using quantitative models with Partial Least Squares (PLS) and Support Vector Machine (SVM). Because it requires no prior knowledge of sample composition and no mathematical hypothesis, the proposed method achieves better accuracy in quantitative analysis than other methods and fully reflects its adaptive ability.
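
    The exact convex model is not given in the abstract; as a stand-in, the sketch below shows the classic asymmetric least squares baseline scheme of Eilers and Boelens, which shares the two key ingredients named above: a smoothness-penalized baseline and an asymmetric penalty that keeps the baseline below the sparse peaks.

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import spsolve

        def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
            # lam: baseline smoothness; p: asymmetry (points above the
            # baseline, i.e. peaks, get the small weight p).
            n = len(y)
            D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(n, n - 2))
            w = np.ones(n)
            for _ in range(n_iter):
                W = sparse.spdiags(w, 0, n, n)
                z = spsolve(W + lam * (D @ D.T), w * y)
                w = np.where(y > z, p, 1.0 - p)
            return z

        # Baseline-corrected spectrum: y - als_baseline(y)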

  3. Stray light correction of array spectroradiometer measurement in ultraviolet

    NASA Astrophysics Data System (ADS)

    Wu, Zhifeng; Dai, Caihong; Wang, Yanfei; Li, Ling

    2018-02-01

    For most array spectroradiometers, stray light is significant in the UV band. Stray light correction of a UV array spectroradiometer is investigated here using optical filters. If a group of filters with contiguous bandpasses is chosen, the stray light contribution from all bands can be obtained using a numerical algorithm. The stray-light-corrected array spectroradiometer is used to measure the spectral irradiance of several UV lamps, and the results are compared to those of a double monochromator spectroradiometer. When a xenon lamp is used as the calibration lamp, stray light correction reduces the difference from nearly 10% to 2.0% in the UVC band. When a tungsten lamp is used as the calibration lamp, the difference is reduced from around 90% to less than 20%.

  4. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Walsh, Jonathan A.

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.

  5. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE PAGES

    Romano, Paul K.; Walsh, Jonathan A.

    2018-02-03

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.
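
    For context, the baseline that such improved samplers build on is the classic constant-cross-section free-gas algorithm, which already needs only a single rejection step. The sketch below implements that textbook version, not the paper's energy-dependent method; the reduced variables y = beta*v_n and x = beta*V and the two-branch envelope sampling follow standard Monte Carlo references.

    ```python
    # Minimal sketch of the classic free-gas target-speed sampler with the
    # single rejection step on v_rel / (v_n + V), assuming a constant
    # scattering cross section. Not the paper's code.
    import numpy as np

    rng = np.random.default_rng()

    def sample_target_speed(y):
        """Sample x = beta*V given reduced neutron speed y = beta*v_n.

        Target density: p(x) proportional to (x + y) * x**2 * exp(-x**2),
        i.e. the (v_n + V) bounding envelope of v_rel * M(V).
        """
        while True:
            if rng.random() < 2.0 / (2.0 + np.sqrt(np.pi) * y):
                # Branch 1: pdf 2 x^3 exp(-x^2); x^2 ~ Gamma(2, 1).
                x = np.sqrt(-np.log(rng.random()) - np.log(rng.random()))
            else:
                # Branch 2: Maxwellian speed pdf (4/sqrt(pi)) x^2 exp(-x^2).
                x = np.sqrt(-np.log(rng.random()) - np.log(rng.random())
                            * np.cos(0.5 * np.pi * rng.random()) ** 2)
            mu = 2.0 * rng.random() - 1.0      # cosine between v_n and V
            v_rel = np.sqrt(y * y + x * x - 2.0 * y * x * mu)
            # The single rejection step: accept with prob v_rel / (y + x).
            if rng.random() * (y + x) <= v_rel:
                return x, mu
    ```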

  6. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    NASA Astrophysics Data System (ADS)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of one or multiple gradient or Newton steps, which correspond respectively to the gradient trajectory tracking (GTT) and Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
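
    A toy example makes the prediction-correction structure concrete. The sketch below tracks the minimizer of a scalar time-varying quadratic whose drift is known in closed form; the prediction step uses the cross derivative of the gradient with respect to time, and the correction step is a single gradient step. All names and constants are illustrative, not the paper's setup.

    ```python
    # Prediction-correction tracking of the minimizer of
    # f(x; t) = 0.5 * (x - a(t))**2, whose optimizer is x*(t) = a(t).
    import numpy as np

    h = 0.1                       # sampling interval
    a = lambda t: np.sin(t)       # drifting optimum
    da = lambda t: np.cos(t)      # its known time derivative

    x, gamma = 0.0, 0.5           # iterate and correction step size
    for k in range(200):
        t = k * h
        # Prediction: x <- x - h * (d2f/dx2)^-1 * (d2f/dxdt) = x + h*da(t).
        x = x + h * da(t)
        # Correction: one gradient step on the objective sampled at t + h.
        x = x - gamma * (x - a(t + h))

    print(abs(x - a(200 * h)))    # small tracking error, shrinking with h
    ```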

  7. Detection and correction of patient movement in prostate brachytherapy seed reconstruction

    NASA Astrophysics Data System (ADS)

    Lam, Steve T.; Cho, Paul S.; Marks, Robert J., II; Narayanan, Sreeram

    2005-05-01

    Intraoperative dosimetry of prostate brachytherapy can help optimize the dose distribution and potentially improve clinical outcome. Evaluation of dose distribution during the seed implant procedure requires the knowledge of 3D seed coordinates. Fluoroscopy-based seed localization is a viable option. From three x-ray projections obtained at different gantry angles, 3D seed positions can be determined. However, when local anaesthesia is used for prostate brachytherapy, the patient movement during fluoroscopy image capture becomes a practical problem. If uncorrected, the errors introduced by patient motion between image captures would cause seed mismatches. Subsequently, the seed reconstruction algorithm would either fail to reconstruct or yield erroneous results. We have developed an algorithm that permits detection and correction of patient movement that may occur between fluoroscopy image captures. The patient movement is decomposed into translational shifts along the tabletop and rotation about an axis perpendicular to the tabletop. The property of spatial invariance of the co-planar imaging geometry is used for lateral movement correction. Cranio-caudal movement is corrected by analysing the perspective invariance along the x-ray axis. Rotation is estimated by an iterative method. The method can detect and correct for the range of patient movement commonly seen in the clinical environment. The algorithm has been implemented for routine clinical use as the preprocessing step for seed reconstruction.

  8. Application of a novel metal artifact correction algorithm in flat-panel CT after coil embolization of brain aneurysms: intraindividual comparison.

    PubMed

    Buhk, J-H; Groth, M; Sehner, S; Fiehler, J; Schmidt, N O; Grzyska, U

    2013-09-01

    To evaluate a novel algorithm for correcting beam hardening artifacts caused by metal implants in computed tomography performed on a C-arm angiography system equipped with a flat panel (FP-CT). Sixteen datasets of cerebral FP-CT acquisitions after coil embolization of brain aneurysms in the context of acute subarachnoid hemorrhage were reconstructed by applying a soft tissue kernel with and without a novel reconstruction filter for metal artifact correction. Image reading was performed in multiplanar reformations (MPR) in average mode on a dedicated radiological workplace in comparison to the preinterventional native multisection CT (MS-CT) scan serving as the anatomic gold standard. Two independent radiologists performed image scoring following a defined scale in direct comparison of the image data with and without artifact correction. For statistical analysis, a random intercept model was calculated. The inter-rater agreement was very high (ICC = 86.3 %). The soft tissue image quality and visualization of the CSF spaces at the level of the implants were substantially improved. The additional metal artifact correction algorithm did not induce impairment of the subjective image quality in any other brain regions. Adding metal artifact correction to FP-CT in an acute postinterventional setting helps to visualize the close vicinity of the aneurysm at a generally consistent image quality. © Georg Thieme Verlag KG Stuttgart · New York.

  9. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    NASA Astrophysics Data System (ADS)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are (1) detector fabrication inaccuracies, (2) non-linearity and variations in the read-out electronics, and (3) optical path effects. Non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are commonly divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. Because non-uniformity drifts over time, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not require, so SBNUC algorithms have become an essential part of infrared imaging systems. The poor robustness of SBNUC algorithms often leads to two defects, artifacts and over-correction; meanwhile, their complicated calculations and large storage requirements make hardware implementation difficult, especially on Field Programmable Gate Array (FPGA) platforms. The THP and GM algorithm proposed in this paper can eliminate non-uniformity without causing these defects. Its hardware implementation, based solely on an FPGA, has two advantages: (1) low resource consumption and (2) a small hardware delay of less than 20 lines. It can be ported to a variety of infrared detectors equipped with an FPGA image processing module, and it reduces both stripe and ripple non-uniformity.
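
    The paper's THP-and-GM design is not reproduced in the abstract, but the temporal high-pass principle itself is simple: a per-pixel recursive low-pass filter tracks the slowly drifting fixed-pattern component, which is then subtracted from each frame. The following minimal sketch shows that principle only; the smoothing constant alpha and the mean-restoring term are illustrative assumptions.

    ```python
    # Minimal sketch of a temporal high-pass NUC: a per-pixel IIR low-pass
    # tracks the fixed-pattern component; the moving scene passes through.
    import numpy as np

    def thp_nuc(frames, alpha=0.01):
        """frames: iterable of 2-D arrays (video). Yields corrected frames."""
        lowpass = None
        for frame in frames:
            f = frame.astype(np.float64)
            if lowpass is None:
                lowpass = f.copy()
            # Per-pixel recursive low-pass estimate of the fixed pattern.
            lowpass += alpha * (f - lowpass)
            # High-pass output: remove the pattern, restore the mean level.
            yield f - lowpass + lowpass.mean()
    ```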

  10. A new bias field correction method combining N3 and FCM for improved segmentation of breast density on MRI.

    PubMed

    Lin, Muqing; Chan, Siwa; Chen, Jeon-Hor; Chang, Daniel; Nie, Ke; Chen, Shih-Ting; Lin, Cheng-Ju; Shih, Tzu-Ching; Nalcioglu, Orhan; Su, Min-Ying

    2011-01-01

    Quantitative breast density is known as a strong risk factor associated with the development of breast cancer. Measurement of breast density based on three-dimensional breast MRI may provide very useful information. One important step for quantitative analysis of breast density on MRI is the correction of field inhomogeneity to allow an accurate segmentation of the fibroglandular tissue (dense tissue). A new bias field correction method combining the nonparametric nonuniformity normalization (N3) algorithm and a fuzzy-C-means (FCM)-based inhomogeneity correction algorithm is developed in this work. The analysis is performed on non-fat-sat T1-weighted images acquired using a 1.5 T MRI scanner. A total of 60 breasts from 30 healthy volunteers were analyzed. N3 is known as a robust correction method, but it cannot correct a strong bias field over a large area. The FCM-based algorithm can correct the bias field over a large area, but it may change the tissue contrast and affect the segmentation quality. The proposed algorithm applies N3 first, followed by FCM, and then the generated bias field is smoothed using a Gaussian kernel and B-spline surface fitting to minimize the problem of mistakenly changed tissue contrast. The segmentation results based on the N3+FCM corrected images were compared to images corrected by N3 alone, by FCM alone, and by another method, coherent local intensity clustering (CLIC). The segmentation quality achieved with the different correction methods was evaluated by a radiologist and ranked. The authors demonstrated that the iterative N3+FCM correction method brightens the signal intensity of fatty tissues and separates the histogram peaks between the fibroglandular and fatty tissues to allow an accurate segmentation between them. In the first reading session, the radiologist found (N3+FCM > N3 > FCM) ranking in 17 breasts, (N3+FCM > N3 = FCM) ranking in 7 breasts, (N3+FCM = N3 > FCM) in 32 breasts, (N3+FCM = N3 = FCM) in 2 breasts, and (N3 > N3+FCM > FCM) in 2 breasts. The results of the second reading session were similar. The performance in each pairwise Wilcoxon signed-rank test is significant, showing N3+FCM superior to both N3 and FCM, and N3 superior to FCM. The performance of the new N3+FCM algorithm was comparable to that of CLIC, showing equivalent quality in 57/60 breasts. Choosing an appropriate bias field correction method is a very important preprocessing step to allow an accurate segmentation of fibroglandular tissues based on breast MRI for quantitative measurement of breast density. Both the proposed N3+FCM algorithm and CLIC yield satisfactory results.

  11. Linear-time general decoding algorithm for the surface code

    NASA Astrophysics Data System (ADS)

    Darmawan, Andrew S.; Poulin, David

    2018-05-01

    A quantum error correcting protocol can be substantially improved by taking into account features of the physical noise process. We present an efficient decoder for the surface code which can account for general noise features, including coherences and correlations. We demonstrate that the decoder significantly outperforms the conventional matching algorithm on a variety of noise models, including non-Pauli noise and spatially correlated noise. The algorithm is based on an approximate calculation of the logical channel using a tensor-network description of the noisy state.

  12. Remote sensing of suspended sediment water research: principles, methods, and progress

    NASA Astrophysics Data System (ADS)

    Shen, Ping; Zhang, Jing

    2011-12-01

    In this paper, we review the principles, data, methods, and steps of suspended sediment research using remote sensing, summarize representative models and methods, and analyze the deficiencies of existing approaches. In light of recent progress in remote sensing theory and its application to suspended sediment studies, we introduce data processing methods such as atmospheric correction and adjacency effect correction, together with intelligent algorithms such as neural networks, genetic algorithms, and support vector machines, into suspended sediment inversion. Combined with other geographic information and based on Bayesian theory, this improves the inversion precision of suspended sediment and aims to provide a reference for related researchers.

  13. Motion artifact detection and correction in functional near-infrared spectroscopy: a new hybrid method based on spline interpolation method and Savitzky-Golay filtering.

    PubMed

    Jahani, Sahar; Setarehdan, Seyed K; Boas, David A; Yücel, Meryem A

    2018-01-01

    Motion artifact contamination in near-infrared spectroscopy (NIRS) data has become an important challenge in realizing the full potential of NIRS for real-life applications. Various motion correction algorithms have been used to alleviate the effect of motion artifacts on the estimation of the hemodynamic response function. While smoothing methods, such as wavelet filtering, are excellent in removing motion-induced sharp spikes, the baseline shifts in the signal remain after this type of filtering. Methods such as spline interpolation, on the other hand, can properly correct baseline shifts; however, they leave residual high-frequency spikes. We propose a hybrid method that takes advantage of different correction algorithms. This method first identifies the baseline shifts and corrects them using a spline interpolation method or targeted principal component analysis. The remaining spikes, on the other hand, are corrected by smoothing methods: Savitzky-Golay (SG) filtering or robust locally weighted regression and smoothing. We have compared our new approach with the existing correction algorithms in terms of hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error, Pearson's correlation, and the area under the receiver operating characteristic curve. We found that the spline-SG hybrid method provides reasonable improvements in all these metrics with a relatively short computational time. The dataset and the code used in this study are made available online for the use of all interested researchers.
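
    As a rough sketch of the hybrid idea (not the authors' published pipeline), the slow trend including baseline shifts can be modeled with a smoothing spline and removed, after which Savitzky-Golay filtering suppresses the residual sharp spikes. The smoothing factor, window length, and polynomial order below are placeholder values.

    ```python
    # Hedged sketch: spline for baseline shifts, then SG for residual spikes.
    import numpy as np
    from scipy.interpolate import UnivariateSpline
    from scipy.signal import savgol_filter

    def hybrid_motion_correction(t, signal, spline_smooth=1.0,
                                 sg_window=11, sg_order=3):
        # Step 1: a smoothing spline models the slow trend (baseline shifts),
        # which is removed from the signal.
        baseline = UnivariateSpline(t, signal, s=spline_smooth)(t)
        residual = signal - baseline
        # Step 2: SG filtering smooths the remaining high-frequency spikes,
        # then the slow trend is restored.
        return savgol_filter(residual, sg_window, sg_order) + baseline
    ```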

  14. A new unequal-weighted triple-frequency first order ionosphere correction algorithm and its application in COMPASS

    NASA Astrophysics Data System (ADS)

    Liu, WenXiang; Mou, WeiHua; Wang, FeiXue

    2012-03-01

    With the introduction of triple-frequency signals in GNSS, multi-frequency ionosphere correction technology has been developing rapidly. Previous studies indicate that triple-frequency second-order ionosphere correction performs worse than dual-frequency first-order correction because of its larger noise amplification factor. Under the assumption that the pseudorange variances of the three frequencies are equal, other studies have presented a triple-frequency first-order ionosphere correction, which proved worse or better than the dual-frequency first-order correction in different situations. In practice, the PN code rate, carrier-to-noise ratio, DLL parameters, and multipath effects differ between frequencies, so the three pseudorange variances are unequal. Under this consideration, a new unequal-weighted triple-frequency first-order ionosphere correction algorithm, which minimizes the variance of the ionosphere-free pseudorange combination, is proposed in this paper. It is shown that the conventional dual-frequency first-order correction algorithms and the equal-weighted triple-frequency first-order correction algorithm are special cases of the new algorithm. A new pseudorange variance estimation method based on the triple-carrier combination is also introduced. Theoretical analysis shows that the new algorithm is optimal, and an experiment with COMPASS G3 satellite observations demonstrates that the variance of its ionosphere-free pseudorange combination is smaller than that of traditional multi-frequency correction algorithms.
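
    The optimal weights can be written down directly: minimize the combined variance sum(k_i^2 * sigma_i^2) subject to sum(k_i) = 1 (to preserve geometry) and sum(k_i / f_i^2) = 0 (to cancel the first-order ionospheric term). A minimal sketch solving the resulting KKT system is given below; the frequencies and per-signal code variances are illustrative numbers, not values from the paper.

    ```python
    # Unequal-weighted first-order ionosphere-free combination: solve the
    # equality-constrained variance-minimization via its KKT linear system.
    import numpy as np

    f = np.array([1575.42e6, 1227.60e6, 1176.45e6])  # example freqs (Hz)
    sigma2 = np.array([0.3, 0.25, 0.35]) ** 2        # example code variances

    # min k' diag(sigma2) k  subject to  A k = b.
    A = np.vstack([np.ones(3), 1.0 / f**2])
    b = np.array([1.0, 0.0])
    KKT = np.block([[2 * np.diag(sigma2), A.T],
                    [A, np.zeros((2, 2))]])
    sol = np.linalg.solve(KKT, np.concatenate([np.zeros(3), b]))
    k = sol[:3]
    print("weights:", k, "combined variance:", k @ (sigma2 * k))
    ```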

  15. A nonlinear lag correction algorithm for a-Si flat-panel x-ray detectors

    PubMed Central

    Starman, Jared; Star-Lack, Josh; Virshup, Gary; Shapiro, Edward; Fahrig, Rebecca

    2012-01-01

    Purpose: Detector lag, or residual signal, in a-Si flat-panel (FP) detectors can cause significant shading artifacts in cone-beam computed tomography reconstructions. To date, most correction models have assumed a linear, time-invariant (LTI) model and correct lag by deconvolution with an impulse response function (IRF). However, the lag correction is sensitive to both the exposure intensity and the technique used for determining the IRF. Even when the LTI correction that produces the minimum error is found, residual artifact remains. A new non-LTI method was developed to take into account the IRF measurement technique and exposure dependencies. Methods: First, a multiexponential (N = 4) LTI model was implemented for lag correction. Next, a non-LTI lag correction, known as the nonlinear consistent stored charge (NLCSC) method, was developed based on the LTI multiexponential method. It differs from other nonlinear lag correction algorithms in that it maintains a consistent estimate of the amount of charge stored in the FP and it does not require intimate knowledge of the semiconductor parameters specific to the FP. For the NLCSC method, all coefficients of the IRF are functions of exposure intensity. Another nonlinear lag correction method that only used an intensity weighting of the IRF was also compared. The correction algorithms were applied to step-response projection data and CT acquisitions of a large pelvic phantom and an acrylic head phantom. The authors collected rising and falling edge step-response data on a Varian 4030CB a-Si FP detector operating in dynamic gain mode at 15 fps at nine incident exposures (2.0%–92% of the detector saturation exposure). For projection data, 1st and 50th frame lag were measured before and after correction. For the CT reconstructions, five pairs of ROIs were defined and the maximum and mean signal differences within a pair were calculated for the different exposures and step-response edge techniques. Results: The LTI corrections left residual 1st and 50th frame lag up to 1.4% and 0.48%, while the NLCSC lag correction reduced 1st and 50th frame residual lags to less than 0.29% and 0.0052%. For CT reconstructions, the NLCSC lag correction gave an average error of 11 HU for the pelvic phantom and 3 HU for the head phantom, compared to 14–19 HU and 2–11 HU for the LTI corrections and 15 HU and 9 HU for the intensity weighted non-LTI algorithm. The maximum ROI error was always smallest for the NLCSC correction. The NLCSC correction was also superior to the intensity weighting algorithm. Conclusions: The NLCSC lag algorithm corrected for the exposure dependence of lag, provided superior image improvement for the pelvic phantom reconstruction, and gave similar results to the best case LTI results for the head phantom. The blurred ring artifact that is left over in the LTI corrections was better removed by the NLCSC correction in all cases. PMID:23039642
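
    The LTI part of the correction admits a compact recursive implementation: if the impulse response is a sum of exponentials h[n] = sum_k a_k * b_k**n with b_k = exp(-dt/tau_k), the measured signal can be deconvolved frame by frame with one stored-charge state per exponential. The sketch below shows this standard recursion; the paper's NLCSC method additionally makes the coefficients functions of exposure, which is not reproduced here, and the coefficients are illustrative rather than measured detector values.

    ```python
    # Recursive deconvolution of a multi-exponential lag IRF, per pixel.
    import numpy as np

    def deconvolve_lag(y, a, b):
        """y: measured frames (1-D array), a/b: IRF amplitudes and decays."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        s = np.zeros_like(a)              # per-exponential stored-charge states
        x = np.empty(len(y), dtype=float)
        for n in range(len(y)):
            # y[n] = sum_k (b_k * s_k + a_k * x[n])  =>  solve for true x[n].
            x[n] = (y[n] - np.sum(b * s)) / np.sum(a)
            s = b * s + a * x[n]          # update the stored charge
        return x
    ```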

  16. Positioning performance of the NTCM model driven by GPS Klobuchar model parameters

    NASA Astrophysics Data System (ADS)

    Hoque, Mohammed Mainul; Jakowski, Norbert; Berdermann, Jens

    2018-03-01

    Users of the Global Positioning System (GPS) utilize the Ionospheric Correction Algorithm (ICA), also known as the Klobuchar model, for correcting ionospheric signal delay or range error. Recently, we developed an ionosphere correction algorithm called the NTCM-Klobpar model for single-frequency GNSS applications. The model is driven by a parameter computed from the GPS Klobuchar model and can consequently be used in place of the GPS Klobuchar model for ionospheric corrections. In the presented work we compare the positioning solutions obtained using NTCM-Klobpar with those using the Klobuchar model. Our investigation using worldwide ground GPS data from a quiet and a perturbed ionospheric and geomagnetic activity period of 17 days each shows that the 24-hour prediction performance of NTCM-Klobpar is better than that of the GPS Klobuchar model in the global average. The root mean squared deviation of the 3D position errors is found to be about 0.24 and 0.45 m smaller for NTCM-Klobpar than for the GPS Klobuchar model during quiet and perturbed conditions, respectively. The presented algorithm has the potential to continuously improve the accuracy of GPS single-frequency mass-market devices with only a small software modification.

  17. Atmospheric Correction Algorithm for Hyperspectral Remote Sensing of Ocean Color from Space

    DTIC Science & Technology

    2000-02-20

    Existing atmospheric correction algorithms for multichannel remote sensing of ocean color from space were designed for retrieving water-leaving...atmospheric correction algorithm for hyperspectral remote sensing of ocean color with the near-future Coastal Ocean Imaging Spectrometer. The algorithm uses

  18. Development of PET projection data correction algorithm

    NASA Astrophysics Data System (ADS)

    Bazhanov, P. V.; Kotina, E. D.

    2017-12-01

    Positron emission tomography is a modern nuclear medicine method used to examine metabolism and the function of internal organs, and it allows diseases to be diagnosed at an early stage. Mathematical algorithms are widely used not only for image reconstruction but also for PET data correction. In this paper, the implementation of random-coincidence and scatter correction algorithms is considered, together with an algorithm for modeling PET projection data acquisition that is used to verify the corrections.

  19. Limitations and potentials of current motif discovery algorithms

    PubMed Central

    Hu, Jianjun; Li, Bin; Kihara, Daisuke

    2005-01-01

    Computational methods for de novo identification of gene regulation elements, such as transcription factor binding sites, have proved to be useful for deciphering genetic regulatory networks. However, despite the availability of a large number of algorithms, their strengths and weaknesses are not sufficiently understood. Here, we designed a comprehensive set of performance measures and benchmarked five modern sequence-based motif discovery algorithms using large datasets generated from Escherichia coli RegulonDB. Factors that affect the prediction accuracy, scalability and reliability are characterized. It is revealed that the nucleotide and the binding site level accuracy are very low, while the motif level accuracy is relatively high, which indicates that the algorithms can usually capture at least one correct motif in an input sequence. To exploit diverse predictions from multiple runs of one or more algorithms, a consensus ensemble algorithm has been developed, which achieved 6–45% improvement over the base algorithms by increasing both the sensitivity and specificity. Our study illustrates limitations and potentials of existing sequence-based motif discovery algorithms. Taking advantage of the revealed potentials, several promising directions for further improvements are discussed. Since the sequence-based algorithms are the baseline of most of the modern motif discovery algorithms, this paper suggests substantial improvements would be possible for them. PMID:16284194

  20. An efficient algorithm for automatic phase correction of NMR spectra based on entropy minimization

    NASA Astrophysics Data System (ADS)

    Chen, Li; Weng, Zhiqiang; Goh, LaiYoong; Garland, Marc

    2002-09-01

    A new algorithm for automatic phase correction of NMR spectra based on entropy minimization is proposed. The optimal zero-order and first-order phase corrections for an NMR spectrum are determined by minimizing entropy. The objective function is constructed using a Shannon-type information entropy measure, with the entropy computed from the normalized derivative of the NMR spectral data. The algorithm has been successfully applied to experimental 1H NMR spectra, and the results of automatic phase correction are found to be comparable to, or perhaps better than, manual phase correction. The advantages of this automatic phase correction algorithm include its simple mathematical basis and its straightforward, reproducible, and efficient optimization procedure. The algorithm is implemented in the Matlab program ACME: Automated phase Correction based on Minimization of Entropy.
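
    The core of the method is easy to state: phase the complex spectrum with zero- and first-order terms, form "probabilities" from the normalized magnitude of the derivative of the real part, and minimize the Shannon entropy over the two phase parameters. The sketch below follows that recipe but omits the negative-intensity penalty of the original objective, and the optimizer choice is an assumption.

    ```python
    # Entropy-minimization autophasing sketch in the spirit of ACME.
    import numpy as np
    from scipy.optimize import minimize

    def phase(spec, phi0, phi1):
        """Apply zero- and first-order phase to a complex spectrum."""
        n = len(spec)
        ph = phi0 + phi1 * np.arange(n) / n
        return spec * np.exp(1j * ph)

    def entropy_objective(p, spec):
        real = phase(spec, p[0], p[1]).real
        h = np.abs(np.diff(real))
        h = h / h.sum()                   # normalized derivative "probabilities"
        h = h[h > 0]                      # avoid log(0)
        return -np.sum(h * np.log(h))     # Shannon entropy

    # Usage with a complex spectrum `spec` (assumed given):
    # res = minimize(entropy_objective, x0=[0.0, 0.0], args=(spec,),
    #                method="Nelder-Mead")
    # phased = phase(spec, *res.x).real
    ```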

  1. Improvement of Klobuchar model for GNSS single-frequency ionospheric delay corrections

    NASA Astrophysics Data System (ADS)

    Wang, Ningbo; Yuan, Yunbin; Li, Zishen; Huo, Xingliang

    2016-04-01

    The broadcast ionospheric model is currently an effective approach to mitigating ionospheric time delay for real-time Global Navigation Satellite System (GNSS) single-frequency users. The Klobuchar coefficients transmitted in the Global Positioning System (GPS) navigation message have been widely used in various GNSS positioning and navigation applications; however, this model can only reduce the ionospheric error by approximately 50% in mid-latitudes. With the emergence of BeiDou and Galileo, as well as the modernization of GPS and GLONASS, more precise ionospheric correction models or algorithms are required by GNSS single-frequency users. Numerical analysis of the initial phase and nighttime term in the Klobuchar algorithm demonstrates that more parameters should be introduced to better describe the variation of nighttime ionospheric total electron content (TEC). In view of this, several schemes are proposed for the improvement of the Klobuchar algorithm. The performance of these improved Klobuchar-like models is validated over continental and oceanic regions during periods of high (2002) and low (2006) solar activity. Over the continental region, GPS TEC generated from 35 International GNSS Service (IGS) and Crust Movement Observation Network of China (CMONOC) stations is used as the reference. Over the oceanic region, TEC data from the TOPEX/Poseidon and JASON-1 altimeters are used for comparison. A ten-parameter Klobuchar-like model, which describes the nighttime term as a linear function of geomagnetic latitude, is finally proposed for GNSS single-frequency ionospheric corrections. Compared to GPS TEC, while the GPS broadcast model corrects 55.0% and 49.5% of the ionospheric delay for the years 2002 and 2006, respectively, the proposed ten-parameter Klobuchar-like model reduces the ionospheric error by 68.4% and 64.7% over the same period. Compared to TOPEX/Poseidon and JASON-1 TEC, the improved ten-parameter Klobuchar-like model mitigates the ionospheric delay by 61.1% and 64.3% in 2002 and 2006, respectively.

  2. Bidirectional reflectance correction model for coastal water and its application to minimization of uncertainties in satellite and in-situ water leaving radiances at Long Island Sound Coastal Observatory site

    NASA Astrophysics Data System (ADS)

    Hlaing, Soe Min

    Validation of ocean color data is an absolute requirement for providing a steady and reliable ocean color data stream. In such validation, water-leaving radiances retrieved from in situ and satellite measurements must be compared in a very accurate manner. Both the in situ and satellite data used in the comparisons must be representative of typical water and environmental conditions at the site, unaffected by unexpected natural and environmental perturbations. As a result, the uncertainties in the water-leaving radiance data must be assessed at every step of the measurement and data processing procedure. Using the hyper- and multi-spectral water-leaving radiance data retrieved for the different instrument viewing geometries at the Long Island Sound Coastal Observatory (LISCO), uncertainties in the water-leaving radiance data and processing procedures have been assessed and quantified, and recommendations and algorithm improvements have been made to reduce the uncertainties in the processing and validation of ocean color data. In particular, a remote sensing reflectance model is proposed to correct the bidirectional angular dependencies in both in situ and satellite data. The proposed model is first validated with a one-year time series of in situ above-water measurements acquired by collocated multi- and hyperspectral radiometers with different viewing geometries installed at LISCO. Match-ups and inter-comparisons performed on these concurrent measurements show that the proposed algorithm outperforms the algorithm currently in use at all wavelengths, with a spectrally averaged improvement of 2.4%. LISCO's time series data have also been used to evaluate improvements in match-up comparisons of MODIS satellite data when the proposed Bidirectional Reflectance Distribution Function (BRDF) correction is used in lieu of the current algorithm; the discrepancies between coincident in situ sea-based and satellite data decreased by 3.15% with the use of the proposed algorithm. The applicability of the developed BRDF algorithm to open ocean conditions is also considered.

  3. Analysis of MAIAC Dust Aerosol Retrievals from MODIS Over North Africa

    NASA Technical Reports Server (NTRS)

    Lyapustin, A.; Wang, Y.; Hsu, C.; Torres, O.; Leptoukh, G.; Kalashnikova, O.; Korkin, S.

    2011-01-01

    An initial comparison of aerosol optical thickness over North Africa for the year 2007 was performed between the Deep Blue (DB) and Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithms, complemented with MISR and OMI data. The new MAIAC algorithm has better sensitivity to small dust storms than the DB algorithm, but it also has biases in the brightest desert regions, indicating the need for improvement. The quarterly averaged AOT values in the Bodele depression and the western downwind transport region show good agreement among MAIAC, MISR, and OMI data, while the DB algorithm shows a somewhat different seasonality.

  4. An adaptive clustering algorithm for image matching based on corner feature

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-04-01

    Traditional image matching algorithms cannot easily balance real-time performance and accuracy. To solve this problem, an adaptive clustering algorithm for image matching based on corner features is proposed in this paper. The method is based on the similarity of the vectors formed by matching point pairs, on which adaptive clustering is performed. Harris corner detection is carried out first to extract the feature points of the reference and perceived images, and the feature points of the two images are initially matched using the Normalized Cross Correlation (NCC) function. Then, using the improved algorithm proposed in this paper, the matching results are clustered to reduce ineffective operations and improve matching speed and robustness. Finally, the Random Sample Consensus (RANSAC) algorithm is applied to the clustered matching points. The experimental results show that the proposed algorithm effectively eliminates most of the wrong matching points while retaining the correct ones, improves the accuracy of RANSAC matching, and reduces the computational load of the whole matching process.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omet, M.; Michizono, S.; Matsumoto, T.

    We report the development and implementation of four FPGA-based predistortion-type klystron linearization algorithms. Klystron linearization is essential for the realization of the ILC, since the klystrons are required to operate 7% below saturation in power. The work presented was performed in international collaborations at the Fermi National Accelerator Laboratory (FNAL), USA, and the Deutsches Elektronen-Synchrotron (DESY), Germany. With the newly developed algorithms, the generation of correction factors on the FPGA was improved compared to past algorithms, avoiding quantization and decreasing memory requirements. At FNAL, three algorithms were tested at the Advanced Superconducting Test Accelerator (ASTA), demonstrating a successful implementation for one algorithm and a proof of principle for two algorithms. Furthermore, the functionality of the algorithm implemented at DESY was demonstrated successfully in simulation.

  6. Improving color characteristics of LCD

    NASA Astrophysics Data System (ADS)

    Feng, Xiao-fan; Daly, Scott J.

    2005-01-01

    The drive for larger size, higher spatial resolution, and wider aperture in LCDs has been shown to increase the electrical crosstalk between electrodes in the driver circuit. This crosstalk leads to additivity errors in color LCDs. In this paper, the crosstalk effect is analyzed with micrographs captured by an imaging colorimeter. The experimental results reveal the subpixel nature of color crosstalk. A spatially based subpixel crosstalk correction algorithm was developed to improve the color performance of LCDs. Compared to a 3D lookup table approach, the new algorithm is easier to implement and more accurate.

  7. Parallel k-means++ for Multiple Shared-Memory Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackey, Patrick S.; Lewis, Robert R.

    2016-09-22

    In recent years k-means++ has become a popular initialization technique for improved k-means clustering. To date, most of the work done to improve its performance has involved parallelizing algorithms that are only approximations of k-means++. In this paper we present a parallelization of the exact k-means++ algorithm, with a proof of its correctness. We develop implementations for three distinct shared-memory architectures: multicore CPU, high performance GPU, and the massively multithreaded Cray XMT platform. We demonstrate the scalability of the algorithm on each platform. In addition we present a visual approach for showing which platform performed k-means++ the fastest for varying data sizes.
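
    For reference, the exact (serial) k-means++ seeding that the paper parallelizes is short: after a uniform first center, each subsequent center is drawn with probability proportional to the squared distance to the nearest center chosen so far. A minimal sketch:

    ```python
    # Exact k-means++ seeding (serial reference version).
    import numpy as np

    def kmeans_pp_init(X, k, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        n = X.shape[0]
        centers = [X[rng.integers(n)]]                 # first center: uniform
        d2 = np.sum((X - centers[0]) ** 2, axis=1)     # squared distances
        for _ in range(k - 1):
            # D^2 weighting: sample proportional to distance to nearest center.
            centers.append(X[rng.choice(n, p=d2 / d2.sum())])
            d2 = np.minimum(d2, np.sum((X - centers[-1]) ** 2, axis=1))
        return np.array(centers)
    ```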

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl

    Standard computational methods used to account for the Pauli exclusion principle in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation; hence, this part of the simulation algorithm is described in detail.

  9. A Finite Element Method to Correct Deformable Image Registration Errors in Low-Contrast Regions

    PubMed Central

    Zhong, Hualiang; Kim, Jinkoo; Li, Haisen; Nurushev, Teamour; Movsas, Benjamin; Chetty, Indrin J.

    2012-01-01

    Image-guided adaptive radiotherapy requires deformable image registration to map radiation dose back and forth between images. The purpose of this study is to develop a novel method to improve the accuracy of an intensity-based image registration algorithm in low-contrast regions. A computational framework has been developed in this study to improve the quality of the “demons” registration. For each voxel in the registration’s target image, the standard deviation of image intensity in a neighborhood of this voxel was calculated. A mask for high-contrast regions was generated based on their standard deviations. In the masked regions, a tetrahedral mesh was refined recursively so that a sufficient number of tetrahedral nodes in these regions can be selected as driving nodes. An elastic system driven by the displacements of the selected nodes was formulated using a finite element method (FEM) and implemented on the refined mesh. The displacements of these driving nodes were generated with the “demons” algorithm. The solution of the system was derived using a conjugated gradient method, and interpolated to generate a displacement vector field for the registered images. The FEM correction method was compared with the “demons” algorithm on the CT images of lung and prostate patients. The performance of the FEM correction relative to the “demons” registration was analyzed based on the physical property of their deformation maps, and quantitatively evaluated through a benchmark model developed specifically for this study. Compared to the benchmark model, the “demons” registration has a maximum error of 1.2 cm, which can be corrected by the FEM method to 0.4 cm, and the average error of the “demons” registration is reduced from 0.17 cm to 0.11 cm. For the CT images of lung and prostate patients, the deformation maps generated by the “demons” algorithm were found unrealistic at several places. In these places, the displacement differences between the “demons” registrations and their FEM corrections were found in the range of 0.4 cm and 1.1 cm. The mesh refinement and FEM simulation were implemented in a single-threaded application which requires about 45 minutes of computation time on a 2.6 GHz computer. This study has demonstrated that the finite element method can be integrated with intensity-based image registration algorithms to improve their registration accuracy, especially in low-contrast regions. PMID:22581269

  10. Specimen charging in X-ray absorption spectroscopy: correction of total electron yield data from stabilized zirconia in the energy range 250-915 eV.

    PubMed

    Vlachos, Dimitrios; Craven, Alan J; McComb, David W

    2005-03-01

    The effects of specimen charging on X-ray absorption spectroscopy using total electron yield have been investigated using powder samples of zirconia stabilized by a range of oxides. The stabilized zirconia powder was mixed with graphite to minimize the charging but significant modifications of the intensities of features in the X-ray absorption near-edge fine structure (XANES) still occurred. The time dependence of the charging was measured experimentally using a time scan, and an algorithm was developed to use this measured time dependence to correct the effects of the charging. The algorithm assumes that the system approaches the equilibrium state by an exponential decay. The corrected XANES show improved agreement with the electron energy-loss near-edge fine structure obtained from the same samples.
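
    A minimal sketch of such a correction, assuming the exponential-approach model stated in the abstract: fit the measured time scan to I(t) = I_inf + (I_0 - I_inf) * exp(-t/tau), then rescale each spectral point by the ratio of the equilibrium yield to the instantaneous yield at its acquisition time. Function and variable names are illustrative.

    ```python
    # Exponential charging-transient correction for total-electron-yield data.
    import numpy as np
    from scipy.optimize import curve_fit

    def charge_model(t, i_inf, i0, tau):
        return i_inf + (i0 - i_inf) * np.exp(-t / tau)

    def correct_charging(t_scan, i_scan, t_acq, spectrum):
        """t_scan, i_scan: measured time dependence; t_acq: acquisition time
        of each spectral point; spectrum: raw TEY intensities."""
        (i_inf, i0, tau), _ = curve_fit(
            charge_model, t_scan, i_scan,
            p0=[i_scan[-1], i_scan[0], t_scan[-1] / 3])
        # Divide out the transient so every point reflects equilibrium yield.
        return spectrum * i_inf / charge_model(t_acq, i_inf, i0, tau)
    ```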

  11. Cone-beam CT image contrast and attenuation-map linearity improvement (CALI) for brain stereotactic radiosurgery procedures

    NASA Astrophysics Data System (ADS)

    Hashemi, Sayed Masoud; Lee, Young; Eriksson, Markus; Nordström, Håkan; Mainprize, James; Grouza, Vladimir; Huynh, Christopher; Sahgal, Arjun; Song, William Y.; Ruschin, Mark

    2017-03-01

    A Contrast and Attenuation-map (CT-number) Linearity Improvement (CALI) framework is proposed for cone-beam CT (CBCT) images used for brain stereotactic radiosurgery (SRS). The proposed framework is used together with our high spatial resolution iterative reconstruction algorithm and is tailored for the Leksell Gamma Knife ICON (Elekta, Stockholm, Sweden). The CBCT system incorporated in the ICON facilitates frameless SRS planning and treatment delivery. The ICON employs a half-cone geometry to accommodate the existing treatment couch; this geometry increases artifacts and, together with other physical imperfections, causes image inhomogeneity and contrast reduction. Our proposed framework includes a preprocessing step, involving a shading and beam-hardening artifact correction, and a post-processing step to correct the dome/capping artifact caused by spatial variations in x-ray energy generated by the bowtie filter. Our shading correction algorithm relies solely on the acquired projection images (i.e., no prior information is required) and utilizes filtered-back-projection (FBP) reconstructed images to generate a segmented bone and soft-tissue map. Ideal projections are estimated from the segmented images, and a smoothed version of the difference between the ideal and measured projections is used in the correction. The proposed beam-hardening and dome artifact corrections are segmentation free. CALI was tested on a CatPhan phantom as well as patient images acquired on the ICON system. The resulting clinical brain images show substantial improvements in soft-tissue contrast visibility, revealing structures such as ventricles and lesions that are otherwise undetectable in FBP-reconstructed images. The linearity of the reconstructed attenuation map is also improved, resulting in more accurate CT numbers.

  12. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4

    2015-01-15

    Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations that are a subset of the projection angles used in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as determine a suitable fitness value, and also to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the uncorrected reconstruction, to a reconstruction corrected with a constant scatter estimate, and to a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson’s correlation, r, proved to be a suitable GOF metric with strong correlation with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed in 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.

  13. Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem

    PubMed Central

    Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi

    2013-01-01

    Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system over a given life cycle time as its objective. To address the particularities of the HSP problem, the general particle swarm optimization algorithm was improved, and an actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the standard PSO algorithm, and they also demonstrate the potential to provide useful information for decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem. PMID:23935429
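
    The paper's IPSO modifications are not detailed in the abstract, so the sketch below shows the generic PSO loop that such variants start from: each particle blends inertia, attraction to its personal best, and attraction to the global best. The inertia weight and acceleration constants are the usual textbook choices, not the tuned IPSO values.

    ```python
    # Generic particle swarm optimization loop (textbook form).
    import numpy as np

    def pso(cost, dim, n_particles=30, iters=200, bounds=(-10, 10),
            w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))    # positions
        v = np.zeros_like(x)                           # velocities
        pbest = x.copy()
        pbest_f = np.apply_along_axis(cost, 1, x)      # personal-best costs
        g = pbest[np.argmin(pbest_f)]                  # global best
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.apply_along_axis(cost, 1, x)
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[np.argmin(pbest_f)]
        return g, pbest_f.min()

    # Example: minimize a simple quadratic stand-in for a planning cost.
    best_x, best_f = pso(lambda z: np.sum((z - 3.0) ** 2), dim=4)
    ```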

  14. A faster 1.375-approximation algorithm for sorting by transpositions.

    PubMed

    Cunha, Luís Felipe I; Kowada, Luis Antonio B; Hausen, Rodrigo de A; de Figueiredo, Celina M H

    2015-11-01

    Sorting by Transpositions is an NP-hard problem for which several polynomial-time approximation algorithms have been developed. Hartman and Shamir (2006) developed a 1.5-approximation algorithm, whose running time was improved to O(n log n) by Feng and Zhu (2007) with a data structure they defined, the permutation tree. Elias and Hartman (2006) developed a 1.375-approximation O(n^2) algorithm, and Firoz et al. (2011) claimed an improvement to the running time, from O(n^2) to O(n log n), by using the permutation tree. We provide counter-examples to the correctness of Firoz et al.'s strategy, showing that it is not possible to reach a component by sufficient extensions using the method proposed by them. In addition, we propose a 1.375-approximation algorithm, modifying Elias and Hartman's approach with the use of permutation trees and achieving O(n log n) time.

  15. A positional misalignment correction method for Fourier ptychographic microscopy based on simulated annealing

    NASA Astrophysics Data System (ADS)

    Sun, Jiasong; Zhang, Yuzhen; Chen, Qian; Zuo, Chao

    2017-02-01

    Fourier ptychographic microscopy (FPM) is a newly developed super-resolution technique, which employs angularly varying illuminations and a phase retrieval algorithm to surpass the diffraction limit of a low numerical aperture (NA) objective lens. In current FPM imaging platforms, accurate knowledge of the LED matrix's position is critical to achieving good recovery quality. Furthermore, given the wide field-of-view (FOV) in FPM, different regions of the FOV have different sensitivities to LED positional misalignment. In this work, we introduce an iterative method to correct position errors based on the simulated annealing (SA) algorithm. To improve the efficiency of the correction process, a large number of iterations is first performed for several images with low illumination NAs to estimate initial values of the global positional misalignment model through non-linear regression. Simulation and experimental results are presented to evaluate the performance of the proposed method, demonstrating that it can both improve the quality of the recovered object image and relax the position accuracy requirement of the LED elements when aligning FPM imaging platforms.
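
    Stripped of the FPM-specific cost, the SA search itself is generic: perturb the current misalignment estimate, always accept improvements, and accept worsening moves with a Boltzmann probability that shrinks as the temperature cools. In the sketch below, recovery_error is a hypothetical stand-in for the FPM data-fit metric; all schedule parameters are illustrative.

    ```python
    # Generic simulated annealing over a low-dimensional misalignment vector.
    import numpy as np

    def anneal(cost, x0, t0=1.0, cooling=0.95, n_iter=500, step=0.1, seed=0):
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, float)
        fx, t = cost(x), t0
        for _ in range(n_iter):
            cand = x + rng.normal(0.0, step, size=x.shape)  # random perturbation
            fc = cost(cand)
            # Accept downhill moves always, uphill moves with Boltzmann prob.
            if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
                x, fx = cand, fc
            t *= cooling                                    # cooling schedule
        return x, fx

    # Usage (recovery_error is an assumed stand-in for the FPM metric):
    # offset, err = anneal(recovery_error, x0=[0.0, 0.0])
    ```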

  16. Retrieval of Aerosol Optical Depth Under Thin Cirrus from MODIS: Application to an Ocean Algorithm

    NASA Technical Reports Server (NTRS)

    Lee, Jaehwa; Hsu, Nai-Yung Christina; Sayer, Andrew Mark; Bettenhausen, Corey

    2013-01-01

    A strategy for retrieving aerosol optical depth (AOD) under conditions of thin cirrus coverage from the Moderate Resolution Imaging Spectroradiometer (MODIS) is presented. We adopt an empirical method that derives the cirrus contribution to measured reflectance in seven bands from the visible to shortwave infrared (0.47, 0.55, 0.65, 0.86, 1.24, 1.63, and 2.12 µm, commonly used for AOD retrievals) by using the correlations between the top-of-atmosphere (TOA) reflectance at 1.38 µm and these bands. The 1.38 µm band is used due to its strong absorption by water vapor and allows us to extract the contribution of cirrus clouds to TOA reflectance and create cirrus-corrected TOA reflectances in the seven bands of interest. These cirrus-corrected TOA reflectances are then used in the aerosol retrieval algorithm to determine cirrus-corrected AOD. The cirrus correction algorithm reduces the cirrus contamination in the AOD data as shown by a decrease in both magnitude and spatial variability of AOD over areas contaminated by thin cirrus. Comparisons of retrieved AOD against Aerosol Robotic Network observations at Nauru in the equatorial Pacific reveal that the cirrus correction procedure improves the data quality: the percentage of data within the expected error ±(0.03 + 0.05 × AOD) increases from 40% to 80% for cirrus-corrected points only and from 80% to 86% for all points (i.e., both corrected and uncorrected retrievals). Statistical comparisons with Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) retrievals are also carried out. A high correlation (R = 0.89) between the CALIOP cirrus optical depth and AOD correction magnitude suggests potential applicability of the cirrus correction procedure to other MODIS-like sensors.

  17. Parameter Sensitivity Study of the Wall Interference Correction System (WICS)

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Everhart, Joel L.; Iyer, Venkit

    2001-01-01

    An off-line version of the Wall Interference Correction System (WICS) has been implemented for the NASA Langley National Transonic Facility. The correction capability is currently restricted to corrections for solid wall interference in the model pitch plane for Mach numbers less than 0.45 due to a limitation in tunnel calibration data. A study to assess output sensitivity to the aerodynamic parameters of Reynolds number and Mach number was conducted on this code to further ensure quality during the correction process. In addition, this paper includes an investigation into possible corrections due to a semispan test technique using a nonmetric standoff and an improvement to the standard data rejection algorithm.

  18. An improved weakly compressible SPH method for simulating free surface flows of viscous and viscoelastic fluids

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoyang; Deng, Xiao-Long

    2016-04-01

    In this paper, an improved weakly compressible smoothed particle hydrodynamics (SPH) method is proposed to simulate transient free surface flows of viscous and viscoelastic fluids. The improved SPH algorithm includes the implementation of (i) the mixed symmetric correction of the kernel gradient, to improve the accuracy and stability of the traditional SPH method, and (ii) the Rusanov flux in the continuity equation, to improve the computation of pressure distributions in the dynamics of liquids. To assess the effectiveness of the improved SPH algorithm, a number of numerical examples, including the stretching of an initially circular water drop, dam-breaking flow against a vertical wall, the impact of viscous and viscoelastic fluid drops on a rigid wall, and the extrudate swell of a viscoelastic fluid, have been presented and compared with available numerical and experimental data in the literature. The convergence behavior of the improved SPH algorithm has also been studied using different numbers of particles. All numerical results demonstrate that the improved SPH algorithm proposed here is capable of modeling free surface flows of viscous and viscoelastic fluids accurately and stably and, more importantly, of computing an accurate pressure field with little oscillation.

  19. A soft decoding algorithm and hardware implementation for the visual prosthesis based on high order soft demodulation.

    PubMed

    Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei

    2016-09-26

    High-order modulation and demodulation technology can reconcile the frequency requirements of wireless energy transmission and data communication. To achieve reliable wireless data communication based on high-order modulation for a visual prosthesis, this work proposes a Reed-Solomon (RS) error correcting code (ECC) circuit built on differential amplitude and phase shift keying (DAPSK) soft demodulation. First, recognizing that the traditional division-based DAPSK soft demodulation algorithm is complex to implement in hardware, an improved phase soft demodulation algorithm for visual prostheses is put forward to reduce hardware complexity. Based on this new algorithm, an improved RS soft decoding method is then proposed, in which the combination of the Chase algorithm and hard decoding algorithms is used to achieve soft decoding. To meet the requirements of an implantable visual prosthesis, a method for calculating symbol-level reliability based on the multiplication of bit reliabilities is derived, which reduces the number of test vectors of the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experiments; a biological channel attenuation model is included in the simulated ECC circuit, and the data rate is 8 Mbps in both the simulation and the experiments. MATLAB simulation results show that the improved phase soft demodulation algorithm saves hardware resources without losing bit error rate (BER) performance, and that, compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER. The FPGA experimental results show that the system can correct data demodulation errors with the wireless coils 3 cm apart; the greater the distance, the higher the BER. BER measurements with a bit error rate analyzer at different coil distances show that the RS ECC circuit achieves about an order of magnitude lower BER than the demodulation circuit alone at the same coil distance, and therefore provides more reliable communication in the system. The improved phase soft demodulation and soft decoding algorithms proposed in this paper enable data communication that is more reliable than other demodulation systems and provide a useful reference for further study of visual prosthesis systems.

  20. Differential and relaxed image foresting transform for graph-cut segmentation of multiple 3D objects.

    PubMed

    Moya, Nikolas; Falcão, Alexandre X; Ciesielski, Krzysztof C; Udupa, Jayaram K

    2014-01-01

    Graph-cut algorithms have been extensively investigated for interactive binary segmentation, where the simultaneous delineation of multiple objects can save considerable user time. We present an algorithm (named DRIFT) for 3D multiple object segmentation based on seed voxels and Differential Image Foresting Transforms (DIFTs) with relaxation. DRIFT underlies efficient implementations of some state-of-the-art methods. The user can add or remove markers (seed voxels) along a sequence of executions of the DRIFT algorithm to improve segmentation. Its first execution takes time linear in the image's size, while the subsequent executions for corrections take sublinear time in practice. At each execution, DRIFT first runs the DIFT algorithm, then applies diffusion filtering to smooth boundaries between objects (and background), and finally corrects any disconnection of objects from their seeds. We evaluate DRIFT on 3D CT images of the thorax, segmenting the arterial system, esophagus, left pleural cavity, right pleural cavity, trachea and bronchi, and the venous system.
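
    The IFT core of such methods is a Dijkstra-like propagation of optimum paths from the seeds. A minimal 2-D sketch follows, under assumed choices (4-connectivity, f_max path cost); the paper's DIFT additionally updates the forest incrementally when seeds are added or removed, which is not shown here.

```python
import heapq
import numpy as np

def ift_segment(img, seeds):
    """Seeded image foresting transform on a 2-D image.
    seeds: dict mapping (row, col) -> positive integer label.
    Path cost is the maximum absolute intensity step along the path (f_max);
    each pixel receives the label of the seed reaching it with minimal cost."""
    h, w = img.shape
    cost = np.full((h, w), np.inf)
    label = np.zeros((h, w), dtype=int)
    heap = []
    for (r, c), lab in seeds.items():
        cost[r, c] = 0.0
        label[r, c] = lab
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        cst, r, c = heapq.heappop(heap)
        if cst > cost[r, c]:
            continue                               # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                new = max(cst, abs(float(img[rr, cc]) - float(img[r, c])))
                if new < cost[rr, cc]:
                    cost[rr, cc] = new
                    label[rr, cc] = label[r, c]
                    heapq.heappush(heap, (new, rr, cc))
    return label
```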

  1. An evaluation of the signature extension approach to large area crop inventories utilizing space image data. [Kansas and North Dakota]

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F. (Principal Investigator); Cicone, R. C.; Stinson, J. L.; Balon, R. J.

    1977-01-01

    The author has identified the following significant results. Two examples of haze correction algorithms were tested: CROP-A and XSTAR. CROP-A was tested in a unitemporal mode on data collected in 1973-74 over ten sample segments in Kansas. Because of the uniformly low level of haze present in these segments, no conclusion could be reached about CROP-A's ability to compensate for haze. It was noted, however, that in some cases CROP-A made serious errors which actually degraded classification performance. The haze correction algorithm XSTAR was tested in a multitemporal mode on 1975-76 LACIE sample segment data over 23 blind sites in Kansas and 18 sample segments in North Dakota, providing a wide range of haze levels and other conditions for algorithm evaluation. It was found that this algorithm substantially improved signature extension classification accuracy when a sum-of-likelihoods classifier was used with an alien rejection threshold.

  2. Formally Verified Practical Algorithms for Recovery from Loss of Separation

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Munoz, Caesar A.

    2009-01-01

    In this paper, we develop and formally verify practical algorithms for recovery from loss of separation. The formal verification is performed in the context of a criteria-based framework. This framework provides rigorous definitions of horizontal and vertical maneuver correctness that guarantee divergence and achieve horizontal and vertical separation. The algorithms are shown to be independently correct, that is, separation is achieved when only one aircraft maneuvers, and implicitly coordinated, that is, separation is also achieved when both aircraft maneuver. In this paper we improve the horizontal criteria over our previous work. An important benefit of the criteria approach is that different aircraft can execute different algorithms and implicit coordination will still be achieved, as long as they all meet the explicit criteria of the framework. Towards this end we have sought to make the criteria as general as possible. The framework presented in this paper has been formalized and mechanically verified in the Prototype Verification System (PVS).

  3. Blind equalization and automatic modulation classification based on subspace for subcarrier MPSK optical communications

    NASA Astrophysics Data System (ADS)

    Chen, Dan; Guo, Lin-yuan; Wang, Chen-hao; Ke, Xi-zheng

    2017-07-01

    Equalization can compensate for channel distortion caused by multipath effects and effectively improve the convergence of the modulation constellation diagram in optical wireless systems. In this paper, the subspace blind equalization algorithm is used to preprocess the M-ary phase shift keying (MPSK) subcarrier modulation signal in the receiver. Mountain clustering is adopted to find the cluster centers of the MPSK modulation constellation diagram, and the modulation order is automatically identified by a k-nearest neighbor (KNN) classifier. The experiment was conducted under four different weather conditions. Experimental results show that the convergence of the constellation diagram is effectively improved after applying the subspace blind equalization algorithm, which increases the accuracy of modulation recognition. The correct recognition rate of 16PSK reaches up to 85% in every weather condition considered in the paper; the correct recognition rate is highest in cloudy conditions and lowest in heavy rain.

  4. Processing of Cryo-EM Movie Data.

    PubMed

    Ripstein, Z A; Rubinstein, J L

    2016-01-01

    Direct detector device (DDD) cameras dramatically enhance the capabilities of electron cryomicroscopy (cryo-EM) due to their improved detective quantum efficiency (DQE) relative to other detectors. DDDs use semiconductor technology that allows micrographs to be recorded as movies rather than integrated individual exposures. Movies from DDDs improve cryo-EM in another, more surprising, way. DDD movies revealed beam-induced specimen movement as a major source of image degradation and provide a way to partially correct the problem by aligning frames or regions of frames to account for this specimen movement. In this chapter, we use a self-consistent mathematical notation to explain, compare, and contrast several of the most popular existing algorithms for computationally correcting specimen movement in DDD movies. We conclude by discussing future developments in algorithms for processing DDD movies that would extend the capabilities of cryo-EM even further. © 2016 Elsevier Inc. All rights reserved.

  5. Alignment of cryo-EM movies of individual particles by optimization of image translations.

    PubMed

    Rubinstein, John L; Brubaker, Marcus A

    2015-11-01

    Direct detector device (DDD) cameras have revolutionized single particle electron cryomicroscopy (cryo-EM). In addition to an improved camera detective quantum efficiency, acquisition of DDD movies allows for correction of movement of the specimen, due to both instabilities in the microscope specimen stage and electron beam-induced movement. Unlike specimen stage drift, beam-induced movement is not always homogeneous within an image. Local correlation in the trajectories of nearby particles suggests that beam-induced motion is due to deformation of the ice layer. Algorithms have already been described that can correct movement for large regions of frames and for >1 MDa protein particles. Another algorithm allows individual <1 MDa protein particle trajectories to be estimated, but requires rolling averages to be calculated from frames and fits linear trajectories for particles. Here we describe an algorithm that allows for individual <1 MDa particle images to be aligned without frame averaging or linear trajectories. The algorithm maximizes the overall correlation of the shifted frames with the sum of the shifted frames. The optimum in this single objective function is found efficiently by making use of analytically calculated derivatives of the function. To smooth estimates of particle trajectories, rapid changes in particle positions between frames are penalized in the objective function and weighted averaging of nearby trajectories ensures local correlation in trajectories. This individual particle motion correction, in combination with weighting of Fourier components to account for increasing radiation damage in later frames, can be used to improve 3-D maps from single particle cryo-EM. Copyright © 2015 Elsevier Inc. All rights reserved.
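
    The single objective function lends itself to a compact prototype. The toy version below uses a derivative-free optimizer on a small frame stack instead of the analytic Fourier-space derivatives the method employs, and the smoothness weight `lam` is an illustrative assumption; it is meant to show the structure of the objective, not to be a faithful reimplementation.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def align_frames(frames, lam=0.05):
    """Estimate per-frame (dy, dx) shifts of one particle by maximizing the
    correlation of each shifted frame with the sum of all shifted frames,
    with a quadratic penalty on shift changes between consecutive frames.
    frames: (n, H, W) array of noisy particle images."""
    n = frames.shape[0]

    def neg_objective(x):
        s = x.reshape(n, 2)
        shifted = np.array([nd_shift(frames[i], s[i], order=1, mode="nearest")
                            for i in range(n)])
        total = shifted.sum(axis=0)
        corr = float((shifted * total).sum())            # overall correlation
        smooth = float((np.diff(s, axis=0) ** 2).sum())  # trajectory smoothing
        return -corr + lam * smooth

    res = minimize(neg_objective, np.zeros(2 * n), method="Powell")
    return res.x.reshape(n, 2)
```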

  6. Motion correction for improving the accuracy of dual-energy myocardial perfusion CT imaging

    NASA Astrophysics Data System (ADS)

    Pack, Jed D.; Yin, Zhye; Xiong, Guanglei; Mittal, Priya; Dunham, Simon; Elmore, Kimberly; Edic, Peter M.; Min, James K.

    2016-03-01

    Coronary Artery Disease (CAD) is the leading cause of death globally [1]. Modern cardiac computed tomography angiography (CCTA) is highly effective at identifying and assessing coronary blockages associated with CAD. The diagnostic value of this anatomical information can be substantially increased in combination with a non-invasive, low-dose, correlative, quantitative measure of blood supply to the myocardium. While CT perfusion has shown promise of providing such indications of ischemia, artifacts due to motion, beam hardening, and other factors confound clinical findings and can limit quantitative accuracy. In this paper, we investigate the impact of applying a novel motion correction algorithm to correct for motion in the myocardium. This motion compensation algorithm (originally designed to correct for the motion of the coronary arteries in order to improve CCTA images) has been shown to provide substantial improvements in both overall image quality and diagnostic accuracy of CCTA. We have adapted this technique for application beyond the coronary arteries and present an assessment of its impact on image quality and quantitative accuracy within the context of dual-energy CT perfusion imaging. We conclude that motion correction is a promising technique that can help foster the routine clinical use of dual-energy CT perfusion. When combined, the anatomical information of CCTA and the hemodynamic information from dual-energy CT perfusion should facilitate better clinical decisions about which patients would benefit from treatments such as stent placement, drug therapy, or surgery and help other patients avoid the risks and costs associated with unnecessary, invasive, diagnostic coronary angiography procedures.

  7. The Effect of Underwater Imagery Radiometry on 3d Reconstruction and Orthoimagery

    NASA Astrophysics Data System (ADS)

    Agrafiotis, P.; Drakonakis, G. I.; Georgopoulos, A.; Skarlatos, D.

    2017-02-01

    The work presented in this paper investigates the effect of the radiometry of underwater imagery on automated 3D reconstruction and the produced orthoimagery. The main aim is to investigate whether pre-processing of the underwater imagery improves the 3D reconstruction produced by automated SfM-MVS software. Since the processing of images, either separately or in batch, is a time-consuming procedure, it is critical to determine whether colour correction and enhancement should be implemented before the SfM-MVS procedure or applied directly to the final orthoimage when the orthoimagery is the deliverable. Two different test sites were used to capture imagery, ensuring different environmental conditions, depth and complexity. Three different image correction methods are applied: a very simple automated method using Adobe Photoshop, a colour correction algorithm developed using the CLAHE (Zuiderveld, 1994) method, and an implementation of the algorithm described in Bianco et al. (2015). The point clouds produced from the initial and the corrected imagery are then compared and evaluated.
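
    For reference, a common way to apply CLAHE as a colour-correction step is to equalize only the lightness channel. The OpenCV sketch below is a plausible stand-in for such a step (8-bit BGR input assumed), not the authors' exact pipeline.

```python
import cv2

def clahe_correct(bgr_img, clip=2.0, tiles=8):
    """Contrast-limited adaptive histogram equalization on the L channel of
    the LAB representation, leaving chroma untouched."""
    lab = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tiles, tiles))
    corrected = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(corrected, cv2.COLOR_LAB2BGR)
```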

  8. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm.

    PubMed

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-09-19

    In order to reduce the computational complexity and improve the pitch/roll estimation accuracy of a low-cost attitude heading reference system (AHRS) under conditions of magnetic distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of the two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimate of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of the measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions.

  9. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm

    PubMed Central

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-01-01

    In order to reduce the computational complexity and improve the pitch/roll estimation accuracy of a low-cost attitude heading reference system (AHRS) under conditions of magnetic distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of the two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimate of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of the measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions. PMID:28925979
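
    The two-step idea, pitch/roll from the accelerometer alone and yaw from the tilt-compensated magnetometer, can be sketched with standard eCompass formulas. This is an illustration in the spirit of TGIC under assumed NED/aerospace axis conventions, not the paper's exact quaternion equations.

```python
import numpy as np

def two_step_attitude(acc, mag):
    """Step 1: pitch/roll from the accelerometer only, so magnetic
    distortion cannot leak into them. Step 2: yaw from the magnetometer
    after tilt compensation. acc, mag: 3-vectors in the body frame (NED
    aerospace convention assumed); returns (roll, pitch, yaw) in radians."""
    ax, ay, az = acc / np.linalg.norm(acc)
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    mx, my, mz = mag / np.linalg.norm(mag)
    # tilt-compensated horizontal magnetic field components
    mxh = (mx * np.cos(pitch) + my * np.sin(pitch) * np.sin(roll)
           + mz * np.sin(pitch) * np.cos(roll))
    myh = my * np.cos(roll) - mz * np.sin(roll)
    yaw = np.arctan2(-myh, mxh)
    return roll, pitch, yaw
```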

  10. Application of majority voting and consensus voting algorithms in N-version software

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Durmuş, M. S.; Üstoglu, I.; Morozov, V. A.

    2018-05-01

    N-version programming is one of the most common techniques used to improve the reliability of software by building in fault tolerance and redundancy and by decreasing common cause failures. N different equivalent software versions are developed by N different and isolated workgroups from the same software specifications. The versions solve the same task and return results that have to be compared to determine the correct result. The decisions of the N different versions are evaluated by a voting algorithm, the so-called voter. In this paper, two of the most commonly used software voting algorithms, the majority voting algorithm and the consensus voting algorithm, are studied. The distinctive features of N-version programming with majority voting and N-version programming with consensus voting are described. These two algorithms make a decision about the correct result on the basis of the agreement matrix. However, if the equivalence relation on the agreement matrix is not satisfied, it is impossible to make a decision. It is shown that the agreement matrix can be transformed into an appropriate form by using Boolean compositions so that the equivalence relation is satisfied.
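
    A minimal sketch of the two acceptance rules, assuming hashable version outputs (a real voter would work through the agreement matrix described above):

```python
from collections import Counter

def majority_vote(outputs):
    """Majority voting: accept only if more than half of the N versions
    agree; otherwise no decision is made."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None

def consensus_vote(outputs):
    """Consensus voting: accept the largest agreement group even without an
    absolute majority; a tie between the top groups yields no decision
    (some variants break ties randomly instead)."""
    ranked = Counter(outputs).most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None
    return ranked[0][0]
```

    For example, majority_vote(['a', 'a', 'b']) returns 'a', while consensus_vote(['a', 'a', 'b', 'b', 'c']) returns no decision under this tie rule.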

  11. Smoothed Particle Hydrodynamics: Applications Within DSTO

    DTIC Science & Technology

    2006-10-01

    Most SPH codes use either an improved Euler method (a mid-point predictor-corrector method) [50] or a leapfrog predictor-corrector algorithm for... in the next section we used the predictor-corrector leapfrog algorithm for time stepping. If we write the set of equations describing the change in... predictor-corrector or leapfrog method is used when solving the equations. Monaghan has also noted [53] that, with a correctly chosen time step, total

  12. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-01-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.

  13. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-07-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.

  14. FORTRAN program for analyzing ground-based radar data: Usage and derivations, version 6.2

    NASA Technical Reports Server (NTRS)

    Haering, Edward A., Jr.; Whitmore, Stephen A.

    1995-01-01

    A postflight FORTRAN program called 'radar' reads and analyzes ground-based radar data. The output includes position, velocity, and acceleration parameters. Air data parameters are also provided if atmospheric characteristics are input. This program can read data from any radar in three formats. Geocentric Cartesian position can also be used as input, which may be from an inertial navigation or Global Positioning System. Options include spike removal, data filtering, and atmospheric refraction corrections. Atmospheric refraction can be corrected using the quick White Sands method or the gradient refraction method, which allows accurate analysis of very low elevation angle and long-range data. Refraction properties are extrapolated from surface conditions, or a measured profile may be input. Velocity is determined by differentiating position. Accelerations are determined by differentiating velocity. This paper describes the algorithms used, gives the operational details, and discusses the limitations and errors of the program. Appendices A through E contain the derivations for these algorithms. These derivations include an improvement in speed to the exact solution for geodetic altitude, an improved algorithm over earlier versions for determining scale height, a truncation algorithm for speeding up the gradient refraction method, and a refinement of the coefficients used in the White Sands method for Edwards AFB, California. Appendix G contains the nomenclature.

  15. Transport implementation of the Bernstein-Vazirani algorithm with ion qubits

    NASA Astrophysics Data System (ADS)

    Fallek, S. D.; Herold, C. D.; McMahon, B. J.; Maller, K. M.; Brown, K. R.; Amini, J. M.

    2016-08-01

    Using trapped ion quantum bits in a scalable microfabricated surface trap, we perform the Bernstein-Vazirani algorithm. Our architecture takes advantage of the ion transport capabilities of such a trap. The algorithm is demonstrated using two- and three-ion chains. For three ions, an improvement is achieved compared to a classical system using the same number of oracle queries. For two ions and one query, we correctly determine an unknown bit string with probability 97.6(8)%. For three ions, we succeed with probability 80.9(3)%.

  16. Multiple Lookup Table-Based AES Encryption Algorithm Implementation

    NASA Astrophysics Data System (ADS)

    Gong, Jin; Liu, Wenyi; Zhang, Huixin

    A new AES (Advanced Encryption Standard) encryption algorithm implementation is proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reducing the code size, improving the implementation efficiency, and helping new learners to understand the AES encryption algorithm and the GF(2^8) multiplication that is necessary to implement AES correctly [1]. This method can be applied on processors with a word length of 32 bits or more, on FPGAs, and on other platforms, and can correspondingly be implemented in VHDL, Verilog, VB, and other languages.
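
    The GF(2^8) arithmetic behind such tables takes only a few lines. The sketch below implements the standard AES field multiplication and builds the {02}- and {03}-times tables used by MixColumns; it illustrates the arithmetic, not the paper's specific five-table layout.

```python
def xtime(a):
    """Multiply by x (0x02) in GF(2^8) modulo the AES polynomial
    x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    a <<= 1
    return (a ^ 0x11B) & 0xFF if a & 0x100 else a

def gf_mul(a, b):
    """Generic GF(2^8) multiplication via shift-and-add."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a = xtime(a)
        b >>= 1
    return r

# Lookup tables for the MixColumns constants, precomputed once:
MUL2 = [gf_mul(x, 2) for x in range(256)]
MUL3 = [gf_mul(x, 3) for x in range(256)]
assert gf_mul(0x57, 0x13) == 0xFE  # standard test vector from FIPS-197
```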

  17. A model-based scatter artifacts correction for cone beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Wei; Zhu, Jun; Wang, Luyao

    2016-04-15

    Purpose: Due to the increased axial coverage of multislice computed tomography (CT) and the introduction of flat detectors, the size of x-ray illumination fields has grown dramatically, causing an increase in scatter radiation. For CT imaging, scatter is a significant issue that introduces shading artifacts and streaks, as well as reduced contrast and Hounsfield unit (HU) accuracy. The purpose of this work is to provide a fast and accurate scatter artifact correction algorithm for cone beam CT (CBCT) imaging. Methods: The method starts with an estimation of coarse scatter profiles for a set of CBCT data in either the image domain or the projection domain. A denoising algorithm designed specifically for Poisson signals is then applied to derive the final scatter distribution. Qualitative and quantitative evaluations using thorax and abdomen phantoms with Monte Carlo (MC) simulations, experimental Catphan phantom data, and in vivo human data acquired for a clinical image guided radiation therapy were performed. Scatter correction in both the projection domain and the image domain was conducted, and the influences of the segmentation method, mismatched attenuation coefficients, and spectrum model, as well as parameter selection, were also investigated. Results: Results show that the proposed algorithm can significantly reduce scatter artifacts and recover the correct HU in either the projection domain or the image domain. For the MC thorax phantom study, four-component segmentation yields the best results, while the results of three-component segmentation are still acceptable. The parameters (iteration number K and weight β) affect the accuracy of the scatter correction, and the results improve as K and β increase. It was found that variations in attenuation coefficient accuracies only slightly impact the performance of the proposed processing. For the Catphan phantom data, the mean value over all pixels in the residual image is reduced from −21.8 to −0.2 HU and 0.7 HU for the projection domain and the image domain, respectively. The contrast of the in vivo human images is greatly improved after correction. Conclusions: The software-based technique has a number of advantages, such as high computational efficiency and accuracy, and the capability of performing scatter correction without modifying the clinical workflow (i.e., no extra scan/measurement data are needed) or modifying the imaging hardware. When implemented practically, this should improve the accuracy of CBCT image quantitation and significantly impact CBCT-based interventional procedures and adaptive radiation therapy.

  18. Nasal Base Retraction: A Treatment Algorithm.

    PubMed

    Tas, Süleyman; Colakoglu, Salih; Lee, Bernard Travis

    2017-06-01

    Nasal base retraction results from cephalic malposition of the alar base in the vertical plane, which causes disharmony of the alar base with the rest of the nasal structures. Correcting nasal base retraction is very important for improved aesthetic outcomes; however, there is a limited body of literature about this deformity and its treatment. The aim was to create a nasal base retraction treatment algorithm based on a severity classification system. This is a retrospective case review of 53 patients who underwent rhinoplasty with correction of alar base retraction by the senior author (S.T.). The minimum follow-up time was 6 months. Levator labii alaeque nasi muscle dissection or alar base release, with or without a rim graft on the affected side, was performed based on the severity of the alar base retraction. Aesthetic results were assessed with objective grading of preoperative and postoperative patient photographs by two independent plastic surgeons. Functional improvement was assessed with patient self-evaluations of nasal patency, and a rhinoplasty outcomes evaluation (ROE) questionnaire was distributed to patients. Comparison of preoperative and postoperative photographs demonstrated that nasal base asymmetry was significantly improved in all cases, and 85% of the patients had complete symmetry. Nasal obstruction was also significantly reduced after surgery (P < 0.001). The majority of patients reported satisfaction (92.5%), with an ROE total score greater than or equal to 20. We present new techniques and a treatment algorithm for correcting nasal base retraction deformities that will help rhinoplasty surgeons obtain aesthetically and functionally pleasing outcomes. © 2017 The American Society for Aesthetic Plastic Surgery, Inc.

  19. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
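
    The stationary (non-adaptive) SKS baseline can be written as a fixed-point deconvolution in which scatter is modelled as the current primary estimate convolved with a scatter kernel. In the sketch below the kernel and iteration count are placeholders; the adaptive methods described above replace the fixed symmetric kernel with kernels whose shapes depend on local object thickness.

```python
import numpy as np
from scipy.signal import fftconvolve

def deconvolve_scatter(measured, kernel, n_iter=5):
    """Fixed-point scatter deconvolution of one projection:
    p_{k+1} = measured - K * p_k, with K a scatter point-spread kernel
    normalized to the assumed scatter-to-primary fraction."""
    p = measured.copy()
    scatter = np.zeros_like(measured)
    for _ in range(n_iter):
        scatter = fftconvolve(p, kernel, mode="same")  # scatter estimate
        p = np.clip(measured - scatter, 0.0, None)     # updated primary
    return p, scatter
```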

  20. Improved Global Ocean Color Using Polymer Algorithm

    NASA Astrophysics Data System (ADS)

    Steinmetz, Francois; Ramon, Didier; Deschamps, Pierre-Yves; Stum, Jacques

    2010-12-01

    A global ocean color product has been developed based on the use of the POLYMER algorithm to correct for atmospheric scattering and sun glint and to process the data to a Level 2 ocean color product. Thanks to this algorithm, the coverage and accuracy of the MERIS ocean color product have been significantly improved when compared to the standard product, thereby increasing its usefulness for global ocean monitoring applications like GLOBCOLOUR. We will present the latest developments of the algorithm, its first application to MODIS data, and its validation against in-situ data from the MERMAID database. Examples will be shown of global NRT chlorophyll maps produced by CLS with POLYMER for operational applications such as fishing and the oil and gas industry, as well as its use by Scripps for a NASA study of the Beaufort and Chukchi seas.

  1. A hand tracking algorithm with particle filter and improved GVF snake model

    NASA Astrophysics Data System (ADS)

    Sun, Yi-qi; Wu, Ai-guo; Dong, Na; Shao, Yi-zhe

    2017-07-01

    To solve the problem that a particle filter alone cannot obtain accurate hand information, a hand tracking algorithm is proposed that combines a particle filter with a skin-color adaptive gradient vector flow (GVF) snake model. Adaptive GVF and a skin-color adaptive external guidance force are introduced into the traditional GVF snake model, guiding the curve to converge quickly to the deeply concave regions of the hand contour and capturing the complex hand contour accurately. The algorithm performs a real-time correction of the particle filter parameters, avoiding the particle drift phenomenon. Experimental results show that the proposed algorithm reduces the root mean square error of hand tracking by 53% and improves the accuracy of hand tracking against complex and moving backgrounds, even with a large range of occlusion.

  2. FPGA-based Klystron linearization implementations in scope of ILC

    DOE PAGES

    Omet, M.; Michizono, S.; Matsumoto, T.; ...

    2015-01-23

    We report the development and implementation of four FPGA-based predistortion-type klystron linearization algorithms. Klystron linearization is essential for the realization of the ILC, since it is required to operate the klystrons 7% below their saturation power. The work presented was performed in international collaborations at the Fermi National Accelerator Laboratory (FNAL), USA and the Deutsches Elektronen-Synchrotron (DESY), Germany. With the newly developed algorithms, the generation of correction factors on the FPGA was improved compared to past algorithms, avoiding quantization and decreasing memory requirements. At FNAL, three algorithms were tested at the Advanced Superconducting Test Accelerator (ASTA), demonstrating a successful implementation for one algorithm and a proof of principle for two algorithms. Furthermore, the functionality of the algorithm implemented at DESY was demonstrated successfully in a simulation.

  3. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE PAGES

    Grout, Ray; Kolla, Hemanth; Minion, Michael; ...

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  4. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual on the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  5. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
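
    The sweep-and-residual structure, including the resilience-style stopping rule (keep sweeping until the residual is small relative to the first sweep and barely changes), can be sketched for a scalar ODE. Node placement, tolerances and the residual definition below are illustrative choices, not those of the paper.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def _lagrange_coeffs(nodes, j):
    """Power-basis coefficients of the j-th Lagrange basis polynomial."""
    c = np.array([1.0])
    for m, x in enumerate(nodes):
        if m != j:
            c = P.polymul(c, np.array([-x, 1.0]) / (nodes[j] - x))
    return c

def sdc_step(f, t0, y0, dt, nodes=(0.0, 0.5, 1.0), max_sweeps=20, rtol=1e-6):
    """One explicit (forward-Euler based) SDC step for y' = f(t, y).
    `nodes` are collocation points on [0, 1] with both endpoints included."""
    nodes = np.asarray(nodes)
    M = len(nodes)
    ts = t0 + dt * nodes
    # S[m, j] = integral of the j-th Lagrange polynomial over [tau_m, tau_{m+1}]
    S = np.zeros((M - 1, M))
    for j in range(M):
        ci = P.polyint(_lagrange_coeffs(nodes, j))
        vals = P.polyval(nodes, ci)
        S[:, j] = vals[1:] - vals[:-1]
    y = np.full(M, float(y0))
    F = np.array([f(ts[m], y[m]) for m in range(M)])
    res_first = prev_res = None
    for _ in range(max_sweeps):
        for m in range(M - 1):                    # one correction sweep
            quad = dt * S[m] @ F                  # spectral integral of old F
            y[m + 1] = y[m] + (ts[m + 1] - ts[m]) * (f(ts[m], y[m]) - F[m]) + quad
        F = np.array([f(ts[m], y[m]) for m in range(M)])
        res = abs(y0 + dt * S.sum(axis=0) @ F - y[-1])   # collocation residual
        if res_first is None:
            res_first = prev_res = max(res, 1e-300)
            continue
        # resilience-style stop: small relative to first sweep AND stagnating
        if res < rtol * res_first and abs(prev_res - res) < 0.1 * prev_res:
            break
        prev_res = res
    return y[-1]
```

    A soft fault injected into the solution or function values between sweeps simply raises the residual, so the loop spends extra sweeps and recovers, which is the behaviour the strategy exploits.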

  6. Communications protocol

    NASA Technical Reports Server (NTRS)

    Zhou, Xiaoming (Inventor); Baras, John S. (Inventor)

    2010-01-01

    The present invention relates to an improved communications protocol which increases the efficiency of transmission in return channels on a multi-channel slotted Aloha system by incorporating advanced error correction algorithms, selective retransmission protocols, and the use of reserved channels to satisfy retransmission requests.

  7. Predicting mining activity with parallel genetic algorithms

    USGS Publications Warehouse

    Talaie, S.; Leigh, R.; Louis, S.J.; Raines, G.L.; Beyer, H.G.; O'Reilly, U.M.; Banzhaf, Arnold D.; Blum, W.; Bonabeau, C.; Cantu-Paz, E.W.

    2005-01-01

    We explore several different techniques in our quest to improve the overall model performance of a genetic algorithm calibrated probabilistic cellular automata. We use the Kappa statistic to measure correlation between ground truth data and data predicted by the model. Within the genetic algorithm, we introduce a new evaluation function sensitive to spatial correctness and we explore the idea of evolving different rule parameters for different subregions of the land. We reduce the time required to run a simulation from 6 hours to 10 minutes by parallelizing the code and employing a 10-node cluster. Our empirical results suggest that using the spatially sensitive evaluation function does indeed improve the performance of the model and our preliminary results also show that evolving different rule parameters for different regions tends to improve overall model performance. Copyright 2005 ACM.
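
    The Kappa statistic used as the agreement measure is straightforward to compute directly; a minimal sketch over two label rasters of equal shape:

```python
import numpy as np

def cohen_kappa(pred, truth):
    """Cohen's kappa between predicted and ground-truth class rasters:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    pred, truth = np.ravel(pred), np.ravel(truth)
    po = np.mean(pred == truth)                   # observed agreement
    pe = sum(np.mean(pred == c) * np.mean(truth == c)
             for c in np.union1d(pred, truth))    # chance agreement
    return (po - pe) / (1.0 - pe)
```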

  8. Using trend templates in a neonatal seizure algorithm improves detection of short seizures in a foetal ovine model.

    PubMed

    Zwanenburg, Alex; Andriessen, Peter; Jellema, Reint K; Niemarkt, Hendrik J; Wolfs, Tim G A M; Kramer, Boris W; Delhaas, Tammo

    2015-03-01

    Seizures below one minute in duration are difficult to assess correctly using seizure detection algorithms. We aimed to improve neonatal seizure detection algorithm performance for short seizures through the use of trend templates for seizure onset and end. Bipolar EEG was recorded in a transiently asphyxiated ovine model at 0.7 gestational age, a common experimental model for studying brain development in humans of 30-34 weeks of gestation. Transient asphyxia led to electrographic seizures within 6-8 h. A total of 3159 seizures, 2386 shorter than one minute, were annotated in 1976 h-long EEG recordings from 17 foetal lambs. To capture EEG characteristics, five features, sensitive to seizures, were calculated and used to derive trend information. Feature values and trend information were used as input for support vector machine classification and subsequently post-processed. Performance metrics, calculated after post-processing, were compared between analyses with and without employing trend information. Detector performance was assessed after five-fold cross-validation conducted ten times with random splits. The use of trend templates for seizure onset and end in a neonatal seizure detection algorithm significantly improves the correct detection of short seizures using two-channel EEG recordings, from 54.3% (52.6-56.1) to 59.5% (58.5-59.9) at an FDR of 2.0 (median (range); p < 0.001, Wilcoxon signed rank test). Using trend templates might therefore aid the detection of short seizures in EEG monitoring at the NICU.

  9. Modified Monte Carlo method for study of electron transport in degenerate electron gas in the presence of electron-electron interactions, application to graphene

    NASA Astrophysics Data System (ADS)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2017-07-01

    Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation, so this part of the simulation algorithm is described in detail.
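
    The usual way to impose Pauli blocking in MC transport is a rejection step on the proposed final state; for e-e scattering two final states must be simultaneously unoccupied. The sketch below shows that rejection step only; the paper's contribution concerns how the running estimate of f is kept physical (0 <= f <= 1), which is not reproduced here.

```python
import numpy as np

def accept_transition(f_final, rng):
    """Accept a scattering event into a final state with occupation
    f_final with probability 1 - f_final (Pauli blocking)."""
    return rng.random() < max(0.0, 1.0 - f_final)

def accept_ee_transition(f_final_1, f_final_2, rng):
    """For electron-electron scattering both final states must be empty,
    so the acceptance probability is the product of the two factors."""
    return rng.random() < max(0.0, (1.0 - f_final_1) * (1.0 - f_final_2))

rng = np.random.default_rng(0)
print(accept_transition(0.95, rng))   # almost always blocked near f = 1
```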

  10. The DOHA algorithm: a new recipe for cotrending large-scale transiting exoplanet survey light curves

    NASA Astrophysics Data System (ADS)

    Mislis, D.; Pyrzas, S.; Alsubai, K. A.; Tsvetanov, Z. I.; Vilchez, N. P. E.

    2017-03-01

    We present DOHA, a new algorithm for cotrending photometric light curves obtained by transiting exoplanet surveys. The algorithm employs a novel approach to the traditional 'differential photometry' technique, by selecting the most suitable comparison star for each target light curve, using a two-step correlation search. Extensive tests on real data reveal that DOHA corrects both intra-night variations and long-term systematics affecting the data. Statistical studies conducted on a sample of ∼9500 light curves from the Qatar Exoplanet Survey reveal that DOHA-corrected light curves show an rms improvement of a factor of ∼2, compared to the raw light curves. In addition, we show that the transit detection probability in our sample can increase considerably, even up to a factor of 7, after applying DOHA.

  11. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones

    PubMed Central

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-01-01

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angles provided in the position-related output of Android’s LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%–60%, thereby reducing the existing error of 3–4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective in terms of portability and cost saving. PMID:27322284
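
    The first step, building the line-of-sight unit vector from each satellite's elevation and azimuth and rotating it from local ENU to ECEF, is standard geodesy and looks roughly as follows (approximate receiver latitude/longitude assumed known):

```python
import numpy as np

def los_ecef(elev_deg, azim_deg, lat_deg, lon_deg):
    """Line-of-sight unit vector to a satellite, first in local
    east-north-up (ENU) from elevation/azimuth, then rotated to
    Earth-centered, Earth-fixed (ECEF) coordinates."""
    el, az = np.radians([elev_deg, azim_deg])
    lat, lon = np.radians([lat_deg, lon_deg])
    enu = np.array([np.cos(el) * np.sin(az),      # east
                    np.cos(el) * np.cos(az),      # north
                    np.sin(el)])                  # up
    # columns of R are the ENU basis vectors expressed in ECEF
    R = np.array([
        [-np.sin(lon), -np.sin(lat) * np.cos(lon), np.cos(lat) * np.cos(lon)],
        [ np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat) * np.sin(lon)],
        [ 0.0,          np.cos(lat),               np.sin(lat)]])
    return R @ enu
```

    Each such unit vector then supplies one row of the multi-constellation observation matrix, so the position-domain correction can be projected onto every tracked satellite.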

  12. Multifocal multiphoton microscopy with adaptive optical correction

    NASA Astrophysics Data System (ADS)

    Coelho, Simao; Poland, Simon; Krstajic, Nikola; Li, David; Monypenny, James; Walker, Richard; Tyndall, David; Ng, Tony; Henderson, Robert; Ameer-Beg, Simon

    2013-02-01

    Fluorescence lifetime imaging microscopy (FLIM) is a well established approach for measuring dynamic signalling events inside living cells, including the detection of protein-protein interactions. The improved optical penetration of infrared light compared with linear excitation, due to reduced Rayleigh scattering and low absorption, has provided imaging depths of up to 1 mm in brain tissue, but significant image degradation occurs as samples distort (aberrate) the infrared excitation beam. Multiphoton time-correlated single photon counting (TCSPC) FLIM is a method for obtaining functional, high resolution images of biological structures. In order to achieve good statistical accuracy, TCSPC typically requires long acquisition times. We report the development of a multifocal multiphoton microscope (MMM), titled MegaFLI. Beam parallelization, performed via a 3D Gerchberg-Saxton (GS) algorithm using a spatial light modulator (SLM), increases the TCSPC count rate in proportion to the number of beamlets produced. A weighted 3D GS algorithm is employed to improve homogeneity. An added benefit is the implementation of flexible and adaptive optical correction. Adaptive optics, performed by means of Zernike polynomials, is used to correct for system-induced aberrations. Here we present results with significantly improved throughput obtained using a novel complementary metal-oxide-semiconductor (CMOS) 1024-pixel single-photon avalanche diode (SPAD) array, opening the way to truly high-throughput FLIM.
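
    The weighting idea in a weighted GS iteration is that spots which come out too dim get their target amplitude boosted on the next pass. A 2-D sketch for a phase-only SLM follows, under simplifying assumptions (a single FFT as the propagation model, 2-D rather than the 3-D version cited):

```python
import numpy as np

def weighted_gs(slm_shape, spot_coords, n_iter=30, seed=0):
    """Weighted Gerchberg-Saxton hologram for a set of focal spots.
    Returns the SLM phase pattern; spot weights are re-balanced every
    iteration to equalize beamlet intensity (improved homogeneity)."""
    rng = np.random.default_rng(seed)
    target = np.zeros(slm_shape)
    for r, c in spot_coords:
        target[r, c] = 1.0
    mask = target > 0
    w = target.copy()                                   # adaptive weights
    field = np.exp(2j * np.pi * rng.random(slm_shape))  # random start phase
    for _ in range(n_iter):
        far = np.fft.fft2(field)                        # focal-plane field
        amp = np.abs(far)
        # boost weights of spots that came out dimmer than average
        w[mask] *= amp[mask].mean() / np.maximum(amp[mask], 1e-12)
        far = np.where(mask, w * np.exp(1j * np.angle(far)), 0.0)
        slm = np.fft.ifft2(far)
        field = np.exp(1j * np.angle(slm))              # phase-only constraint
    return np.angle(field)
```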

  13. Multi-scale graph-cut algorithm for efficient water-fat separation.

    PubMed

    Berglund, Johan; Skorpil, Mikael

    2017-09-01

    To improve the accuracy and robustness to noise in water-fat separation by unifying the multiscale and graph-cut based approaches to B0 correction. A previously proposed water-fat separation algorithm that corrects for B0 field inhomogeneity in 3D by a single quadratic pseudo-Boolean optimization (QPBO) graph cut was incorporated into a multi-scale framework, where field map solutions are propagated from coarse to fine scales for voxels that are not resolved by the graph cut. The accuracy of the single-scale and multi-scale QPBO algorithms was evaluated against benchmark reference datasets. The robustness to noise was evaluated by adding noise to the input data prior to water-fat separation. Both algorithms achieved the highest accuracy when compared with seven previously published methods, while computation times were acceptable for implementation in clinical routine. The multi-scale algorithm was more robust to noise than the single-scale algorithm, while causing only a small increase (+10%) of the reconstruction time. The proposed 3D multi-scale QPBO algorithm offers accurate water-fat separation, robustness to noise, and fast reconstruction. The software implementation is freely available to the research community. Magn Reson Med 78:941-949, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  14. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO.

    PubMed

    Hernandez-Vicen, Juan; Martinez, Santiago; Garcia-Haro, Juan Miguel; Balaguer, Carlos

    2018-03-25

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application improved. The resulting algorithm has been tested experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid.

  15. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO

    PubMed Central

    2018-01-01

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application improved. The resulting algorithm has been tested experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid. PMID:29587392

  16. Diagnosis of paediatric HIV infection in a primary health care setting with a clinical algorithm.

    PubMed Central

    Horwood, C.; Liebeschuetz, S.; Blaauw, D.; Cassol, S.; Qazi, S.

    2003-01-01

    OBJECTIVE: To determine the validity of an algorithm used by primary care health workers to identify children with symptomatic human immunodeficiency virus (HIV) infection. This HIV algorithm is being implemented in South Africa as part of the Integrated Management of Childhood Illness (IMCI), a strategy that aims to reduce childhood morbidity and mortality by improving care at the primary level. As AIDS is a leading cause of death in children in southern Africa, diagnosis and management of symptomatic HIV infection were added to the existing IMCI algorithm. METHODS: In total, 690 children who attended the outpatient department of a district hospital in South Africa were assessed with the HIV algorithm and by a paediatrician. All children were then tested for HIV viral load. The validity of the algorithm in detecting symptomatic HIV was compared with clinical diagnosis by a paediatrician and the result of an HIV test. Detailed clinical data were used to improve the algorithm. FINDINGS: Overall, 198 (28.7%) enrolled children were infected with HIV. The paediatrician correctly identified 142 (71.7%) children infected with HIV, whereas the IMCI/HIV algorithm identified 111 (56.1%). Odds ratios were calculated to identify predictors of HIV infection and used to develop an improved HIV algorithm that is 67.2% sensitive and 81.5% specific in clinically detecting HIV infection. CONCLUSIONS: Children with symptomatic HIV infection can be identified effectively by primary level health workers through the use of an algorithm. The improved HIV algorithm developed in this study could be used by countries with high prevalences of HIV to enable IMCI practitioners to identify and care for HIV-infected children. PMID:14997238

  17. SU-F-P-56: On a New Approach to Reconstruct the Patient Dose From Phantom Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bangtsson, E; Vries, W de

    Purpose: The development of complex radiation treatment schemes emphasizes the need for advanced QA analysis methods to ensure patient safety. One such tool is the Delta4 DVH Anatomy software, where the patient dose is reconstructed from phantom measurements. Deviations in the measured dose are transferred to the patient anatomy and their clinical impact is evaluated in situ. Results from the original algorithm revealed weaknesses that may introduce artefacts in the reconstructed dose. These can lead to false negatives or obscure the effects of minor dose deviations from delivery failures. Here, we will present results from a new patient dose reconstruction algorithm. Methods: The main steps of the new algorithm are: (1) the dose delivered to a phantom is measured in a number of detector positions. (2) The measured dose is compared to an internally calculated dose distribution evaluated in said positions. The so-obtained dose difference is (3) used to calculate an energy fluence difference. This entity is (4) used as input to a patient dose correction calculation routine. Finally, the patient dose is reconstructed by adding said patient dose correction to the planned patient dose. The internal dose calculation in steps (2) and (4) is based on the Pencil Beam algorithm. Results: The new patient dose reconstruction algorithm has been tested on a number of patients, and the standard metrics dose deviation (DDev), distance-to-agreement (DTA) and Gamma index are improved when compared to the original algorithm. In a certain case the Gamma index (3%/3mm) increases from 72.9% to 96.6%. Conclusion: The patient dose reconstruction algorithm is improved. This leads to a reduction in non-physical artefacts in the reconstructed patient dose. As a consequence, the possibility to detect deviations in the dose that is delivered to the patient is improved. An increase in Gamma index for the PTV can be seen. The corresponding author is an employee of ScandiDos.

  18. Fast image matching algorithm based on projection characteristics

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, this algorithm converts the two-dimensional information of the image into one dimension and then matches and identifies targets through one-dimensional correlation; moreover, because normalization is performed, it still matches correctly when the image brightness or signal amplitude increases proportionally. Experimental results show that the projection-based image registration method proposed in this article can greatly improve matching speed while maintaining matching accuracy.
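
    The projection idea reduces a 2-D template search to two 1-D normalized cross-correlations over the row-sum and column-sum profiles. A brute-force sketch for clarity (not the paper's exact implementation):

```python
import numpy as np

def projection_match(image, template):
    """Locate `template` in `image` by normalized 1-D correlation of the
    projection profiles; normalization makes the score invariant to
    proportional brightness/amplitude changes. Returns (row, col) offset."""
    def best_offset(signal, profile):
        L = len(profile)
        profile = (profile - profile.mean()) / (profile.std() + 1e-12)
        best, arg = -np.inf, 0
        for o in range(len(signal) - L + 1):
            win = signal[o:o + L]
            win = (win - win.mean()) / (win.std() + 1e-12)
            score = float(win @ profile) / L
            if score > best:
                best, arg = score, o
        return arg

    row = best_offset(image.sum(axis=1), template.sum(axis=1))
    col = best_offset(image.sum(axis=0), template.sum(axis=0))
    return row, col
```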

  19. Correcting for possible tissue distortion between provocation and assessment in skin testing: the divergent beam UVB photo-test.

    PubMed

    O'Doherty, Jim; Henricson, Joakim; Falk, Magnus; Anderson, Chris D

    2013-11-01

    In tissue viability imaging (TiVi), an assessment method for skin erythema, correct orientation of the skin from provocation to assessment optimizes data interpretation. Image processing algorithms could compensate for the effects of skin translation, torsion and rotation by realigning assessment images to the position of the skin at provocation. A reference image of a divergent beam UVB photo-test was acquired, as well as test images at varying levels of translation, rotation and torsion. Using 12 skin markers, an algorithm was applied to restore the distorted test images to the reference image. The algorithm corrected torsion and rotation up to approximately 35 degrees. The radius of the erythemal reaction and the average value of the input image closely matched the 'true value' of the reference image. The image 'de-warping' procedure improves the robustness of response image evaluation in a clinical research setting and opens the possibility of correcting potentially flawed images taken away from the laboratory by the subject or patient themselves. This opportunity may increase the use of photo-testing and, by extension, other late-response skin testing where the necessity of a return assessment visit is a disincentive to performing the test. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
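
    A least-squares landmark fit captures the core of such a realignment. The sketch below estimates an affine map from marker correspondences; it is a simplified stand-in, since the paper's algorithm also handles torsion beyond what an affine model can absorb:

    ```python
    import numpy as np

    def fit_affine(src_pts, dst_pts):
        """Least-squares affine transform mapping src -> dst (N x 2 arrays).
        With ~12 skin markers this absorbs translation, rotation and shear."""
        n = len(src_pts)
        A = np.hstack([src_pts, np.ones((n, 1))])      # rows: [x, y, 1]
        params, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
        return params                                   # 3 x 2 matrix

    def apply_affine(params, pts):
        """Map points through the fitted transform."""
        A = np.hstack([pts, np.ones((len(pts), 1))])
        return A @ params
    ```

    Resampling the test image through the inverse of this transform (with bilinear interpolation) would restore it to the reference geometry.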

  20. A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose

    PubMed Central

    Rahman, Mohammad Mizanur; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse

    2017-01-01

    Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms that have open-ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification of irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance, algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and a generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer, should be used. The simulation results presented in this paper show that GRNN has higher correct-classification efficiency and false alarm reduction capability than RBFNN. As the design of a GRNN and an RBFNN is complex and expensive due to the large number of neurons required, a simple hyperspheric classification method based on the minimum, maximum, and mean (MMM) values of each class of the training dataset was presented. The MMM algorithm was simple and found to be fast and efficient in correctly classifying data of the training classes and correctly rejecting data of extraneous odors, thereby reducing false alarms. PMID:28895910

  1. A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose.

    PubMed

    Rahman, Mohammad Mizanur; Charoenlarpnopparut, Chalie; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse

    2017-09-12

    Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms that have open-ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification of irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance, algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and a generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer, should be used. The simulation results presented in this paper show that GRNN has higher correct-classification efficiency and false alarm reduction capability than RBFNN. As the design of a GRNN and an RBFNN is complex and expensive due to the large number of neurons required, a simple hyperspheric classification method based on the minimum, maximum, and mean (MMM) values of each class of the training dataset was presented. The MMM algorithm was simple and found to be fast and efficient in correctly classifying data of the training classes and correctly rejecting data of extraneous odors, thereby reducing false alarms.
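
    One plausible reading of the MMM rule is a per-class bounding box with nearest-mean tie-breaking; the abstract does not spell out the exact decision rule, so the sketch below is an interpretation, not the published classifier:

    ```python
    import numpy as np

    class MMMClassifier:
        """Per-class min/max box plus class mean. A sample is accepted by a
        class only if every feature lies inside that class's [min, max]
        range; otherwise it is rejected as an extraneous odor (false-alarm
        suppression). Among accepting classes, the nearest mean wins."""

        def fit(self, X, y):
            self.stats = {c: (X[y == c].min(0), X[y == c].max(0), X[y == c].mean(0))
                          for c in np.unique(y)}
            return self

        def predict(self, x):
            best, best_d = None, np.inf
            for c, (lo, hi, mu) in self.stats.items():
                if np.all((x >= lo) & (x <= hi)):
                    d = np.linalg.norm(x - mu)
                    if d < best_d:
                        best, best_d = c, d
            return best  # None means "extraneous odor": reject
    ```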

  2. Development and Positioning Accuracy Assessment of Single-Frequency Precise Point Positioning Algorithms by Combining GPS Code-Pseudorange Measurements with Real-Time SSR Corrections

    PubMed Central

    Kim, Miso; Park, Kwan-Dong

    2017-01-01

    We have developed a suite of real-time precise point positioning programs to process GPS pseudorange observables, and validated their performance through static and kinematic positioning tests. To correct inaccurate broadcast orbits and clocks, and account for signal delays occurring from the ionosphere and troposphere, we applied State Space Representation (SSR) error corrections provided by the Seoul Broadcasting System (SBS) in South Korea. Site displacements due to solid earth tide loading are also considered for the purpose of improving the positioning accuracy, particularly in the height direction. When the developed algorithm was tested under static positioning, Kalman-filtered solutions produced a root-mean-square error (RMSE) of 0.32 and 0.40 m in the horizontal and vertical directions, respectively. For the moving platform, the RMSE was found to be 0.53 and 0.69 m in the horizontal and vertical directions. PMID:28598403

  3. Improving chlorophyll-a retrievals and cross-sensor consistency through the OCI algorithm concept

    NASA Astrophysics Data System (ADS)

    Feng, L.; Hu, C.; Lee, Z.; Franz, B. A.

    2016-02-01

    The recently developed band-subtraction-based OCI chlorophyll-a algorithm is more tolerant than the band-ratio OCx algorithms to errors from atmospheric correction and other sources in oligotrophic oceans (Chl ≤ 0.25 mg m-3), and it has been implemented by NASA as the default algorithm to produce global Chl data from all ocean color missions. However, two areas still require improvements in its current implementation. First, the originally proposed algorithm switch between oligotrophic and more productive waters has been changed from 0.25-0.3 mg m-3 to 0.15-0.2 mg m-3 to account for the observed discontinuity in data statistics. Additionally, the algorithm does not account for variable proportions of colored dissolved organic matter (CDOM) in different ocean basins. Here, new step-wise regression equations with fine-tuned regression coefficients are used to raise the algorithm switch zone and to improve data statistics as well as retrieval accuracy. A new CDOM index (CDI) based on three spectral bands (412, 443 and 490 nm) is used as a weighting factor to adjust the algorithm for the optical disparities between different oceans. The updated Chl OCI algorithm is then evaluated for its overall accuracy using field observations from the SeaBASS data archive, and for its cross-sensor consistency using multi-sensor observations over the global oceans. Keywords: Chlorophyll-a, Remote sensing, Ocean color, OCI, OCx, CDOM, MODIS, SeaWiFS, VIIRS
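
    For orientation, the band-subtraction color index and the linear blend across the switch zone can be sketched as follows. The baseline construction follows the published CI form; the CI-to-chlorophyll regression coefficients are omitted, and the switch-zone bounds are the values quoted above:

    ```python
    import numpy as np

    def color_index(rrs443, rrs555, rrs670):
        """Band-subtraction color index: Rrs(555) relative to a linear
        baseline between Rrs(443) and Rrs(670)."""
        return rrs555 - (rrs443 + (555.0 - 443.0) / (670.0 - 443.0)
                         * (rrs670 - rrs443))

    def blend_chl(chl_ci, chl_ocx, lo=0.15, hi=0.2):
        """Linear blend of CI- and OCx-derived chlorophyll across the
        switch zone [lo, hi] mg m^-3, avoiding a hard discontinuity."""
        w = np.clip((chl_ci - lo) / (hi - lo), 0.0, 1.0)
        return (1.0 - w) * chl_ci + w * chl_ocx
    ```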

  4. Dynamic PET image reconstruction integrating temporal regularization associated with respiratory motion correction for applications in oncology

    NASA Astrophysics Data System (ADS)

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric

    2018-02-01

    Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks, by combining a respiratory motion correction approach with temporal regularization in a unique reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.

  5. Dynamic PET image reconstruction integrating temporal regularization associated with respiratory motion correction for applications in oncology.

    PubMed

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric

    2018-02-13

    Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks, by combining a respiratory motion correction approach with temporal regularization in a unique reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.

  6. An advanced retrieval algorithm for greenhouse gases using polarization information measured by GOSAT TANSO-FTS SWIR I: Simulation study

    NASA Astrophysics Data System (ADS)

    Kikuchi, N.; Yoshida, Y.; Uchino, O.; Morino, I.; Yokota, T.

    2016-11-01

    We present an algorithm for retrieving column-averaged dry air mole fraction of carbon dioxide (XCO2) and methane (XCH4) from reflected spectra in the shortwave infrared (SWIR) measured by the TANSO-FTS (Thermal And Near infrared Sensor for carbon Observation Fourier Transform Spectrometer) sensor on board the Greenhouse gases Observing SATellite (GOSAT). The algorithm uses the two linear polarizations observed by TANSO-FTS to improve corrections to the interference effects of atmospheric aerosols, which degrade the accuracy in the retrieved greenhouse gas concentrations. To account for polarization by the land surface reflection in the forward model, we introduced a bidirectional reflection matrix model that has two parameters to be retrieved simultaneously with other state parameters. The accuracy in XCO2 and XCH4 values retrieved with the algorithm was evaluated by using simulated retrievals over both land and ocean, focusing on the capability of the algorithm to correct imperfect prior knowledge of aerosols. To do this, we first generated simulated TANSO-FTS spectra using a global distribution of aerosols computed by the aerosol transport model SPRINTARS. Then the simulated spectra were submitted to the algorithms as measurements both with and without polarization information, adopting a priori profiles of aerosols that differ from the true profiles. We found that the accuracy of XCO2 and XCH4, as well as profiles of aerosols, retrieved with polarization information was considerably improved over values retrieved without polarization information, for simulated observations over land with aerosol optical thickness greater than 0.1 at 1.6 μm.

  7. Aerosol Correction for Remotely Sensed Sea Surface Temperatures From the NOAA AVHRR: Phase II

    NASA Astrophysics Data System (ADS)

    Nalli, N. R.; Ignatov, A.

    2002-05-01

    For over two decades, the National Oceanic and Atmospheric Administration (NOAA) has produced global retrievals of sea surface temperature (SST) using infrared (IR) data from the Advanced Very High Resolution Radiometer (AVHRR). The standard multichannel retrieval algorithms are derived from regression analyses of AVHRR window channel brightness temperatures against in situ buoy measurements under non-cloudy conditions thus providing a correction for IR attenuation due to molecular water vapor absorption. However, for atmospheric conditions with elevated aerosol levels (e.g., arising from dust, biomass burning and volcanic eruptions), such algorithms lead to significant negative biases in SST because of IR attenuation arising from aerosol absorption and scattering. This research presents the development of a 2nd-phase aerosol correction algorithm for daytime AVHRR SST. To accomplish this, a long-term (1990-1998), global AVHRR-buoy matchup database was created by merging the Pathfinder Atmospheres (PATMOS) and Oceans (PFMDB) data sets. The merged data are unique in that they include multi-year, global daytime estimates of aerosol optical depth (AOD) derived from AVHRR channels 1 and 2 (0.63 and 0.83 μm, respectively), along with an effective Angstrom exponent derived from the AOD retrievals (Ignatov and Nalli, 2002). Recent enhancements in the aerosol data constitute an improvement over the Phase I algorithm (Nalli and Stowe, 2002) which relied only on channel 1 AOD and the ratio of normalized reflectance from channels 1 and 2. The Angstrom exponent and channel 2 AOD provide important statistical information about the particle size distribution of the aerosol. The SST bias can be parametrically expressed as a function of observed AVHRR channels 1 and 2 slant-path AOD, normalized reflectance ratio and the Angstrom exponent. Based upon these empirical relationships, aerosol correction equations are then derived for the daytime multichannel and nonlinear SST (MCSST and NLSST) algorithms. Separate sets of coefficients are utilized for two aerosol modes, these being stratospheric/tropospheric (e.g., volcanic aerosol) and tropospheric (e.g., dust, smoke). The algorithms are subsequently applied to retrospective PATMOS data to demonstrate the potential for climate applications. The minimization of cold biases in the AVHRR SST, as demonstrated in this work, should improve its overall utility for the general user community.

  8. Correcting infrared satellite estimates of sea surface temperature for atmospheric water vapor attenuation

    NASA Technical Reports Server (NTRS)

    Emery, William J.; Yu, Yunyue; Wick, Gary A.; Schluessel, Peter; Reynolds, Richard W.

    1994-01-01

    A new satellite sea surface temperature (SST) algorithm is developed that uses nearly coincident measurements from the microwave special sensor microwave imager (SSM/I) to correct for atmospheric moisture attenuation of the infrared signal from the advanced very high resolution radiometer (AVHRR). This new SST algorithm is applied to AVHRR imagery from the South Pacific and Norwegian seas, which are then compared with simultaneous in situ (ship based) measurements of both skin and bulk SST. In addition, an SST algorithm using a quadratic product of the difference between the two AVHRR thermal infrared channels is compared with the in situ measurements. While the quadratic formulation provides a considerable improvement over the older cross product (CPSST) and multichannel (MCSST) algorithms, the SSM/I corrected SST (called the water vapor or WVSST) shows overall smaller errors when compared to both the skin and bulk in situ SST observations. Applied to individual AVHRR images, the WVSST reveals an SST difference pattern (CPSST-WVSST) similar in shape to the water vapor structure while the CPSST-quadratic SST difference appears unrelated in pattern to the nearly coincident water vapor pattern. An application of the WVSST to week-long composites of global area coverage (GAC) AVHRR data demonstrates again the manner in which the WVSST corrects the AVHRR for atmospheric moisture attenuation. By comparison the quadratic SST method underestimates the SST corrections in the lower latitudes and overestimates the SST in the higher latitudes. Correlations between the AVHRR thermal channel differences and the SSM/I water vapor demonstrate the inability of the channel difference to represent water vapor in the midlatitudes and high latitudes during summer. Compared against drifting buoy data the WVSST and the quadratic SST both exhibit the same general behavior with relatively small differences with the buoy temperatures.

  9. An improved parallel fuzzy connected image segmentation method based on CUDA.

    PubMed

    Wang, Liansheng; Li, Dong; Huang, Shaohui

    2016-05-12

    Fuzzy connectedness (FC) is an effective method for extracting fuzzy objects from medical images. However, when FC is applied to large medical image datasets, its running time becomes very expensive. Therefore, a parallel CUDA version of FC (CUDA-kFOE) was proposed by Ying et al. to accelerate the original FC. Unfortunately, CUDA-kFOE does not consider the edges between GPU blocks, which causes miscalculation of edge points. In this paper, an improved algorithm is proposed that adds a correction step on the edge points, which greatly enhances the calculation accuracy. The improved method proceeds iteratively. In the first iteration, the affinity computation strategy is changed and a look-up table is employed to reduce memory use. In the second iteration, the voxels miscalculated because of asynchronous execution are updated. Three hepatic vascular CT sequences of different sizes were used in the experiments, each with three different seeds. An NVIDIA Tesla C2075 was used to evaluate the improved method on these three data sets. Experimental results show that the improved algorithm achieves faster segmentation than the CPU version and higher accuracy than CUDA-kFOE. The calculation results were consistent with the CPU version, which demonstrates that the edge-point calculation error of the original CUDA-kFOE is corrected. The proposed method has a comparable time cost and fewer errors compared to the original CUDA-kFOE, as demonstrated in the experimental results. In the future, we will focus on automatic acquisition methods and automatic processing.

  10. Research and implementation of the algorithm for unwrapped and distortion correction basing on CORDIC for panoramic image

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang

    2008-03-01

    An unwrapping and distortion-correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) and bilinear interpolation is presented in this paper, with the purpose of processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which is much closer to human vision. The algorithm for panoramic image processing is modeled in VHDL and implemented in an FPGA. The experimental results show that the proposed unwrapping and distortion-correction algorithm has low computational complexity and that the architecture for dynamic panoramic image processing has low hardware cost and power consumption, confirming the validity of the proposed algorithm.
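
    The unwrapping itself is a polar-to-rectangular resampling with bilinear interpolation, which a software sketch makes concrete. In the paper the trigonometric rotations are evaluated by CORDIC in the FPGA; here they are ordinary floating-point calls, and all parameter names are illustrative:

    ```python
    import numpy as np

    def unwrap_panoramic(annular, out_h, out_w, cx, cy, r_in, r_out):
        """Map an annular PAL image to a rectangular panorama: each output
        column is an angle, each row a radius (out_h must be >= 2)."""
        out = np.zeros((out_h, out_w), dtype=np.float64)
        for v in range(out_h):
            r = r_in + (r_out - r_in) * v / (out_h - 1)
            for u in range(out_w):
                theta = 2.0 * np.pi * u / out_w
                x = cx + r * np.cos(theta)   # CORDIC computes these
                y = cy + r * np.sin(theta)   # rotations in hardware
                x0, y0 = int(np.floor(x)), int(np.floor(y))
                if 0 <= x0 < annular.shape[1] - 1 and 0 <= y0 < annular.shape[0] - 1:
                    fx, fy = x - x0, y - y0
                    # Bilinear interpolation over the four neighbors.
                    out[v, u] = ((1 - fx) * (1 - fy) * annular[y0, x0]
                                 + fx * (1 - fy) * annular[y0, x0 + 1]
                                 + (1 - fx) * fy * annular[y0 + 1, x0]
                                 + fx * fy * annular[y0 + 1, x0 + 1])
        return out
    ```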

  11. Ensemble stacking mitigates biases in inference of synaptic connectivity.

    PubMed

    Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N

    2018-01-01

    A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
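
    A minimal sketch of the stacking step, assuming each method outputs a score per candidate edge and the weights were fit elsewhere (e.g. by regression against ground-truth connectivity on simulated networks); the z-scoring is an assumption to put methods on a comparable scale:

    ```python
    import numpy as np

    def ensemble_scores(method_scores, weights):
        """Linear combination of per-edge connectivity scores from several
        inference methods. method_scores: list of equally shaped arrays;
        weights: one coefficient per method."""
        stacked = np.stack([(s - s.mean()) / s.std() for s in method_scores])
        return np.tensordot(weights, stacked, axes=1)
    ```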

  12. A procedure for testing the quality of LANDSAT atmospheric correction algorithms

    NASA Technical Reports Server (NTRS)

    Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.

    1982-01-01

    There are two basic methods for testing the quality of an algorithm that minimizes atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. To select the parameters, the image contrast is first examined for a series of parameter combinations; the contrast improves for better corrections. In addition, the correlation coefficient between two sub-images of the same scene, taken at different times, is used for parameter selection. The regions to be correlated should not have changed considerably over time. A few examples using the proposed procedure are presented.
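
    Both selection criteria are easy to state in code. The sketch below shows illustrative versions of the two metrics; the paper does not specify its exact contrast definition, so std/mean is an assumption:

    ```python
    import numpy as np

    def contrast(img):
        """Global contrast metric (std/mean); per the procedure above, it
        should increase for a better atmospheric correction."""
        return float(img.std() / img.mean())

    def temporal_correlation(img_a, img_b):
        """Correlation between co-registered sub-images of the same scene
        at two dates; stable regions should correlate more strongly once
        atmospheric effects are removed."""
        return float(np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1])
    ```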

  13. Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging.

    PubMed

    Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing

    2017-11-07

    This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to handle the azimuth variation of the frequency modulation rates caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm proposed in this paper uses the method of series reversion (MSR) to improve the ADOF and focusing precision. It also introduces a high-order processing kernel to avoid range block processing. Simulation results show that the GNLCS algorithm enlarges the ADOF and improves the focusing precision for high-resolution, highly squint SAR data.

  14. Qualitative and quantitative evaluation of six algorithms for correcting intensity nonuniformity effects.

    PubMed

    Arnold, J B; Liow, J S; Schaper, K A; Stern, J J; Sled, J G; Shattuck, D W; Worth, A J; Cohen, M S; Leahy, R M; Mazziotta, J C; Rottenberg, D A

    2001-05-01

    The desire to correct intensity nonuniformity in magnetic resonance images has led to the proliferation of nonuniformity-correction (NUC) algorithms with different theoretical underpinnings. In order to provide end users with a rational basis for selecting a given algorithm for a specific neuroscientific application, we evaluated the performance of six NUC algorithms. We used simulated and real MRI data volumes, including six repeat scans of the same subject, in order to rank the accuracy, precision, and stability of the nonuniformity corrections. We also compared algorithms using data volumes from different subjects and different (1.5T and 3.0T) MRI scanners in order to relate differences in algorithmic performance to intersubject variability and/or differences in scanner performance. In phantom studies, the correlation of the extracted with the applied nonuniformity was highest in the transaxial (left-to-right) direction and lowest in the axial (top-to-bottom) direction. Two of the six algorithms demonstrated a high degree of stability, as measured by the iterative application of the algorithm to its corrected output. While none of the algorithms performed ideally under all circumstances, locally adaptive methods generally outperformed nonadaptive methods. Copyright 2001 Academic Press.

  15. Bias Correction of MODIS AOD using DragonNET to obtain improved estimation of PM2.5

    NASA Astrophysics Data System (ADS)

    Gross, B.; Malakar, N. K.; Atia, A.; Moshary, F.; Ahmed, S. A.; Oo, M. M.

    2014-12-01

    MODIS AOD retrievals using the Dark Target algorithm are strongly affected by the underlying surface reflection properties. In particular, the operational algorithms make use of surface parameterizations trained on global datasets and therefore do not properly account for urban surface differences. This parameterization continues to show an underestimation of the surface reflection, which results in a general over-biasing in AOD retrievals. Recent results using the DragonNET datasets, as well as high-resolution retrievals in the NYC area, illustrate that this is even more significant in the newest C006 3 km retrievals. In the past, we used AERONET observations at the City College site to obtain bias-corrected AOD, but the homogeneity assumption in using only one site for the region is clearly an issue. On the other hand, DragonNET observations provide ample opportunities to better tune the surface corrections while also providing better statistical validation. In this study we present a neural network method for bias correction of the MODIS AOD using multiple factors, including surface reflectivity at 2130 nm, sun-view geometrical factors and land-class information. These corrected AODs are then used, together with additional WRF meteorological factors, to improve estimates of PM2.5. Efforts to explore the portability to other urban areas will be discussed. In addition, annual surface ratio maps will be developed, illustrating that among the land classes, the urban pixels constitute the largest deviations from the operational model.
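
    A toy version of the regression setup, using synthetic stand-ins for the collocated predictors; the study's actual feature set, network architecture and training data differ:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n = 500
    # Illustrative stand-ins for the predictors named above: MODIS AOD,
    # 2130 nm surface reflectance, sun-view geometry, land class.
    X = rng.random((n, 4))
    aeronet_aod = X[:, 0] * 0.8 + 0.05 * rng.standard_normal(n)  # toy target

    model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                         random_state=0).fit(X, aeronet_aod)
    aod_corrected = model.predict(X)  # network output replaces the biased AOD
    ```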

  16. Improvements to Lunar BRDF-Corrected Nighttime Satellite Imagery: Uses and Applications

    NASA Technical Reports Server (NTRS)

    Cole, Tony A.; Molthan, Andrew L.; Schultz, Lori A.; Roman, Miguel O.; Wanik, David W.

    2016-01-01

    Observations made by the VIIRS day/night band (DNB) provide daily, nighttime measurements to monitor Earth surface processes. However, these observations are impacted by variations in reflected solar radiation on the moon's surface. As the moon transitions from new to full phase, increasing radiance is reflected to the Earth's surface and contributes additional reflected moonlight from clouds and land surface, in addition to emissions from other light sources observed by the DNB. The introduction of a bi-directional reflectance distribution function (BRDF) algorithm serves to remove these lunar variations and normalize observed radiances. Provided by the Terrestrial Information Systems Laboratory at Goddard Space Flight Center, a 1 km gridded lunar BRDF-corrected DNB product and VIIRS cloud mask can be used for a multitude of nighttime applications without influence from the moon. Such applications include the detection of power outages following severe weather events using pre- and post-event DNB imagery, as well as the identification of boat features to curtail illegal fishing practices. This presentation will provide context on the importance of the lunar BRDF correction algorithm and explore the aforementioned uses of this improved DNB product for applied science applications.

  17. Improvements to Lunar BRDF-Corrected Nighttime Satellite Imagery: Uses and Applications

    NASA Astrophysics Data System (ADS)

    Cole, T.; Molthan, A.; Schultz, L. A.; Roman, M. O.; Wanik, D. W.

    2016-12-01

    Observations made by the VIIRS day/night band (DNB) provide daily, nighttime measurements to monitor Earth surface processes. However, these observations are impacted by variations in reflected solar radiation on the moon's surface. As the moon transitions from new to full phase, increasing radiance is reflected to the Earth's surface and contributes additional reflected moonlight from clouds and land surface, in addition to emissions from other light sources observed by the DNB. The introduction of a bi-directional reflectance distribution function (BRDF) algorithm serves to remove these lunar variations and normalize observed radiances. Provided by the Terrestrial Information Systems Laboratory at Goddard Space Flight Center, a 1 km gridded lunar BRDF-corrected DNB product and VIIRS cloud mask can be used for a multitude of nighttime applications without influence from the moon. Such applications include the detection of power outages following severe weather events using pre- and post-event DNB imagery, as well as the identification of boat features to curtail illegal fishing practices. This presentation will provide context on the importance of the lunar BRDF correction algorithm and explore the aforementioned uses of this improved DNB product for applied science applications.

  18. Atmospheric Correction Prototype Algorithm for High Spatial Resolution Multispectral Earth Observing Imaging Systems

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary

    2006-01-01

    This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. This presentation focused on preliminary results of only the satellite-based atmospheric correction algorithm.

  19. Determination of stores pointing error due to wing flexibility under flight load

    NASA Technical Reports Server (NTRS)

    Lokos, William A.; Bahm, Catherine M.; Heinle, Robert A.

    1995-01-01

    The in-flight elastic wing twist of a fighter-type aircraft was studied to provide for an improved on-board real-time computed prediction of pointing variations of three wing store stations. This is an important capability to correct sensor pod alignment variation or to establish initial conditions of iron bombs or smart weapons prior to release. The original algorithm was based upon coarse measurements. The electro-optical Flight Deflection Measurement System (FDMS) measured the deformed wing shape in flight under maneuver loads to provide a higher resolution database from which an improved twist prediction algorithm could be developed. The FDMS produced excellent repeatable data. In addition, a NASTRAN finite-element analysis was performed to provide additional elastic deformation data. The FDMS data combined with the NASTRAN analysis indicated that an improved prediction algorithm could be derived by using a different set of aircraft parameters, namely normal acceleration, stores configuration, Mach number, and gross weight.

  20. Control strategy of grid-connected photovoltaic generation system based on GMPPT method

    NASA Astrophysics Data System (ADS)

    Wang, Zhongfeng; Zhang, Xuyang; Hu, Bo; Liu, Jun; Li, Ligang; Gu, Yongqiang; Zhou, Bowen

    2018-02-01

    There are multiple local maximum power points when a photovoltaic (PV) array operates under a partial shading condition (PSC). However, the traditional maximum power point tracking (MPPT) algorithm can easily be trapped at a local maximum power point (MPP) and fail to find the global maximum power point (GMPP). To solve this problem, an improved global maximum power point tracking (GMPPT) method is presented that combines a traditional MPPT method with a particle swarm optimization (PSO) algorithm. Under different operating conditions of the PV cells, different tracking algorithms are used: when the environment changes, the improved PSO algorithm performs the global search, and the variable-step incremental conductance (INC) method then achieves MPPT around the located optimum. Based on a simulation model of the grid-connected PV system built in Matlab/Simulink, a comparative analysis of the tracking performance of the proposed control algorithm and the traditional MPPT method, under both uniform solar conditions and PSC, validates the correctness, feasibility and effectiveness of the proposed control strategy.
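
    The global stage can be sketched as a textbook PSO over the operating voltage, handing its result to INC for local tracking. `pv_power` is a hypothetical measurement callback, and the inertia/acceleration constants are generic choices, not the paper's tuning:

    ```python
    import numpy as np

    def pso_gmpp(pv_power, v_min, v_max, n_particles=6, iters=30):
        """Search the PV voltage range for the global maximum power point.
        pv_power(v) returns measured power at operating voltage v; under
        partial shading it is multi-modal, which traps hill-climbing MPPT."""
        rng = np.random.default_rng(0)
        v = rng.uniform(v_min, v_max, n_particles)   # particle positions
        vel = np.zeros(n_particles)
        p_best = v.copy()
        p_best_val = np.array([pv_power(x) for x in v])
        g_best = p_best[np.argmax(p_best_val)]
        for _ in range(iters):
            r1, r2 = rng.random(n_particles), rng.random(n_particles)
            vel = 0.6 * vel + 1.5 * r1 * (p_best - v) + 1.5 * r2 * (g_best - v)
            v = np.clip(v + vel, v_min, v_max)
            val = np.array([pv_power(x) for x in v])
            improved = val > p_best_val
            p_best[improved], p_best_val[improved] = v[improved], val[improved]
            g_best = p_best[np.argmax(p_best_val)]
        return g_best  # hand over to INC for fine local tracking
    ```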

  1. Off-Angle Iris Correction Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J; Thompson, Joseph T; Karakaya, Mahmut

    In many real-world iris recognition systems, obtaining consistent frontal images is problematic due to inexperienced or uncooperative users, untrained operators, or distracting environments. As a result, many collected images are unusable by modern iris matchers. In this chapter we present four methods for correcting off-angle iris images to appear frontal, which makes them compatible with existing iris matchers. The methods include an affine correction, a ray-traced model of the human eye, measured displacements, and a genetic algorithm optimized correction. The affine correction represents a simple way to create an iris image that appears frontal, but it does not account for refractive distortions of the cornea. The other methods account for refraction. The ray-traced model simulates the optical properties of the cornea. The other two methods are data driven. The first uses optical flow to measure the displacements of the iris texture when compared to frontal images of the same subject. The second uses a genetic algorithm to learn a mapping that optimizes the Hamming distance scores between off-angle and frontal images. In this chapter we hypothesize that the biological model presented in our earlier work does not adequately account for all variations in eye anatomy, and therefore the two data-driven approaches should yield better performance. Results are presented using the commercial VeriEye matcher, showing that the genetic algorithm method clearly improves over prior work and makes iris recognition possible up to 50 degrees off-angle.

  2. On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays

    NASA Technical Reports Server (NTRS)

    Shao, H. M.; Deutsch, L. J.; Reed, I. S.

    1987-01-01

    A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.

  3. On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays

    NASA Technical Reports Server (NTRS)

    Shao, Howard M.; Reed, Irving S.

    1988-01-01

    A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.

  4. Optimal pattern synthesis for speech recognition based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Korsun, O. N.; Poliyev, A. V.

    2018-02-01

    An algorithm for building an optimal pattern for automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern is formed by decomposing an initial pattern into principal components, which reduces the dimension of the multi-parameter optimization problem. In the next step, training samples are introduced and optimal estimates of the principal component decomposition coefficients are obtained by a numerical parameter optimization algorithm. Finally, we present experimental results that show the improvement in speech recognition achieved by the proposed optimization algorithm.

  5. WE-AB-204-09: Respiratory Motion Correction in 4D-PET by Simultaneous Motion Estimation and Image Reconstruction (SMEIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalantari, F; Wang, J; Li, T

    2015-06-15

    Purpose: In conventional 4D-PET, images from different frames are reconstructed individually and aligned by registration methods. Two issues with these approaches are: (1) reconstruction algorithms do not make full use of all projection statistics; and (2) image registration between noisy images can result in poor alignment. In this study we investigated the use of the simultaneous motion estimation and image reconstruction (SMEIR) method, originally developed for cone beam CT, for motion estimation/correction in 4D-PET. Methods: A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) is used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons-derived deformation vector fields (DVFs) as the initial estimate. A motion model update is performed to obtain an optimal set of DVFs between the pmc-PET and other phases by matching the forward projection of the deformed pmc-PET with the measured projections of the other phases. Using the updated DVFs, the OSEM-TV image reconstruction is repeated and new DVFs are estimated based on the updated images. A 4D XCAT phantom with a typical FDG biodistribution and a 10 mm diameter tumor was used to evaluate the performance of the SMEIR algorithm. Results: Image quality of 4D-PET is greatly improved by the SMEIR algorithm. When all projections are used to reconstruct a 3D-PET, motion blurring artifacts are present, leading to a more-than-fivefold overestimation of the tumor size and a 54% underestimation of the tumor-to-lung contrast ratio. This error was reduced to 37% and 20% for post-reconstruction registration methods and SMEIR, respectively. Conclusion: The SMEIR method can be used for motion estimation/correction in 4D-PET. The statistics are greatly improved since all projection data are combined to update the image. The performance of the SMEIR algorithm for 4D-PET is sensitive to the smoothness control parameters in the DVF estimation step.

  6. Nonuniformity correction algorithm with efficient pixel offset estimation for infrared focal plane arrays.

    PubMed

    Orżanowski, Tomasz

    2016-01-01

    This paper presents an infrared focal plane array (IRFPA) response nonuniformity correction (NUC) algorithm that is easy to implement in hardware. The proposed NUC algorithm is based on a linear correction scheme with a useful method for updating the pixel offset correction coefficients. The new approach to IRFPA response nonuniformity correction uses the pixel response change, determined at the actual operating conditions relative to the reference conditions by means of a shutter, to compensate for pixel offset temporal drift. Moreover, it also removes any optics shading effect in the output image. To show the efficiency of the proposed NUC algorithm, test results for a microbolometer IRFPA are presented.
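
    One plausible form of such an update, shown purely as a sketch; the paper's exact update rule is not reproduced here:

    ```python
    def nuc_correct(raw, gain, offset, shutter_now, shutter_ref):
        """Linear NUC with a shutter-based offset update (illustrative).
        gain/offset: per-pixel coefficients from a reference calibration
        (e.g. two-point blackbody). The change in the uniform shutter frame
        between the reference and the current operating condition estimates
        the per-pixel offset drift (and any optics shading), which is
        removed before applying the linear correction."""
        drift = shutter_now - shutter_ref
        return gain * (raw - drift) + offset
    ```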

  7. Complex differential variance angiography with noise-bias correction for optical coherence tomography of the retina

    PubMed Central

    Braaf, Boy; Donner, Sabine; Nam, Ahhyun S.; Bouma, Brett E.; Vakoc, Benjamin J.

    2018-01-01

    Complex differential variance (CDV) provides phase-sensitive angiographic imaging for optical coherence tomography (OCT) with immunity to phase-instabilities of the imaging system and small-scale axial bulk motion. However, like all angiographic methods, measurement noise can result in erroneous indications of blood flow that confuse the interpretation of angiographic images. In this paper, a modified CDV algorithm that corrects for this noise-bias is presented. This is achieved by normalizing the CDV signal by analytically derived upper and lower limits. The noise-bias corrected CDV algorithm was implemented into an experimental 1 μm wavelength OCT system for retinal imaging that used an eye tracking scanner laser ophthalmoscope at 815 nm for compensation of lateral eye motions. The noise-bias correction improved the CDV imaging of the blood flow in tissue layers with a low signal-to-noise ratio and suppressed false indications of blood flow outside the tissue. In addition, the CDV signal normalization suppressed noise induced by galvanometer scanning errors and small-scale lateral motion. High quality cross-section and motion-corrected en face angiograms of the retina and choroid are presented. PMID:29552388

  8. Complex differential variance angiography with noise-bias correction for optical coherence tomography of the retina.

    PubMed

    Braaf, Boy; Donner, Sabine; Nam, Ahhyun S; Bouma, Brett E; Vakoc, Benjamin J

    2018-02-01

    Complex differential variance (CDV) provides phase-sensitive angiographic imaging for optical coherence tomography (OCT) with immunity to phase-instabilities of the imaging system and small-scale axial bulk motion. However, like all angiographic methods, measurement noise can result in erroneous indications of blood flow that confuse the interpretation of angiographic images. In this paper, a modified CDV algorithm that corrects for this noise-bias is presented. This is achieved by normalizing the CDV signal by analytically derived upper and lower limits. The noise-bias corrected CDV algorithm was implemented into an experimental 1 μm wavelength OCT system for retinal imaging that used an eye tracking scanner laser ophthalmoscope at 815 nm for compensation of lateral eye motions. The noise-bias correction improved the CDV imaging of the blood flow in tissue layers with a low signal-to-noise ratio and suppressed false indications of blood flow outside the tissue. In addition, the CDV signal normalization suppressed noise induced by galvanometer scanning errors and small-scale lateral motion. High quality cross-section and motion-corrected en face angiograms of the retina and choroid are presented.
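
    For context, a common form of the CDV estimator and the normalization step look like the following; the analytic limits are passed in as inputs because the paper's exact expressions are not reproduced here:

    ```python
    import numpy as np

    def cdv(frames, eps=1e-12):
        """Complex differential variance over repeated complex OCT frames
        (shape: repeats x depth x lateral). Decorrelation between adjacent
        repeats marks flow."""
        a, b = frames[:-1], frames[1:]
        num = np.abs(np.sum(a * np.conj(b), axis=0))
        den = 0.5 * np.sum(np.abs(a) ** 2 + np.abs(b) ** 2, axis=0)
        return np.sqrt(np.clip(1.0 - num / (den + eps), 0.0, 1.0))

    def normalize_cdv(raw_cdv, lower, upper):
        """Noise-bias correction by rescaling between a lower limit (static
        voxel at the local SNR) and an upper limit (full decorrelation)."""
        return np.clip((raw_cdv - lower) / (upper - lower), 0.0, 1.0)
    ```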

  9. Applying Tafkaa For Atmospheric Correction of Aviris Over Coral Ecosystems In The Hawaiian Islands

    NASA Technical Reports Server (NTRS)

    Goodman, James A.; Montes, Marcos J.; Ustin, Susan L.

    2004-01-01

    Growing concern over the health of coastal ecosystems, particularly coral reefs, has produced increased interest in remote sensing as a tool for the management and monitoring of these valuable natural resources. Hyperspectral capabilities show promising results in this regard, but as yet remain somewhat hindered by the technical and physical issues concerning the intervening water layer. One such issue is the ability to atmospherically correct images over shallow aquatic areas, where complications arise due to varying effects from specular reflection, wind-blown surface waves, and reflectance from the benthic substrate. Tafkaa, an atmospheric correction algorithm under development at the U.S. Naval Research Laboratory, addresses these variables and provides a viable approach to the atmospheric correction issue. Using imagery from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over two shallow coral ecosystems in the Hawaiian Islands, French Frigate Shoals and Kane'ohe Bay, we first demonstrate how land-based atmospheric corrections can be limited in such an environment. We then discuss the input requirements and underlying algorithm concepts of Tafkaa and conclude with examples illustrating the improved performance of Tafkaa using the same AVIRIS images.

  10. Automatic motion correction of clinical shoulder MR images

    NASA Astrophysics Data System (ADS)

    Manduca, Armando; McGee, Kiaran P.; Welch, Edward B.; Felmlee, Joel P.; Ehman, Richard L.

    1999-05-01

    A technique for the automatic correction of motion artifacts in MR images was developed. The algorithm uses only the raw (complex) data from the MR scanner, and requires no knowledge of the patient motion during the acquisition. It operates by searching over the space of possible patient motions and determining the motion which, when used to correct the image, optimizes the image quality. The performance of this algorithm was tested in coronal images of the rotator cuff in a series of 144 patients. A four-observer comparison of the autocorrected images with the uncorrected images demonstrated that motion artifacts were significantly reduced in 48% of the cases. The improvements in image quality were similar to those achieved with a previously reported navigator echo-based adaptive motion correction. The results demonstrate that autocorrection is a practical technique for retrospectively reducing motion artifacts in a demanding clinical MRI application. It achieves performance comparable to a navigator-based correction technique, which is significant because autocorrection does not require an imaging sequence that has been modified to explicitly track motion during acquisition. The approach is flexible and should be readily extensible to other types of MR acquisitions that are corrupted by global motion.
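
    The search-and-score structure of autocorrection can be sketched in a few lines. Here gradient entropy serves as an assumed image-quality metric, and only a single k-space line and in-plane shift are optimized; the clinical implementation searches a much richer motion space:

    ```python
    import numpy as np

    def gradient_entropy(img_mag):
        """Image-quality metric: entropy of the vertical gradient magnitude;
        motion ghosting raises it, so the search minimizes it."""
        g = np.abs(np.diff(img_mag, axis=0)).ravel()
        p = g / (g.sum() + 1e-12)
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    def best_shift_for_line(kspace, line, candidate_shifts):
        """Apply the linear phase of each candidate in-plane shift to one
        k-space line, reconstruct, and keep the shift giving the best
        (lowest-entropy) image."""
        freqs = np.fft.fftfreq(kspace.shape[1])
        best, best_score = 0.0, np.inf
        for dx in candidate_shifts:
            trial = kspace.copy()
            trial[line] = trial[line] * np.exp(-2j * np.pi * freqs * dx)
            score = gradient_entropy(np.abs(np.fft.ifft2(trial)))
            if score < best_score:
                best, best_score = dx, score
        return best
    ```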

  11. Artifact reduction of different metallic implants in flat detector C-arm CT.

    PubMed

    Hung, S-C; Wu, C-C; Lin, C-J; Guo, W-Y; Luo, C-B; Chang, F-C; Chang, C-Y

    2014-07-01

    Flat detector CT has been increasingly used as a follow-up examination after endovascular intervention. Metal artifact reduction has been successfully demonstrated in coil mass cases, but only in a small series. We attempted to objectively and subjectively evaluate the feasibility of metal artifact reduction with various metallic objects and coil lengths. We retrospectively reprocessed the flat detector CT data of 28 patients (15 men, 13 women; mean age, 55.6 years) after they underwent endovascular treatment (20 coiling ± stent placement, 6 liquid embolizers) or shunt drainage (n = 2) between January 2009 and November 2011 by using a metal artifact reduction correction algorithm. We measured CT value ranges and noise by using region-of-interest methods, and 2 experienced neuroradiologists rated the degrees of improved imaging quality and artifact reduction by comparing uncorrected and corrected images. After we applied the metal artifact reduction algorithm, the CT value ranges and the noise were substantially reduced (1815.3 ± 793.7 versus 231.7 ± 95.9 and 319.9 ± 136.6 versus 45.9 ± 14.0; both P < .001) regardless of the types of metallic objects and various sizes of coil masses. The rater study achieved an overall improvement of imaging quality and artifact reduction (85.7% and 78.6% of cases by 2 raters, respectively), with the greatest improvement in the coiling group, moderate improvement in the liquid embolizers, and the smallest improvement in ventricular shunting (overall agreement, 0.857). The metal artifact reduction algorithm substantially reduced artifacts and improved the objective image quality in every studied case. It also allowed improved diagnostic confidence in most cases. © 2014 by American Journal of Neuroradiology.

  12. Crosstalk effect and its mitigation in Aqua MODIS middle wave infrared bands

    NASA Astrophysics Data System (ADS)

    Sun, Junqiang; Madhavan, Sriharsha; Wang, Menghua

    2017-09-01

    The MODerate-resolution Imaging Spectroradiometer (MODIS) is one of the primary instruments in the National Aeronautics and Space Administration (NASA) Earth Observing System (EOS). The first MODIS instrument was launched in December 1999 on board the Terra spacecraft. A follow-on MODIS was launched into an afternoon orbit in 2002 and is aboard the Aqua spacecraft. The two MODIS instruments are very similar; each has 36 bands, among which bands 20 to 25 are Middle Wave Infrared (MWIR) bands covering a wavelength range from approximately 3.750 μm to 4.515 μm. Severe contamination was found in these bands early in the mission, but the effect was not characterized or mitigated at the time. The crosstalk effect induces strong striping in the Earth View (EV) images and causes significant retrieval errors in the EV Brightness Temperature (BT) in these bands. An algorithm using a linear approximation derived from on-orbit lunar observations has been developed to correct the crosstalk effect and has been successfully applied to mitigate the effect in both Terra and Aqua MODIS Long Wave Infrared (LWIR) Photovoltaic (PV) bands. In this paper, the crosstalk effect in the Aqua MWIR bands is investigated and characterized by deriving the crosstalk coefficients from the scheduled Aqua MODIS lunar observations for the MWIR bands. It is shown that there is strong crosstalk contamination among the five MWIR bands, and that they also suffer significant crosstalk contamination from the Short Wave Infrared (SWIR) bands. The previously developed crosstalk correction algorithm is applied to correct the crosstalk effect in these bands. It is demonstrated that the crosstalk correction successfully reduces the striping in the EV images and improves the accuracy of the EV BT in the five bands, as was done similarly for the LWIR PV bands. The crosstalk correction algorithm should thus be applied to improve both the image quality and the radiometric accuracy of the Aqua MODIS MWIR bands Level 1B (L1B) products.
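
    The linear correction itself is a small matrix operation. The sketch below assumes a coefficient layout in which coeffs[i, j] scales band j's leakage into band i; the actual L1B processing details differ:

    ```python
    import numpy as np

    def correct_crosstalk(dn, coeffs):
        """Linear crosstalk correction: subtract each sending band's leakage
        from the receiving band.

        dn: (n_bands, n_pixels) measured detector responses.
        coeffs: (n_bands, n_bands) crosstalk coefficients derived from
            lunar observations, with a zero diagonal."""
        return dn - coeffs @ dn
    ```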

  13. Iterative refinement of structure-based sequence alignments by Seed Extension

    PubMed Central

    Kim, Changhoon; Tai, Chin-Hsien; Lee, Byungkook

    2009-01-01

    Background: Accurate sequence alignment is required in many bioinformatics applications but, when sequence similarity is low, it is difficult to obtain accurate alignments based on sequence similarity alone. The accuracy improves when the structures are available, but current structure-based sequence alignment procedures still mis-align substantial numbers of residues. In order to correct such errors, we previously explored the possibility of replacing the residue-based dynamic programming algorithm in structure alignment procedures with the Seed Extension algorithm, which does not use a gap penalty. Here, we describe a new procedure called RSE (Refinement with Seed Extension) that iteratively refines a structure-based sequence alignment. Results: RSE uses SE (Seed Extension) in its core, which is an algorithm that we reported recently for obtaining a sequence alignment from two superimposed structures. The RSE procedure was evaluated by comparing the correctly aligned fractions of residues before and after the refinement of the structure-based sequence alignments produced by popular programs. CE, DaliLite, FAST, LOCK2, MATRAS, MATT, TM-align, SHEBA and VAST were included in this analysis and the NCBI's CDD root node set was used as the reference alignments. RSE improved the average accuracy of sequence alignments for all programs tested when no shift error was allowed. The amount of improvement varied depending on the program. The average improvements were small for DaliLite and MATRAS but about 5% for CE and VAST. More substantial improvements have been seen in many individual cases. The additional computation times required for the refinements were negligible compared to the times taken by the structure alignment programs. Conclusion: RSE is a computationally inexpensive way of improving the accuracy of a structure-based sequence alignment. It can be used as a standalone procedure following a regular structure-based sequence alignment or to replace the traditional iterative refinement procedures based on residue-level dynamic programming algorithm in many structure alignment programs. PMID:19589133

  14. Anisotropic field-of-view shapes for improved PROPELLER imaging

    PubMed Central

    Larson, Peder E.Z.; Lustig, Michael S.; Nishimura, Dwight G.

    2010-01-01

    The Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) method for magnetic resonance imaging data acquisition and reconstruction has the highly desirable property of being able to correct for motion during the scan, making it especially useful for imaging pediatric or uncooperative patients and diffusion imaging. This method nominally supports a circular field of view (FOV), but tailoring the FOV for noncircular shapes results in more efficient, shorter scans. This article presents new algorithms for tailoring PROPELLER acquisitions to the desired FOV shape and size that are flexible and precise. The FOV design also allows for rotational motion which provides better motion correction and reduced aliasing artifacts. Some possible FOV shapes demonstrated are ellipses, ovals and rectangles, and any convex, pi-symmetric shape can be designed. Standard PROPELLER reconstruction is used with minor modifications, and results with simulated motion presented confirm the effectiveness of the motion correction with these modified FOV shapes. These new acquisition design algorithms are simple and fast enough to be computed for each individual scan. Also presented are algorithms for further scan time reductions in PROPELLER echo-planar imaging (EPI) acquisitions by varying the sample spacing in two directions within each blade. PMID:18818039

  15. Statistical Evaluation of Combined Daily Gauge Observations and Rainfall Satellite Estimations over Continental South America

    NASA Technical Reports Server (NTRS)

    Vila, Daniel; deGoncalves, Luis Gustavo; Toll, David L.; Rozante, Jose Roberto

    2008-01-01

    This paper describes a comprehensive assessment of a new high-resolution, high-quality gauge-satellite based analysis of daily precipitation over continental South America during 2004. The methodology is based on a combination of additive and multiplicative bias correction schemes designed to achieve the lowest bias when compared with the observed values. Inter-comparison and cross-validation tests were carried out for the control algorithm (the TMPA real-time algorithm) and different merging schemes: additive bias correction (ADD), ratio bias correction (RAT) and the TMPA research version, for months in different seasons and for different network densities. All of the compared merging schemes produce better results than the control algorithm, but when finer temporal (daily) and spatial scale (regional network) gauge datasets are included in the analysis, the improvement is remarkable. The Combined Scheme (CoSch) consistently presents the best performance among the five techniques. This is also true when a degraded daily gauge network is used instead of the full dataset. The technique appears to be a suitable tool for producing real-time, high-resolution, high-quality gauge-satellite based analyses of daily precipitation over land in regional domains.
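
    The additive and ratio schemes lend themselves to a compact illustration. Below is a minimal sketch of the two corrections applied at co-located gauge/satellite pixels; the arrays and values are hypothetical, and the real analysis operates on gridded fields with gauge interpolation.

```python
import numpy as np

def additive_bias_correction(sat, gauge):
    """Additive (ADD) scheme: shift the satellite field by the mean
    gauge-minus-satellite difference at gauge locations."""
    bias = np.mean(gauge - sat)
    return sat + bias

def ratio_bias_correction(sat, gauge):
    """Ratio (RAT) scheme: rescale the satellite field so its total
    matches the gauge total (guarding against a zero denominator)."""
    total_sat = sat.sum()
    factor = gauge.sum() / total_sat if total_sat > 0 else 1.0
    return sat * factor

# Hypothetical daily rain totals (mm) at co-located gauge/satellite pixels
sat = np.array([2.0, 5.5, 0.0, 12.3, 7.1])
gauge = np.array([3.1, 6.0, 0.2, 14.0, 8.3])
print(additive_bias_correction(sat, gauge))
print(ratio_bias_correction(sat, gauge))
```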

  16. Cost-effective Diagnostic Checklists for Meningitis in Resource Limited Settings

    PubMed Central

    Durski, Kara N.; Kuntz, Karen M.; Yasukawa, Kosuke; Virnig, Beth A.; Meya, David B.; Boulware, David R.

    2013-01-01

    Background Checklists can standardize patient care, reduce errors, and improve health outcomes. For meningitis in resource-limited settings, with high patient loads and limited financial resources, CNS diagnostic algorithms may be useful to guide diagnosis and treatment. However, the cost-effectiveness of such algorithms is unknown. Methods We used decision analysis methodology to evaluate the costs, diagnostic yield, and cost-effectiveness of diagnostic strategies for adults with suspected meningitis in resource-limited settings with moderate/high HIV prevalence. We considered three strategies: 1) a comprehensive “shotgun” approach of utilizing all routine tests; 2) a “stepwise” strategy with tests performed in a specific order with additional TB diagnostics; 3) a “minimalist” strategy of sequential ordering of high-yield tests only. Each strategy resulted in one of four meningitis diagnoses: bacterial (4%), cryptococcal (59%), TB (8%), or other (aseptic) meningitis (29%). In model development, we utilized prevalence data from two Ugandan sites and published data on test performance. We validated the strategies with data from Malawi, South Africa, and Zimbabwe. Results The current comprehensive testing strategy resulted in 93.3% correct meningitis diagnoses, costing $32.00/patient. A stepwise strategy had 93.8% correct diagnoses costing an average of $9.72/patient, and a minimalist strategy had 91.1% correct diagnoses costing an average of $6.17/patient. The incremental cost-effectiveness ratio was $133 per additional correct diagnosis for the stepwise over the minimalist strategy. Conclusions By strategically choosing the order and type of testing, coupled with disease prevalence rates, algorithms can deliver care more efficiently. The algorithms presented herein are generalizable to East Africa and Southern Africa. PMID:23466647
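
    The reported ICER is a simple ratio of incremental cost to incremental diagnostic yield. Recomputing it from the rounded figures above (a quick sanity check, not the authors' model) gives a value close to the reported $133:

```python
# Incremental cost-effectiveness ratio (ICER) of the stepwise over the
# minimalist strategy, recomputed from the rounded figures in the abstract.
cost_stepwise, eff_stepwise = 9.72, 0.938    # $/patient, fraction correct
cost_minimal, eff_minimal = 6.17, 0.911

icer = (cost_stepwise - cost_minimal) / (eff_stepwise - eff_minimal)
print(f"ICER ~ ${icer:.0f} per additional correct diagnosis")
# ~ $131 with these rounded inputs; the paper reports $133 from unrounded values.
```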

  17. Algorithm for selection of optimized EPR distance restraints for de novo protein structure determination

    PubMed Central

    Kazmier, Kelli; Alexander, Nathan S.; Meiler, Jens; Mchaourab, Hassane S.

    2010-01-01

    A hybrid protein structure determination approach combining sparse Electron Paramagnetic Resonance (EPR) distance restraints and Rosetta de novo protein folding has been previously demonstrated to yield high quality models (Alexander et al., 2008). However, widespread application of this methodology to proteins of unknown structures is hindered by the lack of a general strategy to place spin label pairs in the primary sequence. In this work, we report the development of an algorithm that optimally selects spin labeling positions for the purpose of distance measurements by EPR. For the α-helical subdomain of T4 lysozyme (T4L), simulated restraints that maximize sequence separation between the two spin labels while simultaneously ensuring pairwise connectivity of secondary structure elements yielded vastly improved models by Rosetta folding. 50% of these models have the correct fold, compared with only 21% and 8% correctly folded models when randomly placed restraints or no restraints are used, respectively. Moreover, the improvements in model quality require a limited number of optimized restraints, the number of which is determined by the pairwise connectivities of T4L α-helices. The predicted improvement in Rosetta model quality was verified by experimental determination of distances between spin label pairs selected by the algorithm. Overall, our results reinforce the rationale for the combined use of sparse EPR distance restraints and de novo folding. By alleviating the experimental bottleneck associated with restraint selection, this algorithm sets the stage for extending computational structure determination to larger, traditionally elusive protein topologies of critical structural and biochemical importance. PMID:21074624

  18. Adaptation of a Hyperspectral Atmospheric Correction Algorithm for Multi-spectral Ocean Color Data in Coastal Waters. Chapter 3

    NASA Technical Reports Server (NTRS)

    Gao, Bo-Cai; Montes, Marcos J.; Davis, Curtiss O.

    2003-01-01

    This SIMBIOS contract supports several activities over its three-year time-span. These include computational aspects of atmospheric correction, such as the modification of our hyperspectral atmospheric correction algorithm Tafkaa for various multi-spectral instruments, including SeaWiFS, MODIS, and GLI. Additionally, since absorbing aerosols are becoming common in many coastal areas, we are performing model calculations to incorporate various absorbing aerosol models into the tables used by our Tafkaa atmospheric correction algorithm. Finally, we have developed algorithms that use MODIS data to characterize thin cirrus effects on aerosol retrieval.

  19. The impact of physiologic noise correction applied to functional MRI of pain at 1.5 and 3.0 T.

    PubMed

    Vogt, Keith M; Ibinson, James W; Schmalbrock, Petra; Small, Robert H

    2011-07-01

    This study quantified the impact of the well-known physiologic noise correction algorithm RETROICOR applied to a pain functional magnetic resonance imaging (FMRI) experiment at two field strengths: 1.5 and 3.0 T. In the 1.5-T acquisition, there was an 8.2% decrease in time course variance (σ) and a 227% improvement in average model fit (increase in mean adjusted R², R²ₐ). In the 3.0-T acquisition, significantly greater improvements were seen: a 10.4% decrease in σ and a 240% increase in mean R²ₐ. End-tidal carbon dioxide data were also collected during scanning and used to account for low-frequency changes in cerebral blood flow; however, the impact of this correction was trivial compared to applying RETROICOR. Comparison between two implementations of RETROICOR demonstrated that oversampled physiologic data can be applied either by downsampling or by modifying the timing in the RETROICOR algorithm, with equivalent results. Furthermore, there was no significant effect from manually aligning the physiologic data with corresponding image slices from an interleaved acquisition, indicating that RETROICOR accounts for timing differences between physiologic changes and MR signal changes. These findings suggest that RETROICOR correction, as it is commonly implemented, should be included as part of the data analysis for pain FMRI studies performed at 1.5 and 3.0 T.
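
    At its core, RETROICOR regresses out a low-order Fourier expansion of the cardiac and respiratory phases sampled at each image acquisition time. A minimal sketch of building such regressors follows; the phase trace is a hypothetical toy signal, not data from this study.

```python
import numpy as np

def retroicor_regressors(phase, order=2):
    """Build RETROICOR-style nuisance regressors: a low-order Fourier
    expansion of a physiologic phase (radians) sampled at each slice
    acquisition time. Returns an (n_timepoints, 2*order) design matrix."""
    cols = []
    for m in range(1, order + 1):
        cols.append(np.cos(m * phase))
        cols.append(np.sin(m * phase))
    return np.column_stack(cols)

# Hypothetical cardiac phase at 200 acquisition times (~66 bpm toy signal)
t = np.linspace(0, 60, 200)                  # seconds
cardiac_phase = 2 * np.pi * (t * 1.1 % 1.0)
X = retroicor_regressors(cardiac_phase)      # regress out of the voxel time course
```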

  20. New baseline correction algorithm for text-line recognition with bidirectional recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle

    2013-04-01

    Many preprocessing techniques have been proposed for isolated word recognition. Recently, however, recognition systems have dealt with text blocks and their constituent text lines. In this paper, we propose a new preprocessing approach to efficiently correct baseline skew and fluctuations. Our approach is based on a sliding window within which the vertical position of the baseline is estimated. Segmentation of text lines into subparts is thus avoided. Experiments conducted on a large publicly available database (Rimes), with a BLSTM (bidirectional long short-term memory) recurrent neural network recognition system, show that our baseline correction approach substantially improves recognition performance.
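
    The idea of estimating the baseline inside a sliding window, rather than segmenting the line, can be sketched compactly. The toy function below is an illustration under simplifying assumptions (binary image, median of lowest ink pixels as the baseline estimate), not the authors' implementation:

```python
import numpy as np

def correct_baseline(img, win=32):
    """Toy sliding-window baseline correction for a binary text-line image
    (ink == 1). Within each horizontal window, estimate the baseline as the
    median row of the lowest ink pixel per column, then shift columns so
    the estimated baseline becomes flat at mid-height."""
    h, w = img.shape
    out = np.zeros_like(img)
    # Row index of the lowest ink pixel in each column (h-1 if column empty)
    lowest = np.where(img.any(axis=0),
                      h - 1 - np.argmax(img[::-1], axis=0), h - 1)
    for x0 in range(0, w, win):
        cols = slice(x0, min(x0 + win, w))
        base = int(np.median(lowest[cols]))
        shift = (h // 2) - base
        out[:, cols] = np.roll(img[:, cols], shift, axis=0)
    return out
```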

  1. Evaluation of amplitude-based sorting algorithm to reduce lung tumor blurring in PET images using 4D NCAT phantom.

    PubMed

    Wang, Jiali; Byrne, James; Franquiz, Juan; McGoron, Anthony

    2007-08-01

    Objective: To develop and validate a PET sorting algorithm based on the respiratory amplitude to correct for abnormal respiratory cycles. Methods: Using the 4D NCAT phantom model, 3D PET images were simulated in lung and other structures at different times within a respiratory cycle, and noise was added. To validate the amplitude binning algorithm, the NCAT phantom was used to simulate one case of five different respiratory periods and another case of five respiratory periods along with five respiratory amplitudes. Comparisons were performed between gated and un-gated images, and between the new amplitude binning algorithm and the time binning algorithm, by calculating the mean number of counts in the ROI (region of interest). Results: An average improvement of 8.87 ± 5.10% was reported for a total of 16 tumors with different tumor sizes and different T/B (tumor to background) ratios using the new sorting algorithm. As both the T/B ratio and tumor size decrease, image degradation due to respiration increases. The greater benefit for smaller diameter tumors and lower T/B ratios indicates a potential improvement in detecting more problematic tumors.
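
    The essence of amplitude binning, as opposed to time (phase) binning, is that each event is gated by where the respiratory trace sits between its extremes, so irregular cycles still map to consistent displacement states. A minimal sketch with a hypothetical respiratory trace:

```python
import numpy as np

def amplitude_bins(resp, n_bins=5):
    """Assign each time sample of a respiratory trace to an amplitude bin.
    Unlike time (phase) binning, irregular cycles with different amplitudes
    still map to consistent displacement states."""
    edges = np.linspace(resp.min(), resp.max(), n_bins + 1)
    idx = np.clip(np.digitize(resp, edges) - 1, 0, n_bins - 1)
    return idx

# Hypothetical respiratory trace with varying period and amplitude
t = np.linspace(0, 60, 3000)
resp = (1 + 0.3 * np.sin(0.05 * t)) * np.sin(2 * np.pi * t / (4 + np.sin(0.1 * t)))
bins = amplitude_bins(resp)  # PET events acquired at time t[i] go to gate bins[i]
```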

  2. Development and Evaluation of A Novel and Cost-Effective Approach for Low-Cost NO₂ Sensor Drift Correction.

    PubMed

    Sun, Li; Westerdahl, Dane; Ning, Zhi

    2017-08-19

    Emerging low-cost gas sensor technologies have received increasing attention in recent years for air quality measurements due to their small size and convenient deployment. However, in diverse applications these sensors face many technological challenges, including sensor drift over long-term deployment that cannot be easily addressed using mathematical correction algorithms or machine learning methods. This study aims to develop a novel approach to auto-correct the drift of a commonly used electrochemical nitrogen dioxide (NO₂) sensor, with a comprehensive evaluation of its application. The impact of environmental factors on the NO₂ electrochemical sensor in low-ppb concentration measurements was evaluated in the laboratory, and a temperature and relative humidity correction algorithm was assessed. An automated zeroing protocol was developed and evaluated, using a chemical absorbent to remove NO₂ as a means to perform zero correction in varying ambient conditions. The sensor system was operated in three different environments in which data were compared to a reference NO₂ analyzer. The results showed that the zero-calibration protocol effectively corrected the observed drift of the sensor output. This technique offers the ability to enhance the performance of low-cost sensor-based systems, and these findings suggest extending the approach to improve data quality from sensors measuring other gaseous pollutants in urban air.
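
    The zeroing idea, subtracting a baseline interpolated between periodic scrubbed-air readings, can be sketched in a few lines. The signal and drift below are hypothetical, purely to illustrate the correction:

```python
import numpy as np

def zero_drift_correct(t, raw, t_zero, v_zero):
    """Correct slow baseline drift of a gas sensor using periodic zero
    readings (sensor output while sampling NO2-scrubbed air). The zero
    baseline is linearly interpolated over time and subtracted."""
    baseline = np.interp(t, t_zero, v_zero)
    return raw - baseline

# Hypothetical 48 h of 1-min sensor data with a slow drift
t = np.arange(0, 48 * 60)                           # minutes
truth = 10 + 5 * np.sin(2 * np.pi * t / (24 * 60))  # ppb
drift = 0.004 * t                                   # slow upward drift
raw = truth + drift
t_zero = np.arange(0, 48 * 60 + 1, 6 * 60)          # auto-zero every 6 h
v_zero = 0.004 * t_zero                             # zero readings capture the drift
corrected = zero_drift_correct(t, raw, t_zero, v_zero)
```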

  3. Intelligent correction of laser beam propagation through turbulent media using adaptive optics

    NASA Astrophysics Data System (ADS)

    Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.

    2014-10-01

    Adaptive optics methods have long been used by researchers in the astronomy field to retrieve correct images of celestial bodies. The approach is to use a deformable mirror combined with Shack-Hartmann sensors to correct the slightly distorted image when it propagates through the earth's atmospheric boundary layer, which can be viewed as adding relatively weak distortion in the last stage of propagation. However, the same strategy cannot be easily applied to correct images propagating along a horizontal deep-turbulence path. In fact, when turbulence becomes very strong (Cn² > 10⁻¹³ m⁻²/³), only limited improvements have been made in correcting the heavily distorted images. We propose a method that reconstructs the light field that reaches the camera, which then provides information for controlling a deformable mirror. An intelligent algorithm is applied that provides significant improvement in correcting images. In our work, the light field reconstruction has been achieved with a newly designed modified plenoptic camera. As a result, by actively intervening with the coherent illumination beam, or by giving it various specific pre-distortions, a better (less turbulence-affected) image can be obtained. This strategy can also be expanded to much more general applications such as correcting laser propagation through random media and can also help to improve designs in free space optical communication systems.

  4. Exemplar-based human action pose correction.

    PubMed

    Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen

    2014-07-01

    The launch of the Xbox Kinect established a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. Accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over contemporary approaches, including what is delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.

  5. Excimer laser correction of hyperopia, hyperopic and mixed astigmatism: past, present, and future.

    PubMed

    Lukenda, Adrian; Martinović, Zeljka Karaman; Kalauz, Miro

    2012-06-01

    The broad acceptance of "spot scanning" or "flying spot" excimer lasers in the last decade has enabled the domination of corneal ablative laser surgery over other refractive surgical procedures for the correction of hyperopia, hyperopic and mixed astigmatism. This review outlines the most important reasons why the ablative laser correction of hyperopia, hyperopic and mixed astigmatism for many years lagged behind that of myopia. Most of today's scanning laser systems, used in the LASIK and PRK procedures, can safely and effectively perform low, moderate and high hyperopic and hyperopic astigmatic corrections. The introduction of these laser platforms has also significantly improved the long-term refractive stability of hyperopic treatments. In the future, further improvements in femtosecond and nanosecond technology, eye-tracker systems, and the development of new customized algorithms, such as the ray-tracing method, could additionally increase the upper limit for safe and predictable corneal ablative laser correction of hyperopia, hyperopic and mixed astigmatism.

  6. Trigram-based algorithms for OCR result correction

    NASA Astrophysics Data System (ADS)

    Bulatov, Konstantin; Manzhikov, Temudzhin; Slavin, Oleg; Faradjev, Igor; Janiszewski, Igor

    2017-03-01

    In this paper we consider the task of improving optical character recognition (OCR) results for document fields on low-quality and average-quality images using N-gram models. Cyrillic fields of the Russian Federation internal passport are analyzed as an example. Two approaches are presented: the first is based on the hypothesis that a symbol depends on its two adjacent symbols, and the second is based on the calculation of marginal distributions and Bayesian network computation. A comparison of the algorithms and experimental results within a real document OCR system are presented, showing that document field OCR accuracy can be improved by more than 6% for low-quality images.
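
    The first (trigram-hypothesis) approach can be illustrated by rescoring per-character OCR alternatives with a trigram language model in a small Viterbi search. The probabilities below are hypothetical toy values, not the paper's passport-field models:

```python
import math

def trigram_rescore(alts, logp3):
    """Pick the best character sequence given, for each position, a list of
    (char, ocr_logprob) alternatives, combined with a trigram language model
    logp3[(a, b, c)] via a simple Viterbi search over (prev2, prev1) states."""
    beams = {("^", "^"): (0.0, "")}            # state -> (score, string)
    for position in alts:
        nxt = {}
        for (a, b), (score, s) in beams.items():
            for ch, lp_ocr in position:
                lp = lp_ocr + logp3.get((a, b, ch), math.log(1e-6))
                key, cand = (b, ch), (score + lp, s + ch)
                if key not in nxt or cand[0] > nxt[key][0]:
                    nxt[key] = cand
        beams = nxt
    return max(beams.values())[1]

# Hypothetical OCR output: position 2 is ambiguous between 'o' and '0'
alts = [[("c", -0.1)], [("o", -0.9), ("0", -0.8)], [("t", -0.2)]]
lm = {("^", "^", "c"): -1.0, ("^", "c", "o"): -0.5, ("c", "o", "t"): -0.3,
      ("^", "c", "0"): -8.0, ("c", "0", "t"): -8.0}
print(trigram_rescore(alts, lm))               # -> "cot"
```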

  7. Qualitative and quantitative evaluation of rigid and deformable motion correction algorithms using dual-energy CT images in view of application to CT perfusion measurements in abdominal organs affected by breathing motion

    PubMed Central

    Skornitzke, S; Fritz, F; Klauss, M; Pahn, G; Hansen, J; Hirsch, J; Grenacher, L; Kauczor, H-U

    2015-01-01

    Objective: To compare six different scenarios for correcting for breathing motion in abdominal dual-energy CT (DECT) perfusion measurements. Methods: Rigid [RRComm(80 kVp)] and non-rigid [NRComm(80 kVp)] registration of commercially available CT perfusion software, custom non-rigid registration [NRCustom(80 kVp), demons algorithm] and a control group [CG(80 kVp)] without motion correction were evaluated using 80 kVp images. Additionally, NRCustom was applied to dual-energy (DE)-blended [NRCustom(DE)] and virtual non-contrast [NRCustom(VNC)] images, yielding six evaluated scenarios. After motion correction, perfusion maps were calculated using a combined maximum slope/Patlak model. For qualitative evaluation, three blinded radiologists independently rated motion correction quality and resulting perfusion maps on a four-point scale (4 = best, 1 = worst). For quantitative evaluation, relative changes in metric values, R² and residuals of perfusion model fits were calculated. Results: For motion-corrected images, mean ratings differed significantly [NRCustom(80 kVp) and NRCustom(DE), 3.3; NRComm(80 kVp), 3.1; NRCustom(VNC), 2.9; RRComm(80 kVp), 2.7; CG(80 kVp), 2.7; all p < 0.05], except when comparing NRCustom(80 kVp) with NRCustom(DE) and RRComm(80 kVp) with CG(80 kVp). NRCustom(80 kVp) and NRCustom(DE) achieved the highest reduction in metric values [NRCustom(80 kVp), 48.5%; NRCustom(DE), 45.6%; NRComm(80 kVp), 29.2%; NRCustom(VNC), 22.8%; RRComm(80 kVp), 0.6%; CG(80 kVp), 0%]. Regarding perfusion maps, NRCustom(80 kVp) and NRCustom(DE) were rated highest [NRCustom(80 kVp), 3.1; NRCustom(DE), 3.0; NRComm(80 kVp), 2.8; NRCustom(VNC), 2.6; CG(80 kVp), 2.5; RRComm(80 kVp), 2.4] and had significantly higher R² and lower residuals. Correlation between qualitative and quantitative evaluation was low to moderate. Conclusion: Non-rigid motion correction improves spatial alignment of the target region and fit of CT perfusion models. Using DE-blended and DE-VNC images for deformable registration offers no significant improvement. Advances in knowledge: Non-rigid algorithms improve the quality of abdominal CT perfusion measurements but do not benefit from DECT post processing. PMID:25465353

  8. Reinforcement Learning in Distributed Domains: Beyond Team Games

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Sill, Joseph; Turner, Kagan

    2000-01-01

    Distributed search algorithms are crucial in dealing with large optimization problems, particularly when a centralized approach is not only impractical but infeasible. Many machine learning concepts have been applied to search algorithms in order to improve their effectiveness. In this article we present an algorithm that blends Reinforcement Learning (RL) and hill climbing directly, by using the RL signal to guide the exploration step of a hill climbing algorithm. We apply this algorithm to the domain of a constellation of communication satellites, where the goal is to minimize the loss of importance-weighted data. We introduce the concept of 'ghost' traffic, where correctly setting this traffic induces the satellites to act to optimize the world utility. Our results indicate that the bi-utility search introduced in this paper outperforms both traditional hill climbing algorithms and distributed RL approaches such as team games.

  9. Correction of geometric distortion in Propeller echo planar imaging using a modified reversed gradient approach.

    PubMed

    Chang, Hing-Chiu; Chuang, Tzu-Chao; Lin, Yi-Ru; Wang, Fu-Nien; Huang, Teng-Yi; Chung, Hsiao-Wen

    2013-04-01

    This study investigates the application of a modified reversed gradient algorithm to the Propeller-EPI imaging method (periodically rotated overlapping parallel lines with enhanced reconstruction based on echo-planar imaging readout) for correction of geometric distortions due to the EPI readout. Propeller-EPI acquisition was executed with 360-degree rotational coverage of k-space, from which the image pairs with opposite phase-encoding gradient polarities were extracted for reversed gradient geometric and intensity corrections. The spatial displacements obtained on a pixel-by-pixel basis were fitted using a two-dimensional polynomial followed by low-pass filtering to assure correction reliability in low-signal regions. Single-shot EPI images were obtained on a phantom, whereas high spatial resolution T2-weighted and diffusion tensor Propeller-EPI data were acquired in vivo from healthy subjects at 3.0 Tesla, to demonstrate the effectiveness of the proposed algorithm. Phantom images show the success of the smoothed displacement map concept in improving the geometric corrections in low-signal regions. Human brain images demonstrate markedly superior reconstruction quality of Propeller-EPI images with modified reversed gradient corrections as compared with those obtained without corrections, as evidenced by verification against the distortion-free fast spin-echo images at the same level. The modified reversed gradient method is an effective approach to obtain high-resolution Propeller-EPI images with substantially reduced artifacts.

  10. Improved peak detection in mass spectrum by incorporating continuous wavelet transform-based pattern matching.

    PubMed

    Du, Pan; Kibbe, Warren A; Lin, Simon M

    2006-09-01

    A major problem for current peak detection algorithms is that noise in mass spectrometry (MS) spectra gives rise to a high rate of false positives. The false positive rate is especially problematic in detecting peaks with low amplitudes. Usually, various baseline correction algorithms and smoothing methods are applied before attempting peak detection. This approach is very sensitive to the amount of smoothing and the aggressiveness of the baseline correction, which contribute to making peak detection results inconsistent between runs, instrumentation and analysis methods. Most peak detection algorithms simply identify peaks based on amplitude, ignoring the additional information present in the shape of the peaks in a spectrum. In our experience, 'true' peaks have characteristic shapes, and a shape-matching function that produces a 'goodness of fit' coefficient should provide a more robust peak identification method. Based on these observations, a continuous wavelet transform (CWT)-based peak detection algorithm has been devised that identifies peaks with different scales and amplitudes. By transforming the spectrum into wavelet space, the pattern-matching problem is simplified and, in addition, a powerful technique becomes available for identifying and separating the signal from spike noise and colored noise. This transformation, with the additional information provided by the 2D CWT coefficients, can greatly enhance the effective signal-to-noise ratio. Furthermore, with this technique no baseline removal or peak smoothing preprocessing steps are required before peak detection, which improves the robustness of peak detection under a variety of conditions. The algorithm was evaluated with SELDI-TOF spectra with known polypeptide positions. Comparisons with two other popular algorithms were performed. The results show that the CWT-based algorithm can identify both strong and weak peaks while keeping the false positive rate low. The algorithm is implemented in R and will be included as an open source module in the Bioconductor project.
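
    SciPy ships a peak detector built on the same ridge-line idea, which makes the approach easy to try outside R. A minimal sketch on a synthetic spectrum (not the SELDI-TOF data used in the paper):

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# Synthetic "spectrum": two Gaussian peaks on a sloping baseline plus noise
x = np.arange(2000)
spectrum = (np.exp(-0.5 * ((x - 400) / 15) ** 2)
            + 0.3 * np.exp(-0.5 * ((x - 1200) / 40) ** 2)
            + 1e-4 * x                       # baseline; no removal needed
            + np.random.default_rng(0).normal(0, 0.02, x.size))

# Match ridge lines across wavelet scales ~5..60 samples wide
peak_idx = find_peaks_cwt(spectrum, widths=np.arange(5, 60))
print(peak_idx)                              # expect indices near 400 and 1200
```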

  11. An improved optimization algorithm of the three-compartment model with spillover and partial volume corrections for dynamic FDG PET images of small animal hearts in vivo

    NASA Astrophysics Data System (ADS)

    Li, Yinlin; Kundu, Bijoy K.

    2018-03-01

    The three-compartment model with spillover (SP) and partial volume (PV) corrections has been widely used for noninvasive kinetic parameter studies of dynamic 2-[¹⁸F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography images of small animal hearts in vivo. However, the approach still suffers from estimation uncertainty or slow convergence caused by the commonly used optimization algorithms. The aim of this study was to develop an improved optimization algorithm with better estimation performance. Femoral artery blood samples, image-derived input functions from heart ventricles and myocardial time-activity curves (TACs) were derived from data on 16 C57BL/6 mice obtained from the UCLA Mouse Quantitation Program. Parametric equations of the average myocardium and the blood pool TACs with SP and PV corrections in a three-compartment tracer kinetic model were formulated. A hybrid method integrating artificial immune-system and interior-reflective Newton methods was developed to solve the equations. Two penalty functions and one late time-point tail vein blood sample were used to constrain the objective function. The estimation accuracy of the method was validated by comparing results with experimental values using the errors in the areas under curves (AUCs) of the model corrected input function (MCIF) and the ¹⁸F-FDG influx constant Kᵢ. Moreover, the elapsed time was used to measure the convergence speed. The overall AUC error of MCIF for the 16 mice averaged -1.4 ± 8.2%, with a correlation coefficient of 0.9706. Similar results can be seen in the overall Kᵢ error percentage, which was 0.4 ± 5.8% with a correlation coefficient of 0.9912. The t-test P value for both showed no significant difference. The mean and standard deviation of the MCIF AUC and Kᵢ percentage errors are lower than those of previously published methods. The computation time of the hybrid method is also several times lower than using just a stochastic algorithm. The proposed method significantly improved the model estimation performance in terms of the accuracy of the MCIF and Kᵢ, as well as the convergence speed.

  12. Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging

    PubMed Central

    He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing

    2017-01-01

    This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to solve the azimuth variance of the frequency modulation rates that is caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm proposed in this paper uses the method of series reversion (MSR) to improve the ADOF and focusing precision. It also introduces a high order processing kernel to avoid range block processing. Simulation results show that the GNLCS algorithm can enlarge the ADOF and improve the focusing precision for high-resolution highly squint SAR data. PMID:29112151

  13. An AK-LDMeans algorithm based on image clustering

    NASA Astrophysics Data System (ADS)

    Chen, Huimin; Li, Xingwei; Zhang, Yongbin; Chen, Nan

    2018-03-01

    Clustering is an effective analytical technique for mining value from unlabeled data; its ultimate goal is to label unclassified data quickly and correctly. We use road maps from current image processing tasks as the experimental background. In this paper, we propose an AK-LDMeans algorithm that automatically locks the value of K by designing a K-cost fold line, and then uses a long-distance, high-density method to select the clustering centers, replacing the traditional selection of initial clustering centers and further improving the efficiency and accuracy of the traditional K-Means algorithm. The experimental results are compared with those of current clustering algorithms. The algorithm can provide an effective reference value in the fields of image processing, machine vision and data mining.

  14. Gpm Level 1 Science Requirements: Science and Performance Viewed from the Ground

    NASA Technical Reports Server (NTRS)

    Petersen, W.; Kirstetter, P.; Wolff, D.; Kidd, C.; Tokay, A.; Chandrasekar, V.; Grecu, M.; Huffman, G.; Jackson, G. S.

    2016-01-01

    GPM meets its Level 1 science requirements for rain estimation based on the strong performance of its radar algorithms. Changes in the V5 GPROF algorithm should correct errors in V4 and will likely resolve GPROF performance issues relative to the L1 requirements. L1 FOV snow detection is largely verified, but at an unknown SWE rate threshold (likely < 0.5-1 mm/h liquid equivalent). Work is ongoing to improve SWE rate estimation for both satellite and GV remote sensing.

  15. De-identifying an EHR database - anonymity, correctness and readability of the medical record.

    PubMed

    Pantazos, Kostas; Lauesen, Soren; Lippert, Soren

    2011-01-01

    Electronic health records (EHR) contain a large amount of structured data and free text. Exploring and sharing clinical data can improve healthcare and facilitate the development of medical software. However, revealing confidential information is against ethical principles and laws. We de-identified a Danish EHR database with 437,164 patients. The goal was to generate a version with real medical records, but related to artificial persons. We developed a de-identification algorithm that uses lists of named entities, simple language analysis, and special rules. Our algorithm consists of 3 steps: collect lists of identifiers from the database and external resources, define a replacement for each identifier, and replace identifiers in structured data and free text. Some patient records could not be safely de-identified, so the de-identified database has 323,122 patient records with an acceptable degree of anonymity, readability and correctness (F-measure of 95%). The algorithm has to be adjusted for each culture, language and database.
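
    The three-step structure translates directly into code. Below is a heavily simplified sketch with hypothetical identifiers and replacements; the real system also handles structured fields, language analysis and the special rules mentioned above:

```python
import re

# Step 1: identifier lists collected from the database / external resources
identifiers = {"names": ["Anna Jensen", "Peter Soerensen"],
               "cpr_numbers": ["010170-1234"]}

# Step 2: a consistent replacement for each identifier (the same person maps
# to the same artificial name everywhere, preserving record linkage)
replacement = {"Anna Jensen": "Karen Larsen",
               "Peter Soerensen": "Henrik Holm",
               "010170-1234": "020280-5678"}

def deidentify(text):
    """Step 3: replace identifiers in free text, longest match first so
    full names win over substrings."""
    for ident in sorted(replacement, key=len, reverse=True):
        text = re.sub(re.escape(ident), replacement[ident], text)
    return text

note = "Anna Jensen (CPR 010170-1234) was referred by Peter Soerensen."
print(deidentify(note))
```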

  16. Application of side-oblique image-motion blur correction to Kuaizhou-1 agile optical images.

    PubMed

    Sun, Tao; Long, Hui; Liu, Bao-Cheng; Li, Ying

    2016-03-21

    Given the recent development of agile optical satellites for rapid-response land observation, side-oblique image-motion (SOIM) detection and blur correction have become increasingly essential for improving the radiometric quality of side-oblique images. The Chinese small-scale agile mapping satellite Kuaizhou-1 (KZ-1) was developed by the Harbin Institute of Technology and launched for multiple emergency applications. Like other agile satellites, KZ-1 suffers from SOIM blur, particularly in captured images with large side-oblique angles. SOIM detection and blur correction are critical for improving the image radiometric accuracy. This study proposes a SOIM restoration method based on segmental point spread function detection. The segment region width is determined by satellite parameters such as speed, height, integration time, and side-oblique angle. The corresponding algorithms and a matrix form are proposed for SOIM blur correction. Radiometric objective evaluation indices are used to assess the restoration quality. Beijing regional images from KZ-1 are used as experimental data. The radiometric quality is found to increase greatly after SOIM correction. Thus, the proposed method effectively corrects image motion for KZ-1 agile optical satellites.

  17. Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.

    PubMed

    Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie

    2018-06-12

    Particle swarm optimization (PSO) is a powerful metaheuristic population-based global optimization algorithm. However, when it is applied to nonseparable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in the search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant PSO algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared with the rotation-invariant PSO algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field was carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also presents better performance compared to a genetic algorithm optimization method in the optimization of the parameters of a ReaxFF-lg correction model. The computational framework is implemented in a stand-alone C++ code that allows the straightforward development of ReaxFF reactive force fields.
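
    The enhancement amounts to adding an isotropic Gaussian perturbation to each particle position update. A minimal Python sketch of this idea (not the authors' C++ implementation, and with arbitrary hyperparameters) on the multimodal Rastrigin function:

```python
import numpy as np

def pso_gaussian(f, dim, n=30, iters=200, sigma=0.1, seed=0):
    """Basic PSO with an isotropic Gaussian mutation applied to each
    particle position, which helps escape local minima of multimodal,
    nonseparable objectives."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v + rng.normal(0, sigma, (n, dim))   # isotropic mutation
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
print(pso_gaussian(rastrigin, dim=5))
```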

  18. Kalman/Map filtering-aided fast normalized cross correlation-based Wi-Fi fingerprinting location sensing.

    PubMed

    Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin

    2013-11-13

    A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results.

  19. A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor.

    PubMed

    Zhang, Liang; Shen, Peiyi; Zhu, Guangming; Wei, Wei; Song, Houbing

    2015-08-14

    Internet of Things (IoT) is driving innovation in an ever-growing set of application domains such as intelligent processing for autonomous robots. For an autonomous robot, one grand challenge is how to sense its surrounding environment effectively. Simultaneous Localization and Mapping with an RGB-D Kinect camera sensor on the robot, called RGB-D SLAM, has been developed for this purpose, but some technical challenges must be addressed. Firstly, the efficiency of the algorithm cannot satisfy real-time requirements; secondly, the accuracy of the algorithm is unacceptable. In order to address these challenges, this paper proposes a set of novel improvement methods as follows. Firstly, the Oriented FAST and Rotated BRIEF (ORB) method is used in feature detection and descriptor extraction. Secondly, a bidirectional Fast Library for Approximate Nearest Neighbors (FLANN) k-Nearest Neighbor (KNN) algorithm is applied to feature matching. Then, the improved RANdom SAmple Consensus (RANSAC) estimation method is adopted in the motion transformation. Meanwhile, high-precision Generalized Iterative Closest Point (GICP) is utilized to register point clouds in the motion transformation optimization. To improve the accuracy of SLAM, the reduced dynamic covariance scaling (DCS) algorithm is formulated as a global optimization problem under the G2O framework. The effectiveness of the improved algorithm has been verified by testing on standard data and comparing with the ground truth obtained on Freiburg University's datasets. The Dr Robot X80 equipped with a Kinect camera was also deployed in a building corridor to verify the correctness of the improved RGB-D SLAM algorithm. These experiments show that the proposed algorithm achieves higher processing speed and better accuracy.
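
    The feature pipeline (ORB detection, FLANN-based KNN matching) can be sketched with OpenCV. The snippet below is a simplified one-directional version with a Lowe ratio test and hypothetical image paths; it omits the paper's bidirectional check and the RANSAC/GICP stages:

```python
import cv2

def match_orb_flann(img1, img2, ratio=0.75):
    """Detect ORB features and match them with FLANN (LSH index, suited to
    binary descriptors), keeping matches that pass Lowe's ratio test."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    index_params = dict(algorithm=6,          # FLANN_INDEX_LSH
                        table_number=6, key_size=12, multi_probe_level=1)
    flann = cv2.FlannBasedMatcher(index_params, dict(checks=50))
    good = []
    for pair in flann.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return kp1, kp2, good

# Hypothetical consecutive RGB-D frames (placeholder file names)
img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
kp1, kp2, matches = match_orb_flann(img1, img2)
print(len(matches), "good matches")
```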

  1. Improving salt marsh digital elevation model accuracy with full-waveform lidar and nonparametric predictive modeling

    NASA Astrophysics Data System (ADS)

    Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.

    2018-03-01

    Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often causing the data to become ineffective for analysis of topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that correcting salt marsh lidar data by applying location-specific, point-by-point corrections, which are computed from lidar waveform-derived features, tidal-datum based elevation, distance from shoreline and other lidar digital elevation model based variables, using nonparametric regression will produce better results. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different model algorithms for nonparametric regression were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing better regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image based remote sensing data such as multi/hyperspectral imagery.
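
    The point-by-point correction amounts to regressing the lidar elevation error on per-point predictors and subtracting the prediction. A minimal sketch with scikit-learn's gradient boosting standing in for TreeNet, on entirely synthetic features and errors:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 500
# Hypothetical per-point predictors: waveform width, return amplitude,
# tidal-datum-based elevation, distance from shoreline
X = rng.random((n, 4))
# Hypothetical lidar-minus-ground-truth elevation error (m) to be predicted
err = 0.15 * X[:, 0] + 0.05 * X[:, 2] + rng.normal(0, 0.02, n)

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=3).fit(X[:400], err[:400])

# Point-by-point correction: subtract the predicted error from the lidar z
lidar_z = rng.random(100) + err[400:]
corrected_z = lidar_z - model.predict(X[400:])
```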

  2. Breakthrough in current-in-plane tunneling measurement precision by application of multi-variable fitting algorithm.

    PubMed

    Cagliani, Alberto; Østerberg, Frederik W; Hansen, Ole; Shiv, Lior; Nielsen, Peter F; Petersen, Dirch H

    2017-09-01

    We present a breakthrough in micro-four-point probe (M4PP) metrology to substantially improve precision of transmission line (transfer length) type measurements by application of advanced electrode position correction. In particular, we demonstrate this methodology for the M4PP current-in-plane tunneling (CIPT) technique. The CIPT method has been a crucial tool in the development of magnetic tunnel junction (MTJ) stacks suitable for magnetic random-access memories for more than a decade. On two MTJ stacks, the measurement precision of resistance-area product and tunneling magnetoresistance was improved by up to a factor of 3.5 and the measurement reproducibility by up to a factor of 17, thanks to our improved position correction technique.

  3. Atmospheric Correction Algorithm for Hyperspectral Imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. J. Pollina

    1999-09-01

    In December 1997, the US Department of Energy (DOE) established a Center of Excellence (Hyperspectral-Multispectral Algorithm Research Center, HyMARC) for promoting the research and development of algorithms to exploit spectral imagery. This center is located at the DOE Remote Sensing Laboratory in Las Vegas, Nevada, and is operated for the DOE by Bechtel Nevada. This paper presents the results to date of a research project begun at the center during 1998 to investigate the correction of hyperspectral data for atmospheric aerosols. Results of a project conducted by the Rochester Institute of Technology to define, implement, and test procedures for absolute calibration and correction of hyperspectral data to absolute units of high spectral resolution imagery will be presented. Hybrid techniques for atmospheric correction using image or spectral scene data coupled through radiative propagation models will be specifically addressed. Results of this effort to analyze HYDICE sensor data will be included. Preliminary results based on studying the performance of standard routines, such as Atmospheric Pre-corrected Differential Absorption and Nonlinear Least Squares Spectral Fit, in retrieving reflectance spectra show overall reflectance retrieval errors of approximately one to two reflectance units in the 0.4- to 2.5-micron-wavelength region (outside of the absorption features). These results are based on HYDICE sensor data collected from the Southern Great Plains Atmospheric Radiation Measurement site during overflights conducted in July of 1997. Results of an upgrade made in the model-based atmospheric correction techniques, which take advantage of updates made to the moderate resolution atmospheric transmittance model (MODTRAN 4.0) software, will also be presented. Data will be shown to demonstrate how the reflectance retrieval in the shorter wavelengths of the blue-green region will be improved because of enhanced modeling of multiple scattering effects.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonetto, Andrea; Dall'Anese, Emiliano

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
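
    The flavor of prediction-correction tracking can be conveyed with a deliberately simplified sketch: warm-start from the previous solution (a crude prediction) and take a few projected-gradient correction steps per time instant. This omits the paper's Hessian-based prediction and general constraint handling; the box constraint and moving target below are illustrative assumptions:

```python
import numpy as np

def track(x0, t_grid, grad, step=0.2, n_corr=3):
    """Simplified prediction-correction tracking of min_x f(x; t):
    predict by warm-starting from the previous solution, then apply a few
    projected-gradient correction steps at each new time (projection onto
    the box x >= 0 stands in for the problem's constraints)."""
    x = np.asarray(x0, dtype=float)
    traj = []
    for t in t_grid:
        for _ in range(n_corr):                   # correction steps
            x = np.maximum(x - step * grad(x, t), 0.0)
        traj.append(x.copy())
    return np.array(traj)

# Time-varying quadratic: f(x; t) = 0.5 * ||x - c(t)||^2 with a moving target
c = lambda t: np.array([np.sin(t) + 1.5, np.cos(t) + 1.5])
grad = lambda x, t: x - c(t)
traj = track([0.0, 0.0], np.linspace(0, 10, 200), grad)
print(traj[-1], c(10.0))                          # tracked vs true optimum
```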

  5. Digital algorithm for dispersion correction in optical coherence tomography for homogeneous and stratified media.

    PubMed

    Marks, Daniel L; Oldenburg, Amy L; Reynolds, J Joshua; Boppart, Stephen A

    2003-01-10

    The resolution of optical coherence tomography (OCT) often suffers from blurring caused by material dispersion. We present a numerical algorithm for computationally correcting the effect of material dispersion on OCT reflectance data for homogeneous and stratified media. This is experimentally demonstrated by correcting the image of a polydimethylsiloxane microfluidic structure and of glass slides. The algorithm can be implemented using the fast Fourier transform. With broad spectral bandwidths and highly dispersive media or thick objects, dispersion correction becomes increasingly important.

  6. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    NASA Astrophysics Data System (ADS)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to reach a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The biases induced by the non-linear IPDA lidar equation are not compliant with the accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
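
    The root of the averaging bias is the nonlinearity of the IPDA equation: the retrieval involves a logarithm of measured powers, and the mean of a log is not the log of a mean. A schematic numerical illustration with toy numbers, not MERLIN's actual processing:

```python
import numpy as np

# Toy illustration of the averaging bias: the IPDA retrieval involves
# ln(P_on / P_off), and E[ln r] != ln E[r] for noisy signals.
rng = np.random.default_rng(0)
r_true = 0.8                                  # true on/off power ratio
r_noisy = r_true + rng.normal(0, 0.08, 50)    # 50 noisy shots along 50 km

after_log = np.mean(np.log(r_noisy))          # average after the log (biased)
before_log = np.log(np.mean(r_noisy))         # average the signals first
print(f"ln of mean: {before_log:.5f}, mean of ln: {after_log:.5f}, "
      f"true: {np.log(r_true):.5f}")
```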

  7. Correction algorithm for online continuous flow δ13C and δ18O carbonate and cellulose stable isotope analyses

    NASA Astrophysics Data System (ADS)

    Evans, M. N.; Selmer, K. J.; Breeden, B. T.; Lopatka, A. S.; Plummer, R. E.

    2016-09-01

    We describe an algorithm to correct for scale compression, runtime drift, and amplitude effects in carbonate and cellulose oxygen and carbon isotopic analyses made on two online continuous flow isotope ratio mass spectrometry (CF-IRMS) systems using gas chromatographic (GC) separation. We validate the algorithm by correcting measurements of samples of known isotopic composition which are not used to estimate the corrections. For carbonate δ13C (δ18O) data, median precision of validation estimates for two reference materials and two calibrated working standards is 0.05‰ (0.07‰); median bias is 0.04‰ (0.02‰) over a range of 49.2‰ (24.3‰). For α-cellulose δ13C (δ18O) data, median precision of validation estimates for one reference material and five working standards is 0.11‰ (0.27‰); median bias is 0.13‰ (-0.10‰) over a range of 16.1‰ (19.1‰). These results are within the 5th-95th percentile range of subsequent routine runtime validation exercises in which one working standard is used to calibrate the other. Analysis of the relative importance of correction steps suggests that drift and scale-compression corrections are most reliable and valuable. If validation precisions are not already small, routine cross-validated precision estimates are improved by up to 50% (80%). The results suggest that correction for systematic error may enable these particular CF-IRMS systems to produce δ13C and δ18O carbonate and cellulose isotopic analyses with higher validated precision, accuracy, and throughput than is typically reported for these systems. The correction scheme may be used in support of replication-intensive research projects in paleoclimatology and other data-intensive applications within the geosciences.
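
    Of the correction steps, drift correction is the most transferable: fit the deviation of a working standard against run sequence number and subtract the fitted trend from every analysis. A minimal sketch with hypothetical δ values in permil:

```python
import numpy as np

def drift_correct(seq, delta, std_seq, std_delta, std_true):
    """Runtime drift correction: fit a straight line to the deviations of a
    working standard (measured minus assigned value) against run sequence
    number, then subtract the fitted drift from every analysis."""
    slope, intercept = np.polyfit(std_seq, std_delta - std_true, 1)
    return delta - (slope * seq + intercept)

# Hypothetical run: standards every 5th analysis, assigned value -10.45 permil
rng = np.random.default_rng(2)
std_seq = np.arange(0, 50, 5)
std_delta = -10.45 + 0.004 * std_seq + rng.normal(0, 0.03, 10)
seq = np.arange(50)
delta = np.full(50, -24.10) + 0.004 * seq     # samples with the same drift
corrected = drift_correct(seq, delta, std_seq, std_delta, -10.45)
```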

  8. A single chip VLSI Reed-Solomon decoder

    NASA Technical Reports Server (NTRS)

    Shao, H. M.; Truong, T. K.; Hsu, I. S.; Deutsch, L. J.; Reed, I. S.

    1986-01-01

    A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous design is replaced by a time domain algorithm. A new architecture that implements such an algorithm permits efficient pipeline processing with minimum circuitry. A systolic array is also developed to perform erasure corrections in the new design. A modified form of Euclid's algorithm is implemented by a new architecture that maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and a significant reduction in silicon area, therefore making it possible to build a pipeline (31,15) RS decoder on a single VLSI chip.

  9. Magnetometer bias determination and attitude determination for near-earth spacecraft

    NASA Technical Reports Server (NTRS)

    Lerner, G. M.; Shuster, M. D.

    1979-01-01

    A simple linear-regression algorithm is used to determine simultaneously magnetometer biases, misalignments, and scale factor corrections, as well as the dependence of the measured magnetic field on magnetic control systems. This algorithm has been applied to data from the Seasat-1 and the Atmosphere Explorer Mission-1/Heat Capacity Mapping Mission (AEM-1/HCMM) spacecraft. Results show that complete inflight calibration as described here can improve significantly the accuracy of attitude solutions obtained from magnetometer measurements. This report discusses the difficulties involved in obtaining attitude information from three-axis magnetometers, briefly derives the calibration algorithm, and presents numerical results for the Seasat-1 and AEM-1/HCMM spacecraft.
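
    The classic trick that keeps this problem linear is to note that |B_meas − b|² should equal the known reference field magnitude squared; expanding and treating |b|² as an extra unknown yields an ordinary least-squares system. A simplified sketch estimating the bias vector only (no misalignments, scale factors, or control-system terms), with hypothetical data:

```python
import numpy as np

def magnetometer_bias(B_meas, B_ref_mag):
    """Estimate a constant magnetometer bias b from the identity
    |B_meas - b|^2 = |B_ref|^2. Expanding gives
    2 B_meas . b - |b|^2 = |B_meas|^2 - |B_ref|^2, which is linear in the
    augmented unknown (b, |b|^2)."""
    A = np.hstack([2 * B_meas, -np.ones((len(B_meas), 1))])
    y = np.sum(B_meas**2, axis=1) - B_ref_mag**2
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    return sol[:3]                             # the bias vector

# Hypothetical data: reference field magnitudes from a geomagnetic model
rng = np.random.default_rng(3)
B_true = rng.normal(0, 30000, (200, 3))        # nT, over varying attitudes
bias = np.array([120.0, -80.0, 45.0])
B_meas = B_true + bias + rng.normal(0, 5, (200, 3))
print(magnetometer_bias(B_meas, np.linalg.norm(B_true, axis=1)))
```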

  10. Noniterative algorithm for improving the accuracy of a multicolor-light-emitting-diode-based colorimeter.

    PubMed

    Yang, Pao-Keng

    2012-05-01

    We present a noniterative algorithm to reliably reconstruct the spectral reflectance from discrete reflectance values measured by using multicolor light emitting diodes (LEDs) as probing light sources. The proposed algorithm estimates the spectral reflectance by a linear combination of product functions of the detector's responsivity function and the LEDs' line-shape functions. After introducing suitable correction, the resulting spectral reflectance was found to be free from the spectral-broadening effect due to the finite bandwidth of LED. We analyzed the data for a real sample and found that spectral reflectance with enhanced resolution gives a more accurate prediction in the color measurement.

  11. Validation of model-based deformation correction in image-guided liver surgery via tracked intraoperative ultrasound: preliminary method and results

    NASA Astrophysics Data System (ADS)

    Clements, Logan W.; Collins, Jarrod A.; Wu, Yifei; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.

    2015-03-01

    Soft tissue deformation represents a significant error source in current surgical navigation systems used for open hepatic procedures. While numerous algorithms have been proposed to rectify the tissue deformation that is encountered during open liver surgery, clinical validation of the proposed methods has been limited to surface-based metrics, and sub-surface validation has largely been performed via phantom experiments. Tracked intraoperative ultrasound (iUS) provides a means to digitize sub-surface anatomical landmarks during clinical procedures. The proposed method involves the validation of a deformation correction algorithm for open hepatic image-guided surgery systems via sub-surface targets digitized with tracked iUS. Intraoperative surface digitizations were acquired via a laser range scanner and an optically tracked stylus for the purposes of computing the physical-to-image space registration within the guidance system and for use in retrospective deformation correction. Upon completion of surface digitization, the organ was interrogated with a tracked iUS transducer where the iUS images and corresponding tracked locations were recorded. After the procedure, the clinician reviewed the iUS images to delineate contours of anatomical target features for use in the validation procedure. Mean closest point distances between the feature contours delineated in the iUS images and the corresponding 3-D anatomical model generated from the preoperative tomograms were computed to quantify the extent to which the deformation correction algorithm improved registration accuracy. The preliminary results for two patients indicate that the deformation correction method resulted in a reduction in target error of approximately 50%.

  12. An algorithm developed in Matlab for the automatic selection of cut-off frequencies, in the correction of strong motion data

    NASA Astrophysics Data System (ADS)

    Sakkas, Georgios; Sakellariou, Nikolaos

    2018-05-01

    Strong motion recordings are the key input in many earthquake engineering applications and are also fundamental for seismic design. The present study focuses on the automated correction of accelerograms, analog and digital. The main feature of the proposed algorithm is the automatic selection of the cut-off frequencies based on a minimum spectral value in a predefined frequency band, instead of the typical signal-to-noise approach. The algorithm follows the basic steps of the correction procedure (instrument correction, baseline correction and appropriate filtering). Besides the corrected time histories, Peak Ground Acceleration, Peak Ground Velocity and Peak Ground Displacement values are calculated, as well as the corrected Fourier spectra and the response spectra. The algorithm is written in the Matlab environment, is fast, and can be used for batch processing or in real-time applications. In addition, the option to apply a signal-to-noise ratio criterion is provided, as well as a choice of causal or acausal filtering. The algorithm has been tested on significant earthquakes of the Greek territory (Kozani-Grevena 1995, Aigio 1995, Athens 1999, Lefkada 2003 and Kefalonia 2014) with analog and digital accelerograms.
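
    The core selection rule, taking the high-pass corner at the minimum of the Fourier amplitude spectrum inside a predefined low-frequency band, is easy to sketch with SciPy. This is a schematic Python stand-in for the Matlab code, with hypothetical band limits and a synthetic accelerogram:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def auto_cutoff_bandpass(acc, fs, search_band=(0.05, 0.5), f_high=25.0):
    """Choose the high-pass corner as the frequency of minimum Fourier
    amplitude inside a predefined band (instead of a signal-to-noise
    criterion), then apply a zero-phase Butterworth bandpass."""
    n = len(acc)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amp = np.abs(np.fft.rfft(acc))
    band = (freqs >= search_band[0]) & (freqs <= search_band[1])
    f_low = freqs[band][np.argmin(amp[band])]
    b, a = butter(4, [f_low, f_high], btype="bandpass", fs=fs)
    return filtfilt(b, a, acc), f_low

# Hypothetical 200 Hz accelerogram with long-period noise
fs = 200.0
t = np.arange(0, 40, 1 / fs)
acc = np.sin(2 * np.pi * 2.0 * t) + 0.2 * np.sin(2 * np.pi * 0.02 * t)
corrected, f_low = auto_cutoff_bandpass(acc, fs)
```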

  15. Modified echo peak correction for radial acquisition regime (RADAR).

    PubMed

    Takizawa, Masahiro; Ito, Taeko; Itagaki, Hiroyuki; Takahashi, Tetsuhiko; Shimizu, Kanichirou; Harada, Junta

    2009-01-01

Because radial sampling imposes many limitations on magnetic resonance (MR) imaging hardware, such as on the accuracy of the gradient magnetic field or the homogeneity of B(0), some correction of the echo signal is usually needed before image reconstruction. In our previous study, we developed an echo-peak-shift correction (EPSC) algorithm that is not easily affected by hardware performance. However, some artifacts remained in lung imaging, where tissue is almost absent, or in cardiac imaging, which is affected by blood flow. In this study, we modified the EPSC algorithm to improve the image quality of the radial acquisition regime (RADAR) and expand its range of application sequences. We assumed the artifacts were mainly caused by errors in the phase map used for EPSC and used a phantom on a 1.5-tesla (T) MR scanner to investigate how the EPSC algorithm should be modified. To evaluate the effectiveness of EPSC, we compared results from T(1)- and T(2)-weighted images of a volunteer's lung region using the current and modified EPSC. We then applied the modified EPSC to the RADAR spin echo (SE) and RADAR balanced steady-state acquisition with rewound gradient echo (BASG) sequences. The modified EPSC reduced phase discontinuity in the reference data used for EPSC and improved visualization of blood vessels in the lungs. Motion and blood flow caused no visible artifacts in the resulting images in either the RADAR SE or RADAR BASG sequence. Use of the modified EPSC eliminated artifacts caused by signal loss in the reference data for EPSC. In addition, the modified EPSC was applied to RADAR SE and RADAR BASG sequences.

  16. Refraction corrected calibration for aquatic locomotion research: application of Snell's law improves spatial accuracy.

    PubMed

    Henrion, Sebastian; Spoor, Cees W; Pieters, Remco P M; Müller, Ulrike K; van Leeuwen, Johan L

    2015-07-07

Images of underwater objects are distorted by refraction at the water-glass-air interfaces, and these distortions can lead to substantial errors when reconstructing the objects' position and shape. So far, aquatic locomotion studies have minimized refraction in their experimental setups and used the direct linear transform (DLT) algorithm to reconstruct position information, which does not model refraction explicitly. Here we present a refraction corrected ray-tracing algorithm (RCRT) that reconstructs position information using Snell's law. We validated this reconstruction by calculating the 3D reconstruction error (the difference between the actual and reconstructed position of a marker). We found that the reconstruction error is small (typically less than 1%). Compared with the DLT algorithm, the RCRT has overall lower reconstruction errors, especially outside the calibration volume, and its errors are essentially insensitive to camera position and orientation and to the number and position of the calibration points. To demonstrate the effectiveness of the RCRT, we tracked an anatomical marker on a seahorse recorded with four cameras to reconstruct the swimming trajectory for six different camera configurations. The RCRT algorithm is accurate and robust; it allows cameras to be oriented at large angles of incidence and facilitates the development of accurate tracking algorithms to quantify aquatic manoeuvres.
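
    The core of such refraction-corrected reconstruction is Snell's law applied at each interface during ray tracing. The vector form below is a standard textbook identity, not the RCRT code itself; the refractive indices and geometry are illustrative.

      import numpy as np

      def refract(d, n, n1, n2):
          """Refract unit direction d at a surface with unit normal n (Snell's law)."""
          cos_i = -np.dot(n, d)
          r = n1 / n2
          k = 1.0 - r * r * (1.0 - cos_i * cos_i)
          if k < 0.0:
              raise ValueError("total internal reflection")
          return r * d + (r * cos_i - np.sqrt(k)) * n

      # Example: ray entering water (n2 = 1.33) from air (n1 = 1.0) at 30 deg incidence.
      theta = np.radians(30.0)
      d = np.array([np.sin(theta), 0.0, -np.cos(theta)])   # downward-travelling ray
      n = np.array([0.0, 0.0, 1.0])                        # upward surface normal
      t = refract(d, n, 1.0, 1.33)
      print(np.degrees(np.arcsin(np.linalg.norm(np.cross(n, t)))))  # ~22.1 degrees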

  17. Optimized algorithm for the spatial nonuniformity correction of an imaging system based on a charge-coupled device color camera.

    PubMed

    de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell

    2007-01-10

We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system and the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables on the quality of the correction, namely the dark image, the base correction image, and the reference level, as well as the range of application of the correction, using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved by using a nonzero dark image, by taking as the base correction image an image whose mean digital level lies in the linear response range of the camera, and by taking the mean digital level of the image as the reference digital level. The response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity after the optimized algorithm has been applied, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
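
    In flat-field terms, the correction described here can be sketched as a per-pixel gain normalization: subtract the dark image, divide by the dark-subtracted base correction image, and rescale to the reference digital level. This is a hedged reading of the described algorithm, with variable names invented for illustration.

      import numpy as np

      def nonuniformity_correct(raw, dark, base, ref=None):
          """Per-pixel gain correction re-referenced to a chosen digital level."""
          gain = np.clip(base.astype(float) - dark, 1e-6, None)  # avoid divide-by-zero
          if ref is None:
              ref = gain.mean()   # default: mean digital level of the base image
          return (raw.astype(float) - dark) * ref / gain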

  18. Calculated X-ray Intensities Using Monte Carlo Algorithms: A Comparison to Experimental EPMA Data

    NASA Technical Reports Server (NTRS)

    Carpenter, P. K.

    2005-01-01

Monte Carlo (MC) modeling has been used extensively to simulate electron scattering and x-ray emission from complex geometries. Here we present comparisons between MC results, experimental electron-probe microanalysis (EPMA) measurements, and phi(rhoz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been widely used to develop phi(rhoz) correction algorithms. X-ray intensity data produced by MC simulations represent an independent test of both the experimental data and the phi(rhoz) correction algorithms. The alpha-factor method has previously been used to evaluate systematic errors in the analysis of semiconductors and silicate minerals, and is used here to compare the accuracy of experimental and MC-calculated x-ray data. X-ray intensities calculated by MC are used to generate alpha-factors for the CuAu binary, using the certified compositions relative to pure Cu and Au standards. MC simulations are obtained using the NIST, WinCasino, and WinXray algorithms; the derived x-ray intensities have a built-in atomic number correction, and are further corrected for absorption and characteristic fluorescence using the PAP phi(rhoz) correction algorithm. The Penelope code additionally simulates both characteristic and continuum x-ray fluorescence and thus requires no further correction for use in calculating alpha-factors.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iwai, P; Lins, L Nadler

Purpose: There is a lack of studies with significant cohort data about patients using a pacemaker (PM), implanted cardioverter defibrillator (ICD) or cardiac resynchronization therapy (CRT) device undergoing radiotherapy. There is no literature comparing the cumulative doses delivered to those cardiac implanted electronic devices (CIED) as calculated by different algorithms, nor studies comparing doses with and without heterogeneity correction. The aim of this study was to evaluate the influence of the algorithms Pencil Beam Convolution (PBC), Analytical Anisotropic Algorithm (AAA) and Acuros XB (AXB), as well as heterogeneity correction, on the risk categorization of patients. Methods: A retrospective analysis of 19 3DCRT or IMRT plans of 17 patients was conducted, calculating the dose delivered to the CIED using the three calculation algorithms. Doses were evaluated with and without heterogeneity correction for comparison. Risk categorization of the patients was based on their CIED dependency and the cumulative dose in the devices. Results: Total estimated doses at the CIED calculated by AAA or AXB were higher than those calculated by PBC in 56% of the cases. On average, the doses at the CIED calculated by AAA and AXB were higher than those calculated by PBC (29% and 4% higher, respectively). The maximum difference between doses calculated by each algorithm was about 1 Gy, whether heterogeneity correction was used or not. Values of maximum dose calculated with heterogeneity correction showed that the dose at the CIED was at least equal or higher in 84% of the cases with PBC, 77% with AAA and 67% with AXB than the dose obtained without heterogeneity correction. Conclusion: The dose calculation algorithm and heterogeneity correction did not change the risk categorization. Since higher estimated doses delivered to the CIED do not compromise the treatment precautions to be taken, it is recommended that the most sophisticated algorithm available be used to predict the dose at the CIED, with heterogeneity correction.

  20. Comparative analysis of peak-detection techniques for comprehensive two-dimensional chromatography.

    PubMed

    Latha, Indu; Reichenbach, Stephen E; Tao, Qingping

    2011-09-23

Comprehensive two-dimensional gas chromatography (GC×GC) is a powerful technology for separating complex samples. The typical goal of GC×GC peak detection is to aggregate data points of analyte peaks based on their retention times and intensities. Two techniques commonly used for two-dimensional peak detection are the two-step algorithm and the watershed algorithm. A recent study [4] compared the performance of the two-step and watershed algorithms for GC×GC data with retention-time shifts in the second-column separations. In that analysis, the peak retention-time shifts were corrected while applying the two-step algorithm but the watershed algorithm was applied without shift correction. The results indicated that the watershed algorithm has a higher probability of erroneously splitting a single two-dimensional peak than the two-step approach. This paper reconsiders the analysis by comparing peak-detection performance for resolved peaks after correcting retention-time shifts for both the two-step and watershed algorithms. Simulations with wide-ranging conditions indicate that when shift correction is employed with both algorithms, the watershed algorithm detects resolved peaks with greater accuracy than the two-step method.

  1. Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology

    NASA Astrophysics Data System (ADS)

    Jia, Wen-bin; Xiao, Fu-hai

    2013-03-01

The application of the AES algorithm in a digital cinema system prevents video data from being illegally stolen or maliciously tampered with, thereby addressing its security problems. To meet the requirements for real-time, on-site and transparent encryption of high-speed audio and video data streams in the information security field, and based on an in-depth analysis of the AES algorithm principle, this paper proposes a concrete realization of the AES algorithm in a digital video system on the TMS320DM6446 hardware platform within the DaVinci software framework, together with optimization solutions. The test results show that digital movies encrypted by AES128 cannot be played normally, which ensures the security of digital movies. Through a comparison of the performance of the AES128 algorithm before and after optimization, the correctness and validity of the improved algorithm are verified.
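
    For readers who want to experiment with the cipher itself, the snippet below encrypts a byte stream with AES-128 using PyCryptodome. It is a desktop illustration, not the paper's DSP implementation on the TMS320DM6446, and CTR mode is chosen here only as one mode suited to streaming payloads.

      # Hedged sketch using PyCryptodome; key handling is simplified for brevity.
      from Crypto.Cipher import AES
      from Crypto.Random import get_random_bytes

      key = get_random_bytes(16)             # AES-128 key
      cipher = AES.new(key, AES.MODE_CTR)    # CTR mode suits streaming video payloads
      ciphertext = cipher.encrypt(b"example video frame payload")
      nonce = cipher.nonce                   # must be stored/transmitted with the data

      # Decryption reuses the same key and nonce.
      plain = AES.new(key, AES.MODE_CTR, nonce=nonce).decrypt(ciphertext)
      assert plain == b"example video frame payload"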

  2. A new method for incoherent combining of far-field laser beams based on multiple faculae recognition

    NASA Astrophysics Data System (ADS)

    Ye, Demao; Li, Sichao; Yan, Zhihui; Zhang, Zenan; Liu, Yuan

    2018-03-01

Compared to coherent beam combining, incoherent beam combining can deliver high-power laser output with high efficiency, a simple structure, low cost and high resistance to thermal damage, and it is easy to realize in engineering. Higher power on target is achieved by incoherent beam combination using multi-channel optical path correction. However, each channel forms its own spot in the far field, and a low overlap ratio of the faculae prevents the formation of a high laser power density. In order to improve the combat effectiveness of the system, it is necessary to overlap the different faculae and thereby improve the target energy density. Hence, a novel method for incoherent combining of far-field laser beams is presented. The method combines piezoelectric ceramic technology with an evaluation algorithm for the faculae coincidence degree, based on high-precision multi-channel optical path correction. The results show that the faculae recognition algorithm has low latency (less than 10 ms), which can meet the needs of practical engineering. Furthermore, the real-time focusing ability on the far-field faculae is improved, which benefits the engineering of high-energy laser weapons and other laser jamming systems.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stinnett, Jacob; Sullivan, Clair J.; Xiong, Hao

Low-resolution isotope identifiers are widely deployed for nuclear security purposes, but these detectors currently demonstrate problems in making correct identifications in many typical usage scenarios. While there are many hardware alternatives and improvements that can be made, performance on existing low-resolution isotope identifiers can be improved by developing new identification algorithms. We have developed a wavelet-based peak extraction algorithm and an implementation of a Bayesian classifier for automated peak-based identification. The peak extraction algorithm has been extended to compute uncertainties in the peak area calculations. To build empirical joint probability distributions of the peak areas and uncertainties, a large set of spectra were simulated in MCNP6 and processed with the wavelet-based feature extraction algorithm. Kernel density estimation was then used to create a new component of the likelihood function in the Bayesian classifier. Furthermore, identification performance is demonstrated on a variety of real low-resolution spectra, including Category I quantities of special nuclear material.

  4. Optimal spiral phase modulation in Gerchberg-Saxton algorithm for wavefront reconstruction and correction

    NASA Astrophysics Data System (ADS)

    Baránek, M.; Běhal, J.; Bouchal, Z.

    2018-01-01

    In the phase retrieval applications, the Gerchberg-Saxton (GS) algorithm is widely used for the simplicity of implementation. This iterative process can advantageously be deployed in the combination with a spatial light modulator (SLM) enabling simultaneous correction of optical aberrations. As recently demonstrated, the accuracy and efficiency of the aberration correction using the GS algorithm can be significantly enhanced by a vortex image spot used as the target intensity pattern in the iterative process. Here we present an optimization of the spiral phase modulation incorporated into the GS algorithm.
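
    For reference, the plain GS iteration alternates between the pupil and focal planes, keeping the computed phase and re-imposing the known amplitude in each plane. In the variant described above, target_amp would encode a vortex image spot; the NumPy sketch below shows only the generic loop, with array shapes and iteration count as assumptions.

      import numpy as np

      def gerchberg_saxton(source_amp, target_amp, n_iter=100):
          """Retrieve a pupil phase mapping |source_amp| onto |target_amp|."""
          phase = np.random.uniform(-np.pi, np.pi, source_amp.shape)
          for _ in range(n_iter):
              far = np.fft.fftshift(np.fft.fft2(source_amp * np.exp(1j * phase)))
              far = target_amp * np.exp(1j * np.angle(far))   # impose focal amplitude
              near = np.fft.ifft2(np.fft.ifftshift(far))
              phase = np.angle(near)                          # keep phase, impose pupil amplitude
          return phase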

  5. A novel algorithm for notch detection

    NASA Astrophysics Data System (ADS)

    Acosta, C.; Salazar, D.; Morales, D.

    2013-06-01

It is common knowledge that DFM guidelines require revisions to design data. These guidelines impose the need for corrections inserted into areas within the design data flow. At times, this requires rather drastic modifications to the data, both during the layer derivation or DRC phase and especially within the RET phase, for example during OPC. During such data transformations, several polygon geometry changes are introduced, which can substantially increase shot count and geometry complexity, and eventually complicate conversion to mask writer machine formats. In the resulting complex data, notches may appear that do not significantly contribute to the final manufacturing results but do contribute to the complexity of the surrounding geometry, and are therefore undesirable. Additionally, there are cases in which the overall figure count can be reduced with minimal impact on the quality of the corrected data if notches are detected and corrected, and other cases where data quality can be improved if specific valley notches are filled in or peak notches are cut out. Such cases generally satisfy specific geometrical restrictions in order to be valid candidates for notch correction. Traditional notch detection has been done for rectilinear (Manhattan-style) data and only in axis-parallel directions. The traditional approaches employ dimensional measurement algorithms that measure edge distances along the outside of polygons. These approaches are in general adaptations, and therefore ill-fitted for generalized detection of notches with unusual shapes and arbitrary rotations. This paper covers a novel algorithm developed for the CATS MRCC tool that finds both valley and peak notches that are candidates for removal. The algorithm is generalized and invariant to data rotation, so that it can find notches in data rotated at any angle. It includes parameters to control the dimensions of detected notches, as well as algorithm tolerances and data reach.
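
    One rotation-invariant way to flag notch candidates, in the spirit of the approach described here though not the CATS MRCC implementation, is to use only local edge vectors: a vertex whose turn direction opposes the polygon's overall orientation and whose two adjacent edges are both short is a valley/peak candidate. The threshold and conventions below are assumptions.

      import numpy as np

      def find_notches(poly, max_edge):
          """Indices of reflex vertices whose two adjacent edges are both short."""
          pts = np.asarray(poly, dtype=float)
          n = len(pts)
          e = pts[(np.arange(n) + 1) % n] - pts          # outgoing edge at each vertex
          x, y = pts[:, 0], pts[:, 1]
          orient = np.sign(np.dot(x, np.roll(y, -1)) - np.dot(np.roll(x, -1), y))
          e_in = np.roll(e, 1, axis=0)                   # incoming edge at each vertex
          turn = e_in[:, 0] * e[:, 1] - e_in[:, 1] * e[:, 0]  # local turn direction
          short = np.linalg.norm(e, axis=1) <= max_edge
          return np.nonzero((np.sign(turn) != orient) & short & np.roll(short, 1))[0]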

  6. The TOMS V9 Algorithm for OMPS Nadir Mapper Total Ozone: An Enhanced Design That Ensures Data Continuity

    NASA Astrophysics Data System (ADS)

    Haffner, D. P.; McPeters, R. D.; Bhartia, P. K.; Labow, G. J.

    2015-12-01

The TOMS V9 total ozone algorithm will be applied to the OMPS Nadir Mapper instrument to supersede the existing V8.6 data product in operational processing and reprocessing for public release. Because the quality of the V8.6 data is already quite high, enhancements in V9 mainly concern the information provided by the retrieval and simplifications to the algorithm. The design of the V9 algorithm has been influenced by improvements in our knowledge of atmospheric effects, such as those of clouds made possible by studies with OMI, and also by limitations in the V8 algorithms applied to both OMI and OMPS. But the namesake instruments of the TOMS algorithm are substantially more limited in their spectral and noise characteristics, and a requirement of our algorithm is that it also apply to these discrete band spectrometers, which date back to 1978. To achieve continuity across all these instruments, the TOMS V9 algorithm continues to use radiances in discrete bands, but now uses Rodgers optimal estimation to retrieve a coarse profile and provide uncertainties for each retrieval. The algorithm remains capable of achieving high accuracy with a small number of discrete wavelengths, and in extreme cases, such as unusual profile shapes and high solar zenith angles, the quality of the retrievals is improved. Despite being designed to use a limited set of wavelengths, the algorithm can also utilize additional wavelengths from hyperspectral sensors like OMPS to augment the retrieval's error detection and information content, for example for SO2 detection and correction of the Ring effect on atmospheric radiances. We discuss these and other aspects of the V9 algorithm as it will be applied to OMPS, and mention potential improvements that aim to take advantage of a synergy between the OMPS Limb Profiler and Nadir Mapper to further improve the quality of total ozone from the OMPS instrument.

  7. Self-Adaptive Correction of Heading Direction in Stair Climbing for Tracked Mobile Robots Using Visual Servoing Approach

    NASA Astrophysics Data System (ADS)

    Ji, Peng; Song, Aiguo; Song, Zimo; Liu, Yuqing; Jiang, Guohua; Zhao, Guopu

    2017-02-01

In this paper, we describe a heading direction correction algorithm for a tracked mobile robot. To save hardware resources as far as possible, the mobile robot's wrist camera, rotated to face the stairs, is used as the only sensor. An ensemble heading deviation detector is proposed to help the mobile robot correct its heading direction. To improve the generalization ability, a multi-scale Gabor filter is first used to process the input image. The final deviation result is acquired by applying a majority vote strategy to all the classifiers' results. The experimental results show that our detector enables the mobile robot to correct its heading direction adaptively while climbing stairs.

  8. Hybrid ECG-gated versus non-gated 512-slice CT angiography of the aorta and coronary artery: image quality and effect of a motion correction algorithm.

    PubMed

    Lee, Ji Won; Kim, Chang Won; Lee, Geewon; Lee, Han Cheol; Kim, Sang-Pil; Choi, Bum Sung; Jeong, Yeon Joo

    2018-02-01

Background Using the hybrid electrocardiogram (ECG)-gated computed tomography (CT) technique, assessment of the entire aorta, coronary arteries, and aortic valve is possible with single-bolus contrast administration within a single acquisition. Purpose To compare the image quality of hybrid ECG-gated and non-gated CT angiography of the aorta and evaluate the effect of a motion correction algorithm (MCA) on coronary artery image quality in the hybrid ECG-gated aorta CT group. Material and Methods In total, 104 patients (76 men; mean age = 65.8 years) prospectively randomized into two groups (Group 1 = hybrid ECG-gated CT; Group 2 = non-gated CT) underwent wide-detector array aorta CT. Image quality, assessed using a four-point scale, was compared between the groups. Coronary artery image quality was compared between the conventional reconstruction and motion correction reconstruction subgroups in Group 1. Results Group 1 showed significant advantages over Group 2 in aortic wall, cardiac chamber, aortic valve, coronary ostia, and main coronary artery image quality (all P < 0.001). All Group 1 patients had diagnostic image quality of the aortic wall and left ostium. The MCA significantly improved the image quality of the three main coronary arteries (P < 0.05). Moreover, per-vessel interpretability improved from 92.3% to 97.1% with the MCA (P = 0.013). Conclusion Hybrid ECG-gated CT significantly improved heart and aortic wall image quality, and the MCA can further improve the image quality and interpretability of the coronary arteries.

  9. Camera pose estimation to improve accuracy and reliability of joint angles assessed with attitude and heading reference systems.

    PubMed

    Lebel, Karina; Hamel, Mathieu; Duval, Christian; Nguyen, Hung; Boissy, Patrick

    2018-01-01

Joint kinematics can be assessed using orientation estimates from Attitude and Heading Reference Systems (AHRS). However, magnetically-perturbed environments affect the accuracy of the estimated orientations. This study investigates, both in controlled and human mobility conditions, a trial calibration technique based on a 2D photograph with a pose estimation algorithm to correct initial differences in AHRS inertial reference frames and improve joint angle accuracy. In controlled conditions, two AHRS were solidly affixed onto a wooden stick and a series of static and dynamic trials were performed in varying environments. The mean accuracy of the relative orientation between the two AHRS was improved from 24.4° to 2.9° using the proposed correction method. In human conditions, AHRS were placed on the shank and the foot of a participant who performed repeated trials of straight walking and walking while turning, varying the level of magnetic perturbation in the starting environment and the walking speed. Mean joint orientation accuracy went from 6.7° to 2.8° using the correction algorithm. The impact of the starting environment was also greatly reduced, up to a point where one could consider it non-significant from a clinical point of view (the maximum mean difference went from 8° to 0.6°). The results obtained demonstrate that the proposed method significantly improves the mean accuracy of AHRS joint orientation estimates in magnetically-perturbed environments and can be implemented in post-processing of AHRS data collected during biomechanical evaluation of motion.

  10. Energy design for protein-protein interactions

    PubMed Central

    Ravikant, D. V. S.; Elber, Ron

    2011-01-01

    Proteins bind to other proteins efficiently and specifically to carry on many cell functions such as signaling, activation, transport, enzymatic reactions, and more. To determine the geometry and strength of binding of a protein pair, an energy function is required. An algorithm to design an optimal energy function, based on empirical data of protein complexes, is proposed and applied. Emphasis is made on negative design in which incorrect geometries are presented to the algorithm that learns to avoid them. For the docking problem the search for plausible geometries can be performed exhaustively. The possible geometries of the complex are generated on a grid with the help of a fast Fourier transform algorithm. A novel formulation of negative design makes it possible to investigate iteratively hundreds of millions of negative examples while monotonically improving the quality of the potential. Experimental structures for 640 protein complexes are used to generate positive and negative examples for learning parameters. The algorithm designed in this work finds the correct binding structure as the lowest energy minimum in 318 cases of the 640 examples. Further benchmarks on independent sets confirm the significant capacity of the scoring function to recognize correct modes of interactions. PMID:21842951

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naseri, M; Rajabi, H; Wang, J

Purpose: Respiration causes lesion smearing, image blurring and quality degradation, affecting lesion contrast and the ability to define the correct lesion size. The spatial resolution of current multi-pinhole SPECT (MPHS) scanners is sub-millimeter; therefore, the effect of motion is more noticeable than in conventional SPECT scanners. Gated imaging aims to reduce motion artifacts, but a major issue in gating is the lack of statistics, and individual reconstructed frames are noisy. The increased noise in each frame deteriorates the quantitative accuracy of the MPHS images. The objective of this work is to enhance the image quality in 4D-MPHS imaging through 4D image reconstruction. Methods: The new algorithm requires deformation vector fields (DVFs) that are calculated by non-rigid Demons registration. The algorithm is based on the motion-incorporated version of the ordered subset expectation maximization (OSEM) algorithm. This iterative algorithm is capable of making full use of all projections to reconstruct each individual frame. To evaluate the performance of the proposed algorithm, a simulation study was conducted. A fast ray tracing method was used to generate MPHS projections of a 4D digital mouse phantom with a small tumor in the liver in eight different respiratory phases. To evaluate the potential of the 4D-OSEM algorithm, the tumor-to-liver activity ratio was compared with other image reconstruction methods, including 3D-MPHS and post-reconstruction registration with Demons-derived DVFs. Results: Image quality of 4D-MPHS is greatly improved by the 4D-OSEM algorithm. When all projections are used to reconstruct a 3D-MPHS, motion blurring artifacts are present, leading to overestimation of the tumor size and 24% underestimation of tumor contrast. This error was reduced to 16% and 10% for the post-reconstruction registration method and 4D-OSEM, respectively. Conclusion: The 4D-OSEM method can be used for motion correction in 4D-MPHS. The statistics and quantification are improved since all projection data are combined to update the image.

  12. Development and Evaluation of A Novel and Cost-Effective Approach for Low-Cost NO2 Sensor Drift Correction

    PubMed Central

    Sun, Li; Westerdahl, Dane; Ning, Zhi

    2017-01-01

Emerging low-cost gas sensor technologies have received increasing attention in recent years for air quality measurements due to their small size and convenient deployment. However, in diverse applications these sensors face many technological challenges, including sensor drift over long-term deployment that cannot be easily addressed using mathematical correction algorithms or machine learning methods. This study aims to develop a novel approach to auto-correct the drift of a commonly used electrochemical nitrogen dioxide (NO2) sensor, with a comprehensive evaluation of its application. The impact of environmental factors on the NO2 electrochemical sensor in low-ppb concentration measurements was evaluated in the laboratory, and a temperature and relative humidity correction algorithm was assessed. An automated zeroing protocol was developed and assessed, using a chemical absorbent to remove NO2 as a means to perform zero correction in varying ambient conditions. The sensor system was operated in three different environments in which data were compared to a reference NO2 analyzer. The results showed that the zero-calibration protocol effectively corrected the observed drift of the sensor output. This technique offers the ability to enhance the performance of low-cost sensor based systems, and these findings suggest extension of the approach to improve data quality from sensors measuring other gaseous pollutants in urban air. PMID:28825633

  13. Skin Temperature Analysis and Bias Correction in a Coupled Land-Atmosphere Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Radakovich, Jon D.; daSilva, Arlindo; Todling, Ricardo; Verter, Frances

    2006-01-01

In an initial investigation, remotely sensed surface temperature is assimilated into a coupled atmosphere/land global data assimilation system, with explicit accounting for biases in the model state. In this scheme, an incremental bias correction term is introduced in the model's surface energy budget. In its simplest form, the algorithm estimates and corrects a constant time-mean bias for each gridpoint; additional benefits are attained with a refined version of the algorithm that allows for a correction of the mean diurnal cycle. The method is validated against the assimilated observations, as well as independent near-surface air temperature observations. In many regions, not accounting for the diurnal cycle of bias degraded the diurnal amplitude of the background model air temperature. Energy fluxes collected through the Coordinated Enhanced Observing Period (CEOP) are used to more closely inspect the surface energy budget. In general, sensible heat flux is improved with the surface temperature assimilation, and two stations show a reduction of bias by as much as 30 W m^-2. At the Rondonia station in Amazonia, the Bowen ratio changes direction in an improvement related to the temperature assimilation. However, at many stations the monthly latent heat flux bias is slightly increased. These results show the impact of univariate assimilation of surface temperature observations on the surface energy budget, and suggest the need for multivariate land data assimilation. The results also show the need for independent validation data, especially flux stations in varied climate regimes.
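
    In its simplest form, the bias estimator described here can be written as a per-gridpoint recursive update. The sketch below is a generic exponential-relaxation form with an assumed gain, offered only as a hedged illustration; the operational scheme (and its diurnal-cycle refinement) is more involved.

      def update_bias(bias, innovation, gamma=0.1):
          """Relax the per-gridpoint time-mean bias toward the latest
          observation-minus-forecast innovation with a small gain."""
          return bias + gamma * (innovation - bias)

      # Hypothetical usage per assimilation cycle, on 2-D gridded NumPy fields:
      # bias = update_bias(bias, ts_obs - ts_forecast)
      # ts_corrected = ts_forecast + bias   # corrected skin temperature fed forward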

  14. Implementation and performance of shutterless uncooled micro-bolometer cameras

    NASA Astrophysics Data System (ADS)

    Das, J.; de Gaspari, D.; Cornet, P.; Deroo, P.; Vermeiren, J.; Merken, P.

    2015-06-01

A shutterless algorithm is implemented in the Xenics LWIR thermal cameras and modules. Based on a calibration set and a global temperature coefficient, the optimal non-uniformity correction is calculated onboard the camera. The limited resources in the camera require a compact algorithm, hence the efficiency of the coding is important. The performance of the shutterless algorithm is studied by comparing the residual non-uniformity (RNU) and signal-to-noise ratio (SNR) between the shutterless and shuttered correction algorithms. From this comparison we conclude that the shutterless correction performs only slightly worse than the standard shuttered algorithm, making it very interesting for thermal infrared applications where small weight and size and continuous operation are important.

  15. Comparison of human and algorithmic target detection in passive infrared imagery

    NASA Astrophysics Data System (ADS)

    Weber, Bruce A.; Hutchinson, Meredith

    2003-09-01

We have designed an experiment that compares the performance of human observers and a scale-insensitive target detection algorithm that uses pixel-level information for the detection of ground targets in passive infrared imagery. The test database contains targets near clutter whose detectability ranged from easy to very difficult. Results indicate that human observers detect more "easy-to-detect" targets, and with far fewer false alarms, than the algorithm. For "difficult-to-detect" targets, human and algorithm detection rates are considerably degraded, and algorithm false alarms are excessive. Analysis of detections as a function of observer confidence shows that the algorithm's confidence attribution does not correspond to human attribution and does not adequately correlate with correct detections. The best target detection score for any human observer was 84%, compared to 55% for the algorithm at the same false alarm rate. At 81%, the maximum detection score for the algorithm, the same human observer had 6 false alarms per frame compared to 29 for the algorithm. Detector ROC curves and observer-confidence analysis benchmark the algorithm and provide insights into algorithm deficiencies and possible paths to improvement.

  16. Stochastic parallel gradient descent based adaptive optics used for a high contrast imaging coronagraph

    NASA Astrophysics Data System (ADS)

    Dong, Bing; Ren, De-Qing; Zhang, Xi

    2011-08-01

An adaptive optics (AO) system based on a stochastic parallel gradient descent (SPGD) algorithm is proposed to reduce the speckle noise in the optical system of a stellar coronagraph in order to further improve the contrast. The principle of the SPGD algorithm is described briefly and a metric suitable for point source imaging optimization is given. The feasibility and good performance of the SPGD algorithm are demonstrated in an experimental system featuring a 140-actuator deformable mirror and a Hartmann-Shack wavefront sensor. The SPGD-based AO is then applied to a liquid crystal array (LCA) based coronagraph to improve the contrast. The LCA can modulate the incoming light to generate a pupil apodization mask of any pattern. A circular stepped pattern is used in our preliminary experiment, and the image contrast shows improvement from 10^-3 to 10^-4.5 at an angular distance of 2λ/D after correction by the SPGD-based AO.
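
    The SPGD update itself is compact: perturb all actuators at once with a random +/- pattern, measure the metric for both signed perturbations, and step along the estimated gradient. The sketch below assumes a maximization metric; the gain and perturbation amplitude are illustrative values, not those of the experiment.

      import numpy as np

      def spgd_step(u, metric, delta=0.02, gain=0.5, rng=None):
          """One SPGD iteration on the actuator vector u."""
          rng = np.random.default_rng() if rng is None else rng
          du = delta * rng.choice([-1.0, 1.0], size=u.shape)  # Bernoulli perturbation
          dJ = metric(u + du) - metric(u - du)                # two-sided metric change
          return u + gain * dJ * du                           # gradient-ascent step

      # Hypothetical usage: u holds deformable-mirror voltages and metric() returns
      # a camera-based sharpness measure of the focal spot.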

  17. Improvements in Night-Time Low Cloud Detection and MODIS-Style Cloud Optical Properties from MSG SEVIRI

    NASA Technical Reports Server (NTRS)

    Wind, Galina (Gala); Platnick, Steven; Riedi, Jerome

    2011-01-01

The MODIS cloud optical properties algorithm (MOD06/MYD06 for Terra and Aqua MODIS, respectively) slated for production in Data Collection 6 has been adapted to execute using available channels on MSG SEVIRI. Available MODIS-style retrievals include IR window-derived cloud top properties, using the new Collection 6 cloud top properties algorithm, cloud optical thickness from VIS/NIR bands, cloud effective radius from 1.6 and 3.7 μm, and cloud ice/water path. We also provide pixel-level uncertainty estimates for successful retrievals. It was found that at nighttime the SEVIRI cloud mask tends to report unnaturally low cloud fraction for marine stratocumulus clouds. A correction algorithm that improves detection of such clouds has been developed. We will discuss the improvements to nighttime low cloud detection for SEVIRI and show examples and comparisons with MODIS and CALIPSO. We will also show examples of MODIS-style pixel-level (Level-2) cloud retrievals for SEVIRI with comparisons to MODIS.

  18. An Improved Algorithm to Generate a Wi-Fi Fingerprint Database for Indoor Positioning

    PubMed Central

    Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi

    2013-01-01

    The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase. PMID:23966197

  19. An improved algorithm to generate a Wi-Fi fingerprint database for indoor positioning.

    PubMed

    Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi

    2013-08-21

    The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase.
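
    A hedged sketch of the decision rule described in these two records: apply a kurtosis test to the RSS samples of a reference point and, when normality is rejected, model the distribution with two Gaussian components instead of one. The significance level and the mixture fit (scikit-learn's GaussianMixture) are assumptions, not the authors' exact procedure.

      import numpy as np
      from scipy.stats import kurtosistest, norm
      from sklearn.mixture import GaussianMixture

      def rss_density(rss_samples):
          """Density for one reference point: single Gaussian by default, or a
          two-component mixture when kurtosis testing rejects normality."""
          x = np.asarray(rss_samples, dtype=float)
          _, p = kurtosistest(x)
          if p >= 0.05:                       # consistent with a single Gaussian
              mu, sd = x.mean(), x.std(ddof=1)
              return lambda r: norm.pdf(r, mu, sd)
          gmm = GaussianMixture(n_components=2).fit(x.reshape(-1, 1))
          return lambda r: np.exp(gmm.score_samples(np.reshape(r, (-1, 1))))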

  20. A novel rotational invariants target recognition method for rotating motion blurred images

    NASA Astrophysics Data System (ADS)

    Lan, Jinhui; Gong, Meiling; Dong, Mingwei; Zeng, Yiliang; Zhang, Yuzhen

    2017-11-01

The image formed by the sensor is blurred by the rotational motion of the carrier, which greatly reduces the target recognition rate. Although the traditional approach of first restoring the image and then identifying the target can improve the recognition rate, it takes a long time. In order to solve this problem, a rotational-blur-invariant extraction model is constructed that recognizes the target directly. The model includes three metric layers. The object description capability of the metric algorithms in the three layers, which comprise a gray value statistical algorithm, an improved round projection transformation algorithm, and rotation-convolution moment invariants, ranges from low to high, and the metric layer with the lowest description ability serves as the input stage, gradually eliminating non-target pixel points from the degraded image. Experimental results show that the proposed model can improve the correct target recognition rate for blurred images while balancing computational complexity against regional description capability.

  1. Efficient error correction for next-generation sequencing of viral amplicons

    PubMed Central

    2012-01-01

    Background Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. Results In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Conclusions Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm PMID:22759430

  2. Efficient error correction for next-generation sequencing of viral amplicons.

    PubMed

    Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury

    2012-06-25

Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
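
    As a toy illustration of the k-mer idea behind KEC (not the published implementation), one can count k-mers across all reads and repair a rare k-mer whenever a single substitution turns it into a frequent one; k and the frequency threshold below are arbitrary choices.

      from collections import Counter

      def kmer_correct(reads, k=15, min_count=3, alphabet="ACGT"):
          """Toy k-mer error correction: repair bases in rare k-mers when one
          substitution yields a frequent k-mer."""
          counts = Counter(r[i:i + k] for r in reads for i in range(len(r) - k + 1))
          corrected = []
          for read in reads:
              r = list(read)
              for i in range(len(r) - k + 1):
                  kmer = "".join(r[i:i + k])
                  if counts[kmer] >= min_count:
                      continue
                  for j in range(k):          # try one substitution per position
                      for b in alphabet:
                          cand = kmer[:j] + b + kmer[j + 1:]
                          if cand != kmer and counts[cand] >= min_count:
                              r[i + j] = b
                              break
                      else:
                          continue
                      break
              corrected.append("".join(r))
          return corrected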

  3. Toward a Principled Sampling Theory for Quasi-Orders

    PubMed Central

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer-level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner-level inductive algorithm to correct the extensions that violate the transitivity property. The inner-level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, even on item sets of up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement over existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601

  4. Toward a Principled Sampling Theory for Quasi-Orders.

    PubMed

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer-level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner-level inductive algorithm to correct the extensions that violate the transitivity property. The inner-level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, even on item sets of up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement over existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets.
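
    The inner-level transitivity repair that both versions of this record describe can be illustrated with a Warshall-style closure, which adds exactly the pairs needed to make a relation transitive; the bias-correction algorithms proposed in the paper go further, but this shows the basic correction step.

      import numpy as np

      def transitive_close(rel):
          """Warshall closure: add the pairs required to restore transitivity."""
          r = np.array(rel, dtype=bool)
          for k in range(len(r)):
              r |= np.outer(r[:, k], r[k, :])   # i->k and k->j forces i->j
          return r

      # Example: 0<1 and 1<2 forces 0<2.
      q = np.eye(3, dtype=bool)
      q[0, 1] = q[1, 2] = True
      assert transitive_close(q)[0, 2]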

  5. Noise reduction and image enhancement using a hardware implementation of artificial neural networks

    NASA Astrophysics Data System (ADS)

    David, Robert; Williams, Erin; de Tremiolles, Ghislain; Tannhof, Pascal

    1999-03-01

    In this paper, we present a neural based solution developed for noise reduction and image enhancement using the ZISC, an IBM hardware processor which implements the Restricted Coulomb Energy algorithm and the K-Nearest Neighbor algorithm. Artificial neural networks present the advantages of processing time reduction in comparison with classical models, adaptability, and the weighted property of pattern learning. The goal of the developed application is image enhancement in order to restore old movies (noise reduction, focus correction, etc.), to improve digital television images, or to treat images which require adaptive processing (medical images, spatial images, special effects, etc.). Image results show a quantitative improvement over the noisy image as well as the efficiency of this system. Further enhancements are being examined to improve the output of the system.

  6. Automated tissue classification of pediatric brains from magnetic resonance images using age-specific atlases

    NASA Astrophysics Data System (ADS)

    Metzger, Andrew; Benavides, Amanda; Nopoulos, Peg; Magnotta, Vincent

    2016-03-01

The goal of this project was to develop two age appropriate atlases (neonatal and one year old) that account for the rapid growth and maturational changes that occur during early development. Tissue maps from this age group were initially created by manually correcting the resulting tissue maps after applying an expectation maximization (EM) algorithm and an adult atlas to pediatric subjects. The EM algorithm classified each voxel into one of ten possible tissue types including several subcortical structures. This was followed by a novel level set segmentation designed to improve differentiation between distal cortical gray matter and white matter. To minimize the required manual corrections, the adult atlas was registered to the pediatric scans using high-dimensional, symmetric image normalization (SyN) registration. The subject images were then mapped to an age specific atlas space, again using SyN registration, and the resulting transformation applied to the manually corrected tissue maps. The individual maps were averaged in the age specific atlas space and blurred to generate the age appropriate anatomical priors. The resulting anatomical priors were then used by the EM algorithm to re-segment the initial training set as well as an independent testing set. The results from the adult and age-specific anatomical priors were compared to the manually corrected results. The age appropriate atlas provided superior results as compared to the adult atlas. The image analysis pipeline used in this work was built using the open source software package BRAINSTools.

  7. Meteorological correction of optical beam refraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lukin, V.P.; Melamud, A.E.; Mironov, V.L.

    1986-02-01

At the present time laser reference systems (LRS's) are widely used in agrotechnology and in geodesy. The demands for accuracy in LRS's constantly increase, so that a study of error sources and means of considering and correcting them is of practical importance. A theoretical algorithm is presented for correction of the regular component of atmospheric refraction for various types of hydrostatic stability of the atmospheric layer adjacent to the earth. The algorithm obtained is compared to regression equations obtained by processing an experimental data base. It is shown that within admissible accuracy limits the refraction correction algorithm obtained permits construction of correction tables and design of optical systems with programmable correction for atmospheric refraction on the basis of rapid meteorological measurements.

  8. Evaluation of the impact of metal artifacts in CT-based attenuation correction of positron emission tomography scans

    NASA Astrophysics Data System (ADS)

    Wu, Jay; Shih, Cheng-Ting; Chang, Shu-Jun; Huang, Tzung-Chi; Chen, Chuan-Lin; Wu, Tung Hsin

    2011-08-01

The quantitative ability of PET/CT allows its widespread use in clinical research and cancer staging. However, metal artifacts induced by high-density metal objects degrade the quality of CT images. These artifacts also propagate to the corresponding PET image and cause a false increase of 18F-FDG uptake near the metal implants when CT-based attenuation correction (AC) is performed. In this study, we applied a model-based metal artifact reduction (MAR) algorithm to reduce the dark and bright streaks in the CT image and compared the differences between PET images with the general CT-based AC (G-AC) and the MAR-corrected-CT AC (MAR-AC). Results showed that the MAR algorithm effectively reduced the metal artifacts in the CT images of the ACR flangeless phantom and two clinical cases. The MAR-AC also removed the false-positive hot spots near the metal implants in the PET images. We conclude that MAR-AC could be applied in clinical practice to improve the quantitative accuracy of PET images. Additionally, further use of PET/CT fusion images with metal artifact correction could be more valuable for diagnosis.

  9. The Algorithm Theoretical Basis Document for Tidal Corrections

    NASA Technical Reports Server (NTRS)

Fricker, Helen A.; Ridgway, Jeff R.; Minster, Jean-Bernard; Yi, Donghui; Bentley, Charles R.

    2012-01-01

This Algorithm Theoretical Basis Document deals with the tidal corrections that need to be applied to range measurements made by the Geoscience Laser Altimeter System (GLAS). These corrections result from the action of ocean tides and Earth tides, which lead to deviations from an equilibrium surface. Since the effect of tides depends on the time of measurement, it is necessary to remove the instantaneous tide components when processing altimeter data, so that all measurements are made to the equilibrium surface. The three main tide components to consider are the ocean tide, the solid-earth tide and the ocean loading tide. There are also long-period ocean tides and the pole tide. The approximate magnitudes of these components are illustrated in Table 1, together with estimates of their uncertainties (i.e., the residual error after correction). All of these components are important for GLAS measurements over the ice sheets, since centimeter-level accuracy for surface elevation change detection is required. The effect of each tidal component is to be removed by approximating its magnitude using tidal prediction models. Conversely, assimilation of GLAS measurements into tidal models will help to improve them, especially at high latitudes.

  10. Synchrophasor Data Correction under GPS Spoofing Attack: A State Estimation Based Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Xiaoyuan; Du, Liang; Duan, Dongliang

GPS spoofing attack (GSA) has been shown to be one of the most imminent threats to almost all cyber-physical systems incorporated with the civilian GPS signal. Specifically, for our current agenda of the modernization of the power grid, this may greatly jeopardize the benefits provided by the pervasively installed phasor measurement units (PMU). In this study, we consider the case where synchrophasor data from PMUs are compromised due to the presence of a single GSA, and show that it can be corrected by signal processing techniques. In particular, we introduce a statistical model for synchrophasor-based power system state estimation (SE), and then derive the spoofing-matched algorithms for synchrophasor data correction against GPS spoofing attack. Different testing scenarios in IEEE 14-, 30-, 57-, 118-bus systems are simulated to show the proposed algorithms' performance on GSA detection and state estimation. Numerical results demonstrate that our proposed algorithms can consistently locate and correct the spoofed synchrophasor data with good accuracy as long as the system observability is satisfied. Finally, the accuracy of state estimation is significantly improved compared with the traditional weighted least square method and approaches the performance under the Genie-aided method.

  11. Ground-Based Correction of Remote-Sensing Spectral Imagery

    NASA Technical Reports Server (NTRS)

    Alder-Golden, Steven M.; Rochford, Peter; Matthew, Michael; Berk, Alexander

    2007-01-01

Software has been developed for an improved method of correcting for the atmospheric optical effects (primarily, effects of aerosols and water vapor) in spectral images of the surface of the Earth acquired by airborne and spaceborne remote-sensing instruments. In this method, the variables needed for the corrections are extracted from the readings of a radiometer located on the ground in the vicinity of the scene of interest. The software includes algorithms that analyze measurement data acquired from a shadow-band radiometer. These algorithms are based on a prior radiation transport software model, called MODTRAN, that has been developed through several versions up to what are now known as MODTRAN4 and MODTRAN5. These components have been integrated with a user-friendly Interactive Data Language (IDL) front end and an advanced version of MODTRAN4. Software tools for handling general data formats, performing a Langley-type calibration, and generating an output file of retrieved atmospheric parameters for use in another atmospheric-correction computer program known as FLAASH have also been incorporated into the present software. Concomitantly with the software described thus far, there has been developed a version of FLAASH that utilizes the retrieved atmospheric parameters to process spectral image data.

  13. Parallelization and implementation of approximate root isolation for nonlinear system by Monte Carlo

    NASA Astrophysics Data System (ADS)

    Khosravi, Ebrahim

    1998-12-01

    This dissertation solves a fundamental problem of isolating the real roots of nonlinear systems of equations by a Monte Carlo method published by Bush Jones. This algorithm requires only function values and can be applied readily to complicated systems of transcendental functions. The implementation of this sequential algorithm provides scientists with the means to utilize function analysis in mathematics or other fields of science. The algorithm, however, is so computationally intensive that it is limited to a very small number of variables, making it infeasible for large systems of equations. A computational technique was also needed to prevent the algorithm from converging to the same root along different paths of computation. The research provides techniques for improving the efficiency and correctness of the algorithm. The sequential algorithm was corrected and a parallel algorithm is presented. This parallel method has been formally analyzed and is compared with other known methods of root isolation. The effectiveness, efficiency, and enhanced overall performance of the parallel implementation, in comparison to sequential processing, are discussed. The message passing model was used for this parallel processing, and it is presented and implemented on the Intel i860 MIMD architecture. The parallel processing proposed in this research has been implemented in an ongoing high energy physics experiment: the algorithm has been used to track neutrinos in the Super-Kamiokande detector. This experiment is located in Japan, and data can be processed on-line or off-line, locally or remotely.
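    As a loose, function-values-only sketch of the general idea (random sampling plus clustering so one root is not reported along several search paths), consider the toy system below. This is an illustration under stated assumptions, not Jones's published algorithm; the example system and tolerances are invented.

    ```python
    # Illustrative Monte Carlo root isolation: sample the box at random, keep
    # samples with small residual norm as candidates, then greedily merge
    # candidates that lie close together.
    import numpy as np

    def F(p):
        x, y = p
        return np.array([np.sin(x) - y, x**2 + y**2 - 1.0])

    def mc_isolate(F, lo, hi, n=20000, tol=0.05, sep=0.2, seed=0):
        rng = np.random.default_rng(seed)
        pts = rng.uniform(lo, hi, (n, len(lo)))
        mask = np.array([np.linalg.norm(F(p)) < tol for p in pts])
        roots = []
        for p in pts[mask]:                 # greedy clustering by distance
            if all(np.linalg.norm(p - r) > sep for r in roots):
                roots.append(p)
        return roots

    boxes = mc_isolate(F, lo=np.array([-2.0, -2.0]), hi=np.array([2.0, 2.0]))
    ```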

  14. Correction of geometric distortion in Propeller echo planar imaging using a modified reversed gradient approach

    PubMed Central

    Chang, Hing-Chiu; Chuang, Tzu-Chao; Wang, Fu-Nien; Huang, Teng-Yi; Chung, Hsiao-Wen

    2013-01-01

    Objective: This study investigates the application of a modified reversed gradient algorithm to the Propeller-EPI imaging method (periodically rotated overlapping parallel lines with enhanced reconstruction based on echo-planar imaging readout) for correction of geometric distortions due to the EPI readout. Materials and methods: Propeller-EPI acquisition was executed with 360-degree rotational coverage of the k-space, from which the image pairs with opposite phase-encoding gradient polarities were extracted for reversed gradient geometric and intensity corrections. The spatial displacements obtained on a pixel-by-pixel basis were fitted using a two-dimensional polynomial followed by low-pass filtering to assure correction reliability in low-signal regions. Single-shot EPI images were obtained on a phantom, whereas high spatial resolution T2-weighted and diffusion tensor Propeller-EPI data were acquired in vivo from healthy subjects at 3.0 Tesla, to demonstrate the effectiveness of the proposed algorithm. Results: Phantom images show the success of the smoothed displacement map concept in providing improvements of the geometric corrections at low-signal regions. Human brain images demonstrate prominently superior reconstruction quality of Propeller-EPI images with modified reversed gradient corrections as compared with those obtained without corrections, as evidenced from verification against the distortion-free fast spin-echo images at the same level. Conclusions: The modified reversed gradient method is an effective approach to obtain high-resolution Propeller-EPI images with substantially reduced artifacts. PMID:23630654
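    A hedged sketch of the displacement-map smoothing idea described above: a least-squares 2D polynomial fit to per-pixel displacements stabilises the correction in low-signal regions. The additional low-pass filtering step is omitted, and the polynomial order and image size are illustrative.

    ```python
    # Fit a 2D polynomial surface to a noisy per-pixel displacement map.
    import numpy as np

    def fit_poly2d(disp, order=3):
        ny, nx = disp.shape
        y, x = np.mgrid[0:ny, 0:nx]
        x = x.ravel() / nx; y = y.ravel() / ny     # normalized coordinates
        cols = [x**i * y**j for i in range(order + 1)
                for j in range(order + 1 - i)]
        A = np.stack(cols, axis=1)                 # polynomial design matrix
        coef, *_ = np.linalg.lstsq(A, disp.ravel(), rcond=None)
        return (A @ coef).reshape(ny, nx)          # smoothed displacement map

    disp = np.random.default_rng(6).normal(0.0, 1.0, (64, 64)) + 3.0
    disp_smooth = fit_poly2d(disp)
    ```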

  15. Highlights of TOMS Version 9 Total Ozone Algorithm

    NASA Technical Reports Server (NTRS)

    Bhartia, Pawan; Haffner, David

    2012-01-01

    The fundamental basis of the TOMS total ozone algorithm was developed some 45 years ago by Dave and Mateer. It was designed to estimate total ozone from satellite measurements of the backscattered UV radiances at a few discrete wavelengths in the Huggins ozone absorption band (310-340 nm). Over the years, as the need for higher accuracy in measuring total ozone from space has increased, several improvements to the basic algorithm have been made. They include: better correction for the effects of aerosols and clouds, an improved method to account for the variation in shape of ozone profiles with season, latitude, and total ozone, and a multi-wavelength correction for remaining profile shape errors. These improvements have made it possible to retrieve total ozone with just 3 spectral channels of moderate spectral resolution (approx. 1 nm) with accuracy comparable to state-of-the-art spectral fitting algorithms like DOAS that require high spectral resolution measurements at a large number of wavelengths. One deficiency of the TOMS algorithm has been that it does not provide an error estimate. This is a particular problem at high latitudes, where the profile shape errors become significant and vary with latitude, season, total ozone, and instrument viewing geometry. The primary objective of the TOMS V9 algorithm is to account for these effects in estimating the error bars. This is done by a straightforward implementation of the Rodgers optimum estimation method using a priori ozone profiles and their error covariance matrices constructed using Aura MLS and ozonesonde data. The algorithm produces a vertical ozone profile that contains 1-2.5 pieces of information (degrees of freedom of signal) depending upon solar zenith angle (SZA). The profile is integrated to obtain the total column. We provide information that shows the altitude range in which the profile is best determined by the measurements; one can use this information in data assimilation and analysis. A side benefit of this algorithm is that it is considerably simpler than the present algorithm, which uses a database of 1512 profiles to retrieve total ozone; these profiles are tedious to construct and modify. Though conceptually similar to the SBUV V8 algorithm that was developed about a decade ago, the SBUV and TOMS V9 algorithms differ in detail. The TOMS algorithm uses 3 wavelengths to retrieve the profile while the SBUV algorithm uses 6-9 wavelengths, so TOMS provides less profile information. However, both algorithms have comparable total ozone information, and TOMS V9 can be easily adapted to use additional wavelengths from instruments like GOME, OMI and OMPS to provide better profile information at smaller SZAs. The other significant difference between the two algorithms is that while the SBUV algorithm has been optimized for deriving monthly zonal means by making an appropriate choice of the a priori error covariance matrix, the TOMS algorithm has been optimized for tracking short-term variability using month- and latitude-dependent covariance matrices.
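    For reference, a minimal linear version of the Rodgers optimal-estimation step underlying V9 is sketched below. The dimensions, covariances and forward-model matrix are illustrative placeholders, not the operational UV retrieval; the real algorithm uses MLS/ozonesonde-based a priori covariances and a nonlinear forward model.

    ```python
    # Linear optimal estimation: retrieved profile, its covariance, and the
    # degrees of freedom of signal (trace of the averaging kernel).
    import numpy as np

    def oe_retrieval(y, K, x_a, S_a, S_e):
        S_e_inv = np.linalg.inv(S_e)
        S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))
        G = S_hat @ K.T @ S_e_inv          # gain matrix
        x_hat = x_a + G @ (y - K @ x_a)    # retrieved ozone profile
        dfs = np.trace(G @ K)              # degrees of freedom of signal
        return x_hat, S_hat, dfs

    # 3 radiance channels constraining a coarse 6-layer profile (illustrative).
    rng = np.random.default_rng(1)
    K = rng.standard_normal((3, 6))
    x_a = np.full(6, 50.0)                 # a priori layer ozone (DU)
    S_a = 25.0 * np.eye(6)
    S_e = 0.01 * np.eye(3)
    y = K @ (x_a + rng.standard_normal(6)) + 0.1 * rng.standard_normal(3)
    x_hat, S_hat, dfs = oe_retrieval(y, K, x_a, S_a, S_e)
    total_ozone = x_hat.sum()              # integrate profile to total column
    ```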

  16. HALOE Algorithm Improvements for Upper Tropospheric Sounding

    NASA Technical Reports Server (NTRS)

    McHugh, Martin J.; Gordley, Larry L.; Russell, James M., III; Hervig, Mark E.

    1999-01-01

    This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth UARS Science Investigator Program entitled "HALOE Algorithm Improvements for Upper Tropospheric Soundings." The goal of this effort is to develop and implement major inversion and processing improvements that will extend HALOE measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first-year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multi-channel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.

  17. HALOE Algorithm Improvements for Upper Tropospheric Sounding

    NASA Technical Reports Server (NTRS)

    Thompson, Robert Earl; McHugh, Martin J.; Gordley, Larry L.; Hervig, Mark E.; Russell, James M., III; Douglass, Anne (Technical Monitor)

    2001-01-01

    This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth Upper Atmospheric Research Satellite (UARS) Science Investigator Program entitled 'HALOE Algorithm Improvements for Upper Tropospheric Sounding.' The goal of this effort is to develop and implement major inversion and processing improvements that will extend Halogen Occultation Experiment (HALOE) measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multichannel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.

  18. A high speed model-based approach for wavefront sensorless adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing

    2018-02-01

    To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The approach is based on the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method can complete an effective correction of a modal aberration by applying just one disturbance to the deformable mirror (one correction per disturbance); the correction mode is reconstructed by singular value decomposition of the correlation matrix of the gradients of the Zernike functions. Numerical simulations of AO corrections under various random and dynamic aberrations are implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which obtains one aberration correction only after applying N disturbances to the deformable mirror (one correction per N disturbances).

  19. Accelerating MP2C dispersion corrections for dimers and molecular crystals

    NASA Astrophysics Data System (ADS)

    Huang, Yuanhang; Shao, Yihan; Beran, Gregory J. O.

    2013-06-01

    The MP2C dispersion correction of Pitonak and Hesselmann [J. Chem. Theory Comput. 6, 168 (2010); doi:10.1021/ct9005882] substantially improves the performance of second-order Møller-Plesset perturbation theory for non-covalent interactions, albeit with non-trivial computational cost. Here, the MP2C correction is computed in a monomer-centered basis instead of a dimer-centered one. When applied to a single dimer MP2 calculation, this change accelerates the MP2C dispersion correction several-fold while introducing only trivial new errors. More significantly, in the context of fragment-based molecular crystal studies, combination of the new monomer basis algorithm and the periodic symmetry of the crystal reduces the cost of computing the dispersion correction by two orders of magnitude. This speed-up reduces the MP2C dispersion correction calculation from a significant computational expense to a negligible one in crystals like aspirin or oxalyl dihydrazide, without compromising accuracy.

  20. Registration of central paths and colonic polyps between supine and prone scans in computed tomography colonography: Pilot study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ping; Napel, Sandy; Acar, Burak

    2004-10-01

    Computed tomography colonography (CTC) is a minimally invasive method that allows the evaluation of the colon wall from CT sections of the abdomen/pelvis. The primary goal of CTC is to detect colonic polyps, precursors to colorectal cancer. Because imperfect cleansing and distension can cause portions of the colon wall to be collapsed, covered with water, and/or covered with retained stool, patients are scanned in both prone and supine positions. We believe that both reading efficiency and computer aided detection (CAD) of CTC images can be improved by accurate registration of data from the supine and prone positions. We developed a two-stage approach that first registers the colonic central paths using a heuristic and automated algorithm and then matches polyps or polyp candidates (CAD hits) by a statistical approach. We evaluated the registration algorithm on 24 patient cases. After path registration, the mean misalignment distance between prone and supine identical anatomic landmarks was reduced from 47.08 to 12.66 mm, a 73% improvement. The polyp registration algorithm was specifically evaluated using eight patient cases for which radiologists identified polyps separately for both supine and prone data sets, and then manually registered corresponding pairs. The algorithm correctly matched 78% of these pairs without user input. The algorithm was also applied to the 30 highest-scoring CAD hits in the prone and supine scans and showed a success rate of 50% in automatically registering corresponding polyp pairs. Finally, we computed the average number of CAD hits that need to be manually compared in order to find the correct matches among the top 30 CAD hits. With polyp registration, the average number of comparisons was 1.78 per polyp, as opposed to 4.28 comparisons without polyp registration.

  1. Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks

    NASA Astrophysics Data System (ADS)

    Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.

    2015-12-01

    A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical, given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real world topography can be compared to recent real world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES in matching the early stages of the 2012-2013 Tolbachik flow, Kamchatka, Russia, to 80%. We also evaluate model performance given uncertain input parameters using a Monte Carlo setup, which illuminates sensitivity to model uncertainty.
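    The two posterior metrics named above reduce to simple counts over boolean inundation grids, as in this sketch; the grids are synthetic placeholders, with A = cell actually inundated and B = cell forecast as inundated.

    ```python
    # Positive and negative predictive value of a flow forecast:
    # P(A|B) and P(not A | not B) from boolean inundation grids.
    import numpy as np

    def predictive_values(observed, simulated):
        obs = observed.astype(bool)
        sim = simulated.astype(bool)
        ppv = np.logical_and(obs, sim).sum() / sim.sum()        # P(A|B)
        npv = np.logical_and(~obs, ~sim).sum() / (~sim).sum()   # P(not A|not B)
        return ppv, npv

    observed = np.zeros((100, 100), bool); observed[40:60, 30:80] = True
    simulated = np.zeros((100, 100), bool); simulated[38:62, 35:85] = True
    ppv, npv = predictive_values(observed, simulated)
    ```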

  2. SU-E-J-218: Evaluation of CT Images Created Using a New Metal Artifact Reduction Reconstruction Algorithm for Radiation Therapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niemkiewicz, J; Palmiotti, A; Miner, M

    2014-06-01

    Purpose: Metal in patients creates streak artifacts in CT images. When used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine whether the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects using standard and MAR reconstruction algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. Also, the MAR reconstruction algorithm showed significant improvement in maintaining the HUs of non-metallic regions in the images taken of the phantom with metal. HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate, compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation treatment planning accuracy.
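    The pixel-by-pixel HU comparison reported above amounts to computing the fraction of pixels whose difference falls within a tolerance, as sketched below on synthetic HU maps; the ±35 and ±85 HU thresholds follow the record, everything else is a placeholder.

    ```python
    # Fraction of pixels whose MAR-vs-standard HU difference is within tol_hu.
    import numpy as np

    def fraction_within(hu_standard, hu_mar, tol_hu):
        diff = np.abs(hu_mar.astype(float) - hu_standard.astype(float))
        return (diff <= tol_hu).mean()

    rng = np.random.default_rng(2)
    std_img = rng.normal(0.0, 30.0, size=(512, 512))   # placeholder HU maps
    mar_img = std_img + rng.normal(0.0, 10.0, size=(512, 512))
    p35 = fraction_within(std_img, mar_img, 35.0)      # cf. 95.0% in the study
    p85 = fraction_within(std_img, mar_img, 85.0)      # cf. 98.0% in the study
    ```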

  3. A survey of provably correct fault-tolerant clock synchronization techniques

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1988-01-01

    Six provably correct fault-tolerant clock synchronization algorithms are examined. These algorithms are all presented in the same notation to permit easier comprehension and comparison. The advantages and disadvantages of the different techniques are examined and issues related to the implementation of these algorithms are discussed. The paper argues for the use of such algorithms in life-critical applications.

  4. Application of Zernike polynomials towards accelerated adaptive focusing of transcranial high intensity focused ultrasound.

    PubMed

    Kaye, Elena A; Hertzberg, Yoni; Marx, Michael; Werner, Beat; Navon, Gil; Levoy, Marc; Pauly, Kim Butts

    2012-10-01

    Purpose: To study the phase aberrations produced by human skulls during transcranial magnetic resonance imaging guided focused ultrasound surgery (MRgFUS), to demonstrate the potential of Zernike polynomials (ZPs) to accelerate the adaptive focusing process, and to investigate the benefits of using phase corrections obtained in previous studies to provide the initial guess for correction of a new data set. Methods: The five phase aberration data sets analyzed here were calculated based on preoperative computerized tomography (CT) images of the head obtained during previous transcranial MRgFUS treatments performed using a clinical prototype hemispherical transducer. The noniterative adaptive focusing algorithm [Larrat et al., "MR-guided adaptive focusing of ultrasound," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57(8), 1734-1747 (2010); doi:10.1109/TUFFC.2010.1612] was modified by replacing Hadamard encoding with Zernike encoding. The algorithm was tested in simulations to correct the patients' phase aberrations. MR acoustic radiation force imaging (MR-ARFI) was used to visualize the effect of the phase aberration correction on the focusing of a hemispherical transducer. In addition, two methods for constructing an initial phase correction estimate based on previous patients' data were investigated. The benefits of the initial estimates in the Zernike-based algorithm were analyzed by measuring their effect on the ultrasound intensity at the focus and on the number of ZP modes necessary to achieve 90% of the intensity of the nonaberrated case. Results: Covariance of the pairs of the phase aberration data sets showed high correlation between the aberration data of several patients and suggested that subgroups can be based on level of correlation. Simulation of the Zernike-based algorithm demonstrated the overall greater correction effectiveness of the low modes of ZPs. The focal intensity achieves 90% of the nonaberrated intensity using fewer than 170 modes of ZPs. The initial estimates based on using the average of the phase aberration data from the individual subgroups of subjects were shown to increase the intensity at the focal spot for the five subjects. Conclusions: The application of ZPs to phase aberration correction was shown to be beneficial for adaptive focusing of transcranial ultrasound. The skull-based phase aberrations were found to be well approximated by a number of ZP modes representing only a fraction of the number of elements in the hemispherical transducer. Implementing the initial phase aberration estimate together with the Zernike-based algorithm can improve robustness and can potentially greatly increase the viability of MR-ARFI-based focusing for clinical transcranial MRgFUS therapy.

  6. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    NASA Astrophysics Data System (ADS)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    The Algorithm Development Library (ADL) is a framework that mimics the operational IDPS (Interface Data Processing Segment) system currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite, which was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments that are on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), which will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists, including fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.

  7. Improved non-LTE simulation algorithm

    NASA Astrophysics Data System (ADS)

    Busquet, Michel; Klapisch, Marcel; Colombant, Denis; Fyfe, David; Gardner, John

    2008-11-01

    The RAdiation Dependent Ionization Model (RADIOM), a.k.a. Busquet's model [1], has proven its success in simulating non-LTE effects in laser fusion plasmas [2]. The improved algorithm can take the Auger effect into account via a new parameter fitted to SCROLL [3] results. It is independent of the photon binning thanks to a projection on a standard grid, and it guarantees smoother convergence to LTE. This algorithm has been implemented in a new way in the hydro-code FASTnD. Hydro simulations of the recent sub-MJ targets [4], with and without non-LTE corrections, will be shown. [1] M. Busquet, Phys. Fluids B 5, 4191 (1993). [2] D.G. Colombant et al., Phys. Plas. 7, 2046 (2000). [3] A. Bar-Shalom, J. Oreg, M. Klapisch, J. Quant. Spectr. Rad. Transf. 65, 43 (2000). [4] S. P. Obenschain, D. G. Colombant, A. J. Schmitt et al., Phys. Plasmas 13, 056320 (2006).

  8. Automated target classification in high resolution dual frequency sonar imagery

    NASA Astrophysics Data System (ADS)

    Aridgides, Tom; Fernández, Manuel

    2007-04-01

    An improved computer-aided-detection / computer-aided-classification (CAD/CAC) processing string has been developed. The classified objects of two distinct strings are fused using the classification confidence values and their expansions as features, and using "summing" or log-likelihood-ratio-test (LLRT) based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new high-resolution dual frequency sonar imagery. Three significant fusion algorithm improvements were made. First, a nonlinear second-order (Volterra) feature LLRT fusion algorithm was developed. Second, a Box-Cox nonlinear feature LLRT fusion algorithm was developed; the Box-Cox transformation consists of raising the features to a to-be-determined power. Third, repeated application of a subset feature selection / feature orthogonalization / Volterra feature LLRT fusion block was utilized. It was shown that cascaded Volterra feature LLRT fusion of the CAD/CAC processing strings outperforms the summing, baseline single-stage Volterra, and Box-Cox feature LLRT algorithms, yielding significant improvements over the best single CAD/CAC processing string results and providing the capability to correctly call the majority of targets while maintaining a very low false alarm rate. Additionally, the robustness of cascaded Volterra feature fusion was demonstrated by showing that the algorithm yields similar performance with the training and test sets.
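    As a hedged illustration of two of the named building blocks, the sketch below applies a Box-Cox transform to classifier confidence features and fuses them with a Gaussian log-likelihood-ratio test. The class statistics and the power lambda are invented stand-ins for quantities a real system would learn from training data.

    ```python
    # Box-Cox feature transform followed by a Gaussian LLRT fusion rule.
    import numpy as np

    def box_cox(x, lam):
        return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

    def gaussian_llr(f, mu_t, sd_t, mu_c, sd_c):
        """log p(f | target) - log p(f | clutter), independent features."""
        def logpdf(v, mu, sd):
            return -0.5 * ((v - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))
        return np.sum(logpdf(f, mu_t, sd_t) - logpdf(f, mu_c, sd_c))

    conf = np.array([0.82, 0.64])    # confidences from two CAD/CAC strings
    f = box_cox(conf, lam=0.5)       # power chosen during training
    llr = gaussian_llr(f, mu_t=0.6, sd_t=0.2, mu_c=-0.2, sd_c=0.3)
    declare_target = llr > 0.0       # threshold sets the false-alarm rate
    ```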

  9. A decision support system and rule-based algorithm to augment the human interpretation of the 12-lead electrocardiogram.

    PubMed

    Cairns, Andrew W; Bond, Raymond R; Finlay, Dewar D; Guldenring, Daniel; Badilini, Fabio; Libretti, Guido; Peace, Aaron J; Leslie, Stephen J

    The 12-lead electrocardiogram (ECG) has been used to detect cardiac abnormalities in the same format for more than 70 years. However, due to the complex nature of 12-lead ECG interpretation, there is a significant cognitive workload required from the interpreter. This complexity often leads to errors in diagnosis and subsequent treatment. We have previously reported on the development of an ECG interpretation support system designed to augment the human interpretation process, a computerised decision support system named 'Interactive Progressive based Interpretation' (IPI). In this study, a decision support algorithm was built into the IPI system to suggest potential diagnoses based on the interpreter's annotations of the 12-lead ECG, with the aim of improving interpretation accuracy and reducing missed co-abnormalities. We hypothesise that semi-automatic interpretation using a digital assistant can be an optimal man-machine model for ECG interpretation. The Differential Diagnoses Algorithm (DDA) was developed using web technologies where diagnostic ECG criteria are defined in an open storage format, JavaScript Object Notation (JSON), which is queried using a rule-based reasoning algorithm to suggest diagnoses. To test our hypothesis, a counterbalanced trial was designed in which subjects interpreted ECGs using the conventional approach and using the IPI+DDA approach. A total of 375 interpretations were collected. The IPI+DDA approach was shown to improve diagnostic accuracy by 8.7% (although not statistically significant, p = 0.1852), and the IPI+DDA approach suggested the correct interpretation more often than the human interpreter in 7/10 cases (varying statistical significance). Human interpretation accuracy increased to 70% when seven suggestions were generated. Although the results were not statistically significant, we found that: 1) our decision support tool increased the number of correct interpretations; 2) the DDA algorithm suggested the correct interpretation more often than humans; and 3) as many as seven computerised diagnostic suggestions augmented human decision making in ECG interpretation. Statistical significance may be achieved by expanding the sample size.
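    The JSON-plus-rule-engine design described above might look like the following minimal sketch; the two criteria shown are simplified placeholders, not the IPI system's actual rule base.

    ```python
    # Diagnostic criteria stored as JSON, matched against an interpreter's
    # annotations by a simple rule engine that returns suggested diagnoses.
    import json

    RULES = json.loads("""
    [
      {"diagnosis": "Sinus bradycardia",
       "criteria": {"rhythm": "sinus", "rate_max": 59}},
      {"diagnosis": "Sinus tachycardia",
       "criteria": {"rhythm": "sinus", "rate_min": 101}}
    ]
    """)

    def suggest(annotations, rules=RULES):
        out = []
        for rule in rules:
            c = rule["criteria"]
            if c.get("rhythm") and annotations.get("rhythm") != c["rhythm"]:
                continue
            rate = annotations.get("rate", 0)
            if "rate_max" in c and rate > c["rate_max"]:
                continue
            if "rate_min" in c and rate < c["rate_min"]:
                continue
            out.append(rule["diagnosis"])
        return out

    print(suggest({"rhythm": "sinus", "rate": 45}))  # ['Sinus bradycardia']
    ```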

  10. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy-Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly, with the Lucy-Richardson deconvolution algorithm applied to the current estimation of the image at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background, estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step limited the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy-Richardson deconvolution associated with wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
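    The core Lucy-Richardson update used by the method above can be sketched in one dimension as follows; the paper embeds this step, plus wavelet denoising, inside list-mode OSEM, which is not reproduced here. The PSF and activity profile are synthetic placeholders.

    ```python
    # Lucy-Richardson deconvolution: multiplicative update
    # f_{k+1} = f_k * (psf_flipped (*) (g / (psf (*) f_k))).
    import numpy as np

    def lucy_richardson(blurred, psf, n_iter=20):
        psf = psf / psf.sum()
        psf_flip = psf[::-1]
        estimate = np.full_like(blurred, blurred.mean())
        for _ in range(n_iter):
            reblurred = np.convolve(estimate, psf, mode="same")
            ratio = blurred / np.maximum(reblurred, 1e-12)
            estimate *= np.convolve(ratio, psf_flip, mode="same")
        return estimate

    x = np.zeros(64); x[20:24] = 10.0; x[40] = 5.0       # "true" activity
    psf = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)   # scanner PSF model
    y = np.convolve(x, psf / psf.sum(), mode="same")     # PVE-blurred image
    x_hat = lucy_richardson(y, psf)
    ```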

  11. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.

    PubMed

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-15

    In order to improve the performance of the hard decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce the complexity of decoding, a sum-of-the-magnitude hard decision decoding algorithm based on loop update detection is proposed. This also helps to ensure the reliability, stability and high transmission rate required for 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and a loop update detection algorithm is introduced. The bits corresponding to the erroneous codeword are flipped multiple times, searched in the order of most likely error probability, to finally find the correct codeword. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.

  12. An Adaptive Method for Switching between Pedestrian/Car Indoor Positioning Algorithms based on Multilayer Time Sequences

    PubMed Central

    Gu, Zhining; Guo, Wei; Li, Chaoyang; Zhu, Xinyan; Guo, Tao

    2018-01-01

    Pedestrian dead reckoning (PDR) positioning algorithms can be used to obtain a target's location only for movement with step features, not for driving, for which the trilateral Bluetooth indoor positioning method can be used. In this study, to obtain the precise locations of different states (pedestrian/car) using the corresponding positioning algorithms, we propose an adaptive method for switching between the PDR and car indoor positioning algorithms based on multilayer time sequences (MTSs). MTSs, which consider the behavior context, comprise two main aspects: filtering of noisy data in small-scale time sequences and using a state chain to reduce the time delay of algorithm switching in large-scale time sequences. The proposed method can be expected to realize the recognition of stationary, walking, driving, or other states; switch to the correct indoor positioning algorithm; and improve the accuracy of localization compared to using a single positioning algorithm. Our experiments show that the recognition of static, walking, driving, and other states improves by 5.5%, 45.47%, 26.23%, and 21% on average, respectively, compared with a convolutional neural network (CNN) method. The time delay decreases by approximately 0.5-8.5 s for the transition between states and by approximately 24 s for the entire process. PMID:29495503

  13. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    PubMed Central

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-01-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: 1) the reconstruction algorithms do not make full use of projection statistics; and 2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10 to 40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET. PMID:27385378

  14. Optimized MLAA for quantitative non-TOF PET/MR of the brain

    NASA Astrophysics Data System (ADS)

    Benoit, Didier; Ladefoged, Claes N.; Rezaei, Ahmadreza; Keller, Sune H.; Andersen, Flemming L.; Højgaard, Liselotte; Hansen, Adam E.; Holm, Søren; Nuyts, Johan

    2016-12-01

    For quantitative tracer distribution in positron emission tomography, attenuation correction is essential. In a hybrid PET/CT system the CT images serve as a basis for generation of the attenuation map, but in PET/MR the MR images do not have a similarly simple relationship with the attenuation map, so attenuation correction in PET/MR systems is more challenging. Typically either of two MR sequences is used: the Dixon or the ultrashort echo time (UTE) technique. However, these sequences have some well-known limitations. In this study, a reconstruction technique based on a modified and optimized non-TOF MLAA is proposed for PET/MR brain imaging. The idea is to tune the parameters of the MLTR using information from an attenuation image computed from the UTE sequences and a T1w MR image. In this MLTR algorithm, an {αj} parameter is introduced and optimized in order to drive the algorithm to a final attenuation map most consistent with the emission data. Because the non-TOF MLAA is used, a technique to reduce the cross-talk effect is proposed. The proposed algorithm is compared to common reconstruction methods such as OSEM using a CT attenuation map, considered as the reference, and OSEM using the Dixon and UTE attenuation maps. To show the robustness and reproducibility of the proposed algorithm, a set of 204 [18F]FDG patients, 35 [11C]PiB patients and 1 [18F]FET patient is used. The results show that by choosing an optimized value of {αj} in MLTR, the proposed algorithm improves the results compared to the standard MR-based attenuation correction methods (i.e. OSEM using the Dixon or UTE attenuation maps), and the cross-talk and scale problems are limited.

  16. Algorithms for calculating mass-velocity and Darwin relativistic corrections with n-electron explicitly correlated Gaussians with shifted centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanke, Monika; Palikot, Ewa; Adamowicz, Ludwik

    2016-05-07

    Algorithms for calculating the leading mass-velocity (MV) and Darwin (D) relativistic corrections are derived for electronic wave functions expanded in terms of n-electron explicitly correlated Gaussian functions with shifted centers and without pre-exponential angular factors. The algorithms are implemented and tested in calculations of MV and D corrections for several points on the ground-state potential energy curves of the H₂ and LiH molecules. The algorithms are general and can be applied in calculations of systems with an arbitrary number of electrons.

  17. Implementation of a three-qubit refined Deutsch Jozsa algorithm using SFG quantum logic gates

    NASA Astrophysics Data System (ADS)

    DelDuce, A.; Savory, S.; Bayvel, P.

    2006-05-01

    In this paper we present a quantum logic circuit which can be used for the experimental demonstration of a three-qubit solid state quantum computer based on a recent proposal of optically driven quantum logic gates. In these gates, the entanglement of randomly placed electron spin qubits is manipulated by optical excitation of control electrons. The circuit we describe solves the Deutsch problem with an improved algorithm called the refined Deutsch-Jozsa algorithm. We show that it is possible to select optical pulses that solve the Deutsch problem correctly, and do so without losing quantum information to the control electrons, even though the gate parameters vary substantially from one gate to another.

  18. On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, H.M.; Reed, I.S.

    A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous paper is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and a significant reduction in silicon area, therefore making it possible to build a pipeline Reed-Solomon decoder on a single VLSI chip.

  19. The Assessment of Atmospheric Correction Processors for MERIS Based on In-Situ Measurements-Updates in OC-CCI Round Robin

    NASA Astrophysics Data System (ADS)

    Muller, Dagmar; Krasemann, Hajo; Zühlke, Marco; Doerffer, Roland; Brockmann, Carsten; Steinmetz, Francois; Valente, Andre; Brotas, Vanda; Grant, Michael G.; Sathyendranath, Shubha; Melin, Frederic; Franz, Bryan A.; Mazeran, Constant; Regner, Peter

    2016-08-01

    The Ocean Colour Climate Change Initiative (OC-CCI) provides a long-term time series of ocean colour data and investigates the detectable climate impact. A reliable and stable atmospheric correction (AC) procedure is the basis for ocean colour products of the necessary high quality. The selection of atmospheric correction processors is repeated regularly based on a round robin exercise, at the latest when a revised production and release of the OC-CCI merged product is scheduled. Most of the AC processors are under constant development, and changes are implemented to improve the quality of satellite-derived retrievals of remote sensing reflectances. The changes between versions of the inter-comparison are not restricted to the implementation of AC processors: there are activities to improve the quality flagging for some processors, and system vicarious calibration for AC algorithms is widely studied in its sensor-specific behaviour. Each inter-comparison starts with an updated in-situ database, as more spectra are included in order to broaden the temporal and spatial range of satellite match-ups. While the OC-CCI's focus has been on case-1 waters in the past, it has now expanded to the retrieval of case-2 products. In light of this goal, new bidirectional correction procedures (normalisation) for the remote sensing spectra have been introduced. As in-situ measurements are not always available at the satellite sensor specific central wavelengths, a band-shift algorithm has to be applied to the dataset. In order to guarantee an objective selection from a set of four atmospheric correction processors, the common validation strategy of comparisons between in-situ and satellite-derived water leaving reflectance spectra is aided by a ranking system. In principle, the statistical parameters are transformed into relative scores, which evaluate the relationship of quality dependent on the algorithms under study. The sensitivity of these scores to the selected database has been assessed by a bootstrapping exercise, which allows identification of the uncertainty in the scoring results. A comparison of round robin results for OC-CCI version 2 and the current version 3 is presented and some major changes are highlighted.
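    A minimal sketch of the bootstrapping exercise described above: resample the match-up set with replacement and recompute a score to quantify how sensitive the ranking is to the database. The statistic here (RMS difference) and the synthetic match-ups are illustrative; the actual OC-CCI scoring combines several statistical parameters.

    ```python
    # Bootstrap a validation score over in-situ/satellite match-ups.
    import numpy as np

    def bootstrap_score(in_situ, satellite, n_boot=2000, seed=0):
        rng = np.random.default_rng(seed)
        n = len(in_situ)
        scores = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, n, n)      # resample with replacement
            scores[b] = np.sqrt(np.mean((satellite[idx] - in_situ[idx]) ** 2))
        return scores.mean(), np.percentile(scores, [2.5, 97.5])

    rng = np.random.default_rng(3)
    rrs_insitu = rng.uniform(0.001, 0.02, 150)     # match-up reflectances
    rrs_sat = rrs_insitu * (1 + 0.1 * rng.standard_normal(150))
    mean_rmse, ci95 = bootstrap_score(rrs_insitu, rrs_sat)
    ```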

  20. A Cascaded Approach for Correcting Ionospheric Contamination with Large Amplitude in HF Skywave Radars

    PubMed Central

    Wei, Yinsheng; Guo, Rujiang; Xu, Rongqing; Tang, Xiudong

    2014-01-01

    Ionospheric phase perturbation with large amplitude broadens the sea clutter's Bragg peaks until they overlap each other; the performance of traditional decontamination methods based on filtering the Bragg peak is then poor, which greatly limits the detection performance of HF skywave radars. In view of ionospheric phase perturbation with large amplitude, this paper proposes a cascaded approach based on an improved S-method to correct the ionospheric phase contamination. This approach consists of two correction steps. At the first step, a time-frequency distribution method based on the improved S-method is adopted and an optimal detection method is designed to obtain a coarse estimate of the ionospheric modulation from the time-frequency distribution. At the second step, the phase gradient algorithm (PGA) is exploited to eliminate the residual contamination. Finally, measured data are used to verify the effectiveness of the method. Simulation results show that the time-frequency resolution of this method is high and is not affected by cross-term interference; ionospheric phase perturbation with large amplitude can be corrected at low signal-to-noise ratio (SNR), and the cascade correction method performs well. PMID:24578656

  1. Evaluation of a 3D local multiresolution algorithm for the correction of partial volume effects in positron emission tomography.

    PubMed

    Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris

    2011-09-01

    Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography leading to underestimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multiresolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low-resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model, which may introduce artifacts in regions where no significant correlation exists between anatomical and functional details. A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artifacts and significantly improving quality of the corrected images and their quantitative accuracy. A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multiresolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information.

  3. Advantages of estimating rate corrections during dynamic propagation of spacecraft rates: Applications to real-time attitude determination of SAMPEX

    NASA Technical Reports Server (NTRS)

    Challa, M. S.; Natanson, G. A.; Baker, D. F.; Deutschmann, J. K.

    1994-01-01

    This paper describes real-time attitude determination results for the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX), a gyroless spacecraft, using a Kalman filter/Euler equation approach denoted the real-time sequential filter (RTSF). The RTSF is an extended Kalman filter whose state vector includes the attitude quaternion and corrections to the rates, which are modeled as Markov processes with small time constants. The rate corrections impart a significant robustness to the RTSF against errors in modeling the environmental and control torques, as well as errors in the initial attitude and rates, while maintaining a small state vector. SAMPEX flight data from various mission phases are used to demonstrate the robustness of the RTSF against a priori attitude and rate errors of up to 90 deg and 0.5 deg/sec, respectively, as well as a sensitivity of 0.0003 deg/sec in estimating rate corrections in torque computations. In contrast, it is shown that the RTSF attitude estimates without the rate corrections can degrade rapidly. RTSF advantages over single-frame attitude determination algorithms are also demonstrated through (1) substantial improvements in attitude solutions during Sun-magnetic field coalignment and (2) magnetic-field-only attitude and rate estimation during the spacecraft's Sun-acquisition mode. A robust magnetometer-only attitude-and-rate determination method is also developed to provide for the contingency when both Sun data and a priori knowledge of the spacecraft state are unavailable. This method includes a deterministic algorithm used to initialize the RTSF with coarse estimates of the spacecraft attitude and rates. The combined algorithm has been found effective, yielding accuracies of 1.5 deg in attitude and 0.01 deg/sec in the rates, and convergence times as short as 400 sec.
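
    The following sketch illustrates the core idea of carrying rate corrections as a small Markov-process state alongside the attitude: dynamics-derived rates are augmented by a correction vector that decays with a short time constant between measurement updates. The quaternion convention, time constant, and rate values are illustrative; this is not the flight RTSF implementation.

```python
# Sketch of attitude propagation with Markov-process rate corrections.
import numpy as np

def omega_matrix(w):
    wx, wy, wz = w
    return np.array([[0.,   wz, -wy,  wx],
                     [-wz,  0.,  wx,  wy],
                     [ wy, -wx,  0.,  wz],
                     [-wx, -wy, -wz,  0.]])

def propagate(q, b, omega_dyn, dt, tau=60.0):
    w = omega_dyn + b                        # corrected body rates
    q = q + 0.5 * dt * (omega_matrix(w) @ q) # first-order quaternion kinematics
    q /= np.linalg.norm(q)
    b = np.exp(-dt / tau) * b                # Markov decay of the correction
    return q, b

q = np.array([0., 0., 0., 1.])               # scalar-last quaternion
b = np.array([1e-3, -2e-3, 5e-4])            # initial rate correction (rad/s)
for _ in range(100):
    q, b = propagate(q, b, omega_dyn=np.array([0., 0., 0.01]), dt=0.5)
```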

  4. Measurement correction method for force sensor used in dynamic pressure calibration based on artificial neural network optimized by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing

    2017-12-01

    We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods commonly used to measure propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extends the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through bolt fastening, which causes adverse effects such as additional pretightening and inertia forces. To address these effects, the influence mechanisms of the pretightening force, the inertia force and other factors on the force measurement are theoretically analysed. A measurement correction method for the force measurement is then proposed based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which can meet the requirements of engineering applications.
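
    A minimal sketch of the correction strategy, assuming calibration pairs of input features (e.g., raw force, pretightening and inertia terms) and reference force values: a one-hidden-layer network is trained by a simple genetic algorithm that evolves the flattened weight vector. Population size, operators, and the synthetic data are illustrative, not the paper's selected parameters.

```python
# Sketch of a GA-tuned neural-network correction model.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 3, 8
n_w = n_in * n_hid + n_hid + n_hid + 1          # weights + biases

def predict(w, X):
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = w[n_in * n_hid:n_in * n_hid + n_hid]
    W2 = w[-n_hid - 1:-1]; b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def ga_train(X, y, pop=60, gens=200, sigma=0.1):
    P = rng.normal(size=(pop, n_w))
    for _ in range(gens):
        fit = np.array([np.mean((predict(w, X) - y) ** 2) for w in P])
        elite = P[np.argsort(fit)[:pop // 4]]           # selection
        parents = elite[rng.integers(len(elite), size=(pop, 2))]
        P = parents.mean(axis=1)                        # blend crossover
        P += sigma * rng.normal(size=P.shape)           # mutation
        P[0] = elite[0]                                 # elitism
    return P[0]

X = rng.normal(size=(200, n_in))                 # stand-in calibration features
y = 2.0 * X[:, 0] + 0.3 * np.sin(X[:, 1])        # stand-in reference force
w_best = ga_train(X, y)
```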

  5. Detection and correction for EPID and gantry sag during arc delivery using cine EPID imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rowshanfarzad, Pejman; Sabet, Mahsheed; O'Connor, Daryl J.

    2012-02-15

    Purpose: Electronic portal imaging devices (EPIDs) have been studied and used for pretreatment and in-vivo dosimetry applications for many years. The application of EPIDs for dosimetry in arc treatments requires accurate characterization of the mechanical sag of the EPID and gantry during rotation. Several studies have investigated the effects of gravity on the sag of these systems, but each has limitations. In this study, an easy experiment setup and an accurate algorithm are introduced to characterize and correct for the effect of EPID and gantry sag during arc delivery. Methods: Three metallic ball bearings were used as markers in the beam: two of them fixed to the gantry head and the third positioned at the isocenter. EPID images were acquired during a 360 deg gantry rotation in cine imaging mode. The markers were tracked in the EPID images, and a robust in-house developed MATLAB code was used to analyse the images and find the EPID sag in three directions, as well as the EPID + gantry sag, by comparison to the reference gantry zero image. The algorithm results were then tested against independent methods. The method was applied to compare the effect in clockwise and counterclockwise gantry rotations and at different source-to-detector distances (SDDs). The results were monitored for one linear accelerator over a course of 15 months, and six other linear accelerators from two treatment centers were also investigated using this method. The generalized shift patterns were derived from the data and used in an image registration algorithm to correct for the effect of the mechanical sag in the system. The Gamma evaluation (3%, 3 mm) technique was used to investigate the improvement in alignment of cine EPID images of a fixed field, by comparing both individual images and the sum of images in a series with the reference gantry zero image. Results: The mechanical sag during gantry rotation was dependent on the gantry angle and was larger in the in-plane direction, although the patterns were not identical for the various linear accelerators. The reproducibility of measurements was within 0.2 mm over a period of 15 months. The direction of gantry rotation and the SDD did not affect the results by more than 0.3 mm. Results of independent tests agreed with the algorithm within the accuracy of the measurement tools. When comparing summed images, the percentage of points with Gamma index <1 increased from 85.4% to 94.1% after correcting for the EPID sag, and to 99.3% after correction for gantry + EPID sag. Conclusions: The measurement method and algorithms introduced in this study use cine images and are highly accurate, simple, fast, and reproducible. The method tests all gantry angles and provides a suitable automatic analysis and correction tool to improve EPID dosimetry and perform comprehensive linac QA for arc treatments.

  6. [Quality of documentation of intraoperative and postoperative complications : improvement of documentation for a nationwide quality assurance program and comparison with routine data].

    PubMed

    Jakob, J; Marenda, D; Sold, M; Schlüter, M; Post, S; Kienle, P

    2014-08-01

    Complications after cholecystectomy are continuously documented in a nationwide database in Germany. Recent studies have demonstrated a lack of reliability of these data. The aim of this study was to evaluate the impact of a control algorithm on documentation quality, and the use of routine diagnosis coding as an additional validation instrument. The completeness and correctness of the documentation of complications after cholecystectomy were compared over a time interval of 12 months before and after implementation of an algorithm for faster and more accurate documentation. Furthermore, the coding of all diagnoses was screened to identify intraoperative and postoperative complications. The sensitivity of the documentation of complications improved from 46% to 70% (p = 0.05; specificity 98% in both time intervals). A prolonged interval of more than 6 weeks between patient discharge and documentation was associated with inferior data quality (incorrect documentation in 1.5% versus 15%, p < 0.05). The rate of cases documented within 6 weeks of hospital discharge clearly improved after implementation of the control algorithm. The sensitivity and specificity of screening for complications by evaluating routine diagnosis coding were 70% and 85%, respectively. The quality of documentation was improved by the implementation of a simple control algorithm.

  7. Communication: An improved linear scaling perturbative triples correction for the domain based local pair-natural orbital based singles and doubles coupled cluster method [DLPNO-CCSD(T)].

    PubMed

    Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G; Minenkov, Yury; Cavallo, Luigi; Neese, Frank

    2018-01-07

    In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases, and in particular for small-gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that, overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) amounts only to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that, compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).

  8. Communication: An improved linear scaling perturbative triples correction for the domain based local pair-natural orbital based singles and doubles coupled cluster method [DLPNO-CCSD(T)

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G.; Minenkov, Yury; Cavallo, Luigi; Neese, Frank

    2018-01-01

    In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases, and in particular for small-gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that, overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) amounts only to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that, compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).

  9. [Critical analysis of French DRG based information system (PMSI) databases for the epidemiology of cancer: a longitudinal approach becomes possible].

    PubMed

    Olive, F; Gomez, F; Schott, A-M; Remontet, L; Bossard, N; Mitton, N; Polazzi, S; Colonna, M; Trombert-Paviot, B

    2011-02-01

    Use of the French Diagnosis Related Groups (DRG) program databases for purposes other than financial ones has recently become easier, since a unique anonymous patient identification number has been created for each inpatient in the administrative case-mix database. Based on the work of the group for cancer epidemiological observation in the Rhône-Alpes area (ONC-EPI group), we review the remaining difficulties in the use of DRG data for epidemiological purposes and consider a longitudinal approach based on the analysis of the database over several years. We also discuss the limitations of this approach. The main problems are related to the lack of quality of administrative data, especially the coding of diagnoses. These errors come from missing or inappropriate codes, or codes not in accordance with prioritization rules (causing over- or under-reporting, or inconsistencies in coding over time). One difficulty, partly due to the hierarchy of coding and the type of cancer, is the choice of an extraction algorithm. In two studies designed to estimate the incidence of cancers treated in hospitals (breast, colorectal, kidney, ovarian), a first algorithm, combining a cancer code as principal diagnosis with a selection of surgical procedures, performed less well than a second one based on a cancer code as principal diagnosis only, for which the ratio of hospitalizations per patient was stable across time and space. Chaining records over several years makes it possible, by tracing each patient's trajectory, to detect and correct inaccuracies, errors and missing values and, for incidence studies, to correct incident cases by removing prevalent cases. However, linkage, complete only since 2007, does not correct the data in all cases. Future improvement will certainly come through better algorithms for case identification and especially through linking DRG data with other databases.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth

    Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Formulas such as these are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results, and improve them with arbitrary-order, numerically stable one-pass formulas which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four, as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate of the above-mentioned formulas, using the compound moments, for a practical large-scale scientific application.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth

    Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Such formulas are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results, and improve them with arbitrary-order, numerically stable one-pass formulas which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four, as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate of the above-mentioned formulas, using the compound moments, for a practical large-scale scientific application.
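
    For reference, the second-order instance of these formulas is the well-known numerically stable one-pass (Welford-type) update together with the pairwise combination rule that makes the computation incremental and parallel; higher orders follow the same pattern.

```python
# One-pass mean/variance state (n, mean, m2), with pairwise combination.
def update(state, x):
    n, mean, m2 = state              # m2 = sum of squared deviations
    n += 1
    d = x - mean
    mean += d / n
    m2 += d * (x - mean)
    return n, mean, m2

def combine(a, b):
    na, ma, m2a = a
    nb, mb, m2b = b
    n = na + nb
    d = mb - ma
    mean = ma + d * nb / n
    m2 = m2a + m2b + d * d * na * nb / n
    return n, mean, m2

s1 = (0, 0.0, 0.0)
s2 = (0, 0.0, 0.0)
for x in [1.0, 2.0, 3.0]:
    s1 = update(s1, x)
for x in [10.0, 11.0]:
    s2 = update(s2, x)
n, mean, m2 = combine(s1, s2)
variance = m2 / (n - 1)              # unbiased sample variance
```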

  12. The correction of time and temperature effects in MR-based 3D Fricke xylenol orange dosimetry.

    PubMed

    Welch, Mattea L; Jaffray, David A

    2017-04-21

    Previously developed MR-based three-dimensional (3D) Fricke-xylenol orange (FXG) dosimeters can provide end-to-end quality assurance and validation protocols for pre-clinical radiation platforms. FXG dosimeters quantify ionizing-irradiation-induced oxidation of Fe²⁺ ions using pre- and post-irradiation MR imaging methods that detect changes in spin-lattice relaxation rates (R1 = 1/T1) caused by irradiation-induced oxidation of Fe²⁺. Chemical changes in MR-based FXG dosimeters that occur over time and with changes in temperature can decrease dosimetric accuracy if they are not properly characterized and corrected. This paper describes the characterization, development and utilization of an empirical model-based correction algorithm for time and temperature effects in the context of a pre-clinical irradiator and a 7 T pre-clinical MR imaging system. Time- and temperature-dependent changes of R1 values were characterized using variable-TR spin-echo imaging. R1-time and R1-temperature dependencies were fit using non-linear least squares fitting methods. Models were validated using leave-one-out cross-validation and resampling. Subsequently, a correction algorithm was developed that employed the previously fit empirical models to predict and reduce baseline R1 shifts that occurred in the presence of time and temperature changes. The correction algorithm was tested on R1-dose response curves and 3D dose distributions delivered using a small animal irradiator at 225 kVp. The correction algorithm reduced baseline R1 shifts from −2.8 × 10⁻² s⁻¹ to 1.5 × 10⁻³ s⁻¹. In terms of absolute dosimetric performance as assessed with traceable standards, the correction algorithm reduced dose discrepancies from approximately 3% to approximately 0.5% (2.90 ± 2.08% to 0.20 ± 0.07%, and 2.68 ± 1.84% to 0.46 ± 0.37% for the 10 × 10 and 8 × 12 mm² fields, respectively). Chemical changes in MR-based FXG dosimeters produce time- and temperature-dependent R1 values over the time intervals and temperature changes found in a typical small animal imaging and irradiation laboratory setting. These changes cause baseline R1 shifts that negatively affect dosimeter accuracy. Characterization, modeling and correction of these effects improved in-field reported dose accuracy to better than 1% when compared to standardized ion chamber measurements.
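
    A minimal sketch of the correction principle: fit an empirical baseline model to R1 measurements by non-linear least squares and subtract the modeled drift. The exponential time model, the synthetic data, and the parameter values are assumptions for illustration; the paper's fitted model forms may differ.

```python
# Sketch of empirical R1 baseline fitting and correction.
import numpy as np
from scipy.optimize import curve_fit

def r1_time(t, a, k, c):
    return a * np.exp(k * t) + c          # assumed drift of R1 over time

def r1_temp(T, m, c):
    return m * T + c                      # assumed linear temperature dependence

t = np.linspace(0, 120, 25)               # minutes post-preparation
rng = np.random.default_rng(0)
r1 = 0.02 * np.exp(0.005 * t) + 0.35 + 0.001 * rng.normal(size=t.size)
popt, _ = curve_fit(r1_time, t, r1, p0=(0.02, 0.005, 0.35))

# Correction: subtract the modeled baseline shift from the measured R1
r1_corrected = r1 - (r1_time(t, *popt) - r1_time(0.0, *popt))
```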

  13. Influence of the micro-physical properties of the aerosol on the atmospheric correction of OLI data acquired over desert area

    NASA Astrophysics Data System (ADS)

    Manzo, Ciro; Bassani, Cristiana

    2016-04-01

    This paper focuses on the evaluation of the surface reflectance obtained by different atmospheric correction algorithms applied to Landsat 8 OLI data, considering or not the micro-physical properties of the aerosol, for images acquired over a desert area located south-west of the Nile delta. The atmospheric correction of remote sensing data was shown to be sensitive to the aerosol micro-physical properties, as reported in Bassani et al. (2012). In particular, the role of the aerosol micro-physical properties in the accuracy of the atmospheric correction of remote sensing data was investigated (Bassani et al., 2015; Tirelli et al., 2015). In this work, the OLI surface reflectance was retrieved with the OLI@CRI (OLI ATmospherically Corrected Reflectance Imagery) physically-based atmospheric correction, which uses the aerosol micro-physical properties available from the two AERONET stations (Holben et al., 1998) close to the study area (El_Farafra and Cairo_EMA_2). The OLI@CRI algorithm is based on the 6SV radiative transfer model, the latest generation of the Second Simulation of a Satellite Signal in the Solar Spectrum (6S) radiative transfer code (Kotchenova and Vermote, 2007; Vermote et al., 1997), specifically adapted for Landsat 8 OLI data. The OLI reflectance obtained with OLI@CRI was compared with the reflectance obtained by other atmospheric correction algorithms that either do not consider the aerosol micro-physical properties (DOS) or assume standard aerosol models (FLAASH, implemented in the ENVI software). The accuracy of the surface reflectance retrieved by the different algorithms was assessed by comparing the spatially resampled OLI images with the MODIS surface reflectance products. Finally, specific image processing was applied to the OLI reflectance images in order to compare the remote sensing products obtained for the same scene. The results highlight the influence of the physical characterization of the aerosol on the OLI data, improving the retrieved atmospherically corrected reflectance. One of the most important outcomes of this research is the retrieval of the highest possible accuracy of the OLI reflectance for land surface variables derived by spectral indices. Consequently, if the OLI@CRI algorithm is applied to time-series data, the uncertainty in the resulting time curves can be reduced. References: Bassani et al., 2012, Atmos. Meas. Tech., doi:10.5194/amt-5-1193-2012; Bassani et al., 2015, Atmos. Meas. Tech., doi:10.5194/amt-8-1593-2015; Holben et al., 1998, Remote Sens. Environ., doi:10.1016/S0034-4257(98)00031-5; Kotchenova and Vermote, 2007, Appl. Opt., doi:10.1364/AO.46.004455; Tirelli et al., 2015, Remote Sens., doi:10.3390/rs70708391; Vermote et al., 1997, IEEE Trans. Geosci. Remote Sens., doi:10.1109/36.581987.

  14. [Improving apple fruit quality predictions by effective correction of Vis-NIR laser diffuse reflecting images].

    PubMed

    Qing, Zhao-shen; Ji, Bao-ping; Shi, Bo-lin; Zhu, Da-zhou; Tu, Zhen-hua; Zude, Manuela

    2008-06-01

    In the present study, improved laser-induced light backscattering imaging was studied regarding its potential for analyzing apple SSC and fruit flesh firmness. Images of the diffuse reflection of light on the fruit surface were obtained from Fuji apples using laser diodes emitting at five wavelength bands (680, 780, 880, 940 and 980 nm). Image processing algorithms were tested to correct for the dissimilar equator and shape of the fruit, and partial least squares (PLS) regression analysis was applied to calibrate against the fruit quality parameters. Compared with models built on raw data, calibration on the corrected frequency of intensities improved r from 0.78 to 0.80 and from 0.87 to 0.89 for predicting SSC and firmness, respectively. Comparing models based on the mean value of intensities with results obtained from the frequency of intensities, the latter gave higher performance for predicting Fuji SSC and firmness. For SSC prediction, calibration on the corrected frequency of intensities improved the root mean square error of prediction (RMSEP) from 1.28 to 0.84 °Brix relative to the raw data set. Likewise, for flesh firmness, models built on the corrected frequency of intensities improved the RMSEP from 8.23 to 6.17 N·cm⁻² compared with calibrations based on raw data.
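
    A sketch of the calibration step under stated assumptions: intensity-frequency (histogram) features are extracted from each corrected backscattering image and regressed against the reference quality parameter with PLS. The synthetic image stack, reference values, and component count are placeholders.

```python
# Sketch of PLS calibration on intensity-frequency features.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def features(image, bins=64):
    # frequency of intensities (histogram) rather than mean intensity
    h, _ = np.histogram(image.ravel(), bins=bins, range=(0, 255))
    return h / h.sum()

rng = np.random.default_rng(1)
images = rng.integers(0, 256, size=(40, 64, 64))   # stand-in corrected images
ssc = rng.uniform(10, 16, size=40)                 # stand-in SSC (°Brix)

X = np.vstack([features(img) for img in images])
pls = PLSRegression(n_components=8)
ssc_pred = cross_val_predict(pls, X, ssc, cv=10).ravel()
rmsep = np.sqrt(np.mean((ssc_pred - ssc) ** 2))
```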

  15. Image based book cover recognition and retrieval

    NASA Astrophysics Data System (ADS)

    Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine

    2017-11-01

    In this work, we develop a MATLAB graphical user interface (GUI) that lets users retrieve information about books in real time. A photo of the book cover is taken through the GUI; the MSER algorithm then automatically detects candidate features in the input image, and non-text features are filtered out based on morphological differences between text and non-text regions. We implemented a text character alignment algorithm that improves the accuracy of the original text detection. We also compare the built-in MATLAB OCR algorithm with a commonly used open-source OCR engine; a post-detection algorithm and natural language processing are applied for word correction and false-detection suppression. Finally, the detection result is matched against online sources. The algorithm achieves more than 86% accuracy.
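
    The record's pipeline is implemented in MATLAB; the snippet below is a rough Python/OpenCV analogue of its first stage only: MSER region detection followed by a simple morphological (aspect-ratio/size) filter to discard non-text regions. The input filename and thresholds are placeholders.

```python
# Sketch of MSER-based text region detection with a crude non-text filter.
import cv2

img = cv2.imread("cover.jpg")                  # placeholder book-cover photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()
regions, boxes = mser.detectRegions(gray)

text_boxes = []
for (x, y, w, h) in boxes:
    aspect = w / float(h)
    if 0.1 < aspect < 10 and h > 8:            # illustrative text/non-text rule
        text_boxes.append((x, y, w, h))

for (x, y, w, h) in text_boxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("cover_text_regions.jpg", img)
```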

  16. A simulation based approach to optimize inventory replenishment with RAND algorithm: An extended study of corrected demand using Holt's method for textile industry

    NASA Astrophysics Data System (ADS)

    Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam

    2016-07-01

    Inventory has been a major concern in supply chains, and numerous studies have lately been conducted on inventory control, bringing forth a number of methods that efficiently manage inventory and related overheads by reducing the cost of replenishment. This research aims to provide a better replenishment policy for the multi-product, single-supplier situation of chemical raw materials in the textile industry of Bangladesh. It is assumed that industries currently pursue an individual replenishment system. The purpose is to find the optimum basic cycle time and the individual replenishment cycle time of each product that yield the lowest annual holding and ordering cost, and also to find the optimum ordering quantity. In this paper an indirect grouping strategy is used; indirect grouping is suggested to outperform direct grouping when the major ordering cost is high. An algorithm by Kaspi and Rosenblatt (1991) called RAND is employed for its simplicity and ease of application. RAND provides a basic cycle time (T) for replenishment and an integer multiplier (ki) for each item, so the replenishment cycle time of each product is T × ki (see the sketch below). First, based on actual demand data, the currently prevailing (individual) process is compared with RAND, which shows a 49% improvement in the total cost of replenishment. Second, discrepancies in demand are corrected using Holt's method; demand can only be forecasted one or two months into the future because of the demand pattern of the industry under consideration. Evidently, applying RAND with the corrected demand displays even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm together with exponential smoothing models.
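
    A sketch of the basic-cycle iteration that RAND-type procedures build on, in one common textbook form: for a trial basic cycle time T, each item's ideal multiplier is rounded to an integer ki, T is then recomputed from the ki, and several starting values of T are tried. The cost data are illustrative, and this is not Kaspi and Rosenblatt's exact procedure.

```python
# Sketch of a joint-replenishment basic-cycle iteration (RAND-style).
import numpy as np

S = 400.0                              # major ordering cost per order
s = np.array([20.0, 35.0, 15.0])       # minor (item) ordering costs
h = np.array([0.6, 0.4, 0.9])          # holding cost per unit per year
d = np.array([1200., 800., 2500.])     # annual demands

def rand_iteration(T, iters=50):
    for _ in range(iters):
        k = np.maximum(1, np.rint(np.sqrt(2 * s / (h * d)) / T))
        T = np.sqrt(2 * (S + np.sum(s / k)) / np.sum(k * h * d))
    cost = (S + np.sum(s / k)) / T + 0.5 * T * np.sum(k * h * d)
    return T, k, cost

# Try several equally spaced starting values of T and keep the cheapest.
T, k, cost = min((rand_iteration(T0) for T0 in np.linspace(0.05, 1.0, 20)),
                 key=lambda r: r[2])
order_qty = d * k * T                  # Q_i = d_i * k_i * T
```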

  17. An efficient algorithm for global periodic orbits generation near irregular-shaped asteroids

    NASA Astrophysics Data System (ADS)

    Shang, Haibin; Wu, Xiaoyu; Ren, Yuan; Shan, Jinjun

    2017-07-01

    Periodic orbits (POs) play an important role in understanding dynamical behaviors around natural celestial bodies. In this study, an efficient algorithm was presented to generate the global POs around irregular-shaped, uniformly rotating asteroids. The algorithm was performed in three steps, namely global search, local refinement, and model continuation. First, a mascon model with a low number of particles and an optimized mass distribution was constructed to remodel the exterior gravitational potential of the asteroid. Using this model, a multi-start differential evolution algorithm enhanced with a deflection strategy, offering strong global exploration and the ability to bypass previously located solutions, was adopted. This algorithm can be regarded as a search engine that finds multiple globally optimal regions in which potential POs are located. This was followed by applying a differential correction to locally refine the global search solutions and generate accurate POs in the mascon model; an analytical Jacobian matrix was derived to improve convergence. Finally, the concept of numerical model continuation was introduced and used to convert the POs from the mascon model into a high-fidelity polyhedron model by sequentially correcting the initial states. The efficiency of the proposed algorithm was substantiated by computing the global POs around the elongated, shoe-shaped asteroid 433 Eros. Various global POs with different topological structures in the configuration space were successfully located. The proposed algorithm is generic and can be conveniently extended to explore periodic motions in other gravitational systems.
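
    A toy sketch of the global-search stage: differential evolution minimizes a periodicity defect |φ_T(x0) − x0| over the initial state and period. A point-mass field stands in for the mascon model, and the paper's deflection strategy and multi-start bookkeeping are omitted.

```python
# Sketch of a global search for periodic orbits via differential evolution.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def dynamics(t, x):
    r, v = x[:3], x[3:]
    a = -r / np.linalg.norm(r) ** 3        # point-mass stand-in for mascons
    return np.concatenate([v, a])

def periodicity_defect(z):
    x0, T = z[:6], z[6]
    sol = solve_ivp(dynamics, (0.0, T), x0, rtol=1e-8, atol=1e-8)
    return np.linalg.norm(sol.y[:, -1] - x0)   # |flow_T(x0) - x0|

bounds = [(0.5, 2.0)] * 3 + [(-1.0, 1.0)] * 3 + [(2.0, 12.0)]
res = differential_evolution(periodicity_defect, bounds,
                             maxiter=20, popsize=15, seed=0)
# res.x holds a candidate initial state and period, to be refined by
# differential correction as in the paper's second step.
```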

  18. Unveiling the development of intracranial injury using dynamic brain EIT: an evaluation of current reconstruction algorithms.

    PubMed

    Li, Haoting; Chen, Rongqing; Xu, Canhua; Liu, Benyuan; Tang, Mengxing; Yang, Lin; Dong, Xiuzhen; Fu, Feng

    2017-08-21

    Dynamic brain electrical impedance tomography (EIT) is a promising technique for continuously monitoring the development of cerebral injury. While many reconstruction algorithms are available for brain EIT, there are still few studies comparing their performance in the context of dynamic brain monitoring. To address this problem, we develop a framework for evaluating current algorithms in terms of their ability to correctly identify small intracranial conductivity changes. First, a simulated 3D head phantom with a realistic layered structure and impedance distribution is developed. Next, several reconstruction algorithms, such as back projection (BP), damped least squares (DLS), Bayesian, split Bregman (SB) and GREIT, are introduced. We investigate their temporal response, noise performance, and location and shape errors with respect to different noise levels on the simulation phantom. The results show that the SB algorithm demonstrates superior performance in reducing image error. To further improve the location accuracy, we optimize SB by incorporating brain-structure-based conductivity distribution priors, in which the differences between the conductivities of different brain tissues and the inhomogeneous conductivity distribution of the skull are considered. We compare this novel algorithm (called SB-IBCD) with SB and DLS using anatomically correct head-shaped phantoms with spatially varying skull conductivity. The results showed that SB-IBCD is the most effective in unveiling small intracranial conductivity changes, reducing the image error by an average of 30.0% compared to DLS.
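
    For orientation, the damped least-squares step underlying linear difference-EIT reconstructions of the kind compared here can be sketched as follows; the sensitivity matrix J is a random stand-in for one computed from a FEM forward model of the head.

```python
# Sketch of a damped least-squares (DLS) difference-EIT reconstruction:
# solve (J^T J + lambda I) dsigma = J^T dv for a conductivity change.
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_elem = 208, 500
J = rng.normal(size=(n_meas, n_elem))          # stand-in Jacobian
dsigma_true = np.zeros(n_elem)
dsigma_true[240:250] = 0.05                    # small conductivity change
dv = J @ dsigma_true + 1e-3 * rng.normal(size=n_meas)  # noisy voltage change

lam = 1e-2 * np.trace(J.T @ J) / n_elem        # simple damping heuristic
dsigma = np.linalg.solve(J.T @ J + lam * np.eye(n_elem), J.T @ dv)
```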

  19. A robust data scaling algorithm to improve classification accuracies in biomedical data.

    PubMed

    Cao, Xi Hang; Stojkovic, Ivan; Obradovic, Zoran

    2016-09-09

    Machine learning models have been adapted in biomedical research and practice for knowledge discovery and decision support. While mainstream biomedical informatics research focuses on developing more accurate models, the importance of data preprocessing draws less attention. We propose the Generalized Logistic (GL) algorithm that scales data uniformly to an appropriate interval by learning a generalized logistic function to fit the empirical cumulative distribution function of the data. The GL algorithm is simple yet effective; it is intrinsically robust to outliers, so it is particularly suitable for diagnostic/classification models in clinical/medical applications where the number of samples is usually small; and it scales the data in a nonlinear fashion, which leads to potential improvement in accuracy. To evaluate the effectiveness of the proposed algorithm, we conducted experiments on 16 binary classification tasks with different variable types, covering a wide range of applications. The resulting performance in terms of area under the receiver operating characteristic curve (AUROC) and percentage of correct classification showed that models learned using data scaled by the GL algorithm outperform those using data scaled by the Min-max and Z-score algorithms, which are the most commonly used data scaling methods. The proposed GL algorithm is simple and effective. It is robust to outliers, so no additional denoising or outlier detection step is needed in data preprocessing. Empirical results also show that models learned from data scaled by the GL algorithm have higher accuracy than those using the commonly used data scaling algorithms.
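
    A minimal sketch of the GL idea: fit a generalized logistic function to the empirical CDF of the training data and use the fitted function as the scaling map into (0, 1). The three-parameter form below is one common variant, assumed for illustration rather than taken from the paper.

```python
# Sketch of generalized-logistic data scaling fitted to the empirical CDF.
import numpy as np
from scipy.optimize import curve_fit

def gen_logistic(x, m, s, v):
    return 1.0 / (1.0 + np.exp(-(x - m) / s)) ** v

def gl_scale(x_train, x_new):
    xs = np.sort(x_train)
    ecdf = np.arange(1, xs.size + 1) / (xs.size + 1.0)
    p0 = (np.median(xs), np.std(xs), 1.0)
    popt, _ = curve_fit(gen_logistic, xs, ecdf, p0=p0, maxfev=10000)
    return gen_logistic(x_new, *popt)     # smooth map into (0, 1)

rng = np.random.default_rng(0)
x = rng.lognormal(size=200)
x[:3] = 50.0                              # outliers barely distort the map
scaled = gl_scale(x, x)
```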

  20. Direct infusion mass spectrometry metabolomics dataset: a benchmark for data processing and quality control

    PubMed Central

    Kirwan, Jennifer A; Weber, Ralf J M; Broadhurst, David I; Viant, Mark R

    2014-01-01

    Direct-infusion mass spectrometry (DIMS) metabolomics is an important approach for characterising molecular responses of organisms to disease, drugs and the environment. Increasingly large-scale metabolomics studies are being conducted, necessitating improvements in both bioanalytical and computational workflows to maintain data quality. This dataset represents a systematic evaluation of the reproducibility of a multi-batch DIMS metabolomics study of cardiac tissue extracts. It comprises twenty biological samples (cow vs. sheep) that were analysed repeatedly, in 8 batches across 7 days, together with a concurrent set of quality control (QC) samples. Data are presented from each step of the workflow and are available in MetaboLights. The strength of the dataset is that intra- and inter-batch variation can be corrected using QC spectra and the quality of this correction assessed independently using the repeatedly measured biological samples. Originally designed to test the efficacy of a batch-correction algorithm, it will enable others to evaluate novel data processing algorithms. Furthermore, this dataset serves as a benchmark for DIMS metabolomics, derived using best-practice workflows and rigorous quality assessment. PMID:25977770

  1. BLESS 2: accurate, memory-efficient and fast error correction method.

    PubMed

    Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming

    2016-08-01

    The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when executed on two computing nodes using MPI, with each node containing twelve cores. BLESS 2 also showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Availability: https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online.

  2. Control algorithms and applications of the wavefront sensorless adaptive optics

    NASA Astrophysics Data System (ADS)

    Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen

    2017-10-01

    Compared with a conventional adaptive optics (AO) system, a wavefront sensorless (WFSless) AO system does not need to measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, wavefront correction methods for WFSless AO are divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on model-free control algorithms typically treats the performance metric as a function of the control parameters and then uses a search algorithm to improve this metric. The model-based control algorithms include modal control algorithms, nonlinear control algorithms and control algorithms based on geometrical optics. After a brief description of these typical control algorithms, hybrid methods combining model-free with model-based control algorithms are summarized. Additionally, the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques and imaging of extended objects.
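
    As a concrete example of the model-free family, the sketch below runs stochastic parallel gradient descent (SPGD) on a control-voltage vector to maximize a scalar image-quality metric; the quadratic toy metric stands in for a real system readout.

```python
# Sketch of SPGD, a model-free wavefront-sensorless control loop.
import numpy as np

rng = np.random.default_rng(0)
u_true = rng.normal(size=12)                 # unknown optimal voltages

def measure_metric(u):
    return -np.sum((u - u_true) ** 2)        # toy metric, peaked at u_true

def spgd(u, gain=0.3, delta=0.05, iters=500):
    for _ in range(iters):
        du = delta * rng.choice([-1.0, 1.0], size=u.size)  # bipolar perturbation
        dJ = measure_metric(u + du) - measure_metric(u - du)
        u = u + gain * dJ * du               # two-sided SPGD update
    return u

u = spgd(np.zeros(12))
```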

  3. Validation of a Quantitative Single-Subject Based Evaluation for Rehabilitation-Induced Improvement Assessment.

    PubMed

    Gandolla, Marta; Molteni, Franco; Ward, Nick S; Guanziroli, Eleonora; Ferrigno, Giancarlo; Pedrocchi, Alessandra

    2015-11-01

    The foreseen outcome of a rehabilitation treatment is a stable improvement in functional outcomes, which can be longitudinally assessed through multiple measures to help clinicians in functional evaluation. In this study, we propose an automatic, comprehensive method of combining multiple measures in order to assess a functional improvement. As a test bed, a functional electrical stimulation based treatment for foot drop correction performed with chronic post-stroke participants is presented. Patients were assessed on five relevant outcome measures before and after the intervention, and at a follow-up time point. A novel algorithm based on each variable's minimum detectable change is proposed and implemented in custom-made software, combining the outcome measures into a single parameter: the capacity score. The difference between capacity scores at different time points is thresholded to obtain an improvement evaluation. Ten clinicians evaluated the patients on the Improvement Clinical Global Impression scale. Eleven patients underwent the treatment, and five were found to achieve a stable functional improvement, as assessed by the proposed algorithm. A statistically significant agreement between intra-clinician and algorithm-clinician evaluations was demonstrated. The proposed method evaluates functional improvement on a single-subject yes/no basis by merging different measures (e.g., kinematic, muscular), and it is validated against clinical evaluation.

  4. WE-G-18C-05: Characterization of Cross-Vendor, Cross-Field Strength MR Image Intensity Variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, E; Prah, D

    2014-06-15

    Purpose: Variations in MR image intensity and image intensity nonuniformity (IINU) can challenge the accuracy of intensity-based image segmentation and registration algorithms commonly applied in radiotherapy. The goal of this work was to characterize MR image intensity variations across scanner vendors and field strengths commonly used in radiotherapy. Methods: ACR-MRI phantom images were acquired at 1.5T and 3.0T on GE (450w and 750, 23.1), Siemens (Espree and Verio, VB17B), and Philips (Ingenia, 4.1.3) scanners using commercial spin-echo sequences with matched parameters (TE/TR: 20/500 ms, rBW: 62.5 kHz, TH/skip: 5/5 mm). Two radiofrequency (RF) coil combinations were used for each scanner: the body coil alone, and combined body and phased-array head coils. Vendor-specific B1- corrections (PURE/Pre-Scan Normalize/CLEAR) were applied in all head coil cases. Images were transferred offline, corrected for IINU using the MNI N3 algorithm, and normalized. Coefficients of variation (CV = σ/μ) and peak image uniformity (PIU = 1 − (Smax − Smin)/(Smax + Smin)) estimates were calculated for one homogeneous phantom slice. Kruskal-Wallis and Wilcoxon matched-pairs tests compared mean MR signal intensities and differences between original and N3 image CV and PIU. Results: Wide variations in both MR image intensity and IINU were observed across scanner vendors, field strengths, and RF coil configurations. Applying the MNI N3 correction for IINU resulted in significant improvements in both CV and PIU (p=0.0115, p=0.0235). However, wide variations in overall image intensity persisted, requiring image normalization to improve consistency across vendors, field strengths, and RF coils. These results indicate that B1- correction routines alone may be insufficient to compensate for IINU and image scaling, warranting additional corrections prior to the use of MR images in radiotherapy. Conclusions: MR image intensities and IINU vary as a function of scanner vendor, field strength, and RF coil configuration. A two-step strategy consisting of MNI N3 correction followed by normalization was required to improve MR image consistency. Funding provided by Advancing a Healthier Wisconsin.
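
    The two uniformity metrics quoted in the abstract are straightforward to compute for a homogeneous region of interest; `roi` below is a synthetic stand-in for one phantom slice.

```python
# CV and PIU for a homogeneous ROI, as defined in the abstract.
import numpy as np

def coefficient_of_variation(roi):
    return roi.std() / roi.mean()             # CV = sigma / mu

def peak_image_uniformity(roi):
    smax, smin = roi.max(), roi.min()
    return 1.0 - (smax - smin) / (smax + smin)

roi = 100.0 + np.random.default_rng(0).normal(0, 2, size=(64, 64))
print(coefficient_of_variation(roi), peak_image_uniformity(roi))
```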

  5. Real time optimization algorithm for wavefront sensorless adaptive optics OCT (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Verstraete, Hans R. G. W.; Heisler, Morgan; Ju, Myeong Jin; Wahl, Daniel J.; Bliek, Laurens; Kalkman, Jeroen; Bonora, Stefano; Sarunic, Marinko V.; Verhaegen, Michel; Jian, Yifan

    2017-02-01

    Optical Coherence Tomography (OCT) has revolutionized modern ophthalmology, providing depth-resolved images of the retinal layers in a system that is suited to a clinical environment. A limitation of the performance and utilization of OCT systems has been the lateral resolution. Through the combination of wavefront sensorless adaptive optics (WSAO) with dual variable optical elements, we present a compact lens-based OCT system that is capable of imaging the photoreceptor mosaic. We utilized a commercially available variable focal length lens to correct for the wide range of defocus commonly found in patient eyes, and a multi-actuator adaptive lens, after linearization of the hysteresis in its piezoelectric actuators, for aberration correction to obtain near diffraction-limited imaging of the retina. A parallel processing computational platform permitted real-time image acquisition and display. The Data-based Online Nonlinear Extremum seeker (DONE) algorithm was used for real-time optimization of the wavefront sensorless adaptive optics OCT, and its performance was compared with a coordinate search algorithm. Cross-sectional images of the retinal layers and en face images of the cone photoreceptor mosaic acquired in vivo from research volunteers before and after WSAO optimization are presented. Applying the DONE algorithm in vivo for wavefront sensorless AO-OCT demonstrates that it succeeds in drastically improving the signal while achieving a computational time of 1 ms per iteration, making it applicable to high-speed real-time applications.

  6. Two Different Approaches to Automated Mark Up of Emotions in Text

    NASA Astrophysics Data System (ADS)

    Francisco, Virginia; Hervás, Raquel; Gervás, Pablo

    This paper presents two different approaches to the automated marking up of texts with emotional labels. In the first approach, a corpus of example texts previously annotated by human evaluators is mined for an initial assignment of emotional features to words. This results in a List of Emotional Words (LEW), which becomes a useful resource for later automated mark up. The mark up algorithm in this first approach closely mirrors the steps taken during feature extraction, employing for the actual assignment of emotional features a combination of the LEW resource and WordNet for knowledge-based expansion of words not occurring in LEW. The algorithm for automated mark up is tested against new text samples to assess its coverage. The second approach marks up texts during their generation. A knowledge base contains the necessary information for marking up the text; this information is related to actions and characters. The algorithm in this case employs the information in the knowledge base and decides the correct emotion for every sentence. The algorithm for automated mark up is tested against four different texts. The results of the two approaches are compared and discussed with respect to three main issues: the relative adequacy of each of the representations used, the correctness and coverage of the proposed algorithms, and additional techniques and solutions that may be employed to improve the results.

  7. Phase 2 development of Great Lakes algorithms for Nimbus-7 coastal zone color scanner

    NASA Technical Reports Server (NTRS)

    Tanis, Fred J.

    1984-01-01

    A series of experiments have been conducted in the Great Lakes designed to evaluate the application of the NIMBUS-7 Coastal Zone Color Scanner (CZCS). Atmospheric and water optical models were used to relate surface and subsurface measurements to satellite measured radiances. Absorption and scattering measurements were reduced to obtain a preliminary optical model for the Great Lakes. Algorithms were developed for geometric correction, correction for Rayleigh and aerosol path radiance, and prediction of chlorophyll-a pigment and suspended mineral concentrations. The atmospheric algorithm developed compared favorably with existing algorithms and was the only algorithm found to adequately predict the radiance variations in the 670 nm band. The atmospheric correction algorithm developed was designed to extract needed algorithm parameters from the CZCS radiance values. The Gordon/NOAA ocean algorithms could not be demonstrated to work for Great Lakes waters. Predicted values of chlorophyll-a concentration compared favorably with expected and measured data for several areas of the Great Lakes.

  8. Evaluation and analysis of Seasat-A scanning multichannel Microwave Radiometer (SMMR) Antenna Pattern Correction (APC) algorithm

    NASA Technical Reports Server (NTRS)

    Kitzis, J. L.; Kitzis, S. N.

    1979-01-01

    The brightness temperature data produced by the SMMR final Antenna Pattern Correction (APC) algorithm are discussed. The evaluation consisted of: (1) a direct comparison of the outputs of the final and interim APC algorithms; and (2) an analysis of a possible relationship between observed cross-track gradients in the interim brightness temperatures and the asymmetry in the antenna temperature data. Results indicate a bias between the brightness temperatures produced by the final and interim APC algorithms.

  9. Updating finite element dynamic models using an element-by-element sensitivity methodology

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Hemez, Francois M.

    1993-01-01

    A sensitivity-based methodology for improving the finite element model of a given structure using test modal data and a few sensors is presented. The proposed method searches for both the location and sources of the mass and stiffness errors and does not interfere with the theory behind the finite element model while correcting these errors. The updating algorithm is derived from the unconstrained minimization of the squared L2 norms of the modal dynamic residuals via an iterative two-step staggered procedure. At each iteration, the measured mode shapes are first expanded assuming that the model is error-free; then the model parameters are corrected assuming that the expanded mode shapes are exact. The numerical algorithm is implemented in an element-by-element fashion and is capable of 'zooming' in on the detected error locations. Several simulation examples which demonstrate the potential of the proposed methodology are discussed.

  10. Requirements for fault-tolerant factoring on an atom-optics quantum computer.

    PubMed

    Devitt, Simon J; Stephens, Ashley M; Munro, William J; Nemoto, Kae

    2013-01-01

    Quantum information processing and its associated technologies have reached a pivotal stage in their development, with many experiments having established the basic building blocks. Moving forward, the challenge is to scale up to larger machines capable of performing computational tasks not possible today. This raises questions that need to be urgently addressed, such as what resources these machines will consume and how large will they be. Here we estimate the resources required to execute Shor's factoring algorithm on an atom-optics quantum computer architecture. We determine the runtime and size of the computer as a function of the problem size and physical error rate. Our results suggest that once the physical error rate is low enough to allow quantum error correction, optimization to reduce resources and increase performance will come mostly from integrating algorithms and circuits within the error correction environment, rather than from improving the physical hardware.

  11. Atmospheric correction of the ocean color observations of the medium resolution imaging spectrometer (MERIS)

    NASA Astrophysics Data System (ADS)

    Antoine, David; Morel, Andre

    1997-02-01

    An algorithm is proposed for the atmospheric correction of the ocean color observations of the MERIS instrument. The principle of the algorithm, which accounts for all multiple scattering effects, is presented. The algorithm is then tested, and its accuracy assessed in terms of errors in the retrieved marine reflectances.

  12. Improved ocean-color remote sensing in the Arctic using the POLYMER algorithm

    NASA Astrophysics Data System (ADS)

    Frouin, Robert; Deschamps, Pierre-Yves; Ramon, Didier; Steinmetz, François

    2012-10-01

    Atmospheric correction of ocean-color imagery in the Arctic brings specific challenges that the standard atmospheric correction algorithm does not address, namely low solar elevation, high cloud frequency, multi-layered polar clouds, presence of ice in the field of view, and adjacency effects from highly reflecting surfaces covered by snow and ice and from clouds. These challenges may be addressed using a flexible atmospheric correction algorithm, referred to as POLYMER (Steinmetz et al., 2011). This algorithm does not use a specific aerosol model, but fits the atmospheric reflectance by a polynomial with a non-spectral term that accounts for any non-spectral scattering (clouds, coarse aerosol mode) or reflection (glitter, whitecaps, small ice surfaces within the instrument field of view), a spectral term varying as wavelength to the power −1 (fine aerosol mode), and a spectral term varying as wavelength to the power −4 (molecular scattering, adjacency effects from clouds and white surfaces). Tests are performed on selected MERIS imagery acquired over Arctic seas. The derived ocean properties, i.e., marine reflectance and chlorophyll concentration, are compared with those obtained with the standard MEGS algorithm. The POLYMER estimates are more realistic in regions affected by the ice environment, e.g., chlorophyll concentration is higher near the ice edge, and spatial coverage is substantially increased. Good retrievals are obtained in the presence of thin clouds, with ocean-color features exhibiting spatial continuity from clear to cloudy regions. The POLYMER estimates of marine reflectance agree better with in situ measurements than the MEGS estimates. Biases are 0.001 or less in magnitude, except at 412 and 443 nm, where they reach 0.005 and 0.002, respectively, and the root-mean-squared difference decreases from 0.006 at 412 nm to less than 0.001 at 620 and 665 nm. A first application to MODIS imagery is presented, showing that the POLYMER algorithm is robust when pixels are contaminated by sea ice.
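
    The spectral model described above can be sketched as a small linear least-squares fit: a non-spectral term, a λ⁻¹ term and a λ⁻⁴ term. The reflectance values below are illustrative, and the full algorithm's coupled retrieval of the marine signal is omitted.

```python
# Sketch of a POLYMER-style polynomial fit of the atmospheric reflectance.
import numpy as np

wl = np.array([412., 443., 490., 510., 560., 620., 665., 865.])   # nm
rho = np.array([0.080, 0.072, 0.061, 0.057, 0.049, 0.041, 0.037, 0.025])

A = np.column_stack([np.ones_like(wl),        # clouds, glitter, coarse mode
                     (wl / 550.0) ** -1.0,    # fine aerosol mode
                     (wl / 550.0) ** -4.0])   # molecular / adjacency term
c, *_ = np.linalg.lstsq(A, rho, rcond=None)
rho_atm = A @ c                               # modeled atmospheric path
residual = rho - rho_atm                      # toward the marine signal
```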

  13. Can machine-learning improve cardiovascular risk prediction using routine clinical data?

    PubMed Central

    Kai, Joe; Garibaldi, Jonathan M.; Qureshi, Nadeem

    2017-01-01

    Background Current approaches to predict cardiovascular risk fail to identify many people who would benefit from preventive treatment, while others receive unnecessary intervention. Machine-learning offers opportunity to improve accuracy by exploiting complex interactions between risk factors. We assessed whether machine-learning can improve cardiovascular risk prediction. Methods Prospective cohort study using routine clinical data of 378,256 patients from UK family practices, free from cardiovascular disease at outset. Four machine-learning algorithms (random forest, logistic regression, gradient boosting machines, neural networks) were compared to an established algorithm (American College of Cardiology guidelines) to predict first cardiovascular event over 10-years. Predictive accuracy was assessed by area under the ‘receiver operating curve’ (AUC); and sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) to predict 7.5% cardiovascular risk (threshold for initiating statins). Findings 24,970 incident cardiovascular events (6.6%) occurred. Compared to the established risk prediction algorithm (AUC 0.728, 95% CI 0.723–0.735), machine-learning algorithms improved prediction: random forest +1.7% (AUC 0.745, 95% CI 0.739–0.750), logistic regression +3.2% (AUC 0.760, 95% CI 0.755–0.766), gradient boosting +3.3% (AUC 0.761, 95% CI 0.755–0.766), neural networks +3.6% (AUC 0.764, 95% CI 0.759–0.769). The highest achieving (neural networks) algorithm predicted 4,998/7,404 cases (sensitivity 67.5%, PPV 18.4%) and 53,458/75,585 non-cases (specificity 70.7%, NPV 95.7%), correctly predicting 355 (+7.6%) more patients who developed cardiovascular disease compared to the established algorithm. Conclusions Machine-learning significantly improves accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment, while avoiding unnecessary treatment of others. PMID:28376093

  14. Can machine-learning improve cardiovascular risk prediction using routine clinical data?

    PubMed

    Weng, Stephen F; Reps, Jenna; Kai, Joe; Garibaldi, Jonathan M; Qureshi, Nadeem

    2017-01-01

    Current approaches to predict cardiovascular risk fail to identify many people who would benefit from preventive treatment, while others receive unnecessary intervention. Machine-learning offers opportunity to improve accuracy by exploiting complex interactions between risk factors. We assessed whether machine-learning can improve cardiovascular risk prediction. Prospective cohort study using routine clinical data of 378,256 patients from UK family practices, free from cardiovascular disease at outset. Four machine-learning algorithms (random forest, logistic regression, gradient boosting machines, neural networks) were compared to an established algorithm (American College of Cardiology guidelines) to predict first cardiovascular event over 10-years. Predictive accuracy was assessed by area under the 'receiver operating curve' (AUC); and sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) to predict 7.5% cardiovascular risk (threshold for initiating statins). 24,970 incident cardiovascular events (6.6%) occurred. Compared to the established risk prediction algorithm (AUC 0.728, 95% CI 0.723-0.735), machine-learning algorithms improved prediction: random forest +1.7% (AUC 0.745, 95% CI 0.739-0.750), logistic regression +3.2% (AUC 0.760, 95% CI 0.755-0.766), gradient boosting +3.3% (AUC 0.761, 95% CI 0.755-0.766), neural networks +3.6% (AUC 0.764, 95% CI 0.759-0.769). The highest achieving (neural networks) algorithm predicted 4,998/7,404 cases (sensitivity 67.5%, PPV 18.4%) and 53,458/75,585 non-cases (specificity 70.7%, NPV 95.7%), correctly predicting 355 (+7.6%) more patients who developed cardiovascular disease compared to the established algorithm. Machine-learning significantly improves accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment, while avoiding unnecessary treatment of others.
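
    A sketch of the comparison protocol on synthetic data (the routine clinical records are not public): the same four model families are scored by AUC on a common held-out split. Hyperparameters are scikit-learn defaults, not the study's tuned settings.

```python
# Sketch of comparing classifier families by AUC on an imbalanced task.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=5000, n_features=30, weights=[0.93],
                           random_state=0)    # ~7% positives, like the cohort
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "neural network": MLPClassifier(max_iter=500, random_state=0),
}
for name, m in models.items():
    m.fit(Xtr, ytr)
    auc = roc_auc_score(yte, m.predict_proba(Xte)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```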

  15. Segmentation-free empirical beam hardening correction for CT.

    PubMed

    Schüller, Sören; Sawall, Stefan; Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich; Kachelrieß, Marc

    2015-02-01

    The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. Using only the information of a single energy scan, there are two types of corrections. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the used spectrum, the detector response, and the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physics-based technique both rely on a segmentation of the tissues present inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish the image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed herein works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the proposed algorithm to be segmentation-free (sf). This deformation leads to a nonlinear accentuation of higher CT values. The original volume and the gray-value-deformed volume are monochromatically forward projected. The two projection sets are then monomially combined and reconstructed to generate sets of basis volumes which are used for correction. This is done by maximizing the image flatness while adding a weighted sum of these basis images. sfEBHC is evaluated on polychromatic simulations, phantom measurements, and patient data. The raw data sets were acquired by a dual-source spiral CT scanner, a digital volume tomograph, and a dual-source micro CT. Different phantom and patient data were used to illustrate the performance and wide range of usability of sfEBHC across different scanning scenarios. The artifact correction capabilities are compared to EBHC. All investigated cases show equal or improved image quality compared to the standard EBHC approach. The artifact correction is capable of correcting beam hardening artifacts for different scan parameters and scan scenarios. sfEBHC generates beam-hardening-reduced images and is furthermore capable of dealing with images which are affected by high noise and strong artifacts. The algorithm can be used to recover structures which are hardly visible inside the beam-hardening-affected regions.
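
    The flatness-maximization step lends itself to a compact illustration. The sketch below is a strongly simplified stand-in, not the published sfEBHC pipeline: it assumes the basis images have already been generated (from the monomially combined forward projections) and fits their weights by minimizing total variation, used here as one possible "flatness" surrogate.

```python
# Simplified sketch of weight fitting for correction basis images; the
# objective and the random stand-in arrays are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def total_variation(img):
    gx, gy = np.gradient(img)
    return np.sum(np.abs(gx) + np.abs(gy))

def corrected(weights, image, basis):
    return image + sum(w * b for w, b in zip(weights, basis))

def fit_weights(image, basis):
    # search weights that make the corrected image as "flat" as possible
    res = minimize(lambda w: total_variation(corrected(w, image, basis)),
                   x0=np.zeros(len(basis)), method="Nelder-Mead")
    return res.x

image = np.random.rand(64, 64)                 # stand-in reconstruction
basis = [np.random.rand(64, 64) for _ in range(2)]  # stand-in basis images
print("fitted weights:", fit_weights(image, basis))
```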

  16. Segmentation-free empirical beam hardening correction for CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schüller, Sören; Sawall, Stefan; Stannigel, Kai

    2015-02-15

    Purpose: The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. Using only the information of a single energy scan, there are two types of corrections. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the used spectrum, the detector response, and the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physics-based technique both rely on a segmentation of the tissues present inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish the image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed herein works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. Methods: To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the proposed algorithm to be segmentation-free (sf). This deformation leads to a nonlinear accentuation of higher CT values. The original volume and the gray-value-deformed volume are monochromatically forward projected. The two projection sets are then monomially combined and reconstructed to generate sets of basis volumes which are used for correction. This is done by maximizing the image flatness while adding a weighted sum of these basis images. sfEBHC is evaluated on polychromatic simulations, phantom measurements, and patient data. The raw data sets were acquired by a dual-source spiral CT scanner, a digital volume tomograph, and a dual-source micro CT. Different phantom and patient data were used to illustrate the performance and wide range of usability of sfEBHC across different scanning scenarios. The artifact correction capabilities are compared to EBHC. Results: All investigated cases show equal or improved image quality compared to the standard EBHC approach. The artifact correction is capable of correcting beam hardening artifacts for different scan parameters and scan scenarios. Conclusions: sfEBHC generates beam-hardening-reduced images and is furthermore capable of dealing with images which are affected by high noise and strong artifacts. The algorithm can be used to recover structures which are hardly visible inside the beam-hardening-affected regions.

  17. The preprocessed connectomes project repository of manually corrected skull-stripped T1-weighted anatomical MRI data.

    PubMed

    Puccio, Benjamin; Pooley, James P; Pellman, John S; Taverna, Elise C; Craddock, R Cameron

    2016-10-25

    Skull-stripping is the procedure of removing non-brain tissue from anatomical MRI data. This procedure can be useful for calculating brain volume and for improving the quality of other image processing steps. Developing new skull-stripping algorithms and evaluating their performance requires gold standard data from a variety of different scanners and acquisition methods. We complement existing repositories with manually corrected brain masks for 125 T1-weighted anatomical scans from the Nathan Kline Institute Enhanced Rockland Sample Neurofeedback Study. Skull-stripped images were obtained using a semi-automated procedure that involved skull-stripping the data using the brain extraction based on nonlocal segmentation technique (BEaST) software, and manually correcting the worst results. Corrected brain masks were added into the BEaST library and the procedure was repeated until acceptable brain masks were available for all images. In total, 85 of the skull-stripped images were hand-edited and 40 were deemed to not need editing. The results are brain masks for the 125 images along with a BEaST library for automatically skull-stripping other data. Skull-stripped anatomical images from the Neurofeedback sample are available for download from the Preprocessed Connectomes Project. The resulting brain masks can be used by researchers to improve preprocessing of the Neurofeedback data, as training and testing data for developing new skull-stripping algorithms, and for evaluating the impact on other aspects of MRI preprocessing. We have illustrated the utility of these data as a reference for comparing various automatic methods and evaluated the performance of the newly created library on independent data.

  18. Relative Localization in Wireless Sensor Networks for Measurement of Electric Fields under HVDC Transmission Lines

    PubMed Central

    Cui, Yong; Wang, Qiusheng; Yuan, Haiwen; Song, Xiao; Hu, Xuemin; Zhao, Luxing

    2015-01-01

    In wireless sensor networks (WSNs) for electric field measurement systems under High-Voltage Direct Current (HVDC) transmission lines, it is necessary to obtain the electric field distribution with multiple sensors. The location information of each sensor is essential to the correct analysis of measurement results. Compared with the existing approach, which gathers the location information by manually labelling sensors during deployment, automatic localization can reduce the workload and improve the measurement efficiency. A novel and practical range-free localization algorithm for the localization of one-dimensional linear topology wireless networks in the electric field measurement system is presented. The algorithm utilizes unknown nodes' neighbor lists based on the Received Signal Strength Indicator (RSSI) values to determine the relative locations of nodes. The algorithm is able to handle the exceptional situation of the output permutation, which can effectively improve the accuracy of localization. The performance of this algorithm under real circumstances has been evaluated through several experiments with different numbers of nodes and different node deployments in the China State Grid HVDC test base. Results show that the proposed algorithm achieves an accuracy of over 96% under different conditions. PMID:25658390

  19. Relative localization in wireless sensor networks for measurement of electric fields under HVDC transmission lines.

    PubMed

    Cui, Yong; Wang, Qiusheng; Yuan, Haiwen; Song, Xiao; Hu, Xuemin; Zhao, Luxing

    2015-02-04

    In wireless sensor networks (WSNs) for electric field measurement systems under High-Voltage Direct Current (HVDC) transmission lines, it is necessary to obtain the electric field distribution with multiple sensors. The location information of each sensor is essential to the correct analysis of measurement results. Compared with the existing approach, which gathers the location information by manually labelling sensors during deployment, automatic localization can reduce the workload and improve the measurement efficiency. A novel and practical range-free localization algorithm for the localization of one-dimensional linear topology wireless networks in the electric field measurement system is presented. The algorithm utilizes unknown nodes' neighbor lists based on the Received Signal Strength Indicator (RSSI) values to determine the relative locations of nodes. The algorithm is able to handle the exceptional situation of the output permutation, which can effectively improve the accuracy of localization. The performance of this algorithm under real circumstances has been evaluated through several experiments with different numbers of nodes and different node deployments in the China State Grid HVDC test base. Results show that the proposed algorithm achieves an accuracy of over 96% under different conditions.
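
    A toy heuristic in the same spirit (not the paper's algorithm) for a linear topology: treat the node with the weakest summed RSSI to all others as one end of the line, then order the remaining nodes by their signal strength to that end.

```python
# Toy relative-ordering heuristic for a 1D node chain from an RSSI matrix.
# The log-distance signal model and all values below are illustrative.
import numpy as np

def relative_order(rssi):
    """rssi: symmetric (n, n) matrix, larger values = closer nodes."""
    end = int(np.argmin(rssi.sum(axis=1)))   # likely an end of the line
    others = [i for i in range(rssi.shape[0]) if i != end]
    # a stronger signal to the end node means a position closer to it
    others.sort(key=lambda i: rssi[end, i], reverse=True)
    return [end] + others

pos = np.arange(5.0)                          # 5 equally spaced nodes
dist = np.abs(pos[:, None] - pos[None, :])
rssi = -30.0 - 20.0 * np.log10(dist + 1.0)    # toy log-distance model
print(relative_order(rssi))                   # expected: [0, 1, 2, 3, 4]
```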

  20. Optimization of the linear-scaling local natural orbital CCSD(T) method: Redundancy-free triples correction using Laplace transform.

    PubMed

    Nagy, Péter R; Kállay, Mihály

    2017-06-07

    An improved algorithm is presented for the evaluation of the (T) correction as a part of our local natural orbital (LNO) coupled-cluster singles and doubles with perturbative triples [LNO-CCSD(T)] scheme [Z. Rolik et al., J. Chem. Phys. 139, 094105 (2013)]. The new algorithm is an order of magnitude faster than our previous one and removes the bottleneck related to the calculation of the (T) contribution. First, a numerical Laplace transformed expression for the (T) fragment energy is introduced, which requires on average 3 to 4 times fewer floating-point operations with negligible compromise in accuracy, eliminating the redundancy among the evaluated triples amplitudes. Second, an additional speedup factor of 3 is achieved by the optimization of our canonical (T) algorithm, which is also executed in the local case. These developments can also be integrated into canonical as well as alternative fragmentation-based local CCSD(T) approaches with minor modifications. As demonstrated by our benchmark calculations, the evaluation of the new Laplace transformed (T) correction can always be performed if the preceding CCSD iterations are feasible, and the new scheme enables the computation of LNO-CCSD(T) correlation energies with at least triple-zeta quality basis sets for realistic three-dimensional molecules with more than 600 atoms and 12 000 basis functions in a matter of days on a single processor.
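
    The quadrature trick underlying such schemes is the standard Laplace-transform factorization of the orbital-energy denominator (a textbook identity, not copied from the paper):

```latex
% Laplace-transform quadrature of a triples orbital-energy denominator,
% with occupied indices i,j,k, virtual indices a,b,c, and n_q grid points.
\[
  \frac{1}{\varepsilon_a+\varepsilon_b+\varepsilon_c
           -\varepsilon_i-\varepsilon_j-\varepsilon_k}
  = \int_0^\infty
      e^{-(\varepsilon_a+\varepsilon_b+\varepsilon_c
           -\varepsilon_i-\varepsilon_j-\varepsilon_k)\,t}\,dt
  \approx \sum_{q=1}^{n_q} w_q\,
      e^{-\varepsilon_a t_q} e^{-\varepsilon_b t_q} e^{-\varepsilon_c t_q}
      e^{+\varepsilon_i t_q} e^{+\varepsilon_j t_q} e^{+\varepsilon_k t_q}.
\]
```

    Because each factor depends on a single orbital index, intermediates can be precomputed once per quadrature point and reused across amplitudes, which is where the reduction in floating-point operations comes from.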

  1. Optimization of the linear-scaling local natural orbital CCSD(T) method: Redundancy-free triples correction using Laplace transform

    PubMed Central

    2017-01-01

    An improved algorithm is presented for the evaluation of the (T) correction as a part of our local natural orbital (LNO) coupled-cluster singles and doubles with perturbative triples [LNO-CCSD(T)] scheme [Z. Rolik et al., J. Chem. Phys. 139, 094105 (2013)]. The new algorithm is an order of magnitude faster than our previous one and removes the bottleneck related to the calculation of the (T) contribution. First, a numerical Laplace transformed expression for the (T) fragment energy is introduced, which requires on average 3 to 4 times fewer floating-point operations with negligible compromise in accuracy, eliminating the redundancy among the evaluated triples amplitudes. Second, an additional speedup factor of 3 is achieved by the optimization of our canonical (T) algorithm, which is also executed in the local case. These developments can also be integrated into canonical as well as alternative fragmentation-based local CCSD(T) approaches with minor modifications. As demonstrated by our benchmark calculations, the evaluation of the new Laplace transformed (T) correction can always be performed if the preceding CCSD iterations are feasible, and the new scheme enables the computation of LNO-CCSD(T) correlation energies with at least triple-zeta quality basis sets for realistic three-dimensional molecules with more than 600 atoms and 12 000 basis functions in a matter of days on a single processor. PMID:28576082

  2. A New Finite Difference Q-compensated RTM Algorithm in Tilted Transverse Isotropic (TTI) Media

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Hu, W.; Ning, J.

    2017-12-01

    Attenuating anisotropic geological bodies are difficult to image with conventional migration methods. In such scenarios, recorded seismic data suffer greatly from both amplitude decay and phase distortion, resulting in degraded resolution, poor illumination and incorrect migration depth in imaging results. To efficiently obtain high-quality images, we propose a novel TTI QRTM algorithm based on the Generalized Standard Linear Solid model, combined with a unique multi-stage optimization technique, to simultaneously correct the decayed amplitude and the distorted phase velocity. Numerical tests demonstrate that our TTI QRTM algorithm effectively corrects migration depth, significantly improves illumination, and enhances resolution within and below the low-Q regions. The result of our new method is very close to the reference RTM image, while QRTM without TTI cannot produce a correct image. Compared to the conventional QRTM method based on a pseudo-spectral operator for fractional Laplacian evaluation, our method is more computationally efficient for large-scale applications and more suitable for GPU acceleration. With the current multi-stage dispersion optimization scheme, this TTI QRTM method performs best in the frequency range 10-70 Hz, and could be used in a wider frequency range. Furthermore, as this method can also handle frequency-dependent Q, it has potential to be applied in imaging deep structures where low Q exists, such as subduction zones, volcanic zones or fault zones with passive source observations.

  3. Improvement in the Characterization of MODIS Subframe Difference

    NASA Technical Reports Server (NTRS)

    Li, Yonghong; Angal, Amit; Chen, Na; Geng, Xu; Link, Daniel; Wang, Zhipeng; Wu, Aisheng; Xiong, Xiaoxiong

    2016-01-01

    MODIS is a key instrument of NASA's Earth Observing System. It has successfully operated for 16+ years on the Terra satellite and 14+ years on the Aqua satellite. MODIS has 36 spectral bands at three different nadir spatial resolutions: 250m (bands 1-2), 500m (bands 3-7), and 1km (bands 8-36). MODIS subframe measurement is designed for bands 1-7 to match their spatial resolution in the scan direction to that of the track direction. Within each 1 km frame, the MODIS 250 m resolution bands sample four subframes and the 500 m resolution bands sample two subframes. The detector gains are calibrated at a subframe level. Due to calibration differences between subframes, noticeable subframe striping is observed in the Level 1B (L1B) products, which exhibits a predominant radiance-level dependence. This paper presents results of subframe differences from various onboard and earth-view data sources (e.g. solar diffuser, electronic calibration, spectro-radiometric calibration assembly, Earth view, etc.). A subframe bias correction algorithm is proposed to minimize the subframe striping in MODIS L1B images. The algorithm has been tested using sample L1B images, and the vertical striping at lower radiance values is mitigated after applying the corrections. The subframe bias correction approach will be considered for implementation in future versions of the calibration algorithm.

  4. Circuit Design Optimization Using Genetic Algorithm with Parameterized Uniform Crossover

    NASA Astrophysics Data System (ADS)

    Bao, Zhiguo; Watanabe, Takahiro

    Evolvable hardware (EHW) is a new research field concerning the use of Evolutionary Algorithms (EAs) to construct electronic systems. EHW refers in a narrow sense to the use of evolutionary mechanisms as the algorithmic drivers for system design, and in a general sense to the capability of a hardware system to develop and improve itself. The Genetic Algorithm (GA) is a typical EA. We propose optimal circuit design using GA with parameterized uniform crossover (GApuc) and a fitness function composed of circuit complexity, power, and signal delay. Parameterized uniform crossover is much more likely to distribute its disruptive trials in an unbiased manner over larger portions of the space; it therefore has more exploratory power than one- and two-point crossover, giving more chances of finding better solutions. Its effectiveness is shown by experiments. From the results, we can see that the best elite fitness, the average fitness of the correct circuits, and the number of correct circuits of GApuc are better than those of GA with one-point or two-point crossover. The best optimal circuit generated by GApuc is 10.18% and 6.08% better in evaluation value than the best generated by GA with one-point and two-point crossover, respectively.
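
    For reference, parameterized uniform crossover is a one-parameter generalization of uniform crossover; a minimal sketch follows, with p0 as the per-gene swap probability (p0 = 0.5 recovers classical uniform crossover).

```python
# Minimal parameterized uniform crossover: each gene is swapped between
# the two offspring independently with probability p0.
import random

def parameterized_uniform_crossover(parent1, parent2, p0=0.5):
    child1, child2 = list(parent1), list(parent2)
    for i in range(len(child1)):
        if random.random() < p0:            # swap this gene
            child1[i], child2[i] = child2[i], child1[i]
    return child1, child2

a = [0, 0, 0, 0, 0, 0, 0, 0]
b = [1, 1, 1, 1, 1, 1, 1, 1]
print(parameterized_uniform_crossover(a, b, p0=0.3))
```

    Lower p0 values are less disruptive (offspring stay closer to their parents), while p0 near 0.5 maximizes exploration, which is the trade-off the abstract alludes to.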

  5. Curvature correction of retinal OCTs using graph-based geometry detection

    NASA Astrophysics Data System (ADS)

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan

    2013-05-01

    In this paper, we present a new algorithm as an enhancement and preprocessing step for acquired optical coherence tomography (OCT) images of the retina. The proposed method is composed of two steps, the first of which is a denoising algorithm with wavelet diffusion based on a circular symmetric Laplacian model; the second is graph-based geometry detection and curvature correction according to the hyper-reflective complex layer in the retina. The proposed denoising algorithm showed an improvement of contrast-to-noise ratio from 0.89 to 1.49 and an increase of signal-to-noise ratio (OCT image SNR) from 18.27 to 30.43 dB. By applying the proposed method for estimation of the interpolated curve in a fully automatic manner, the mean ± SD unsigned border positioning error was calculated for normal and abnormal cases. Error values of 2.19 ± 1.25 and 8.53 ± 3.76 µm were obtained for 200 randomly selected slices without pathological curvature and 50 randomly selected slices with pathological curvature, respectively. An important aspect of this algorithm is its ability to detect curvature in strongly pathological images, which surpasses previously introduced methods; the method is also fast, compared to the relatively low speed of similar methods.

  6. Quality Scalability Aware Watermarking for Visual Content.

    PubMed

    Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.

  7. Verifying the interactive convergence clock synchronization algorithm using the Boyer-Moore theorem prover

    NASA Technical Reports Server (NTRS)

    Young, William D.

    1992-01-01

    The application of formal methods to the analysis of computing systems promises to provide higher and higher levels of assurance as the sophistication of our tools and techniques increases. Improvements in tools and techniques come about as we pit the current state of the art against new and challenging problems. A promising area for the application of formal methods is in real-time and distributed computing. Some of the algorithms in this area are both subtle and important. In response to this challenge and as part of an ongoing attempt to verify an implementation of the Interactive Convergence Clock Synchronization Algorithm (ICCSA), we decided to undertake a proof of the correctness of the algorithm using the Boyer-Moore theorem prover. This paper describes our approach to proving the ICCSA using the Boyer-Moore prover.

  8. Current algorithmic solutions for peptide-based proteomics data generation and identification.

    PubMed

    Hoopmann, Michael R; Moritz, Robert L

    2013-02-01

    Peptide-based proteomic data sets are ever increasing in size and complexity. These data sets provide computational challenges when attempting to quickly analyze spectra and obtain correct protein identifications. Database search and de novo algorithms must consider high-resolution MS/MS spectra and alternative fragmentation methods. Protein inference is a tricky problem when analyzing large data sets of degenerate peptide identifications. Combining multiple algorithms for improved peptide identification puts significant strain on computational systems when investigating large data sets. This review highlights some of the recent developments in peptide and protein identification algorithms for analyzing shotgun mass spectrometry data when encountering the aforementioned hurdles. Also explored are the roles that analytical pipelines, public spectral libraries, and cloud computing play in the evolution of peptide-based proteomics. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Self-calibrated multiple-echo acquisition with radial trajectories using the conjugate gradient method (SMART-CG).

    PubMed

    Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F

    2011-04-01

    To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self-correction was developed. The information also allows simultaneous support for parallel imaging for multiple-coil acquisitions. Without a separate field map acquisition, a phase estimate from each echo in a multiple-echo train was generated. When using a multiple-channel coil, magnitude and phase estimates from each echo provide in vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to simultaneously remove phase inconsistencies between echoes and, in the case of multiple-coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single-channel, multiple-channel, and undersampled data. Substantial image quality improvements were demonstrated. Signal dropouts were completely removed and undersampling artifacts were well suppressed. The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve image quality of multiecho radial imaging, an important technique for fast three-dimensional MRI data acquisition. Copyright © 2011 Wiley-Liss, Inc.

  10. Self-calibrated Multiple-echo Acquisition with Radial Trajectories using the Conjugate Gradient Method (SMART-CG)

    PubMed Central

    Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F.

    2011-01-01

    Purpose: To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self-correction was developed. The information also allows simultaneous support for parallel imaging for multiple-coil acquisitions. Materials and Methods: Without a separate field map acquisition, a phase estimate from each echo in a multiple-echo train was generated. When using a multiple-channel coil, magnitude and phase estimates from each echo provide in-vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to simultaneously remove phase inconsistencies between echoes and, in the case of multiple-coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single-channel, multiple-channel, and undersampled data. Results: Substantial image quality improvements were demonstrated. Signal dropouts were completely removed and undersampling artifacts were well suppressed. Conclusion: The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve image quality of multiecho radial imaging, an important technique for fast 3D MRI data acquisition. PMID:21448967

  11. Uncertainty analysis of wavelet-based feature extraction for isotope identification on NaI gamma-ray spectra

    DOE PAGES

    Stinnett, Jacob; Sullivan, Clair J.; Xiong, Hao

    2017-03-02

    Low-resolution isotope identifiers are widely deployed for nuclear security purposes, but these detectors currently demonstrate problems in making correct identifications in many typical usage scenarios. While there are many hardware alternatives and improvements that can be made, performance on existing low-resolution isotope identifiers should be improvable by developing new identification algorithms. We have developed a wavelet-based peak extraction algorithm and an implementation of a Bayesian classifier for automated peak-based identification. The peak extraction algorithm has been extended to compute uncertainties in the peak area calculations. To build empirical joint probability distributions of the peak areas and uncertainties, a large set of spectra were simulated in MCNP6 and processed with the wavelet-based feature extraction algorithm. Kernel density estimation was then used to create a new component of the likelihood function in the Bayesian classifier. Furthermore, identification performance is demonstrated on a variety of real low-resolution spectra, including Category I quantities of special nuclear material.

  12. Hybrid EEG--Eye Tracker: Automatic Identification and Removal of Eye Movement and Blink Artifacts from Electroencephalographic Signal.

    PubMed

    Mannan, Malik M Naeem; Kim, Shinjung; Jeong, Myung Yung; Kamran, M Ahmad

    2016-02-19

    Contamination by eye movement and blink artifacts in electroencephalogram (EEG) recordings makes the analysis of EEG data more difficult and could result in misleading findings. Efficient removal of these artifacts from EEG data is an essential step in improving classification accuracy to develop the brain-computer interface (BCI). In this paper, we propose an automatic framework based on independent component analysis (ICA) and system identification to identify and remove ocular artifacts from EEG data using a hybrid EEG and eye tracker system. The performance of the proposed algorithm is illustrated using experimental and standard EEG datasets. The proposed algorithm not only removes the ocular artifacts from the artifactual zone but also preserves the neuronal-activity-related EEG signals in the non-artifactual zone. The comparison with two state-of-the-art techniques, namely ADJUST-based ICA and REGICA, reveals the significantly improved performance of the proposed algorithm for removing eye movement and blink artifacts from EEG data. Additionally, results demonstrate that the proposed algorithm can achieve lower relative error and higher mutual information values between corrected EEG and artifact-free EEG data.
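
    A strongly simplified sketch of the ICA step (not the authors' full framework, which also uses system identification): decompose the channels, zero out components that correlate with an ocular reference signal, and reconstruct. The correlation threshold and the synthetic data are illustrative assumptions.

```python
# Toy ICA-based ocular artifact removal using scikit-learn's FastICA.
import numpy as np
from sklearn.decomposition import FastICA

def remove_ocular_components(eeg, eog_ref, corr_thresh=0.7):
    """eeg: (n_samples, n_channels); eog_ref: (n_samples,) eye reference."""
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)
    for k in range(sources.shape[1]):
        r = np.corrcoef(sources[:, k], eog_ref)[0, 1]
        if abs(r) > corr_thresh:         # flag as ocular and zero it out
            sources[:, k] = 0.0
    return ica.inverse_transform(sources)

# stand-in data: 4-channel signal with a shared "blink" artifact
t = np.linspace(0, 10, 2000)
blink = (np.sin(2 * np.pi * 0.3 * t) > 0.95).astype(float)
eeg = np.random.randn(2000, 4) * 0.1 + np.outer(blink, [1.0, 0.8, 0.5, 0.2])
clean = remove_ocular_components(eeg, blink)
```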

  13. A knowledge-based approach to improving optimization techniques in system planning

    NASA Technical Reports Server (NTRS)

    Momoh, J. A.; Zhang, Z. Z.

    1990-01-01

    A knowledge-based (KB) approach to improving mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operators' and planners' experience, and is used with generalized optimization packages. The KB optimization software package is capable of improving the overall planning process, which includes correction of given violations. The method was demonstrated on a large-scale power system discussed in the paper.

  14. CT to Cone-beam CT Deformable Registration With Simultaneous Intensity Correction

    PubMed Central

    Zhen, Xin; Gu, Xuejun; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B.

    2012-01-01

    Computed tomography (CT) to cone-beam computed tomography (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called Deformation with Intensity Simultaneously Corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration step of the demons registration. Specifically, the intensity correction of a voxel in CBCT is achieved by matching the first and the second moments of the voxel intensities inside a patch around the voxel with those on the CT image. It is expected that such a strategy can remove artifacts in the CBCT image, as well as ensure the intensity consistency between the two modalities. DISC is implemented on computer graphics processing units (GPUs) in the compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and data from six clinical head-and-neck cancer patients. It is found that DISC is robust against the CBCT artifacts and intensity inconsistency and significantly improves the registration accuracy when compared with the original demons. PMID:23032638
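
    The patch moment-matching step can be sketched compactly. The following is an illustrative stand-in, not the GPU implementation: local means and standard deviations are computed with box filters, and the CBCT intensities are rescaled so that both moments locally match the CT.

```python
# Sketch of patch-based first/second moment matching between two images;
# patch size and the random stand-in arrays are illustrative.
import numpy as np
from scipy.ndimage import uniform_filter

def moment_match(cbct, ct, patch=9, eps=1e-6):
    """Locally rescale CBCT so patch mean/std match the CT image."""
    m_cb = uniform_filter(cbct, patch)                 # local means
    m_ct = uniform_filter(ct, patch)
    s_cb = np.sqrt(np.maximum(uniform_filter(cbct**2, patch) - m_cb**2, 0))
    s_ct = np.sqrt(np.maximum(uniform_filter(ct**2, patch) - m_ct**2, 0))
    return (cbct - m_cb) * (s_ct / (s_cb + eps)) + m_ct

cbct = np.random.rand(64, 64)
ct = np.random.rand(64, 64)
corrected_cbct = moment_match(cbct, ct)
```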

  15. Quantum Error Correction Protects Quantum Search Algorithms Against Decoherence

    PubMed Central

    Botsinis, Panagiotis; Babar, Zunaira; Alanis, Dimitrios; Chandra, Daryus; Nguyen, Hung; Ng, Soon Xin; Hanzo, Lajos

    2016-01-01

    When quantum computing becomes a widespread commercial reality, Quantum Search Algorithms (QSA) and especially Grover’s QSA will inevitably be one of their main applications, constituting their cornerstone. Most of the literature assumes that the quantum circuits are free from decoherence. Practically, decoherence will remain unavoidable, as is the Gaussian noise of classical circuits imposed by the Brownian motion of electrons, hence it may have to be mitigated. In this contribution, we investigate the effect of quantum noise on the performance of QSAs, in terms of their success probability as a function of the database size to be searched, when decoherence is modelled by depolarizing channels’ deleterious effects imposed on the quantum gates. Moreover, we employ quantum error correction codes for limiting the effects of quantum noise and for correcting quantum flips. More specifically, we demonstrate that, when we search for a single solution in a database having 4096 entries using Grover’s QSA at an aggressive depolarizing probability of 10⁻³, the success probability of the search is 0.22 when no quantum coding is used, which is improved to 0.96 when Steane’s quantum error correction code is employed. Finally, apart from Steane’s code, the employment of Quantum Bose-Chaudhuri-Hocquenghem (QBCH) codes is also considered. PMID:27924865

  16. MR-assisted PET Motion Correction for Neurological Studies in an Integrated MR-PET Scanner

    PubMed Central

    Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B.; Michel, Christian J.; El Fakhri, Georges; Schmand, Matthias; Sorensen, A. Gregory

    2011-01-01

    Head motion is difficult to avoid in long PET studies, degrading the image quality and offsetting the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MR data can be used for motion tracking. In this work, a novel data processing and rigid-body motion correction (MC) algorithm for the MR-compatible BrainPET prototype scanner is described, and proof-of-principle phantom and human studies are presented. Methods: To account for motion, the PET prompts and randoms coincidences as well as the sensitivity data are processed in the line of response (LOR) space according to the MR-derived motion estimates. After sinogram space rebinning, the corrected data are summed and the motion-corrected PET volume is reconstructed from these sinograms and the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed and motion estimates were obtained using two high temporal resolution MR-based motion tracking techniques. Results: After accounting for the physical mismatch between the two scanners, perfectly co-registered MR and PET volumes are reproducibly obtained. The MR output gates inserted into the PET list-mode data allow the temporal correlation of the two data sets within 0.2 s. The Hoffman phantom volume reconstructed by processing the PET data in the LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in the image space, demonstrating the quantitative accuracy of the novel MC algorithm. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 seconds and 20 ms, respectively. Substantially improved PET images with excellent delineation of specific brain structures were obtained after applying the MC using these MR-based estimates. Conclusion: A novel MR-based MC algorithm was developed for the integrated MR-PET scanner. High temporal resolution MR-derived motion estimates (obtained while simultaneously acquiring anatomical or functional MR data) can be used for PET MC. An MR-based MC has the potential to improve PET as a quantitative method, increasing its reliability and reproducibility, which could benefit a large number of neurological applications. PMID:21189415

  17. Development of the atmospheric correction algorithm for the next generation geostationary ocean color sensor data

    NASA Astrophysics Data System (ADS)

    Lee, Kwon-Ho; Kim, Wonkook

    2017-04-01

    The geostationary ocean color imager-II (GOCI-II) is designed to focus on ocean environmental monitoring with better spatial (250 m for local and 1 km for full disk) and spectral resolution (13 bands) than the current operational mission, GOCI-I. GOCI-II will be launched in 2018. This study presents an algorithm, currently under development, for atmospheric correction and retrieval of surface reflectance over land, optimized for the sensor's characteristics. We first derived top-of-atmosphere radiances as proxy data from a parameterized radiative transfer code in the 13 bands of GOCI-II. Based on the proxy data, the algorithm comprises cloud masking, gas absorption correction, aerosol inversion, and computation of the aerosol extinction correction. The retrieved surface reflectances are evaluated against the MODIS level 2 surface reflectance products (MOD09). For the initial test period, the algorithm gave errors within 0.05 compared to MOD09. Further work will be carried out to fully implement the algorithm in the GOCI-II Ground Segment system (G2GS) algorithm development environment. The atmospherically corrected surface reflectance product will be a standard GOCI-II product after launch.

  18. Spectral binning for mitigation of polarization mode dispersion artifacts in catheter-based optical frequency domain imaging

    PubMed Central

    Villiger, Martin; Zhang, Ellen Ziyi; Nadkarni, Seemantini K.; Oh, Wang-Yuhl; Vakoc, Benjamin J.; Bouma, Brett E.

    2013-01-01

    Polarization mode dispersion (PMD) has been recognized as a significant barrier to sensitive and reproducible birefringence measurements with fiber-based, polarization-sensitive optical coherence tomography systems. Here, we present a signal processing strategy that reconstructs the local retardation robustly in the presence of system PMD. The algorithm uses a spectral binning approach to limit the detrimental impact of system PMD and benefits from the final averaging of the PMD-corrected retardation vectors of the spectral bins. The algorithm was validated with numerical simulations and experimental measurements of a rubber phantom. When applied to the imaging of human cadaveric coronary arteries, the algorithm was found to yield a substantial improvement in the reconstructed birefringence maps. PMID:23938487

  19. Processing Digital Imagery to Enhance Perceptions of Realism

    NASA Technical Reports Server (NTRS)

    Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur

    2003-01-01

    Multi-scale retinex with color restoration (MSRCR) is a method of processing digital image data based on Edwin Land's retinex (retina + cortex) theory of human color vision. An outgrowth of basic scientific research and its application to NASA's remote-sensing mission, MSRCR is embodied in a general-purpose algorithm that greatly improves the perception of visual realism and the quantity and quality of perceived information in a digitized image. In addition, the MSRCR algorithm includes provisions for automatic corrections to accelerate and facilitate what could otherwise be a tedious image-editing process. The MSRCR algorithm has been, and is expected to continue to be, the basis for development of commercial image-enhancement software designed to extend and refine its capabilities for diverse applications.
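
    A compact sketch of MSRCR following the commonly published formulation; the scales and the alpha/beta constants below are typical literature values, not necessarily the settings used in NASA's software.

```python
# Sketch of multi-scale retinex with color restoration on an RGB image.
import numpy as np
from scipy.ndimage import gaussian_filter

def msrcr(img, sigmas=(15, 80, 250), alpha=125.0, beta=46.0, gain=1.0):
    img = img.astype(np.float64) + 1.0              # avoid log(0)
    # multi-scale retinex: average log-ratio to Gaussian surrounds
    msr = np.zeros_like(img)
    for s in sigmas:
        blur = gaussian_filter(img, sigma=(s, s, 0))  # spatial blur only
        msr += np.log(img) - np.log(blur)
    msr /= len(sigmas)
    # color restoration weighting per channel
    crf = beta * (np.log(alpha * img) - np.log(img.sum(axis=2, keepdims=True)))
    out = gain * crf * msr
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)  # rescale
    return (out * 255).astype(np.uint8)

image = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)  # stand-in
enhanced = msrcr(image)
```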

  20. Integration of an On-Axis General Sun-Tracking Formula in the Algorithm of an Open-Loop Sun-Tracking System

    PubMed Central

    Chong, Kok-Keong; Wong, Chee-Woon; Siaw, Fei-Lu; Yew, Tiong-Keat; Ng, See-Seng; Liang, Meng-Suan; Lim, Yun-Seng; Lau, Sing-Liong

    2009-01-01

    A novel on-axis general sun-tracking formula has been integrated in the algorithm of an open-loop sun-tracking system in order to track the sun accurately and cost effectively. Sun-tracking errors due to installation defects of the 25 m² prototype solar concentrator have been analyzed from recorded solar images with the use of a CCD camera. With the recorded data, misaligned angles from ideal azimuth-elevation axes have been determined and corrected by a straightforward changing of the parameters' values in the general formula of the tracking algorithm to improve the tracking accuracy to 2.99 mrad, which falls below the encoder resolution limit of 4.13 mrad. PMID:22408483

  1. Machine Learning-based Intelligent Formal Reasoning and Proving System

    NASA Astrophysics Data System (ADS)

    Chen, Shengqing; Huang, Xiaojian; Fang, Jiaze; Liang, Jia

    2018-03-01

    Reasoning systems can be used in many fields, and improving reasoning efficiency is at the core of their design. Through the formal description of formal proofs and a rule-matching algorithm, and by introducing a machine learning algorithm, the intelligent formal reasoning and verification system achieves high efficiency. The experimental results show that the system can verify the correctness of propositional logic reasoning and reuse propositional reasoning results, so as to obtain the implicit knowledge in the knowledge base and provide a basic reasoning model for the construction of intelligent systems.

  2. Lens correction algorithm based on the see-saw diagram to correct Seidel aberrations employing aspheric surfaces

    NASA Astrophysics Data System (ADS)

    Rosete-Aguilar, Martha

    2000-06-01

    In this paper a lens correction algorithm based on the see-saw diagram developed by Burch is described. The see-saw diagram describes the image correction in rotationally symmetric systems over a finite field of view by means of aspheric surfaces. The algorithm is applied to the design of some basic telescopic configurations such as the classical Cassegrain telescope, the Dall-Kirkham telescope, the Pressman-Camichel telescope and the Ritchey-Chretien telescope in order to show a physically visualizable concept of image correction for optical systems that employ aspheric surfaces. By using the see-saw method the student can visualize the different possible configurations of such telescopes as well as their performances, and can also understand that it is not always possible to correct more primary aberrations by aspherizing more surfaces.

  3. Nonuniformity correction based on focal plane array temperature in uncooled long-wave infrared cameras without a shutter.

    PubMed

    Liang, Kun; Yang, Cailan; Peng, Li; Zhou, Bo

    2017-02-01

    In uncooled long-wave IR camera systems, the temperature of the focal plane array (FPA) varies with the environmental temperature as well as the operating time. The spatial nonuniformity of the FPA, which is partly affected by the FPA temperature, changes noticeably as well, resulting in reduced image quality. This study presents a real-time nonuniformity correction algorithm based on FPA temperature to compensate for nonuniformity caused by FPA temperature fluctuation. First, gain coefficients are calculated using a two-point correction technique. Then offset parameters at different FPA temperatures are obtained and stored in tables. When the camera operates, the offset tables are used to update the current offset parameters via a temperature-dependent interpolation. Finally, the gain coefficients and offset parameters are used to correct the output of the IR camera in real time. The proposed algorithm is evaluated and compared with two representative shutterless algorithms [the minimizing-the-sum-of-the-squares-of-errors algorithm (MSSE) and the template-based solution algorithm (TBS)] using IR images captured by a 384×288 pixel uncooled IR camera with a 17 μm pitch. Experimental results show that this method can quickly trace the response drift of the detector units when the FPA temperature changes. The quality of the proposed algorithm is as good as MSSE, while the processing time is as short as TBS, making the proposed algorithm well suited for real-time operation with a strong correction effect.
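
    The run-time offset update amounts to a linear blend between the two stored tables that bracket the current FPA temperature; a minimal sketch follows (the table layout, sizes and names are illustrative, not the paper's implementation).

```python
# Temperature-dependent offset interpolation plus standard two-point NUC.
import numpy as np

def interp_offset(offset_tables, table_temps, t_fpa):
    """offset_tables: (n_tables, H, W); table_temps: sorted (n_tables,)."""
    idx = np.searchsorted(table_temps, t_fpa)
    idx = np.clip(idx, 1, len(table_temps) - 1)      # bracket (extrapolates at edges)
    t0, t1 = table_temps[idx - 1], table_temps[idx]
    w = (t_fpa - t0) / (t1 - t0)
    return (1 - w) * offset_tables[idx - 1] + w * offset_tables[idx]

def correct_frame(raw, gain, offset):
    return gain * raw + offset                        # two-point NUC form

temps = np.array([10.0, 20.0, 30.0, 40.0])            # stand-in FPA temps (deg C)
tables = np.random.rand(4, 288, 384) * 5.0            # stand-in offset maps
offset = interp_offset(tables, temps, t_fpa=26.5)
frame = correct_frame(np.random.rand(288, 384) * 100,
                      gain=np.ones((288, 384)), offset=offset)
```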

  4. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection

    PubMed Central

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-01

    In order to improve the performance of the hard decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce the complexity of decoding, a sum-of-the-magnitude hard decision decoding algorithm based on loop update detection is proposed. This also helps ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes’ (VN) magnitudes is excluded from computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bits corresponding to the erroneous code word are flipped multiple times and searched in the order of most likely error probability, to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10⁻⁵ over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963

  5. [Design and Implementation of Image Interpolation and Color Correction for Ultra-thin Electronic Endoscope on FPGA].

    PubMed

    Luo, Qiang; Yan, Zhuangzhi; Gu, Dongxing; Cao, Lei

    This paper proposes an image interpolation algorithm based on bilinear interpolation and a color correction algorithm based on polynomial regression on FPGA, addressing the limited number of imaging pixels and the color distortion of the ultra-thin electronic endoscope. Simulation experiment results showed that the proposed algorithm realized real-time display of 1280×720 @ 60 Hz HD video and that, using the X-rite color checker as the standard colors, the average color difference was reduced by about 30% compared with that before color correction.
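
    Both building blocks are standard; the sketch below (illustrative, not the FPGA implementation) shows bilinear upscaling and a second-order polynomial color map fitted by least squares against reference patches such as a 24-patch color checker.

```python
# Sketch: bilinear image resize and polynomial-regression color correction.
import numpy as np

def bilinear_resize(img, new_h, new_w):
    img = img.astype(np.float64)
    h, w, _ = img.shape
    ys, xs = np.linspace(0, h - 1, new_h), np.linspace(0, w - 1, new_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None, None], (xs - x0)[None, :, None]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def poly_features(rgb):
    r, g, b = rgb.T
    return np.stack([np.ones_like(r), r, g, b, r*g, r*b, g*b,
                     r**2, g**2, b**2], axis=1)

def fit_color_correction(measured, reference):
    """Least-squares 2nd-order polynomial map, measured (n,3) -> reference (n,3)."""
    coef, *_ = np.linalg.lstsq(poly_features(measured), reference, rcond=None)
    return coef                                    # shape (10, 3)

measured = np.random.rand(24, 3)                   # stand-in checker readings
reference = np.random.rand(24, 3)                  # stand-in reference values
M = fit_color_correction(measured, reference)
corrected = poly_features(measured) @ M            # apply the fitted map
```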

  6. Energy shadowing correction of ultrasonic pulse-echo records by digital signal processing

    NASA Technical Reports Server (NTRS)

    Kishoni, D.; Heyman, J. S.

    1986-01-01

    Attention is given to a numerical algorithm that, via signal processing, enables the dynamic correction of the shadowing effect of reflections on ultrasonic displays. The algorithm was applied to experimental data from graphite-epoxy composite material immersed in a water bath. It is concluded that images of material defects with the shadowing corrections allow for a more quantitative interpretation of the material state. It is noted that the proposed algorithm is fast and simple enough to be adopted for real time applications in industry.

  7. IFACEwat: the interfacial water-implemented re-ranking algorithm to improve the discrimination of near native structures for protein rigid docking.

    PubMed

    Su, Chinh; Nguyen, Thuy-Diem; Zheng, Jie; Kwoh, Chee-Keong

    2014-01-01

    Protein-protein docking is an in silico method to predict the formation of protein complexes. Due to limited computational resources, the protein-protein docking approach has been developed under the assumption of rigid docking, in which one of the two protein partners remains rigid during the protein association and the water contribution is ignored or implicitly represented. Despite obtaining a number of acceptable complex predictions, most initial rigid docking algorithms to date still find it difficult, or even fail, to discriminate the correct predictions from incorrect or false positive ones. To improve the rigid docking results, re-ranking is one of the effective methods that help re-locate the correct predictions in top high ranks, discriminating them from the other incorrect ones. Our results showed that IFACEwat both increased the number of near-native structures and improved their ranks as compared to the initial rigid docking of ZDOCK3.0.2. In fact, IFACEwat achieved a success rate of 83.8% for Antigen/Antibody complexes, which is 10% better than ZDOCK3.0.2. As compared to another re-ranking technique, ZRANK, IFACEwat obtains success rates of 92.3% (8% better) and 90% (5% better) for medium and difficult cases, respectively. When compared with the latest published re-ranking method F2Dock, IFACEwat performed equivalently well or even better for several Antigen/Antibody complexes. With the inclusion of interfacial water, IFACEwat improves most results of the initial rigid docking, especially for Antigen/Antibody complexes. The improvement is achieved by explicitly taking into account the contribution of water during the protein interactions, which was ignored or not fully represented by the initial rigid docking and other re-ranking techniques. In addition, IFACEwat maintains sufficient computational efficiency of the initial docking algorithm, yet improves the ranks as well as the number of near-native structures found. As our implementation has so far targeted improving the results of ZDOCK3.0.2, particularly for Antigen/Antibody complexes, further implementations are expected in the near future to make it applicable to other initial rigid docking algorithms.

  8. Improving image quality in laboratory x-ray phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    De Marco, F.; Marschner, M.; Birnbacher, L.; Viermetz, M.; Noël, P.; Herzen, J.; Pfeiffer, F.

    2017-03-01

    Grating-based X-ray phase-contrast (gbPC) imaging is known to provide significant benefits for biomedical imaging. To investigate these benefits, a high-sensitivity gbPC micro-CT setup for small (≈ 5 cm) biological samples has been constructed. Unfortunately, high differential-phase sensitivity leads to an increased magnitude of data processing artifacts, limiting the quality of tomographic reconstructions. Most importantly, processing of phase-stepping data with incorrect stepping positions can introduce artifacts resembling Moiré fringes into the projections. Additionally, the focal spot size of the X-ray source limits the resolution of tomograms. Here we present a set of algorithms to minimize artifacts, increase resolution and improve the visual impression of projections and tomograms from the examined setup. We assessed two algorithms for artifact reduction: firstly, a correction algorithm exploiting correlations between the artifacts and differential-phase data was developed and tested; artifacts were reliably removed without compromising image data. Secondly, we implemented a new algorithm for flat-field selection, which was shown to exclude flat-fields with strong artifacts. Both procedures successfully improved the image quality of projections and tomograms. Deconvolution of all projections of a CT scan can minimize the blurring introduced by the finite size of the X-ray source focal spot. Application of the Richardson-Lucy deconvolution algorithm to gbPC-CT projections resulted in an improved resolution of phase-contrast tomograms. Additionally, we found that nearest-neighbor interpolation of projections can improve the visual impression of very small features in phase-contrast tomograms. In conclusion, we achieved an increase in image resolution and quality for the investigated setup, which may lead to improved detection of very small sample features, thereby maximizing the setup's utility.
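
    The deconvolution step can be reproduced with an off-the-shelf Richardson-Lucy routine; in the sketch below the Gaussian focal-spot PSF and all parameter values are assumptions for illustration, not the setup's measured PSF.

```python
# Richardson-Lucy deconvolution of a blurred projection with scikit-image.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(0)
truth = rng.random((128, 128))                  # stand-in projection

psf = np.zeros((9, 9)); psf[4, 4] = 1.0
psf = gaussian_filter(psf, sigma=1.5)           # toy focal-spot blur kernel
psf /= psf.sum()

blurred = gaussian_filter(truth, sigma=1.5)     # simulated source blur
restored = richardson_lucy(blurred, psf, num_iter=30)
```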

  9. A study of ionospheric grid modification technique for BDS/GPS receiver

    NASA Astrophysics Data System (ADS)

    Liu, Xuelin; Li, Meina; Zhang, Lei

    2017-07-01

    For single-frequency GPS receivers, ionospheric delay is an important factor affecting positioning performance. There are many ionospheric correction methods; common models include the Bent model, the IRI model, the Klobuchar model and the NeQuick model. The US Global Positioning System (GPS) uses the Klobuchar coefficients transmitted in the satellite signal to correct the ionospheric delay error for a single-frequency GPS receiver, but this model can only reduce the ionospheric error by about 50% in the mid-latitudes. In the BeiDou system, the accuracy of the delay correction is higher. Therefore, this paper proposes a method that uses BeiDou (BD) grid information to correct the GPS ionospheric delay, improving the ionospheric correction for BDS/GPS compatible positioning receivers. In this paper, the principle of the ionospheric grid algorithm is introduced in detail, and the positioning accuracy of the GPS system and the BDS/GPS compatible positioning system is compared and analyzed using real measured data. The results show that the method can effectively improve the positioning accuracy of the receiver in a concise way.
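
    The core of a grid-based correction is bilinear interpolation of vertical delays from the four grid points surrounding the ionospheric pierce point; a simplified sketch follows (regular grid, no obliquity factor, placeholder values).

```python
# Bilinear interpolation of ionospheric vertical delay on a regular grid.
import numpy as np

def grid_delay(lat, lon, grid_lats, grid_lons, delays):
    """delays: (n_lat, n_lon) vertical delays at regular grid nodes."""
    i = np.searchsorted(grid_lats, lat) - 1
    j = np.searchsorted(grid_lons, lon) - 1
    u = (lat - grid_lats[i]) / (grid_lats[i + 1] - grid_lats[i])
    v = (lon - grid_lons[j]) / (grid_lons[j + 1] - grid_lons[j])
    return ((1 - u) * (1 - v) * delays[i, j] + (1 - u) * v * delays[i, j + 1]
            + u * (1 - v) * delays[i + 1, j] + u * v * delays[i + 1, j + 1])

lats = np.arange(20.0, 41.0, 5.0)                  # toy 5-degree grid
lons = np.arange(100.0, 121.0, 5.0)
dly = np.random.rand(len(lats), len(lons)) * 5.0   # stand-in delays (m)
print(grid_delay(32.1, 112.3, lats, lons, dly))
```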

  10. Blind retrospective motion correction of MR images.

    PubMed

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2013-12-01

    Subject motion can severely degrade MR images. A retrospective motion correction algorithm, gradient-based motion correction, which significantly reduces ghosting and blurring artifacts due to subject motion, is proposed. The technique uses the raw data of standard imaging sequences; no sequence modifications or additional equipment such as tracking devices are required. Rigid motion is assumed. The approach iteratively searches for the motion trajectory yielding the sharpest image as measured by the entropy of spatial gradients. The vast space of motion parameters is efficiently explored by gradient-based optimization with a convergence guarantee. The method has been evaluated on both synthetic and real data in two and three dimensions using standard imaging techniques. MR images are consistently improved over different kinds of motion trajectories. Using a graphics processing unit implementation, computation times are on the order of a few minutes for a full three-dimensional volume. The presented technique can be an alternative or a complement to prospective motion correction methods and is able to improve images with strong motion artifacts from standard imaging sequences without requiring additional data. Copyright © 2013 Wiley Periodicals, Inc., a Wiley company.
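
    The sharpness metric itself is easy to state: the entropy of the spatial gradient magnitude, which the optimizer minimizes over rigid-motion parameters applied to the raw data. A minimal sketch of just the metric (bin count and test images are illustrative):

```python
# Gradient-entropy sharpness metric: sharper images concentrate their
# gradient-magnitude histogram, giving lower entropy.
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_entropy(img, nbins=256, eps=1e-12):
    gx, gy = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy).ravel()
    hist, _ = np.histogram(mag, bins=nbins)
    p = hist / (hist.sum() + eps)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

sharp = np.zeros((64, 64)); sharp[20:40, 20:40] = 1.0
print("sharp:  ", gradient_entropy(sharp))
print("blurred:", gradient_entropy(gaussian_filter(sharp, 2.0)))
# the blurred copy typically scores higher (i.e., is judged less sharp)
```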

  11. Dynamic correction of the laser beam coordinate in fabrication of large-sized diffractive elements for testing aspherical mirrors

    NASA Astrophysics Data System (ADS)

    Shimansky, R. V.; Poleshchuk, A. G.; Korolkov, V. P.; Cherkashin, V. V.

    2017-05-01

    This paper presents a method of improving the accuracy of a circular laser writing system, operating in a polar coordinate system, for fabrication of large-diameter diffractive optical elements, and the results of its use. An algorithm for correcting positioning errors of a circular laser writing system developed at the Institute of Automation and Electrometry, SB RAS, is proposed and tested. High-precision synthesized holograms fabricated by this method and the results of using these elements for testing the 6.5 m diameter aspheric mirror of the James Webb Space Telescope (JWST) are described.

  12. Improved error estimates of a discharge algorithm for remotely sensed river measurements: Test cases on Sacramento and Garonne Rivers

    NASA Astrophysics Data System (ADS)

    Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward

    2016-01-01

    We present an improvement to a previously presented algorithm that used a Bayesian Markov Chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope. We also present an error budget for discharge calculations from the algorithm. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers, which have differing hydraulic characteristics. The algorithm is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. Algorithm discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River for the AirSWOT case using expected observation error. The discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter to correct errors of discharge estimates was shown to improve algorithm performance. From the error budget, the primary source of uncertainty was the a priori uncertainty of bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics. For example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.
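
    The Manning's-equation step that the uncertainty figures refer to can be illustrated directly; the wide rectangular channel geometry and the roughness value n below are assumptions for the example, not the paper's calibrated parameters:

      def manning_discharge(width, depth, slope, n=0.035):
          """Discharge Q [m^3/s] from Manning's equation,
          Q = (1/n) * A * R^(2/3) * S^(1/2), for a rectangular channel."""
          area = width * depth                    # cross-sectional area A
          radius = area / (width + 2.0 * depth)   # hydraulic radius R
          return (1.0 / n) * area * radius ** (2.0 / 3.0) * slope ** 0.5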

  13. SU-F-T-600: Influence of Acuros XB and AAA Dose Calculation Algorithms On Plan Quality Metrics and Normal Lung Doses in Lung SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yaparpalvi, R; Mynampati, D; Kuo, H

    Purpose: To study the influence of the superposition-beam model (AAA) and the deterministic photon-transport solver (Acuros XB) dose calculation algorithms on treatment plan quality metrics and on normal lung dose in lung SBRT. Methods: Treatment plans of 10 lung SBRT patients were randomly selected. Patients were prescribed a total dose of 50-54 Gy in 3-5 fractions (10 Gy × 5 or 18 Gy × 3). Plans were optimized and calculated for 6-MV beams using two arcs (VMAT). Doses were calculated using the AAA algorithm with heterogeneity correction. For each plan, plan quality metrics in the categories of coverage, homogeneity, conformity and gradient were quantified. Repeat dosimetry for these AAA treatment plans was performed using the AXB algorithm with heterogeneity correction for the same beam and MU parameters. Plan quality metrics were again evaluated and compared with the AAA plan metrics. For normal lung dose, V20 and V5 of (total lung minus GTV) were evaluated. Results: The results are summarized in Supplemental Table 1. Mean PTV volume was 11.4 (±3.3) cm³. Comparing against RTOG 0813 protocol criteria for conformality, AXB plans yielded, on average, similar PITV ratios (individual PITV ratio differences varied from -9 to +15%), reduced target coverage (-1.6%) and increased R50% (+2.6%). Comparing normal lung doses, the lung V20 (+3.1%) and V5 (+1.5%) were slightly higher for AXB plans than for AAA plans. High-dose spillage ((V105%PD - PTV)/PTV) was slightly lower for AXB plans, but the low-dose spillage (D2cm) was similar between the two calculation algorithms. Conclusion: The AAA algorithm overestimates lung target dose. Routinely adopting AXB for dose calculations in lung SBRT planning may improve dose calculation accuracy, as AXB-based calculations have been shown to be closer to Monte Carlo dose predictions in accuracy, with relatively faster computational time. For clinical practice, revisiting dose fractionation in lung SBRT to correct for dose overestimates attributable to the algorithm may well be warranted.

  14. Single image non-uniformity correction using compressive sensing

    NASA Astrophysics Data System (ADS)

    Jian, Xian-zhong; Lu, Rui-zhi; Guo, Qiang; Wang, Gui-pu

    2016-05-01

    A non-uniformity correction (NUC) method for an infrared focal plane array imaging system is proposed. The algorithm, based on compressive sensing (CS) of a single image, overcomes the disadvantages of "ghost artifacts" and heavy computational cost in traditional NUC algorithms. A point-sampling matrix was designed to validate the CS measurements in the time domain. The measurements were corrected using the midway infrared equalization algorithm, and the missing pixels were recovered with the regularized orthogonal matching pursuit algorithm. Experimental results showed that the proposed method can reconstruct the entire image from only 25% of the pixels. Only a small difference was found between correction results using 100% of the pixels and reconstruction results using 40% of the pixels. Evaluation of the proposed method on the basis of the root-mean-square error, peak signal-to-noise ratio, and roughness index (ρ) proved the method to be robust and highly applicable.

  15. Correction of aeroheating-induced intensity nonuniformity in infrared images

    NASA Astrophysics Data System (ADS)

    Liu, Li; Yan, Luxin; Zhao, Hui; Dai, Xiaobing; Zhang, Tianxu

    2016-05-01

    Aeroheating-induced intensity nonuniformity severely degrades the performance of an infrared (IR) imaging system in high-speed flight. In this paper, we propose a new approach to the correction of intensity nonuniformity in IR images. The basic assumption is that the low-frequency intensity bias is additive and smoothly varying, so that it can be modeled as a bivariate polynomial and estimated using an isotropic total variation (TV) model. A half-quadratic penalty method is applied to the isotropic TV discretization, and an alternating minimization algorithm is adopted to solve the optimization model. Experimental results on simulated and real aerothermal images show that the proposed correction method can effectively improve IR image quality.
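
    As a simplified stand-in for the paper's TV-regularized estimation, the bivariate-polynomial bias model itself can be illustrated with a plain least-squares fit; the polynomial degree and coordinate normalization here are assumptions:

      import numpy as np

      def fit_polynomial_bias(img, degree=2):
          """Fit a smooth additive bias b(x, y) = sum a_ij x^i y^j
          (i + j <= degree) by least squares; return the bias surface."""
          h, w = img.shape
          yy, xx = np.mgrid[0:h, 0:w]
          x = xx.ravel() / w                 # normalized coordinates
          y = yy.ravel() / h
          cols = [(x ** i) * (y ** j)
                  for i in range(degree + 1) for j in range(degree + 1 - i)]
          A = np.stack(cols, axis=1)
          coeffs, *_ = np.linalg.lstsq(A, img.ravel().astype(float), rcond=None)
          return (A @ coeffs).reshape(h, w)

      # corrected = img - bias + bias.mean(), so the overall level is kept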

  16. Improvement of Nonlinearity Correction for BESIII ETOF Upgrade

    NASA Astrophysics Data System (ADS)

    Sun, Weijia; Cao, Ping; Ji, Xiaolu; Fan, Huanhuan; Dai, Hongliang; Zhang, Jie; Liu, Shubin; An, Qi

    2015-08-01

    An improved scheme to implement integral nonlinearity (INL) correction of time measurements in the Beijing Spectrometer III Endcap Time-of-Flight (BESIII ETOF) upgrade system is presented in this paper. During the upgrade, multi-gap resistive plate chambers (MRPC) are introduced as ETOF detectors, which increases the total number of time measurement channels to 1728. The INL correction method adopted in the BESIII TOF proved to be of limited use, because the sharply increased number of electronic channels required for reading out the detector strips severely degrades system configuration efficiency. Furthermore, once installed in the spectrometer, the BESIII TOF electronics do not support online evaluation of the TDCs' nonlinearity. In the proposed method, the INL data used by the correction algorithm are automatically loaded from a non-volatile read-only memory (ROM) instead of from the data acquisition software. This guarantees the real-time performance and system efficiency of the INL correction, especially for ETOF upgrades with massive numbers of channels. In addition, a signal that is not synchronized to the 41.65 MHz system clock from BEPCII is sent to the front-end electronics (FEE) to generate pseudo-random test pulses for online nonlinearity evaluation. Test results show that the time-measurement INL errors in one module with 72 channels can be corrected online and in real time.
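
    Functionally, a ROM-based INL correction amounts to a per-bin lookup-table subtraction. The sketch below is purely illustrative: the ROM layout, bin count, and LSB size are assumptions, not the BESIII format:

      import struct

      def load_inl_table(rom_bytes, n_bins=1024):
          """Parse per-bin INL values (here: signed 8-bit integers, in
          units of one LSB) from a ROM image; the format is illustrative."""
          return list(struct.unpack(f"{n_bins}b", rom_bytes[:n_bins]))

      def correct_time(raw_bin, inl, lsb_ps=25.0):
          """Subtract the tabulated INL from a raw TDC bin and convert
          the result to picoseconds."""
          return (raw_bin - inl[raw_bin]) * lsb_ps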

  17. Geometric and shading correction for images of printed materials using boundary.

    PubMed

    Brown, Michael S; Tsoi, Yau-Chat

    2006-06-01

    A novel technique that uses boundary interpolation to correct geometric distortion and shading artifacts present in images of printed materials is presented. Unlike existing techniques, our algorithm can simultaneously correct a variety of geometric distortions, including skew, fold distortion, binder curl, and combinations of these. In addition, the same interpolation framework can be used to estimate the intrinsic illumination component of the distorted image to correct shading artifacts. We detail our algorithm for geometric and shading correction and demonstrate its usefulness on real-world and synthetic data.

  18. Optimization of the GSFC TROPOZ DIAL retrieval using synthetic lidar returns and ozonesondes - Part 1: Algorithm validation

    NASA Astrophysics Data System (ADS)

    Sullivan, J. T.; McGee, T. J.; Leblanc, T.; Sumnicht, G. K.; Twigg, L. W.

    2015-10-01

    The main purpose of the NASA Goddard Space Flight Center TROPospheric OZone DIfferential Absorption Lidar (GSFC TROPOZ DIAL) is to measure the vertical distribution of tropospheric ozone for science investigations. Because of the important health and climate impacts of tropospheric ozone, it is imperative to quantify background photochemical ozone concentrations and ozone layers aloft, especially during air quality episodes. For these reasons, this paper addresses the procedures necessary to validate the TROPOZ retrieval algorithm and confirm that it properly represents ozone concentrations; a following paper will focus on a systematic uncertainty analysis. The methodology begins by simulating synthetic lidar returns from actual TROPOZ lidar return signals in combination with a known ozone profile. From these synthetic signals, it is possible to explicitly determine retrieval algorithm biases with respect to the known profile. This was performed systematically to identify any areas needing refinement for a new operational version of the TROPOZ retrieval algorithm. One immediate outcome of this exercise was the discovery, and subsequent correction, of a bin registration error in the correction for detector saturation within the original retrieval. Another notable outcome was that the vertical smoothing in the retrieval algorithm was upgraded from a constant to a variable vertical resolution to yield a statistical uncertainty of <10 %. This new, optimized vertical-resolution scheme retains the ability to resolve fluctuations in the known ozone profile while allowing near-field signals to be smoothed more appropriately. With these revisions to the previous TROPOZ retrieval, the optimized retrieval algorithm (TROPOZopt) extends the retrieval nearly 200 m closer to the surface. Compared with the previous version, TROPOZopt showed an overall mean improvement of 3.5 %, with large improvements (upwards of 10-15 %) between 4.5 and 9 km. Finally, to ensure that the TROPOZopt retrieval algorithm is robust enough to handle actual lidar return signals, a comparison with four nearby ozonesonde measurements is shown. The ozonesondes fall mostly within the TROPOZopt retrieval uncertainty bars, which implies that the exercise was quite successful.
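
    Retrievals of this kind rest on the standard DIAL equation, n(z) = (1/2Δσ) d/dz ln[P_off(z)/P_on(z)]. A numerical sketch of that relation follows; this is not the TROPOZ code and omits its saturation, background, and smoothing corrections:

      import numpy as np

      def dial_number_density(z, p_on, p_off, delta_sigma):
          """Ozone number density from the DIAL equation, with the range
          derivative taken numerically on the grid z [m]; delta_sigma is
          the on/off O3 absorption cross-section difference [m^2]."""
          ratio = np.log(np.asarray(p_off, float) / np.asarray(p_on, float))
          return np.gradient(ratio, z) / (2.0 * delta_sigma)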

  19. Robust crop and weed segmentation under uncontrolled outdoor illumination.

    PubMed

    Jeon, Hong Y; Tian, Lei F; Zhu, Heping

    2011-01-01

    An image processing algorithm for detecting individual weeds was developed and evaluated. The weed detection processes included normalized excessive green conversion, statistical threshold value estimation, adaptive image segmentation, median filtering, morphological feature calculation and an Artificial Neural Network (ANN). The developed algorithm was validated for its ability to identify and detect weeds and crop plants under uncontrolled outdoor illumination. A field robot implementing machine vision captured field images under outdoor illumination, and the image processing algorithm processed them automatically without manual adjustment. The errors of the algorithm when processing 666 field images ranged from 2.1 to 2.9%. The ANN correctly detected 72.6% of crop plants among the identified plants, and considered the rest weeds. However, the ANN identification rates for crop plants improved up to 95.1% when the error sources in the algorithm were addressed. The developed weed detection and image processing algorithm provides a novel method for identifying plants against a soil background under uncontrolled outdoor illumination, and for differentiating weeds from crop plants. Thus, the proposed new machine vision and processing algorithm may be useful for outdoor applications, including plant-specific direct applications (PSDA).
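
    A minimal sketch of the first two steps, excess-green conversion followed by a statistical threshold; the mean-plus-k-sigma rule below stands in for the paper's adaptive estimate and is an assumption:

      import numpy as np

      def excess_green_mask(rgb, k=1.0):
          """Segment vegetation from soil via the normalized excess-green
          index ExG = 2g - r - b and a simple statistical threshold."""
          rgb = rgb.astype(float)
          total = rgb.sum(axis=2) + 1e-12
          r, g, b = (rgb[..., i] / total for i in range(3))
          exg = 2.0 * g - r - b
          threshold = exg.mean() + k * exg.std()
          return exg > threshold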

  20. Modification Of Learning Rate With Lvq Model Improvement In Learning Backpropagation

    NASA Astrophysics Data System (ADS)

    Tata Hardinata, Jaya; Zarlis, Muhammad; Budhiarti Nababan, Erna; Hartama, Dedy; Sembiring, Rahmat W.

    2017-12-01

    Backpropagation is one type of artificial neural network. When trained on a given network architecture, it can provide correct outputs for inputs that are similar, but not identical, to those used during training. The selection of appropriate parameters also affects the outcome; the learning rate is one of the parameters that influences the training process, affecting the speed of learning for a given network architecture. If the learning rate is set too large, the algorithm becomes unstable; if too small, the algorithm converges only over a very long period of time. This study was therefore conducted to determine the value of the learning rate for the backpropagation algorithm. The LVQ learning-rate model is one of the models used to determine the learning rate of the LVQ algorithm; we modify this LVQ model and apply it to the backpropagation algorithm. The experimental results show that, with the LVQ learning-rate model applied, the backpropagation learning process becomes faster (fewer epochs).
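
    The paper does not state its modified schedule explicitly; a geometrically decaying, LVQ-style learning rate of the general kind described could look like the following sketch, where the decay constant is illustrative:

      def lvq_style_rate(initial_rate, epoch, decay=0.95):
          """LVQ-style schedule: shrink the learning rate geometrically
          each epoch, alpha(t) = alpha(0) * decay**t."""
          return initial_rate * decay ** epoch

      # example weight update: w -= lvq_style_rate(0.5, epoch) * gradient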

  1. Improvement of Forest Fire Detection Algorithm Using Brightness Temperature Lapse Rate Correction in HIMAWARI-8 IR Channels: Application to the 6 May 2017 Samcheok City, Korea

    NASA Astrophysics Data System (ADS)

    Park, S. H.; Park, W.; Jung, H. S.

    2018-04-01

    Forest fires are a major natural disaster that destroys forest area and the natural environment. In order to minimize the damage caused by a forest fire, it is necessary to know its location and time of occurrence, and continuous monitoring is required until the fire is fully extinguished. We have tried to improve the forest fire detection algorithm by using a method that reduces the variability of surrounding pixels, noting that the forest areas of East Asia, part of the Himawari-8 AHI coverage, are mostly located in mountainous terrain. The proposed method was applied to the forest fire detection in Samcheok City, Korea on May 6 to 10, 2017.
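
    The title's lapse-rate correction suggests normalizing each pixel's brightness temperature to a common elevation before comparing a pixel with its mountainous surroundings. A hedged sketch of that idea, in which the standard-atmosphere lapse rate of 6.5 K/km and the per-pixel elevation input are assumptions:

      def lapse_rate_corrected_bt(bt_kelvin, elevation_m, ref_elevation_m=0.0,
                                  lapse_rate_k_per_m=0.0065):
          """Normalize a brightness temperature to a reference elevation so
          that background pixels at different altitudes become comparable."""
          return bt_kelvin + lapse_rate_k_per_m * (elevation_m - ref_elevation_m)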

  2. Rejuvenation of a ten-year old AO curvature sensor: combining obsolescence correction and performance upgrade of MACAO

    NASA Astrophysics Data System (ADS)

    Haguenauer, P.; Fedrigo, E.; Pettazzi, L.; Reinero, C.; Gonte, F.; Pallanca, L.; Frahm, R.; Woillez, J.; Lilley, P.

    2016-07-01

    The MACAO curvature wavefront sensors were designed as a generic adaptive optics sensor for the Very Large Telescope. Six systems have been manufactured and implemented on sky: four installed in the UT Coudé trains as an AO facility for the VLTI, and two in UT instruments, SINFONI and CRIRES. The MACAO-VLTI systems have now been in scientific operation for more than a decade and are planned to be operated for at least ten more years. As second-generation instruments for the VLTI were planned for installation at the end of 2015, accompanied by a major upgrade of the VLTI infrastructure, we saw it as a good time for a rejuvenation project for these systems, correcting for obsolete components. This obsolescence correction also gave us the opportunity to implement improved capabilities: the correction frequency was pushed from 420 Hz to 1050 Hz, and an automatic vibration compensation algorithm was added. The implementation on the first MACAO was done in October 2014, and the first phase of obsolescence correction was completed on all four MACAO-VLTI systems in October 2015, with the systems delivered back to operation. The resumption of scientific operation of the VLTI on the UTs in November 2015 allowed us to gather statistics to evaluate the performance improvement achieved through this upgrade. A second phase of obsolescence correction has now been started, together with a global reflection on possible further improvements to secure observations with the VLTI.

  3. Code-based Diagnostic Algorithms for Idiopathic Pulmonary Fibrosis. Case Validation and Improvement.

    PubMed

    Ley, Brett; Urbania, Thomas; Husson, Gail; Vittinghoff, Eric; Brush, David R; Eisner, Mark D; Iribarren, Carlos; Collard, Harold R

    2017-06-01

    Population-based studies of idiopathic pulmonary fibrosis (IPF) in the United States have been limited by reliance on diagnostic code-based algorithms that lack clinical validation. To validate a well-accepted International Classification of Diseases, Ninth Revision, code-based algorithm for IPF using patient-level information and to develop a modified algorithm for IPF with enhanced predictive value. The traditional IPF algorithm was used to identify potential cases of IPF in the Kaiser Permanente Northern California adult population from 2000 to 2014. Incidence and prevalence were determined overall and by age, sex, and race/ethnicity. A validation subset of cases (n = 150) underwent expert medical record and chest computed tomography review. A modified IPF algorithm was then derived and validated to optimize positive predictive value. From 2000 to 2014, the traditional IPF algorithm identified 2,608 cases among 5,389,627 at-risk adults in the Kaiser Permanente Northern California population. Annual incidence was 6.8/100,000 person-years (95% confidence interval [CI], 6.1-7.7) and was higher in patients with older age, male sex, and white race. The positive predictive value of the IPF algorithm was only 42.2% (95% CI, 30.6 to 54.6%); sensitivity was 55.6% (95% CI, 21.2 to 86.3%). The corrected incidence was estimated at 5.6/100,000 person-years (95% CI, 2.6-10.3). A modified IPF algorithm had improved positive predictive value but reduced sensitivity compared with the traditional algorithm. A well-accepted International Classification of Diseases, Ninth Revision, code-based IPF algorithm performs poorly, falsely classifying many non-IPF cases as IPF and missing a substantial proportion of IPF cases. A modification of the IPF algorithm may be useful for future population-based studies of IPF.

  4. A Feature and Algorithm Selection Method for Improving the Prediction of Protein Structural Class.

    PubMed

    Ni, Qianwu; Chen, Lei

    2017-01-01

    Correct prediction of protein structural class is beneficial to the investigation of protein functions, regulations and interactions. In recent years, several computational methods have been proposed in this regard. However, it is still a great challenge to select a proper classification algorithm and extract the essential features to participate in classification. In this study, a feature and algorithm selection method is presented for improving the accuracy of protein structural class prediction. Amino acid compositions and physiochemical features were adopted to represent features, and thirty-eight machine learning algorithms collected in Weka were employed. All features were first analyzed by a feature selection method, minimum redundancy maximum relevance (mRMR), producing a feature list. Then, several feature sets were constructed by adding features from the list one by one. For each feature set, the thirty-eight algorithms were executed on a dataset in which proteins were represented by the features in the set. The predicted classes yielded by these algorithms and the true class of each protein were collected to construct a dataset, which was analyzed by the mRMR method, yielding an algorithm list. From the algorithm list, algorithms were taken one by one to build an ensemble prediction model, and the ensemble prediction model with the best performance was selected as the optimal one. Experimental results indicate that the constructed model is much superior to models using a single algorithm and to models that adopt only the feature selection procedure or only the algorithm selection procedure; both procedures are genuinely helpful for building an ensemble prediction model with better performance. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
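
    The final ensemble step can be sketched as a majority vote across the selected algorithms; this sketch assumes each selected model exposes a predict method returning a hashable class label, which is an illustrative interface rather than the paper's Weka setup:

      from collections import Counter

      def ensemble_predict(models, x):
          """Majority vote over the class predictions of the algorithms
          chosen by the selection procedure (ranked list `models`)."""
          votes = [m.predict(x) for m in models]
          return Counter(votes).most_common(1)[0][0]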

  5. High-speed architecture for the decoding of trellis-coded modulation

    NASA Technical Reports Server (NTRS)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher-level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture or by simplifying the algorithm itself. Designs employing new architectural techniques are now in existence; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
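
    For readers unfamiliar with the algorithm being accelerated, here is a minimal hard-decision Viterbi decoder for a rate-1/2, constraint-length-3 binary convolutional code (generators 7 and 5 octal, chosen for illustration; a TCM decoder would add set partitioning and non-binary branch metrics):

      import itertools

      G = (0b111, 0b101)                  # generator polynomials, K = 3
      N_STATES = 4                        # state = two previous input bits

      def encode_step(state, bit):
          reg = (bit << 2) | state        # shift register [bit, s1, s2]
          out = tuple(bin(reg & g).count("1") & 1 for g in G)
          return out, (reg >> 1)          # output pair, next state

      def viterbi_decode(symbols):
          """Decode a list of (bit, bit) channel pairs by minimizing the
          accumulated Hamming distance over the trellis."""
          INF = float("inf")
          metrics = [0.0] + [INF] * (N_STATES - 1)    # start in state 0
          paths = [[] for _ in range(N_STATES)]
          for sym in symbols:
              new_metrics = [INF] * N_STATES
              new_paths = [None] * N_STATES
              for state, bit in itertools.product(range(N_STATES), (0, 1)):
                  if metrics[state] == INF:
                      continue
                  out, nxt = encode_step(state, bit)
                  m = metrics[state] + (out[0] ^ sym[0]) + (out[1] ^ sym[1])
                  if m < new_metrics[nxt]:
                      new_metrics[nxt] = m
                      new_paths[nxt] = paths[state] + [bit]
              metrics, paths = new_metrics, new_paths
          return paths[min(range(N_STATES), key=lambda s: metrics[s])]

      # usage: encode [1, 0, 1, 1, 0, 0] (two tail zeros flush the register)
      # with encode_step, pass the pairs through a channel, and
      # viterbi_decode recovers the input when errors are sparse.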

  6. Simulations in site error estimation for direction finders

    NASA Astrophysics Data System (ADS)

    López, Raúl E.; Passi, Ranjit M.

    1991-08-01

    The performance of an algorithm for the recovery of site-specific errors of direction finder (DF) networks is tested under controlled simulated conditions. The simulations show that the algorithm has some inherent shortcomings for the recovery of site errors from the measured azimuth data. These limitations are fundamental to the problem of site error estimation using azimuth information. Several ways for resolving or ameliorating these basic complications are tested by means of simulations. From these it appears that for the effective implementation of the site error determination algorithm, one should design the networks with at least four DFs, improve the alignment of the antennas, and increase the gain of the DFs as much as it is compatible with other operational requirements. The use of a nonzero initial estimate of the site errors when working with data from networks of four or more DFs also improves the accuracy of the site error recovery. Even for networks of three DFs, reasonable site error corrections could be obtained if the antennas could be well aligned.

  7. Hadronic energy resolution of a highly granular scintillator-steel hadron calorimeter using software compensation techniques

    NASA Astrophysics Data System (ADS)

    Adloff, C.; Blaha, J.; Blaising, J.-J.; Drancourt, C.; Espargilière, A.; Gaglione, R.; Geffroy, N.; Karyotakis, Y.; Prast, J.; Vouters, G.; Francis, K.; Repond, J.; Smith, J.; Xia, L.; Baldolemar, E.; Li, J.; Park, S. T.; Sosebee, M.; White, A. P.; Yu, J.; Buanes, T.; Eigen, G.; Mikami, Y.; Watson, N. K.; Goto, T.; Mavromanolakis, G.; Thomson, M. A.; Ward, D. R.; Yan, W.; Benchekroun, D.; Hoummada, A.; Khoulaki, Y.; Benyamna, M.; Cârloganu, C.; Fehr, F.; Gay, P.; Manen, S.; Royer, L.; Blazey, G. C.; Dyshkant, A.; Lima, J. G. R.; Zutshi, V.; Hostachy, J.-Y.; Morin, L.; Cornett, U.; David, D.; Falley, G.; Gadow, K.; Göttlicher, P.; Günter, C.; Hermberg, B.; Karstensen, S.; Krivan, F.; Lucaci-Timoce, A.-I.; Lu, S.; Lutz, B.; Morozov, S.; Morgunov, V.; Reinecke, M.; Sefkow, F.; Smirnov, P.; Terwort, M.; Vargas-Trevino, A.; Feege, N.; Garutti, E.; Marchesini, I.; Ramilli, M.; Eckert, P.; Harion, T.; Kaplan, A.; Schultz-Coulon, H.-Ch; Shen, W.; Stamen, R.; Tadday, A.; Bilki, B.; Norbeck, E.; Onel, Y.; Wilson, G. W.; Kawagoe, K.; Dauncey, P. D.; Magnan, A.-M.; Wing, M.; Salvatore, F.; Calvo Alamillo, E.; Fouz, M.-C.; Puerta-Pelayo, J.; Balagura, V.; Bobchenko, B.; Chadeeva, M.; Danilov, M.; Epifantsev, A.; Markin, O.; Mizuk, R.; Novikov, E.; Rusinov, V.; Tarkovsky, E.; Kirikova, N.; Kozlov, V.; Smirnov, P.; Soloviev, Y.; Buzhan, P.; Dolgoshein, B.; Ilyin, A.; Kantserov, V.; Kaplin, V.; Karakash, A.; Popova, E.; Smirnov, S.; Kiesling, C.; Pfau, S.; Seidel, K.; Simon, F.; Soldner, C.; Szalay, M.; Tesar, M.; Weuste, L.; Bonis, J.; Bouquet, B.; Callier, S.; Cornebise, P.; Doublet, Ph; Dulucq, F.; Faucci Giannelli, M.; Fleury, J.; Li, H.; Martin-Chassard, G.; Richard, F.; de la Taille, Ch; Pöschl, R.; Raux, L.; Seguin-Moreau, N.; Wicek, F.; Anduze, M.; Boudry, V.; Brient, J.-C.; Jeans, D.; Mora de Freitas, P.; Musat, G.; Reinhard, M.; Ruan, M.; Videau, H.; Bulanek, B.; Zacek, J.; Cvach, J.; Gallus, P.; Havranek, M.; Janata, M.; Kvasnicka, J.; Lednicky, D.; Marcisovsky, M.; Polak, I.; Popule, J.; Tomasek, L.; Tomasek, M.; Ruzicka, P.; Sicho, P.; Smolik, J.; Vrba, V.; Zalesak, J.; Belhorma, B.; Ghazlane, H.; Takeshita, T.; Uozumi, S.; Sauer, J.; Weber, S.; Zeitnitz, C.

    2012-09-01

    The energy resolution of a highly granular 1 m³ analogue scintillator-steel hadronic calorimeter is studied using charged pions with energies from 10 GeV to 80 GeV at the CERN SPS. The energy resolution for single hadrons is determined to be approximately 58%/√(E/GeV). This resolution is improved to approximately 45%/√(E/GeV) with software compensation techniques. These techniques take advantage of the event-by-event information about the substructure of hadronic showers which is provided by the imaging capabilities of the calorimeter. The energy reconstruction is improved either with corrections based on the local energy density or by applying a single correction factor to the event energy sum derived from a global measure of the shower energy density. The application of the compensation algorithms to geant4 simulations yields resolution improvements comparable to those observed for real data.
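
    The local software-compensation idea can be sketched as a density-dependent reweighting of hit energies. The bin edges and weights below are purely illustrative; the published weights are fitted to test-beam data, and dense (electromagnetic-like) hits are weighted down because a non-compensating calorimeter over-responds to them:

      import bisect

      DENSITY_EDGES = [0.001, 0.005, 0.02, 0.08]   # hit energy density bins
      WEIGHTS = [1.4, 1.15, 1.0, 0.85, 0.7]        # illustrative values

      def compensated_energy(hits):
          """Sum hit energies reweighted by a factor that depends on each
          hit's local energy density; `hits` is a list of
          (energy, local_density) pairs."""
          return sum(e * WEIGHTS[bisect.bisect(DENSITY_EDGES, rho)]
                     for e, rho in hits)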

  8. Segmentation of financial seals and its implementation on a DSP-based system

    NASA Astrophysics Data System (ADS)

    He, Jin; Liu, Tiegen; Guo, Jingjing; Zhang, Hao

    2009-11-01

    Automatic seal imprint identification is an important part of modern financial security, and accurate segmentation is the basis of correct identification. In this paper, a DSP (digital signal processor) based identification system was designed, and an adaptive algorithm was proposed to extract binary seal images from financial instruments. As the kernel of the identification system, a TMS320DM642 DSP chip was used to implement image processing and to control and coordinate the work of each system module. The proposed algorithm consists of three stages: extraction of the grayscale seal image, denoising, and binarization. A grayscale seal image is extracted by color transform from a financial instrument image. Adaptive morphological operations are used to highlight details of the extracted grayscale seal image and smooth the background. After median filtering for noise elimination, the filtered seal image is binarized by Otsu's method. The algorithm was developed in the DSP development environment CCS with the real-time operating system DSP/BIOS. To simplify the implementation of the proposed algorithm, white balance calibration and coarse positioning of the seal imprint were implemented by the TMS320DM642 controlling image acquisition. The IMGLIB of the TMS320DM642 was used for efficiency improvement. The experimental results showed that financial seal imprints, even those with intricate and dense strokes, can be correctly segmented by the proposed algorithm. Adhesion and incompleteness distortions in the segmentation results were reduced, even when the original seal imprint had poor quality.
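
    Otsu's method, used here for the binarization stage, picks the threshold that maximizes the between-class variance of the grayscale histogram. A standard reference implementation (not the DSP code) follows:

      import numpy as np

      def otsu_threshold(gray):
          """Return the 8-bit threshold maximizing between-class variance,
          sigma_b^2(t) = (mu_T * omega(t) - mu(t))^2 / (omega(t)(1 - omega(t)))."""
          hist, _ = np.histogram(gray, bins=256, range=(0, 256))
          p = hist.astype(float) / hist.sum()
          levels = np.arange(256)
          omega = np.cumsum(p)              # class-0 probability
          mu = np.cumsum(p * levels)        # class-0 cumulative mean
          mu_t = mu[-1]
          with np.errstate(divide="ignore", invalid="ignore"):
              sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
          return int(np.argmax(np.nan_to_num(sigma_b)))

      # binary = gray > otsu_threshold(gray)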

  9. SU-F-T-444: Quality Improvement Review of Radiation Therapy Treatment Planning in the Presence of Dental Implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parenica, H; Ford, J; Mavroidis, P

    Purpose: To quantify and compare the effect of metallic dental implants (MDI) on dose distributions calculated using the Collapsed Cone Convolution Superposition (CCCS) algorithm or a Monte Carlo algorithm (with and without correcting for the density of the MDI). Methods: Seven patients previously treated in the head and neck region were included in this study. The MDI and the streaking artifacts on the CT images were carefully contoured. For each patient, a plan was optimized and calculated using the Pinnacle3 treatment planning system (TPS), and two dose calculations were performed: a) with the densities of the MDI and CT artifacts overridden (12 g/cc and 1 g/cc, respectively) and b) without density overrides. The plans were then exported to the Monaco TPS and recalculated using a Monte Carlo dose calculation algorithm. The changes in dose to the PTVs and surrounding regions of interest (ROIs) were examined between all plans. Results: The Monte Carlo dose calculation indicated that PTVs received 6% lower dose than the CCCS algorithm predicted. In some cases, the Monte Carlo algorithm indicated that surrounding ROIs received higher dose (up to a factor of 2). Conclusion: Not properly accounting for dental implants can impact both the high-dose regions (PTV) and the low-dose regions (OAR). This study implies that if the MDI and the artifacts are not appropriately contoured and given the correct density, there is a potentially significant impact on PTV coverage and OAR maximum doses.

  10. New approaches to removing cloud shadows and evaluating the 380 nm surface reflectance for improved aerosol optical thickness retrievals from the GOSAT/TANSO-Cloud and Aerosol Imager

    NASA Astrophysics Data System (ADS)

    Fukuda, Satoru; Nakajima, Teruyuki; Takenaka, Hideaki; Higurashi, Akiko; Kikuchi, Nobuyuki; Nakajima, Takashi Y.; Ishida, Haruma

    2013-12-01

    A satellite aerosol retrieval algorithm was developed to utilize a near-ultraviolet band of the Greenhouse gases Observing SATellite/Thermal And Near infrared Sensor for carbon Observation (GOSAT/TANSO)-Cloud and Aerosol Imager (CAI). At near-ultraviolet wavelengths, the surface reflectance over land is smaller than at visible wavelengths; it is therefore thought possible to reduce retrieval error by using the near-ultraviolet spectral region. In the present study, we first developed a cloud shadow detection algorithm that uses the first and second minimum reflectances at 380 nm and 680 nm, based on the difference in the Rayleigh scattering contribution of these two bands. Then, we developed a new surface reflectance correction algorithm, the modified Kaufman method, which uses minimum reflectance data at 680 nm and the NDVI to estimate the surface reflectance at 380 nm. This algorithm was found to be particularly effective at reducing the aerosol effect remaining in the 380 nm minimum reflectance; this effect has previously proven difficult to remove owing to the infrequent sampling associated with the three-day recursion period of GOSAT and the narrow CAI swath of 1000 km. Finally, we applied these two algorithms to retrieve aerosol optical thicknesses over a land area. Our results exhibited better agreement with sun-sky radiometer observations than results obtained using a simple surface reflectance correction technique based on minimum radiances.

  11. SeaWiFS Postlaunch Calibration and Validation Analyses

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine (Editor); McClain, Charles R.; Barnes, Robert A.; Eplee, Robert E., Jr.; Franz, Bryan A.; Hsu, N. Christina; Patt, Frederick S.; Pietras, Christophe M.; Robinson, Wayne D.

    2000-01-01

    The effort to resolve data quality issues and improve on the initial data evaluation methodologies of the SeaWiFS Project was an extensive one. These evaluations have resulted, to date, in three major reprocessings of the entire data set where each reprocessing addressed the data quality issues that could be identified up to the time of the reprocessing. Three volumes of the SeaWiFS Postlaunch Technical Report Series (Volumes 9, 10, and 11) are needed to document the improvements implemented since launch. Volume 10 continues the sequential presentation of postlaunch data analysis and algorithm descriptions begun in Volume 9. Chapter 1 of Volume 10 describes an absorbing aerosol index, similar to that produced by the Total Ozone Mapping Spectrometer (TOMS) Project, which is used to flag pixels contaminated by absorbing aerosols, such as dust and smoke. Chapter 2 discusses the algorithm being used to remove SeaWiFS out-of-band radiance from the water-leaving radiances. Chapter 3 provides an itemization of all significant changes in the processing algorithms for each of the first three reprocessings. Chapter 4 shows the time series of global clear-water and deep-water (depths greater than 1,000 m) bio-optical and atmospheric properties (normalized water-leaving radiances, chlorophyll, atmospheric optical depth, etc.) based on the eight-day composites as a check on the sensor calibration stability. Chapter 5 examines the variation in the derived products with scan angle using high resolution data around Hawaii to test for residual scan modulation effects and atmospheric correction biases. Chapter 6 provides a methodology for evaluating the atmospheric correction algorithm and atmospheric derived products using ground-based observations. Similarly, Chapter 7 presents match-up comparisons of coincident satellite and in situ data to determine the accuracy of the water-leaving radiances, chlorophyll a, and K(490) products.

  12. Apparent resistivity for transient electromagnetic induction logging and its correction in radial layer identification

    NASA Astrophysics Data System (ADS)

    Meng, Qingxin; Hu, Xiangyun; Pan, Heping; Xi, Yufei

    2018-04-01

    We propose an algorithm for calculating all-time apparent resistivity from transient electromagnetic induction logging. The algorithm is based on the whole-space transient electric field expression of the uniform model and Halley's optimisation. In trial calculations for uniform models, the all-time algorithm is shown to have high accuracy. We use the finite-difference time-domain method to simulate the transient electromagnetic field in radial two-layer models without wall rock and convert the simulation results to apparent resistivity using the all-time algorithm. The time-varying apparent resistivity reflects the radially layered geoelectrical structure of the models and the apparent resistivity of the earliest time channel follows the true resistivity of the inner layer; however, the apparent resistivity at larger times reflects the comprehensive electrical characteristics of the inner and outer layers. To accurately identify the outer layer resistivity based on the series relationship model of the layered resistance, the apparent resistivity and diffusion depth of the different time channels are approximately replaced by related model parameters; that is, we propose an apparent resistivity correction algorithm. By correcting the time-varying apparent resistivity of radial two-layer models, we show that the correction results reflect the radially layered electrical structure and the corrected resistivities of the larger time channels follow the outer layer resistivity. The transient electromagnetic fields of radially layered models with wall rock are simulated to obtain the 2D time-varying profiles of the apparent resistivity and corrections. The results suggest that the time-varying apparent resistivity and correction results reflect the vertical and radial geoelectrical structures. For models with small wall-rock effect, the correction removes the effect of the low-resistance inner layer on the apparent resistivity of the larger time channels.
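
    The Halley's optimisation referred to here is the classical third-order root-finding iteration; a generic sketch follows (the whole-space transient-field expression being inverted is not reproduced):

      def halley(f, df, d2f, x0, tol=1e-10, max_iter=50):
          """Halley's method: x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'')."""
          x = x0
          for _ in range(max_iter):
              fx, dfx, d2fx = f(x), df(x), d2f(x)
              step = 2.0 * fx * dfx / (2.0 * dfx ** 2 - fx * d2fx)
              x -= step
              if abs(step) < tol:
                  break
          return x

      # example, cube root of 2:
      # halley(lambda x: x**3 - 2, lambda x: 3*x**2, lambda x: 6*x, 1.0)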

  13. Assessment, Validation, and Refinement of the Atmospheric Correction Algorithm for the Ocean Color Sensors. Chapter 19

    NASA Technical Reports Server (NTRS)

    Wang, Menghua

    2003-01-01

    The primary focus of this proposed research is the evaluation and development of the atmospheric correction algorithm and satellite sensor calibration and characterization. It is well known that the atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the derived water-leaving signals from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of this research is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has been devoted to (a) understanding and correcting the artifacts that appeared in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) vs. the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Color Coordinating Group (IOCCG) atmospheric correction working group. In this report, I briefly present and discuss these and some other research activities.

  14. A Computational Framework for High-Throughput Isotopic Natural Abundance Correction of Omics-Level Ultra-High Resolution FT-MS Datasets

    PubMed Central

    Carreer, William J.; Flight, Robert M.; Moseley, Hunter N. B.

    2013-01-01

    New metabolomics applications of ultra-high resolution and accuracy mass spectrometry can provide thousands of detectable isotopologues, with the number of potentially detectable isotopologues increasing exponentially with the number of stable isotopes used in newer isotope tracing methods like stable isotope-resolved metabolomics (SIRM) experiments. This huge increase in usable data requires software capable of correcting the large number of isotopologue peaks resulting from SIRM experiments in a timely manner. We describe the design of a new algorithm and software system capable of handling these high volumes of data, while including quality control methods for maintaining data quality. We validate this new algorithm against a previous single isotope correction algorithm in a two-step cross-validation. Next, we demonstrate the algorithm and correct for the effects of natural abundance for both 13C and 15N isotopes on a set of raw isotopologue intensities of UDP-N-acetyl-D-glucosamine derived from a 13C/15N-tracing experiment. Finally, we demonstrate the algorithm on a full omics-level dataset. PMID:24404440
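
    For a single isotope, the natural abundance correction reduces to solving a lower-triangular binomial system. A sketch for 13C follows (the 1.07% natural abundance is the standard value; the multi-isotope, ultra-high-resolution machinery of the paper is omitted):

      import numpy as np
      from math import comb

      def correct_natural_abundance(observed, n_carbons, p13c=0.0107):
          """Solve M @ corrected = observed, where
          M[k, j] = C(n-j, k-j) * p^(k-j) * (1-p)^(n-k) is the probability
          that a species with j tracer-labeled carbons is observed as M+k."""
          n = n_carbons
          M = np.zeros((n + 1, n + 1))
          for j in range(n + 1):
              for k in range(j, n + 1):
                  M[k, j] = (comb(n - j, k - j)
                             * p13c ** (k - j) * (1 - p13c) ** (n - k))
          return np.linalg.solve(M, np.asarray(observed, float))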

  15. IFACEwat: the interfacial water-implemented re-ranking algorithm to improve the discrimination of near native structures for protein rigid docking

    PubMed Central

    2014-01-01

    Background Protein-protein docking is an in silico method to predict the formation of protein complexes. Owing to limited computational resources, the protein-protein docking approach has been developed under the assumption of rigid docking, in which one of the two protein partners remains rigid during the association and the water contribution is ignored or represented implicitly. Despite obtaining a number of acceptable complex predictions, to date most initial rigid docking algorithms still find it difficult, or even fail, to discriminate correct predictions from incorrect or false positive ones. To improve rigid docking results, re-ranking is an effective method that helps relocate correct predictions into the top ranks, discriminating them from incorrect ones. In this paper, we propose a new re-ranking technique using a new energy-based scoring function, IFACEwat, a combined Interface Atomic Contact Energy (IFACE) and water-effect score. IFACEwat aims to further improve the discrimination of near-native structures by the initial rigid docking algorithm ZDOCK3.0.2. Unlike other re-ranking techniques, IFACEwat explicitly places interfacial water at the protein interfaces to account for water-mediated contacts during protein interactions. Results IFACEwat both increased the number of near-native structures and improved their ranks compared to the initial rigid docking ZDOCK3.0.2. In fact, IFACEwat achieved a success rate of 83.8% for Antigen/Antibody complexes, which is 10% better than ZDOCK3.0.2. Compared to another re-ranking technique, ZRANK, IFACEwat obtains success rates of 92.3% (8% better) and 90% (5% better) for medium and difficult cases, respectively. When compared with the recently published re-ranking method F2Dock, IFACEwat performed equivalently well or even better for several Antigen/Antibody complexes. Conclusions With the inclusion of interfacial water, IFACEwat improves most results of the initial rigid docking, especially for Antigen/Antibody complexes. The improvement is achieved by explicitly taking into account the contribution of water during protein interactions, which was ignored or not fully represented by the initial rigid docking and other re-ranking techniques. In addition, IFACEwat maintains the computational efficiency of the initial docking algorithm while improving both the ranks and the number of near-native structures found. As our implementation has so far targeted improving the results of ZDOCK3.0.2, particularly for Antigen/Antibody complexes, we expect future implementations to be made applicable to other initial rigid docking algorithms. PMID:25521441

  16. An improved algorithm for de-striping of ocean colour monitor imageries aided by measured sensor characteristics

    NASA Astrophysics Data System (ADS)

    Dutt, Ashutosh; Mishra, Ashish; Goswami, D. R.; Kumar, A. S. Kiran

    2016-05-01

    Push-broom sensors, in bands meant to study oceans, generally suffer from residual non-uniformity even after radiometric correction. In-orbit data from OCM-2 show pronounced striping in the lower bands. There have been many attempts and different approaches to solve the problem using the image data itself; the success of each algorithm depends on the quality of the uniform regions identified. In this paper, an image-based destriping algorithm is presented with constraints derived from a ground calibration exercise. The basis of the methodology is the determination of pixel-to-pixel non-uniformity through uniform segments identified and collected from a large number of images covering the dynamic range of the sensor. The results show the effectiveness of the algorithm over different targets. The performance is qualitatively evaluated by visual inspection and quantitatively measured by two parameters.
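
    A minimal moment-matching stand-in for the method: per-detector (per-column) gains and offsets estimated from pooled uniform segments, then applied to each scene. The estimation details here are assumptions, not the paper's calibration-constrained procedure:

      import numpy as np

      def estimate_gains_offsets(uniform_rows):
          """Per-column gain/offset that match each column's mean and
          standard deviation to the array-wide statistics."""
          col_mean = uniform_rows.mean(axis=0)
          col_std = uniform_rows.std(axis=0) + 1e-12
          gains = col_std / uniform_rows.std()
          offsets = col_mean - gains * uniform_rows.mean()
          return gains, offsets

      def destripe(img, gains, offsets):
          """Apply the per-column corrections to a scene."""
          return (img - offsets[np.newaxis, :]) / gains[np.newaxis, :]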

  17. K-means-clustering-based fiber nonlinearity equalization techniques for 64-QAM coherent optical communication system.

    PubMed

    Zhang, Junfeng; Chen, Wei; Gao, Mingyi; Shen, Gangxiang

    2017-10-30

    In this work, we propose two k-means-clustering-based algorithms to mitigate fiber nonlinearity for 64-quadrature amplitude modulation (64-QAM) signals: a training-sequence-assisted k-means algorithm and a blind k-means algorithm. We experimentally demonstrate the proposed k-means-clustering-based fiber nonlinearity mitigation techniques in a 75-Gb/s 64-QAM coherent optical communication system. The proposed algorithms have reduced clustering complexity and low data redundancy; they quickly find appropriate initial centroids and correctly select the cluster centroids, obtaining globally optimal solutions for large k. We measured the bit-error-ratio (BER) performance of the 64-QAM signal at different launch powers into 50 km of single-mode fiber; the proposed techniques greatly mitigate the signal impairments caused by amplified spontaneous emission noise and fiber Kerr nonlinearity, and improve the BER performance.
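
    The training-assisted variant can be sketched as Lloyd iterations on the complex received samples, with centroids seeded at the ideal constellation (or at training-symbol means) so that clusters keep their symbol labels; the unnormalized grid below is an assumption:

      import numpy as np

      def ideal_64qam():
          """Ideal 64-QAM points on the +/-1, 3, 5, 7 grid (unnormalized)."""
          levels = np.array([-7, -5, -3, -1, 1, 3, 5, 7], dtype=float)
          return (levels[:, None] + 1j * levels[None, :]).ravel()

      def kmeans_centroids(rx, centroids, n_iter=10):
          """Lloyd's algorithm on complex samples `rx`; returns the shifted
          centroids and the per-sample cluster (symbol) assignment."""
          for _ in range(n_iter):
              idx = np.argmin(np.abs(rx[:, None] - centroids[None, :]), axis=1)
              for k in range(len(centroids)):
                  members = rx[idx == k]
                  if members.size:
                      centroids[k] = members.mean()
          return centroids, idx

      # usage: centroids, symbols = kmeans_centroids(rx, ideal_64qam().copy())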

  18. Scene-based nonuniformity correction technique that exploits knowledge of the focal-plane array readout architecture.

    PubMed

    Narayanan, Balaji; Hardie, Russell C; Muse, Robert A

    2005-06-10

    Spatial fixed-pattern noise is a common and major problem in modern infrared imagers owing to the nonuniform response of the photodiodes in the focal plane array of the imaging system. In addition, the nonuniform response of the readout and digitization electronics, which multiplex the signals from the photodiodes, causes further nonuniformity. We describe a novel scene-based nonuniformity correction algorithm that treats the aggregate nonuniformity in separate stages. First, the nonuniformity from the readout amplifiers is corrected by use of knowledge of the readout architecture of the imaging system. Second, the nonuniformity resulting from the individual detectors is corrected with a nonlinear filter-based method. We demonstrate the performance of the proposed algorithm by applying it to simulated imagery and real infrared data. Quantitative results in terms of the mean absolute error and the signal-to-noise ratio are also presented to demonstrate the efficacy of the proposed algorithm. One advantage of the proposed algorithm is that it requires only a few frames to obtain high-quality corrections.

  19. Quantitative Evaluation of 2 Scatter-Correction Techniques for 18F-FDG Brain PET/MRI in Regard to MR-Based Attenuation Correction.

    PubMed

    Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika

    2017-10-01

    In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC on scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET/MR brain imaging. The SSS algorithm was not affected significantly by MRAC. The performance of the MC-SSS algorithm is comparable but not superior to TF-SSS, warranting further investigations of algorithm optimization and performance with different radiotracers and time-of-flight imaging. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  20. Towards Automated Three-Dimensional Tracking of Nephrons through Stacked Histological Image Sets

    PubMed Central

    Bhikha, Charita; Andreasen, Arne; Christensen, Erik I.; Letts, Robyn F. R.; Pantanowitz, Adam; Rubin, David M.; Thomsen, Jesper S.; Zhai, Xiao-Yue

    2015-01-01

    An automated approach for tracking individual nephrons through three-dimensional histological image sets of mouse and rat kidneys is presented. In a previous study, the available images were tracked manually through the image sets in order to explore renal microarchitecture. The purpose of the current research is to reduce the time and effort required to manually trace nephrons by creating an automated, intelligent system as a standard tool for such datasets. The algorithm is robust enough to isolate closely packed nephrons and track their convoluted paths despite a number of nonideal, interfering conditions such as local image distortions, artefacts, and interstitial tissue interference. The system comprises image preprocessing, feature extraction, and a custom graph-based tracking algorithm, which is validated by a rule base and a machine learning algorithm. A study of a selection of automatically tracked nephrons, when compared with manual tracking, yields a 95% tracking accuracy for structures in the cortex, while those in the medulla have lower accuracy due to narrower diameter and higher density. Limited manual intervention is introduced to improve tracking, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron. PMID:26170896

  1. Numerically stable, scalable formulas for parallel and online computation of higher-order multivariate central moments with arbitrary weights

    DOE PAGES

    Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth; ...

    2016-03-29

    Formulas for incremental or parallel computation of second-order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Such formulas are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results and improve them with arbitrary-order, numerically stable one-pass formulas, which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four, as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate of these formulas, utilizing the compound moments, in a practical large-scale scientific application.
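
    The second-order pairwise update that these arbitrary-order formulas generalize (due to Chan et al.) is compact enough to show directly:

      def combine_moments(na, ma, m2a, nb, mb, m2b):
          """Numerically stable pairwise combination of count, mean, and
          second central moment for two data partitions; partitions can be
          merged in any order, e.g. in a parallel tree reduction."""
          n = na + nb
          delta = mb - ma
          mean = ma + delta * nb / n
          m2 = m2a + m2b + delta * delta * na * nb / n
          return n, mean, m2

      # variance of the union of the two partitions: m2 / (n - 1)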

  2. A Damping Grid Strapdown Inertial Navigation System Based on a Kalman Filter for Ships in Polar Regions.

    PubMed

    Huang, Weiquan; Fang, Tao; Luo, Li; Zhao, Lin; Che, Fengzhu

    2017-07-03

    The grid strapdown inertial navigation system (SINS) used in polar navigation, like common SINS based on a geographic coordinate system, exhibits three kinds of periodic oscillation errors. For ships that have external information available to conduct a system reset regularly, suppressing the Schuler periodic oscillation is an effective way to enhance navigation accuracy. A Kalman filter based on the grid SINS error model, as applied to a ship, is established in this paper. The errors of the grid-level attitude angles can be accurately estimated when the external velocity contains a constant error, and correcting the errors of the grid-level attitude angles through feedback correction can then effectively dampen the Schuler periodic oscillation. The simulation results show that, with the aid of an external reference velocity, the proposed external level damping algorithm based on the Kalman filter can suppress the Schuler periodic oscillation effectively. Compared with the traditional external level damping algorithm based on a damping network, the proposed algorithm reduces the overshoot errors when the grid SINS is switched from the non-damping state to the damping state, effectively improving the navigation accuracy of the system.
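
    The damping loop is built around a standard linear Kalman predict/update cycle. A generic sketch follows, with the grid SINS state, transition, and measurement models abstracted into the matrix arguments; in the scheme described, the measurement z would be the difference between the SINS-indicated and external reference velocities, and the estimated attitude errors would be fed back to the navigation solution:

      import numpy as np

      def kalman_step(x, P, F, Q, H, R, z):
          """One predict/update cycle of a linear Kalman filter."""
          # predict
          x = F @ x
          P = F @ P @ F.T + Q
          # update with measurement z
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P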

  3. Towards Automated Three-Dimensional Tracking of Nephrons through Stacked Histological Image Sets.

    PubMed

    Bhikha, Charita; Andreasen, Arne; Christensen, Erik I; Letts, Robyn F R; Pantanowitz, Adam; Rubin, David M; Thomsen, Jesper S; Zhai, Xiao-Yue

    2015-01-01

    An automated approach for tracking individual nephrons through three-dimensional histological image sets of mouse and rat kidneys is presented. In a previous study, the available images were tracked manually through the image sets in order to explore renal microarchitecture. The purpose of the current research is to reduce the time and effort required to manually trace nephrons by creating an automated, intelligent system as a standard tool for such datasets. The algorithm is robust enough to isolate closely packed nephrons and track their convoluted paths despite a number of nonideal, interfering conditions such as local image distortions, artefacts, and interstitial tissue interference. The system comprises image preprocessing, feature extraction, and a custom graph-based tracking algorithm, which is validated by a rule base and a machine learning algorithm. A study of a selection of automatically tracked nephrons, when compared with manual tracking, yields a 95% tracking accuracy for structures in the cortex, while those in the medulla have lower accuracy due to narrower diameter and higher density. Limited manual intervention is introduced to improve tracking, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron.

  4. An adaptive optics approach for laser beam correction in turbulence utilizing a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.

    2015-09-01

    Adaptive optics has been widely used in the field of astronomy to correct for atmospheric turbulence while viewing images of celestial bodies. The slightly distorted incoming wavefronts are typically sensed with a Shack-Hartmann sensor and then corrected with a deformable mirror. Although this approach has proven to be effective for astronomical purposes, a new approach must be developed when correcting for the deep turbulence experienced in ground-to-ground optical systems. We propose the use of a modified plenoptic camera as a wavefront sensor capable of accurately representing an incoming wavefront that has been significantly distorted by strong turbulence conditions (C_n^2 < 10^{-13} m^{-2/3}). An intelligent correction algorithm can then be developed to reconstruct the perturbed wavefront and use this information to drive a deformable mirror capable of correcting the major distortions. After the large distortions have been corrected, a secondary mode utilizing more traditional adaptive optics algorithms can take over to fine-tune the wavefront correction. This two-stage algorithm can find use in free space optical communication systems, in directed energy applications, as well as for image correction purposes.

  5. Algorithm for Atmospheric Corrections of Aircraft and Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Fraser, Robert S.; Kaufman, Yoram J.; Ferrare, Richard A.; Mattoo, Shana

    1989-01-01

    A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 micron. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.
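
    To make the table-based inversion concrete, the following toy Python sketch interpolates a precomputed radiance-versus-reflectance table to recover surface reflectance from a measured radiance. The table values here are purely illustrative stand-ins, not the paper's radiative transfer results, and a real table would also be indexed by geometry and optical thickness.

    ```python
    import numpy as np

    # Toy monotonic "lookup table": top-of-atmosphere radiance vs. surface
    # reflectance for one fixed geometry and aerosol optical thickness.
    rho_grid = np.linspace(0.0, 0.8, 41)                       # surface reflectance
    L_toa = 0.02 + 0.15 * rho_grid / (1.0 - 0.1 * rho_grid)    # illustrative values

    def surface_reflectance(L_measured):
        # The table is monotonic, so 1-D interpolation inverts it directly.
        return np.interp(L_measured, L_toa, rho_grid)

    print(surface_reflectance(0.05))  # reflectance implied by a measured radiance
    ```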

  6. Algorithm for atmospheric corrections of aircraft and satellite imagery

    NASA Technical Reports Server (NTRS)

    Fraser, R. S.; Ferrare, R. A.; Kaufman, Y. J.; Markham, B. L.; Mattoo, S.

    1992-01-01

    A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 microns. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.

  7. Approximate string matching algorithms for limited-vocabulary OCR output correction

    NASA Astrophysics Data System (ADS)

    Lasko, Thomas A.; Hauser, Susan E.

    2000-12-01

    Five methods for matching words mistranslated by optical character recognition to their most likely match in a reference dictionary were tested on data from the archives of the National Library of Medicine. The methods, which included an adaptation of the cross-correlation algorithm, the generic edit distance algorithm, the edit distance algorithm with a probabilistic substitution matrix, Bayesian analysis, and Bayesian analysis on an actively thinned reference dictionary, were implemented and their accuracy rates compared. Of the five, the Bayesian algorithm produced the most correct matches (87%) and had the advantage of producing scores with a useful and practical interpretation.
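
    As a concrete illustration of one of the tested families of methods, the sketch below implements a dynamic-programming edit distance with a per-pair substitution cost, in the spirit of the probabilistic substitution matrix variant; the cost function shown is a hypothetical stand-in for a learned OCR confusion matrix.

    ```python
    def weighted_edit_distance(a, b, sub_cost):
        """Edit distance with a per-pair substitution cost."""
        m, n = len(a), len(b)
        d = [[0.0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            d[i][0] = float(i)
        for j in range(1, n + 1):
            d[0][j] = float(j)
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0.0 if a[i - 1] == b[j - 1] else sub_cost(a[i - 1], b[j - 1])
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[m][n]

    # Rank dictionary words against an OCR token; cheap costs for common confusions.
    confusions = {("l", "1"), ("1", "l"), ("O", "0"), ("0", "O")}
    sub = lambda x, y: 0.4 if (x, y) in confusions else 1.0
    print(weighted_edit_distance("ce1l", "cell", sub))
    ```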

  8. Calibration free beam hardening correction for cardiac CT perfusion imaging

    NASA Astrophysics Data System (ADS)

    Levi, Jacob; Fahmi, Rachid; Eck, Brendan L.; Fares, Anas; Wu, Hao; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

    Myocardial perfusion imaging using CT (MPI-CT) and coronary CTA have the potential to make CT an ideal noninvasive gatekeeper for invasive coronary angiography. However, beam hardening artifacts (BHA) prevent accurate blood flow calculation in MPI-CT. Beam hardening correction (BHC) methods require either energy-sensitive CT, which is not widely available, or, typically, a calibration-based method. We developed a calibration-free, automatic BHC (ABHC) method suitable for MPI-CT. The algorithm works with any BHC method and iteratively determines model parameters using a proposed BHA-specific cost function. In this work, we use the polynomial BHC extended to three materials. The image is segmented into soft tissue, bone, and iodine images, based on mean HU and temporal enhancement. Forward projections of the bone and iodine images are obtained, and in each iteration a polynomial correction is applied. Corrections are then back projected and combined to obtain the current iteration's BHC image. This process is iterated until the cost is minimized. We evaluate the algorithm on simulated and physical phantom images and on preclinical MPI-CT data. The scans were obtained on a prototype spectral detector CT (SDCT) scanner (Philips Healthcare). Mono-energetic reconstructed images were used as the reference. In the simulated phantom, BH streak artifacts were reduced from 12 ± 2 HU to 1 ± 1 HU and cupping was reduced by 81%. Similarly, in the physical phantom, BH streak artifacts were reduced from 48 ± 6 HU to 1 ± 5 HU and cupping was reduced by 86%. In preclinical MPI-CT images, BHA was reduced from 28 ± 6 HU to less than 4 ± 4 HU at peak enhancement. These results suggest that the algorithm can be used to reduce BHA in conventional CT and improve MPI-CT accuracy.

  9. Automatic classification of the sub-techniques (gears) used in cross-country ski skating employing a mobile phone.

    PubMed

    Stöggl, Thomas; Holst, Anders; Jonasson, Arndt; Andersson, Erik; Wunsch, Tobias; Norström, Christer; Holmberg, Hans-Christer

    2014-10-31

    The purpose of the current study was to develop and validate an automatic algorithm for classification of cross-country (XC) ski-skating gears (G) using Smartphone accelerometer data. Eleven XC skiers (seven men, four women) with regional-to-international levels of performance carried out roller skiing trials on a treadmill using fixed gears (G2left, G2right, G3, G4left, G4right) and a 950-m trial using different speeds and inclines, applying gears and sides as they normally would. Gear classification by the Smartphone (worn on the chest) was compared with classification based on video recordings. For machine learning, a collective database was compared to individual data. The Smartphone application identified the trials with fixed gears correctly in all cases. In the 950-m trial, participants executed 140 ± 22 cycles as assessed by video analysis, with the automatic Smartphone application giving a similar value. Based on collective data, gears were identified correctly 86.0% ± 8.9% of the time, a value that rose to 90.3% ± 4.1% (P < 0.01) with machine learning from individual data. Classification was most often incorrect during transition between gears, especially to or from G3. Identification was most often correct for skiers who made relatively few transitions between gears. The accuracy of the automatic procedure for identifying G2left, G2right, G3, G4left and G4right was 96%, 90%, 81%, 88% and 94%, respectively. The algorithm identified gears correctly 100% of the time when a single gear was used and 90% of the time when different gears were employed during a variable protocol. This algorithm could be improved with respect to identification of transitions between gears or the side employed within a given gear.

  10. Distributed Algorithm for Voronoi Partition of Wireless Sensor Networks with a Limited Sensing Range.

    PubMed

    He, Chenlong; Feng, Zuren; Ren, Zhigang

    2018-02-03

    For Wireless Sensor Networks (WSNs), the Voronoi partition of a region is a challenging problem owing to the limited sensing ability of each sensor and the distributed organization of the network. In this paper, an algorithm is proposed for each sensor having a limited sensing range to compute its limited Voronoi cell autonomously, so that the limited Voronoi partition of the entire WSN is generated in a distributed manner. Inspired by Graham's Scan (GS) algorithm used to compute the convex hull of a point set, the limited Voronoi cell of each sensor is obtained by sequentially scanning two consecutive bisectors between the sensor and its neighbors. The proposed algorithm, called the Boundary Scan (BS) algorithm, has a lower computational complexity than the existing Range-Constrained Voronoi Cell (RCVC) algorithm and reaches the lower bound of the computational complexity of algorithms that solve this kind of problem. Moreover, it also improves the time efficiency of a key step in the Adjust-Sensing-Radius (ASR) algorithm used to compute the exact Voronoi cell. Extensive numerical simulations are performed to demonstrate the correctness and effectiveness of the BS algorithm. The distributed realization of the BS algorithm combined with a localization algorithm in WSNs is used to justify the WSN nature of the proposed algorithm.
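
    The core geometric step, clipping a sensor's current cell by the perpendicular bisector with one neighbor, can be sketched as below (Sutherland-Hodgman style half-plane clipping). This is a minimal illustration only, not the BS algorithm itself; the bisector scanning order and the sensing-range disc are omitted.

    ```python
    def clip_by_bisector(cell, p, q):
        """Keep the part of polygon `cell` closer to sensor p than to neighbor q."""
        def inside(v):  # closer to p than to q (on p's side of the bisector)
            return ((v[0] - p[0]) ** 2 + (v[1] - p[1]) ** 2
                    <= (v[0] - q[0]) ** 2 + (v[1] - q[1]) ** 2)

        def crossing(a, b):  # point on segment a-b lying on the bisector
            mx, my = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2   # point on bisector
            nx, ny = q[0] - p[0], q[1] - p[1]               # bisector normal
            dx, dy = b[0] - a[0], b[1] - a[1]
            t = ((mx - a[0]) * nx + (my - a[1]) * ny) / (dx * nx + dy * ny)
            return (a[0] + t * dx, a[1] + t * dy)

        out = []
        for i, a in enumerate(cell):
            b = cell[(i + 1) % len(cell)]
            if inside(a):
                out.append(a)
                if not inside(b):
                    out.append(crossing(a, b))
            elif inside(b):
                out.append(crossing(a, b))
        return out

    # Unit-square cell around p, clipped against one neighbor q (bisector x = 0.7).
    print(clip_by_bisector([(0, 0), (1, 0), (1, 1), (0, 1)], (0.2, 0.5), (1.2, 0.5)))
    ```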

  11. Distributed Algorithm for Voronoi Partition of Wireless Sensor Networks with a Limited Sensing Range

    PubMed Central

    He, Chenlong; Feng, Zuren; Ren, Zhigang

    2018-01-01

    For Wireless Sensor Networks (WSNs), the Voronoi partition of a region is a challenging problem owing to the limited sensing ability of each sensor and the distributed organization of the network. In this paper, an algorithm is proposed for each sensor having a limited sensing range to compute its limited Voronoi cell autonomously, so that the limited Voronoi partition of the entire WSN is generated in a distributed manner. Inspired by Graham’s Scan (GS) algorithm used to compute the convex hull of a point set, the limited Voronoi cell of each sensor is obtained by sequentially scanning two consecutive bisectors between the sensor and its neighbors. The proposed algorithm, called the Boundary Scan (BS) algorithm, has a lower computational complexity than the existing Range-Constrained Voronoi Cell (RCVC) algorithm and reaches the lower bound of the computational complexity of algorithms that solve this kind of problem. Moreover, it also improves the time efficiency of a key step in the Adjust-Sensing-Radius (ASR) algorithm used to compute the exact Voronoi cell. Extensive numerical simulations are performed to demonstrate the correctness and effectiveness of the BS algorithm. The distributed realization of the BS algorithm combined with a localization algorithm in WSNs is used to justify the WSN nature of the proposed algorithm. PMID:29401649

  12. Evaluation of two Vaisala RS92 radiosonde solar radiative dry bias correction algorithms

    DOE PAGES

    Dzambo, Andrew M.; Turner, David D.; Mlawer, Eli J.

    2016-04-12

    Solar heating of the relative humidity (RH) probe on Vaisala RS92 radiosondes results in a large dry bias in the upper troposphere. Two different algorithms (Miloshevich et al., 2009, MILO hereafter; and Wang et al., 2013, WANG hereafter) have been designed to account for this solar radiative dry bias (SRDB). These corrections are markedly different, with MILO adding up to 40 % more moisture to the original radiosonde profile than WANG; however, the impact of the two algorithms varies with height. The accuracy of these two algorithms is evaluated using three different approaches: a comparison of precipitable water vapor (PWV), downwelling radiative closure with a surface-based microwave radiometer at a high-altitude site (5.3 km m.s.l.), and upwelling radiative closure with the space-based Atmospheric Infrared Sounder (AIRS). The PWV computed from the uncorrected and corrected RH data is compared against PWV retrieved from ground-based microwave radiometers at tropical, midlatitude, and arctic sites. Although MILO generally adds more moisture to the original radiosonde profile in the upper troposphere compared to WANG, both corrections yield similar changes to the PWV, and the corrected data agree well with the ground-based retrievals. The two closure activities – done for clear-sky scenes – use the radiative transfer models MonoRTM and LBLRTM to compute radiance from the radiosonde profiles to compare against spectral observations. Both WANG- and MILO-corrected RHs are statistically better than the original RH in all cases except for the driest 30 % of cases in the downwelling experiment, where both algorithms add too much water vapor to the original profile. In the upwelling experiment, the RH correction applied by the WANG vs. MILO algorithm is statistically different above 10 km for the driest 30 % of cases and above 8 km for the moistest 30 % of cases, suggesting that the MILO correction performs better than the WANG in clear-sky scenes. Lastly, the cause of this statistical significance is likely explained by the fact that the WANG correction also accounts for cloud cover, a condition not accounted for in the radiance closure experiments.

  13. Evaluation of two Vaisala RS92 radiosonde solar radiative dry bias correction algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dzambo, Andrew M.; Turner, David D.; Mlawer, Eli J.

    Solar heating of the relative humidity (RH) probe on Vaisala RS92 radiosondes results in a large dry bias in the upper troposphere. Two different algorithms (Miloshevich et al., 2009, MILO hereafter; and Wang et al., 2013, WANG hereafter) have been designed to account for this solar radiative dry bias (SRDB). These corrections are markedly different, with MILO adding up to 40 % more moisture to the original radiosonde profile than WANG; however, the impact of the two algorithms varies with height. The accuracy of these two algorithms is evaluated using three different approaches: a comparison of precipitable water vapor (PWV), downwelling radiative closure with a surface-based microwave radiometer at a high-altitude site (5.3 km m.s.l.), and upwelling radiative closure with the space-based Atmospheric Infrared Sounder (AIRS). The PWV computed from the uncorrected and corrected RH data is compared against PWV retrieved from ground-based microwave radiometers at tropical, midlatitude, and arctic sites. Although MILO generally adds more moisture to the original radiosonde profile in the upper troposphere compared to WANG, both corrections yield similar changes to the PWV, and the corrected data agree well with the ground-based retrievals. The two closure activities – done for clear-sky scenes – use the radiative transfer models MonoRTM and LBLRTM to compute radiance from the radiosonde profiles to compare against spectral observations. Both WANG- and MILO-corrected RHs are statistically better than the original RH in all cases except for the driest 30 % of cases in the downwelling experiment, where both algorithms add too much water vapor to the original profile. In the upwelling experiment, the RH correction applied by the WANG vs. MILO algorithm is statistically different above 10 km for the driest 30 % of cases and above 8 km for the moistest 30 % of cases, suggesting that the MILO correction performs better than the WANG in clear-sky scenes. Lastly, the cause of this statistical significance is likely explained by the fact that the WANG correction also accounts for cloud cover, a condition not accounted for in the radiance closure experiments.

  14. Unmasking Upstream Gene Expression Regulators with miRNA-corrected mRNA Data

    PubMed Central

    Bollmann, Stephanie; Bu, Dengpan; Wang, Jiaqi; Bionaz, Massimo

    2015-01-01

    Expressed micro-RNA (miRNA) affects messenger RNA (mRNA) abundance, hindering the accuracy of upstream regulator analysis. Our objective was to provide an algorithm to correct such bias. Large mRNA and miRNA analyses were performed on RNA extracted from bovine liver and mammary tissue. Using four levels of target scores from TargetScan (all miRNA:mRNA target gene pairs or only the top 25%, 50%, or 75%) and four levels of the magnitude of miRNA effect (ME) on mRNA expression (30%, 50%, 75%, and 83% mRNA reduction), we generated 17 different datasets (including the original dataset). For each dataset, we performed upstream regulator analysis using two bioinformatics tools. We detected an increased effect on the upstream regulator analysis with larger miRNA:mRNA pair bins and higher ME. The miRNA correction allowed identification of several upstream regulators not present in the analysis of the original dataset. Thus, the proposed algorithm improved the prediction of upstream regulators. PMID:27279737

  15. High-order Newton-penalty algorithms

    NASA Astrophysics Data System (ADS)

    Dussault, Jean-Pierre

    2005-10-01

    Recent efforts in differentiable non-linear programming have been focused on interior point methods, akin to penalty and barrier algorithms. In this paper, we address the classical equality constrained program solved using the simple quadratic loss penalty function/algorithm. The suggestion to use extrapolations to track the differentiable trajectory associated with penalized subproblems goes back to the classic monograph of Fiacco & McCormick. This idea was further developed by Gould, who obtained a two-step quadratically convergent algorithm using prediction steps and Newton correction. Dussault interpreted the prediction step as a combined extrapolation with respect to the penalty parameter and the residual of the first order optimality conditions. Extrapolation with respect to the residual coincides with a Newton step. We explore here higher-order extrapolations, thus higher-order Newton-like methods. We first consider high-order variants of the Newton-Raphson method applied to non-linear systems of equations. Next, we obtain improved asymptotic convergence results for the quadratic loss penalty algorithm by using high-order extrapolation steps.
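
    A toy Python sketch of the quadratic-loss penalty path illustrates the setting: the penalized subproblems are solved by Newton's method as the penalty parameter shrinks, here on a one-dimensional example and without the higher-order extrapolation steps the paper develops.

    ```python
    def newton_penalty(mu0=1.0, shrink=0.1, outer=6, inner=20, tol=1e-12):
        # Minimize f(x) = x**2 subject to c(x) = x - 1 = 0 via the quadratic
        # loss penalty phi_mu(x) = f(x) + c(x)**2 / (2*mu), following the
        # differentiable trajectory x(mu) = 1 / (1 + 2*mu) as mu -> 0.
        x, mu = 0.0, mu0
        for _ in range(outer):
            for _ in range(inner):
                grad = 2 * x + (x - 1) / mu   # phi'(x)
                hess = 2 + 1 / mu             # phi''(x)
                step = grad / hess            # Newton correction
                x -= step
                if abs(step) < tol:
                    break
            mu *= shrink                      # tighten the penalty
        return x

    print(newton_penalty())  # approaches the constrained minimizer x* = 1
    ```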

  16. Automated spike sorting algorithm based on Laplacian eigenmaps and k-means clustering.

    PubMed

    Chah, E; Hok, V; Della-Chiesa, A; Miller, J J H; O'Mara, S M; Reilly, R B

    2011-02-01

    This study presents a new automatic spike sorting method based on feature extraction by Laplacian eigenmaps combined with k-means clustering. The performance of the proposed method was compared against previously reported algorithms such as principal component analysis (PCA) and amplitude-based feature extraction. Two types of classifier (namely k-means and classification expectation-maximization) were incorporated within the spike sorting algorithms, in order to find a suitable classifier for the feature sets. Simulated data sets and in-vivo tetrode multichannel recordings were employed to assess the performance of the spike sorting algorithms. The results show that the proposed algorithm yields significantly improved performance, with a mean sorting accuracy of 73% and sorting error of 10%, compared to PCA, which combined with k-means had a sorting accuracy of 58% and sorting error of 10%.
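
    The feature-extraction-plus-clustering pipeline can be sketched with standard tools. The following uses scikit-learn's SpectralEmbedding (an implementation of Laplacian eigenmaps) and KMeans on synthetic "spike" waveforms; it is only a schematic stand-in for the study's implementation, and the data here are fabricated for illustration.

    ```python
    import numpy as np
    from sklearn.manifold import SpectralEmbedding  # Laplacian eigenmaps
    from sklearn.cluster import KMeans

    # Three synthetic "units": 100 fake spike waveforms each, 32 samples long.
    rng = np.random.default_rng(0)
    spikes = np.vstack([rng.normal(mu, 0.3, size=(100, 32))
                        for mu in (-1.0, 0.0, 1.0)])

    # Nonlinear feature extraction followed by k-means clustering.
    features = SpectralEmbedding(n_components=3, n_neighbors=10).fit_transform(spikes)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
    print(np.bincount(labels))
    ```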

  17. Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods

    NASA Astrophysics Data System (ADS)

    Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong

    2008-12-01

    Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel, a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to incorrectly converge due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that will correctly handle colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our introduced algorithm under low SNR conditions. Simulation results show the superior performance of our proposed methods.

  18. Quantification of regional myocardial blood flow estimation with three-dimensional dynamic rubidium-82 PET and modified spillover correction model.

    PubMed

    Katoh, Chietsugu; Yoshinaga, Keiichiro; Klein, Ran; Kasai, Katsuhiko; Tomiyama, Yuuki; Manabe, Osamu; Naya, Masanao; Sakakibara, Mamoru; Tsutsui, Hiroyuki; deKemp, Robert A; Tamaki, Nagara

    2012-08-01

    Myocardial blood flow (MBF) estimation with (82)Rubidium ((82)Rb) positron emission tomography (PET) is technically difficult because of the high spillover between regions of interest, especially due to the long positron range. We sought to develop a new algorithm to reduce the spillover in image-derived blood activity curves, using non-uniform weighted least-squares fitting. Fourteen volunteers underwent imaging with both 3-dimensional (3D) (82)Rb and (15)O-water PET at rest and during pharmacological stress. Whole left ventricular (LV) (82)Rb MBF was estimated using a one-compartment model, including a myocardium-to-blood spillover correction to estimate the corresponding blood input function Ca(t)(whole). Regional K1 values were calculated using this uniform global input function, which simplifies equations and enables robust estimation of MBF. To assess the robustness of the modified algorithm, inter-operator repeatability of 3D (82)Rb MBF was compared with a previously established method. Whole LV correlation of (82)Rb MBF with (15)O-water MBF was better (P < .01) with the modified spillover correction method (r = 0.92 vs r = 0.60). The modified method also yielded significantly improved inter-operator repeatability of regional MBF quantification (r = 0.89) versus the established method (r = 0.82) (P < .01). A uniform global input function can suppress LV spillover into the image-derived blood input function, resulting in improved precision for MBF quantification with 3D (82)Rb PET.

  19. An improved interatomic potential for xenon in UO2: a combined density functional theory/genetic algorithm approach.

    PubMed

    Thompson, Alexander E; Meredig, Bryce; Wolverton, C

    2014-03-12

    We have created an improved xenon interatomic potential for use with existing UO2 potentials. This potential was fit to density functional theory calculations with the Hubbard U correction (DFT + U) using a genetic algorithm approach called iterative potential refinement (IPR). We examine the defect energetics of the IPR-fitted xenon interatomic potential as well as other, previously published xenon potentials. We compare these potentials to DFT + U derived energetics for a series of xenon defects in a variety of incorporation sites (large, intermediate, and small vacant sites). We find the existing xenon potentials overestimate the energy needed to add a xenon atom to a wide set of defect sites representing a range of incorporation sites, including failing to correctly rank the energetics of the small incorporation site defects (xenon in an interstitial and xenon in a uranium site neighboring uranium in an interstitial). These failures are due to problematic descriptions of Xe-O and/or Xe-U interactions of the previous xenon potentials. These failures are corrected by our newly created xenon potential: our IPR-generated potential gives good agreement with DFT + U calculations to which it was not fitted, such as xenon in an interstitial (small incorporation site) and xenon in a double Schottky defect cluster (large incorporation site). Finally, we note that IPR is very flexible and can be applied to a wide variety of potential forms and materials systems, including metals and EAM potentials.

  20. Protein docking prediction using predicted protein-protein interface.

    PubMed

    Li, Bin; Kihara, Daisuke

    2012-01-10

    Many important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein docking prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within top ranks among alternative conformations. We present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction for guiding protein docking. Since the accuracy of protein binding site prediction varies depending on cases, the challenge is to develop a method which does not deteriorate but improves docking results by using a binding site prediction which may not be 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pairwise protein docking prediction algorithm, LZerD, which we developed earlier. PI-LZerD starts by performing docking prediction using the provided protein-protein binding interface prediction as constraints, which is followed by a second round of docking with updated docking interface information to further improve the docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves the docking prediction accuracy as compared with docking without using binding site prediction or using the binding site prediction as post-filtering. We have developed PI-LZerD, a pairwise docking algorithm, which uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy over alternative methods in a series of benchmark experiments, including docking using actual docking interface site predictions as well as unbound docking cases.

  1. Going Vertical To Improve the Accuracy of Atomic Force Microscopy Based Single-Molecule Force Spectroscopy.

    PubMed

    Walder, Robert; Van Patten, William J; Adhikari, Ayush; Perkins, Thomas T

    2018-01-23

    Single-molecule force spectroscopy (SMFS) is a powerful technique to characterize the energy landscape of individual proteins, the mechanical properties of nucleic acids, and the strength of receptor-ligand interactions. Atomic force microscopy (AFM)-based SMFS benefits from ongoing progress in improving the precision and stability of cantilevers and the AFM itself. Underappreciated is that the accuracy of such AFM studies remains hindered by inadvertently stretching molecules at an angle while measuring only the vertical component of the force and extension, degrading both measurements. This inaccuracy is particularly problematic in AFM studies using double-stranded DNA and RNA due to their large persistence length (p ≈ 50 nm), often limiting such studies to other SMFS platforms (e.g., custom-built optical and magnetic tweezers). Here, we developed an automated algorithm that aligns the AFM tip above the DNA's attachment point to a coverslip. Importantly, this algorithm was performed at low force (10-20 pN) and relatively fast (15-25 s), preserving the connection between the tip and the target molecule. Our data revealed large uncorrected lateral offsets for 100 and 650 nm DNA molecules [24 ± 18 nm (mean ± standard deviation) and 180 ± 110 nm, respectively]. Correcting this offset yielded a 3-fold improvement in accuracy and precision when characterizing DNA's overstretching transition. We also demonstrated high throughput by acquiring 88 geometrically corrected force-extension curves of a single individual 100 nm DNA molecule in ∼40 min and versatility by aligning polyprotein- and PEG-based protein-ligand assays. Importantly, our software-based algorithm was implemented on a commercial AFM, so it can be broadly adopted. More generally, this work illustrates how to enhance AFM-based SMFS by developing more sophisticated data-acquisition protocols.

  2. An adaptive tensor voting algorithm combined with texture spectrum

    NASA Astrophysics Data System (ADS)

    Wang, Gang; Su, Qing-tang; Lü, Gao-huan; Zhang, Xiao-feng; Liu, Yu-huan; He, An-zhi

    2015-01-01

    An adaptive tensor voting algorithm combined with the texture spectrum is proposed. The image texture spectrum is used to obtain the adaptive scale parameter of the voting field. The texture information then modifies both the attenuation coefficient and the attenuation field, so that the algorithm creates more significant and correct structures in the original image, consistent with human visual perception. At the same time, the proposed method can improve edge extraction quality, reducing flocculent regions efficiently and making the image clearer. In the experiment on extracting pavement cracks, the original pavement image is processed by the proposed method combined with a significant curve feature threshold procedure, and the resulting image displays the faint crack signals submerged in the complicated background efficiently and clearly.

  3. An efficient interior-point algorithm with new non-monotone line search filter method for nonlinear constrained programming

    NASA Astrophysics Data System (ADS)

    Wang, Liwei; Liu, Xinggao; Zhang, Zeyin

    2017-02-01

    An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.

  4. A First Order Wavefront Estimation Algorithm for P1640 Calibrator

    NASA Technical Reports Server (NTRS)

    Zhai, C.; Vasisht, G.; Shao, M.; Lockhart, T.; Cady, E.; Oppenheimer, B.; Burruss, R.; Roberts, J.; Beichman, C.; Brenner, D.

    2012-01-01

    The P1640 calibrator is a wavefront sensor working with the P1640 coronagraph and the Palomar 3000-actuator adaptive optics system (P3K) at the Palomar 200 inch Hale telescope. It measures the wavefront by interfering post-coronagraph light with a reference beam formed by low-pass filtering the blocked light from the coronagraph focal plane mask. The P1640 instrument has a similar architecture to the Gemini Planet Imager (GPI), and its performance is currently limited by quasi-static speckles due to non-common path wavefront errors, which arise because the light reaching the AO wavefront sensor and the light reaching the coronagraph mask travel different paths. By measuring the wavefront after the coronagraph mask, the non-common path wavefront error can be estimated and corrected by feeding the error signal back to the deformable mirror (DM) of the P3K AO system. Here, we present a first order wavefront estimation algorithm and an instrument calibration scheme used in experiments done recently at Palomar observatory. We calibrate the P1640 calibrator by measuring its responses to poking DM actuators in a sparse checkerboard pattern at different amplitudes. The calibration yields a complex normalization factor for wavefront estimation and establishes the registration of the DM actuators at the pupil camera of the P1640 calibrator, which is necessary for wavefront correction. Improvement of imaging quality after feeding the wavefront correction back to the AO system demonstrated the efficacy of the algorithm.

  5. Design Document for Differential GPS Ground Reference Station Pseudorange Correction Generation Algorithm

    DOT National Transportation Integrated Search

    1986-12-01

    The algorithms described in this report determine the differential corrections to be broadcast to users of the Global Positioning System (GPS) who require higher accuracy navigation or position information than the 30 to 100 meters that GPS normally ...

  6. Prior image constrained scatter correction in cone-beam computed tomography image-guided radiation therapy.

    PubMed

    Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong

    2011-02-21

    X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers and then reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
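
    The weight-fitting step described above reduces to a linear least-squares problem, sketched below; the generation of the basis images (reconstructions of the projection data raised to different powers) is assumed to have been done elsewhere, and this is an illustration of the idea rather than the paper's implementation.

    ```python
    import numpy as np

    def fit_weights(basis_images, prior_image):
        # Solve min_w || A w - prior ||_2, with one column of A per basis image.
        A = np.stack([b.ravel() for b in basis_images], axis=1)
        w, *_ = np.linalg.lstsq(A, prior_image.ravel(), rcond=None)
        return w

    def corrected_image(basis_images, w):
        # The corrected image is the weighted sum of the basis images.
        return sum(wi * bi for wi, bi in zip(w, basis_images))
    ```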

  7. Application of Deconvolution Algorithm of Point Spread Function in Improving Image Quality: An Observer Preference Study on Chest Radiography.

    PubMed

    Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho

    2018-01-01

    To evaluate observer preference for the image quality of chest radiography using the deconvolution algorithm of the point spread function (PSF) (TRUVIEW ART algorithm, DRTECH Corp.) compared with that of original chest radiography for visualization of anatomic regions of the chest. Fifty pairs of posteroanterior chest radiographs, prospectively collected with the standard protocol and with the additional TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by a scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality with a 5-point preference scale. The significance of the differences in readers' preference was tested with a Wilcoxon signed-rank test. All four readers preferred the images processed with the algorithm to those without it for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0; p < 0.001) and for overall image quality (mean, 3.8; range, 3.3-4.0; p < 0.001). The most preferred anatomical regions were the azygoesophageal recess, thoracic spine, and unobscured lung. The visibility of chest anatomical structures with the PSF deconvolution algorithm applied was superior to that of the original chest radiography.

  8. IsoCor: correcting MS data in isotope labeling experiments.

    PubMed

    Millard, Pierre; Letisse, Fabien; Sokol, Serguei; Portais, Jean-Charles

    2012-05-01

    Mass spectrometry (MS) is widely used for isotopic labeling studies of metabolism and other biological processes. Quantitative applications, e.g. metabolic flux analysis, require tools to correct the raw MS data for the contribution of all naturally abundant isotopes. IsoCor is a software tool that allows such a correction to be applied to any chemical species. Hence it can be used to exploit any isotopic tracer, from well-known ((13)C, (15)N, (18)O, etc.) to unusual ((57)Fe, (77)Se, etc.) isotopes. It also provides new features, e.g. correction for the isotopic purity of the tracer, to improve the accuracy of quantitative isotopic studies, and implements an efficient algorithm to process large datasets. Its user-friendly interface makes isotope labeling experiments more accessible to a wider biological community. IsoCor is distributed under an OpenSource license at http://metasys.insa-toulouse.fr/software/isocor/
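
    The core of such a correction can be written as a small linear system: the measured isotopologue intensities are modeled as the product of a natural-abundance correction matrix and the true labeling pattern. The sketch below is a minimal illustration of that idea, not IsoCor's actual implementation (which, per the abstract, also handles tracer purity); the numbers are a toy example.

    ```python
    import numpy as np

    def correct_natural_abundance(measured, C):
        # Solve C @ x = measured for the labeling pattern x (non-negativity
        # constraints and tracer-purity handling omitted in this sketch).
        x, *_ = np.linalg.lstsq(C, measured, rcond=None)
        return x

    # Toy example: a 2-carbon fragment, natural 13C abundance p per carbon.
    p = 0.011
    C = np.array([
        [(1 - p) ** 2,    0.0,    0.0],  # species with 0 labeled carbons
        [2 * p * (1 - p), 1 - p,  0.0],  # species with 1 labeled carbon
        [p ** 2,          p,      1.0],  # species with 2 labeled carbons
    ])
    print(correct_natural_abundance(np.array([0.49, 0.35, 0.16]), C))
    ```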

  9. Ion trace detection algorithm to extract pure ion chromatograms to improve untargeted peak detection quality for liquid chromatography/time-of-flight mass spectrometry-based metabolomics data.

    PubMed

    Wang, San-Yuan; Kuo, Ching-Hua; Tseng, Yufeng J

    2015-03-03

    Able to detect known and unknown metabolites, untargeted metabolomics has shown great potential in identifying novel biomarkers. However, elucidating all possible liquid chromatography/time-of-flight mass spectrometry (LC/TOF-MS) ion signals in a complex biological sample remains challenging, since many ions are not the products of metabolites. Methods of reducing ions not related to metabolites, or of directly detecting metabolite-related (pure) ions, are therefore important. In this work, we describe PITracer, a novel algorithm that accurately detects the pure ions of an LC/TOF-MS profile to extract pure ion chromatograms and detect chromatographic peaks. PITracer estimates the relative mass difference tolerance of ions and calibrates the mass-over-charge (m/z) values for peak detection algorithms, with an additional option for further mass correction with respect to a user-specified metabolite. PITracer was evaluated using two data sets containing 373 human metabolite standards, including 5 saturated standards considered to be split peaks resulting from large m/z fluctuations, and 12 urine samples spiked with 50 forensic drugs of varying concentrations. Analyses of these data sets show that PITracer outperformed an existing state-of-the-art algorithm, extracted the pure ion chromatograms of the 5 saturated standards without generating split peaks, and detected the forensic drugs with high recall, precision, and F-score and small mass error.

  10. Design of the OMPS limb sensor correction algorithm

    NASA Astrophysics Data System (ADS)

    Jaross, Glen; McPeters, Richard; Seftor, Colin; Kowitt, Mark

    The Sensor Data Records (SDR) for the Ozone Mapping and Profiler Suite (OMPS) on NPOESS (National Polar-orbiting Operational Environmental Satellite System) contain geolocated and calibrated radiances, and are similar to the Level 1 data of the NASA Earth Observing System and other programs. The SDR algorithms (one for each of the 3 OMPS focal planes) are the processes by which the Raw Data Records (RDR) from the OMPS sensors are converted into the records that contain all data necessary for ozone retrievals. Consequently, the algorithms must correct and calibrate Earth signals, geolocate the data, and identify and ingest collocated ancillary data. As with other limb sensors, ozone profile retrievals are relatively insensitive to calibration errors due to the use of altitude normalization and wavelength pairing. But the profile retrievals as they pertain to OMPS are not immune to sensor changes. In particular, the OMPS Limb sensor images an altitude range of > 100 km and a spectral range of 290-1000 nm on its detector. Uncorrected sensor degradation and spectral registration drifts can lead to changes in the measured radiance profile, which in turn affect the ozone trend measurement. Since OMPS is intended for long-term monitoring, sensor calibration is a specific concern. The calibration is maintained via the ground data processing. This means that all sensor calibration data, including direct solar measurements, are brought down in the raw data and processed separately by the SDR algorithms. One of the sensor corrections performed by the algorithm is the correction for stray light. The imaging spectrometer and the unique focal plane design of OMPS make these corrections particularly challenging and important. Following an overview of the algorithm flow, we will briefly describe the sensor stray light characterization and the correction approach used in the code.

  11. Evaluation of a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography of scaphoid fixation screws.

    PubMed

    Filli, Lukas; Marcon, Magda; Scholz, Bernhard; Calcagni, Maurizio; Finkenstädt, Tim; Andreisek, Gustav; Guggenberger, Roman

    2014-12-01

    The aim of this study was to evaluate a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography (FDCT) of scaphoid fixation screws. FDCT has gained interest in imaging small anatomic structures of the appendicular skeleton. Angiographic C-arm systems with flat detectors allow fluoroscopy and FDCT imaging in a one-stop procedure emphasizing their role as an ideal intraoperative imaging tool. However, FDCT imaging can be significantly impaired by artefacts induced by fixation screws. Following ethical board approval, commercially available scaphoid fixation screws were inserted into six cadaveric specimens in order to fix artificially induced scaphoid fractures. FDCT images corrected with the algorithm were compared to uncorrected images both quantitatively and qualitatively by two independent radiologists in terms of artefacts, screw contour, fracture line visibility, bone visibility, and soft tissue definition. Normal distribution of variables was evaluated using the Kolmogorov-Smirnov test. In case of normal distribution, quantitative variables were compared using paired Student's t tests. The Wilcoxon signed-rank test was used for quantitative variables without normal distribution and all qualitative variables. A p value of < 0.05 was considered to indicate statistically significant differences. Metal artefacts were significantly reduced by the correction algorithm (p < 0.001), and the fracture line was more clearly defined (p < 0.01). The inter-observer reliability was "almost perfect" (intra-class correlation coefficient 0.85, p < 0.001). The prototype correction algorithm in FDCT for metal artefacts induced by scaphoid fixation screws may facilitate intra- and postoperative follow-up imaging. Flat detector computed tomography (FDCT) is a helpful imaging tool for scaphoid fixation. The correction algorithm significantly reduces artefacts in FDCT induced by scaphoid fixation screws. This may facilitate intra- and postoperative follow-up imaging.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paudel, M; currently at University of Toronto, Sunnybrook Health Sciences Center, Toronto, ON; MacKenzie, M

    Purpose: To evaluate the metal artifacts in diagnostic kVCT images of patients that are corrected using a normalized metal artifact reduction method with MVCT prior images, MVCT-NMAR. Methods: An MVCT-NMAR algorithm was developed and applied to five patients: three with bilateral hip prostheses, one with a unilateral hip prosthesis, and one with dental fillings. The corrected images were evaluated for visualization of tissue structures and their interfaces, and for radiotherapy dose calculations. They were also compared against the corresponding images corrected by a commercial metal artifact reduction technique, O-MAR, on a Philips™ CT scanner. Results: The use of MVCT images for correcting kVCT images in the MVCT-NMAR technique greatly reduces metal artifacts, avoids secondary artifacts, and makes patient images more useful for correct dose calculation in radiotherapy. These improvements are significant over the commercial correction method, provided the MVCT and kVCT images are correctly registered. The remaining and secondary artifacts (soft tissue blurring, eroded bones, false bones or air pockets, CT number cupping within the metal) present in O-MAR corrected images are removed in the MVCT-NMAR corrected images. Large dose reduction is possible outside the planning target volume (e.g., 59.2 Gy in comparison to 52.5 Gy in pubic bone) when these MVCT-NMAR corrected images are used in TomoTherapy™ treatment plans, as the corrected images no longer require directional blocks for prostate plans in order to avoid the image artifact regions. Conclusion: The use of MVCT-NMAR corrected images in radiotherapy treatment planning could improve the treatment plan quality for cancer patients with metallic implants. Moti Raj Paudel is supported by the Vanier Canada Graduate Scholarship, the Endowed Graduate Scholarship in Oncology and the Dissertation Fellowship at the University of Alberta. The authors acknowledge the CIHR operating grant number MOP 53254.

  13. [State Recognition of Solid Fermentation Process Based on Near Infrared Spectroscopy with Adaboost and Spectral Regression Discriminant Analysis].

    PubMed

    Yu, Shuang; Liu, Guo-hai; Xia, Rong-sheng; Jiang, Hui

    2016-01-01

    In order to achieve rapid monitoring of the process state of solid state fermentation (SSF), this study attempted qualitative identification of the process state of SSF of feed protein by use of Fourier transform near infrared (FT-NIR) spectroscopy. More specifically, FT-NIR spectroscopy combined with an Adaboost-SRDA-NN integrated learning algorithm was used to accurately and rapidly monitor chemical and physical changes in SSF of feed protein without the need for chemical analysis. Firstly, the raw spectra of all 140 fermentation samples were collected with a Fourier transform near infrared spectrometer (Antaris II), and the raw spectra were preprocessed with the standard normal variate transformation (SNV) spectral preprocessing algorithm. Thereafter, the characteristic information of the preprocessed spectra was extracted by spectral regression discriminant analysis (SRDA). Finally, the nearest neighbors (NN) algorithm was selected as the basic classifier, and a state recognition model was built to identify different fermentation samples in the validation set. Experimental results showed that the SRDA-NN model outperformed two other NN models developed with feature information from principal component analysis (PCA) and linear discriminant analysis (LDA), achieving a correct recognition rate of 94.28% in the validation set. In this work, to further improve the recognition accuracy of the final model, the Adaboost-SRDA-NN ensemble learning algorithm was proposed by integrating the Adaboost and SRDA-NN methods, and the presented algorithm was used to construct an online monitoring model of the process state of SSF of feed protein. Experimental results showed that the prediction performance of the SRDA-NN model was further enhanced by the Adaboost boosting algorithm, with the Adaboost-SRDA-NN model achieving a correct recognition rate of 100% in the validation set. The overall results demonstrate that the SRDA algorithm can effectively extract spectral feature information and reduce the spectral dimension in the model calibration process for qualitative NIR spectroscopy analysis. In addition, the Adaboost boosting algorithm can improve the classification accuracy of the final model. The results obtained in this work can provide a research foundation for developing online monitoring instruments for SSF processes.
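
    The SNV preprocessing step mentioned above is simple to state: each spectrum is centered and scaled by its own mean and standard deviation. A minimal sketch, assuming spectra are stored row-wise in an array:

    ```python
    import numpy as np

    def snv(spectra):
        """Standard normal variate: scale each spectrum by its own mean and SD."""
        spectra = np.asarray(spectra, dtype=float)
        mean = spectra.mean(axis=1, keepdims=True)
        std = spectra.std(axis=1, ddof=1, keepdims=True)
        return (spectra - mean) / std
    ```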

  14. An improved algorithm for wildfire detection

    NASA Astrophysics Data System (ADS)

    Nakau, K.

    2010-12-01

    Satellite information on wildfire location is in strong demand from society, so understanding those demands is quite important when considering how to improve a wildfire detection algorithm. Interviews and discussions were held with fire service agencies in Alaska and fire service volunteer groups in Indonesia, and they imply that the most important improvements are the geographical resolution of the wildfire product and the classification of fire as smoldering or flaming. The Alaska Fire Service (AFS) produces a 3D map overlaid with fire locations every morning. This 3D map is then examined by the leaders of fire service teams to decide their strategy for fighting the wildfire. In particular, firefighters of both agencies seek the best walking path to approach the fire. Because of the mountainous landscape, geospatial resolution is quite important for them: walking through bush for 1 km, the size of a single pixel of a fire product, is very tough for firefighters. Also, in the case of a remote wildfire, fire service agencies use satellite information to decide when to fly an observation aircraft to confirm the status: expanding, flaming, smoldering, or out. It is therefore also quite important to provide the classification of fire as flaming or smoldering. Beyond disaster management, wildfire emits a huge amount of carbon into the atmosphere, as much as one quarter to one half of the CO2 from fuel combustion (IPCC AR4), so reducing CO2 emissions from human-caused wildfire is important; to estimate carbon emissions from wildfire, spatial resolution is again quite important. To improve the sensitivity of wildfire detection, the author adopts radiance-based wildfire detection. Different from the existing brightness temperature approach, this makes it straightforward to consider the reflectance of the background land cover. For GCOM-C1/SGLI in particular, the band used to detect fire at 250 m resolution is at the 1.6 μm wavelength, where there is much more reflected sunlight, so a way to cancel the sunlight reflection is needed. In this study, the author uses a simple linear correction to estimate infrared emission while accounting for sunlight reflection. In addition to the brand-new core part of the wildfire algorithm, bright reflective features must be eliminated, including cloud, desert, and sun glint, as well as false alarms in coastal areas caused by the difference in surface temperature between land and ocean. The existing MOD14 algorithm has the same procedure; however, some of these ancillary parts are newly introduced or improved here. A snow mask is newly introduced to reduce false detections over bright snow- and ice-covered areas. The improved ancillary parts also include candidate selection of fire pixels, the cloud mask, and the water body mask. With these improvements, wildfires with dense smoke or wildfires under thin cloud can be detected by this algorithm. This wildfire product has not yet been validated against ground observations; however, its distribution corresponds well with wildfire locations in the same periods. Unfortunately, this algorithm also detects false alarms in urban areas, as the existing one does; this should be corrected by adopting other bands. The current algorithm will run on the JASMES website.

  15. Optical Sensors for Planetary Radiant Energy (OSPREy): Calibration and Validation of Current and Next-Generation NASA Missions

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B.; Bernhard, Germar; Morrow, John H.; Booth, Charles R.; Comer, Thomas; Lind, Randall N.; Quang, Vi

    2012-01-01

    A principal objective of the Optical Sensors for Planetary Radiance Energy (OSPREy) activity is to establish an above-water radiometer system as a lower-cost alternative to existing in-water systems for the collection of ground-truth observations. The goal is to be able to make high-quality measurements satisfying the accuracy requirements for the vicarious calibration and algorithm validation of next-generation satellites that make ocean color and atmospheric measurements. This means the measurements will have a documented uncertainty satisfying the established performance metrics for producing climate-quality data records. The OSPREy approach is based on enhancing commercial-off-the-shelf fixed-wavelength and hyperspectral sensors to create hybridspectral instruments with an improved accuracy and spectral resolution, as well as a dynamic range permitting sea, Sun, sky, and Moon observations. Greater spectral diversity in the ultraviolet (UV) will be exploited to separate the living and nonliving components of marine ecosystems; UV bands will also be used to flag and improve atmospheric correction algorithms in the presence of absorbing aerosols. The short-wave infrared (SWIR) is expected to improve atmospheric correction, because the ocean is radiometrically blacker at these wavelengths. This report describes the development of the sensors, including unique capabilities like three-axis polarimetry; the documented uncertainty will be presented in a subsequent report.

  16. Postprocessing for Air Quality Predictions

    NASA Astrophysics Data System (ADS)

    Delle Monache, L.

    2017-12-01

    In recent years, air quality (AQ) forecasting has made significant progress towards better predictions, with the goal of protecting the public from harmful pollutants. This progress is the result of improvements in weather and chemical transport models, their coupling, and more accurate emission inventories (e.g., with the development of new algorithms to account in near real-time for fires). Nevertheless, AQ predictions are still affected at times by significant biases which stem from limitations in both weather and chemistry transport models. Those are the result of numerical approximations and the poor representation (and understanding) of important physical and chemical processes. Moreover, although the quality of emission inventories has been significantly improved, they are still one of the main sources of uncertainty in AQ predictions. For operational real-time AQ forecasting, a significant portion of these biases can be reduced with the implementation of postprocessing methods. We will review some of the techniques that have been proposed to reduce both systematic and random errors of AQ predictions, and improve the correlation between predictions and observations of ground-level ozone and surface particulate matter less than 2.5 µm in diameter (PM2.5). These methods, which can be applied to both deterministic and probabilistic predictions, include simple bias-correction techniques, corrections inspired by the Kalman filter, regression methods, and the more recently developed analog-based algorithms. These approaches will be compared and contrasted, and the strengths and weaknesses of each will be discussed.
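
    As one concrete example of the family of methods above, a Kalman-filter-inspired running bias correction can be sketched in a few lines. The noise ratio r is a tuning parameter, and this simplified recursion is an illustration of the general idea, not a specific operational implementation.

    ```python
    def kf_bias_correction(forecasts, observations, r=0.05):
        # Recursively estimate the (slowly drifting) forecast bias and subtract
        # it from each new forecast; r is the process/observation noise ratio.
        bias, p = 0.0, 1.0
        corrected = []
        for f, o in zip(forecasts, observations):
            corrected.append(f - bias)            # apply the current bias estimate
            p = p + r                             # predict: bias may have drifted
            k = p / (p + 1.0)                     # Kalman gain (unit obs noise)
            bias = bias + k * ((f - o) - bias)    # update with the latest error
            p = (1.0 - k) * p
        return corrected
    ```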

  17. MRI-Based Nonrigid Motion Correction in Simultaneous PET/MRI

    PubMed Central

    Chun, Se Young; Reese, Timothy G.; Ouyang, Jinsong; Guerin, Bastien; Catana, Ciprian; Zhu, Xuping; Alpert, Nathaniel M.; El Fakhri, Georges

    2014-01-01

    Respiratory and cardiac motion is the most serious limitation to whole-body PET, resulting in spatial resolution close to 1 cm. Furthermore, motion-induced inconsistencies in the attenuation measurements often lead to significant artifacts in the reconstructed images. Gating can remove motion artifacts at the cost of increased noise. This paper presents an approach to respiratory motion correction using simultaneous PET/MRI, demonstrates initial results in phantoms, rabbits, and nonhuman primates, and discusses the prospects for clinical application. Methods: Studies with a deformable phantom, a free-breathing primate, and rabbits implanted with radioactive beads were performed with simultaneous PET/MRI. Motion fields were estimated from concurrently acquired tagged MR images using two B-spline nonrigid image registration methods and incorporated into a PET list-mode ordered-subsets expectation maximization (OSEM) algorithm. Using the measured motion fields to transform both the emission data and the attenuation data, we could use all the coincidence data to reconstruct any phase of the respiratory cycle. We compared the signal-to-noise ratio (SNR) and the channelized Hotelling observer (CHO) detection SNR of the motion-corrected reconstruction with the results obtained from standard gated and uncorrected studies. Results: Motion correction virtually eliminated motion blur without reducing SNR, yielding images with SNR comparable to those obtained by gating with 5–8 times longer acquisitions in all studies. The CHO study in dynamic phantoms demonstrated a significant improvement (166%–276%) in lesion detection SNR with MRI-based motion correction as compared with gating (P < 0.001). For large motion, the improvement was 43%–92% compared with lesion detection without motion correction (P < 0.001). CHO SNR in the rabbit studies confirmed these results. Conclusion: Tagged-MRI motion correction in simultaneous PET/MRI significantly improves lesion detection compared with respiratory gating and no motion correction, while reducing radiation dose. In vivo primate and rabbit studies confirmed the improvement in PET image quality and provide the rationale for evaluation in simultaneous whole-body PET/MRI clinical studies. PMID:22743250
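
    The reconstruction idea, using the measured motion fields so that every coincidence contributes to a single reference phase, can be sketched compactly. The dense-matrix MLEM below is a toy stand-in for the paper's list-mode OSEM; the system matrix A and per-gate warps W[g] are assumed inputs, and attenuation handling is omitted.

        import numpy as np

        def motion_compensated_mlem(y, A, W, n_iter=20):
            """Toy motion-compensated MLEM (illustrative, not the paper's code).
            y : dict gate -> measured counts for that respiratory gate
            A : system (projection) matrix, shape (n_bins, n_voxels)
            W : dict gate -> warp matrix mapping the reference image to that gate
            """
            n_vox = A.shape[1]
            x = np.ones(n_vox)
            # Sensitivity image: every gate back-projects through its own warp,
            # so all coincidences contribute to the one reference-phase image.
            sens = sum(W[g].T @ (A.T @ np.ones(A.shape[0])) for g in y)
            for _ in range(n_iter):
                backproj = np.zeros(n_vox)
                for g, yg in y.items():
                    proj = A @ (W[g] @ x) + 1e-12      # forward-project warped image
                    backproj += W[g].T @ (A.T @ (yg / proj))
                x *= backproj / (sens + 1e-12)         # multiplicative EM update
            return x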

  18. Imager for Mars Pathfinder (IMP) image calibration

    USGS Publications Warehouse

    Reid, R.J.; Smith, P.H.; Lemmon, M.; Tanner, R.; Burkland, M.; Wegryn, E.; Weinberg, J.; Marcialis, R.; Britt, D.T.; Thomas, N.; Kramm, R.; Dummel, A.; Crowe, D.; Bos, B.J.; Bell, J.F.; Rueffer, P.; Gliem, F.; Johnson, J. R.; Maki, J.N.; Herkenhoff, K. E.; Singer, Robert B.

    1999-01-01

    The Imager for Mars Pathfinder returned over 16,000 high-quality images from the surface of Mars. The camera was well-calibrated in the laboratory, with <5% radiometric uncertainty. The photometric properties of two radiometric targets were also measured with 3% uncertainty. Several data sets acquired during the cruise and on Mars confirm that the system operated nominally throughout the course of the mission. Image calibration algorithms were developed for landed operations to correct instrumental sources of noise and to calibrate images relative to observations of the radiometric targets. The uncertainties associated with these algorithms as well as current improvements to image calibration are discussed. Copyright 1999 by the American Geophysical Union.
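
    As an illustration of the target-relative calibration described here, the sketch below removes instrumental signatures and then scales the scene by an onboard target of known reflectance. The function and parameter names, and the two-step instrumental model, are hypothetical simplifications rather than the actual IMP pipeline.

        import numpy as np

        def calibrate_to_target(raw, dark, flat, target_roi, target_reflectance):
            """Target-relative calibration sketch (illustrative).
            raw, dark, flat : 2-D arrays (same shape); flat is unity-normalized
            target_roi      : (row0, row1, col0, col1) of the target in the frame
            """
            img = (raw - dark) / flat             # dark-current and flat-field correction
            r0, r1, c0, c1 = target_roi
            dn_target = img[r0:r1, c0:c1].mean()  # mean corrected signal over the target
            # Scene reflectance relative to the target, assuming both see the
            # same illumination at the time of the observation.
            return img * (target_reflectance / dn_target)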

  19. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise in which certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing-biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates and a second rate for errors in the distilled states that decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and on how quickly states converge to that limit.
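
    The Chapter 3 intuition can be checked with a toy Monte Carlo model: an n-qubit phase-flip repetition code majority-votes away the dominant dephasing errors, while any odd number of the rare bit-flip errors commutes with the checks and slips through undetected. The independent-error model below is purely illustrative and is not one of the codes analyzed in the thesis.

        import numpy as np

        def logical_error_rates(n, p_z, p_x, trials=200_000, seed=0):
            """Monte Carlo toy: n-qubit phase-flip repetition code under
            independent biased noise (dephasing rate p_z >> bit-flip rate p_x)."""
            rng = np.random.default_rng(seed)
            # Dephasing (Z) errors are majority-voted: a logical fault needs
            # more than half the qubits to dephase in one round.
            z_errs = rng.random((trials, n)) < p_z
            logical_z = np.mean(z_errs.sum(axis=1) > n // 2)
            # Bit-flip (X) errors pass the checks undetected: any odd number
            # of them causes a logical fault.
            x_errs = rng.random((trials, n)) < p_x
            logical_x = np.mean(x_errs.sum(axis=1) % 2 == 1)
            return logical_z, logical_x

        # Example: with p_z = 0.05, p_x = 0.001, and n = 5, the dominant
        # (dephasing) logical rate falls well below p_z, while the bit-flip
        # logical rate stays near n * p_x.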

  20. Reduced Kalman Filters for Clock Ensembles

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles A.

    2011-01-01

    This paper summarizes the author's work on timescales based on Kalman filters that act upon the clock comparisons. The natural Kalman timescale algorithm tends to optimize long-term timescale stability at the expense of short-term stability. By subjecting each post-measurement error covariance matrix to a non-transparent reduction operation, one obtains corrected clocks with improved short-term stability and little sacrifice of long-term stability.
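
    A toy version of such a filter clarifies the structure: clock phases are modeled as random walks, observed only through pairwise comparisons against one clock, and the post-measurement covariance is reduced before the next cycle. The reduction used below, projecting out the unobservable common mode, is a simple stand-in chosen for illustration, not the paper's operation.

        import numpy as np

        def kalman_timescale_step(x, P, z, Q, R):
            """One measurement cycle for an n-clock ensemble (illustrative).
            x : (n,) estimated clock phase offsets
            P : (n, n) state error covariance
            z : (n-1,) comparisons z[i] = clock0 - clock(i+1), plus noise
            Q : (n, n) process noise (random-walk phase model)
            R : (n-1, n-1) comparison measurement noise
            """
            n = x.size
            H = np.hstack([np.ones((n - 1, 1)), -np.eye(n - 1)])  # difference measurements
            P = P + Q                              # predict (transition = identity)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
            x = x + K @ (z - H @ x)                # update state with comparisons
            P = (np.eye(n) - K @ H) @ P            # post-measurement covariance
            # Only differences are observable, so P grows without bound along
            # the common (all-ones) mode; projecting it out keeps P bounded.
            proj = np.eye(n) - np.ones((n, n)) / n
            P = proj @ P @ proj
            return x, P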
